Re: [cryptography] [FORGED] Re: Kernel space vs userspace RNG

2016-05-17 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Sadly, people's prejudices get them overcomplicating the issue.

It's certainly true that a geiger counter measures something that's truly 
random (for some suitable value of truly random) because of quantum effects. 
But so is the noise from a diode or a resistor. The difference is that radioactive 
decay is sexy because you have to get exotic and dangerous material, but a 
resistor is just carbon, and so people are quite sure that it doesn't actually 
have atoms, let alone quanta or quarks, in it. Quanta are exotic. It's not 
like they make quantum computers out of atoms, right?

Similarly, the lava lamp is cool, but you get just as good (and often better) 
real randomness out of the same camera pointed at the lava lamp with the lens 
cap on. That's because the sensor gets quantum crap in it caused by many 
things (from similar noise to the above to virtual particles) but with light 
coming in, the image washes out the quantum crap. But it doesn't *feel* random 
to take readings from a camera with a lens cap on.

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.3.0 (Build 9060)
Charset: us-ascii

wsBVAwUBVzwPwfD9H+HfsTZWAQjO3wf7Bzm4yrerdi4Y0FPh90dCRpLLFo8/t2Va
bcWAxLp/ogpVc5yqfMyJ9UIyF8KBJzhPuY9nUA/yzG1k9xTdAEuw5H2jP8Azfdd3
dxthTh+OVx/GdFjtv9lxbG3YT7JQgO7nwTKZ73n9samQ/sf+HfGmrqwnS5w5Wv6H
3Wb3W0pM6gGHQzq+SJc6zEO8cFPEwCx84qV2E/wz6qFbMzJ6HrN/CF5T4G5wGOQx
t1eXozrKY2h9MsKJFTGoxLRgpRUgnAU/kZvW8sGkxLkonsyI5yHqYUNmAvEh3WCl
IBXmEt/WndnbyFSrUzVGcNxwNrseJwHriWw5u7FqeFTHOvTQjLPD8Q==
=4c1p
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Unbreakable crypto?

2015-03-22 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Here's an optimization:

* Assume you have a decent One Time Pad generator.

* Assume you have a secure pad delivery system.

* Assume it is reasonably low-latency and high-volume. Say somewhere between 
Usenet and the modern Internet.

Now then -- instead of sending the pads, send a message. It gets delivered with 
the same security as the pad, so it has identical security as using the OTP. 
Even better, you don't have to worry about insecurity of the OTP generator.

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.3.0 (Build 9060)
Charset: us-ascii

wsBVAwUBVQ8kVvD9H+HfsTZWAQglHAgApSI+gBHAzenSwtoE64g+TRb17tEbD3Vq
dSjtzFlp+j4k4DqoMTXCzmG0xmvVunZqsKFpxActAA6ztbN5gKX1xnOmFDH/dn8z
s5rw8RJNteIxRitTtb8+01yJiR4lzuJuQPcGX+ag6pF1GFOhNWf4sYLDVL0ya61u
wXe4Ykz1E+S2zPDmqAnTvJaBgc+wWvTSe2CT+6T7hOfFf0eCn/h21Js+8vFfdhiJ
K0aOzJH4aFdNuPGqKN48GKmFOvdnbrfZ0v9Y9zk1tnoM1YszX/HXXTxsOKSr4mzX
V3u52AH4viqrR0KbFQ/7aU7pR7lIQtML2fgoWDLQhnr3DJ7Vrn152w==
=1PVt
-END PGP SIGNATURE-


Re: [cryptography] PGP word list

2015-02-19 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

> 
> I just realised one barrier -- language.  It uses the English language, and 
> PGP might be stronger in Europe than in the anglo world.
> 
> 
> So perhaps the wordset should be retuned to being some form of 
> internationalised english, words that are recognisable by a wide set of 
> cultures?
> 
> Things like: weekend, manyana, angst, perestroika, bollywood, ...
> 
> just a thought.

We're using the PGP word list for verifying short authentication strings.

You're bringing up a great point, and it's one we're dealing with. 

Ultimately, the problem is that any given word is going to be unpronounceable 
gibberish to *someone*, and you want that set of words and someones to be small 
enough to live with.

The alternative is to use something like base32 and the ICAO/NATO word list 
(alpha, bravo, charlie, delta, echo, etc.) or even bare letters and numbers to 
get base32.

The PGP word list is really two 256-word lists -- one of two-syllable words, 
one of three-syllable words -- with each word encoding eight bits. Alternating 
the two lists gives you error detection (a word from the wrong list flags a 
lost or duplicated byte) at eight bits per word; combining them into a single 
512-word list gives you nine bits per word. Either way, that beats the five 
bits per word of ICAO.
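
As an aside, the encoding itself is tiny. Here's a sketch with hypothetical 
eight-entry stand-in lists (the real PGP lists hold 256 words each, so real 
code would index by the full byte value, not modulo 8):

```python
import hashlib

# Toy stand-ins for the two PGP word lists. The real lists have 256
# two-syllable and 256 three-syllable English words; these eight-entry
# placeholders only illustrate the alternation.
EVEN_WORDS = ["aardvark", "absurd", "accrue", "acme",
              "adrift", "adult", "afflict", "ahead"]        # two-syllable slots
ODD_WORDS = ["adroitness", "adviser", "aftermath", "aggregate",
             "alkali", "almighty", "amulet", "amusement"]   # three-syllable slots

def bytes_to_words(data: bytes) -> list[str]:
    """Encode each byte as a word, alternating the two lists so a dropped
    or duplicated byte shows up as a break in the syllable pattern."""
    return [
        (EVEN_WORDS if i % 2 == 0 else ODD_WORDS)[b % 8]  # real scheme: index by b
        for i, b in enumerate(data)
    ]

fingerprint = hashlib.sha256(b"example key").digest()[:4]
print(" ".join(bytes_to_words(fingerprint)))
```

The same alternation is what lets a listener catch a skipped word without any 
extra checksum.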

At the end of the day, you're either taking a hit on intelligibility with bare 
letters and numbers, or using "English" words. You have to pick the way in 
which you want to have suck.

The advantage of the PGP word list is that you get a large number of bits per 
word, but the cost is a high chance of hitting a word that's baffling to 
someone. ICAO words carry fewer bits each, but at least there are only 32 of 
them to learn. Bare letters have some of the worst of all of these -- they're 
easily misunderstood (which is why the ICAO list exists), and even more so 
across languages.

So pick your poison.

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.3.0 (Build 9060)
Charset: us-ascii

wsBVAwUBVOYF9PD9H+HfsTZWAQhIRwf8CHlbpHidIYNLE8MpXBRAPq9w1QMbC5ZF
m37Zcei8Cyg9+UbAxZGdn1yWPQ8uRprAbQ60LCP8LVo6KY5e+q8KrmOsFkl/eaQN
9DUgFNaigjQJojMgaB/92DvXZG5FGN6z7Fs1pBPpMmvlEtVWaD9mN2Ny06jzdmai
8JTdJuQv8UD37daB/5Uxeg0AL5ap5WIEzl/MQnzSNHIlQyFvELbfSh/R/sD8yqKB
dA1l2g/54kwPtuVld+RkGQ4NWqha/hi2uJc14v3LO2J+Ubocbcalb1BNkY4de0X9
MTd525ZQi5hTmOynlBNvWDfPGkf985Ubfcei4bEuTOlncdXVNLfQ1Q==
=ptz5
-END PGP SIGNATURE-


Re: [cryptography] Gogo inflight Internet uses fake SSL certs to MITM their users

2015-01-08 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256


On Jan 8, 2015, at 3:37 PM, John Levine  wrote:

> 
> Do the fake certs validate in web browsers?  


No, they do not validate.

If you go (went) to a YouTube, Vimeo, etc. site -- a URL, an embedded player, 
whatever -- you'd get the expected browser certificate failure error.


> If so, who's giving them fake
> *.google.com certs?

I apologize for being a smartass on this, especially since the premise of your 
conditional is false. But I just can't resist; please take this with the humor 
I offer it with:

https://www.google.com/search?q=how+do+I+use+openssl+to+generate+a+certificate

Jon
-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.3.0 (Build 9060)
Charset: us-ascii

wsBVAwUBVK8ZcvD9H+HfsTZWAQhklQf+PFg6a0O6ap3ewKH4hLMz2vGaoDC3d+Ye
HN5LYlvjdQsHqYgizc9QFHdT0/y9ZdWcpS99heaUeYPaGsMxoEId+WfCMfpUj6UD
683KSegfPq+lGev3MHaX6t0Eq0j+VojFuBdRHQ3HyRrnuNgT8yxfs9jnpQS/2AKh
EBbuxS4hB5Ar8pwJdHTjgxjjqqLif0ouhL+GFsWUbAq6RsEIVowcoSNXqzgeRPkr
1b25hk2MlebkZssr7L6PGfNKr6cpDccUCjIdXBBMsG/C7ZLg5W0oqQCiirsOYOk6
Kt2gKL/cDDEezdcbSn9cFtklI35RLXJoM3Oty/iEVzXYuibaHcyqiQ==
=6PT0
-END PGP SIGNATURE-


Re: [cryptography] Gogo inflight Internet uses fake SSL certs to MITM their users

2015-01-08 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256


On Jan 6, 2015, at 5:22 PM, Seth  wrote:

> On Tue, 06 Jan 2015 14:37:37 -0800, Nathan Dorfman  wrote:
>> Gonna go out on a limb here and strongly suggest not trusting any
>> *.google.com certificate signed by these guys.
> 
> Has anyone on the list had success running the Tor Browser Bundle over a Gogo 
> in flight connection?

Pffft. A simple local VPN works just fine to get around their stuff. I'm very 
happy with the VPN I'm presently using. The clever person can figure out who it 
is from this email.

Well, I'll be. I am on a Gogo-enabled flight even as we squeak, and I just 
turned my VPN off to go get you one of their certs. They're letting me get to 
YouTube and Vimeo just fine now. I guess someone got some sense. It was pretty 
hamfisted and really just reminded me to turn on my VPN.

Jon



-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.3.0 (Build 9060)
Charset: us-ascii

wsBVAwUBVK8UdfD9H+HfsTZWAQhWtQf/RHxdt7ulBG3SRl6jORFabc/tCmTMeP6U
cAHB2Ex0D7dFZLE2WalYKKMd2s+JGB6zmf/ZycryaCapfXii9SyZB0l/EJBMw0y6
zNfgGQJ1ZNCtx8trpkFV9huNEZ7ynC4nInPpb7aRccHWl4HkvPhNWqHqjlVF8YJi
5SyDQ3dOD4lxM/mwcbXYEme/dsHEs566/GVjzcFNdObI9E0Sf24h35fljxvdn8ox
Tz8110fqmyirPxs/APqlgLXMfeNgCDpc+jrDjCyGmT93D5jVDJ0OtzGg6AYLJkGT
nyFln9NfoScnpCcEXUZ1mCD1bGyIm6YCnIJLJWGRpVdpWo7eKgMEFg==
=FcUr
-END PGP SIGNATURE-


Re: [cryptography] Gogo inflight Internet uses fake SSL certs to MITM their users

2015-01-08 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256


On Jan 6, 2015, at 8:34 AM, shawn wilson  wrote:

> You can smartly limit resolution in squid - I don't trust this is what
> they were doing, but you could provide a better experience like this.

It is what they are doing. I am an unhappy (for many reasons) regular (for many 
other reasons) Gogo customer, and noticed pretty quickly when they started 
doing it. I looked at their certs and it's an awful-user-experience way of 
blocking videos, and I strongly suspect that the rotten user experience is the 
intent.

Jon



-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.3.0 (Build 9060)
Charset: us-ascii

wsBVAwUBVK8RMfD9H+HfsTZWAQidnwf9EsXGOIyf1gUq7b2o92SFOdENxhmc0b3H
/7NTBm1beKwq6LA6nwxrl8zunfuxNRVKn9ZCfyCteE+2mpzafFrxHubBPbKcffRX
motiqHmNs6nYrVNNbZe7BCbb6ds23gFuwREe8wPVrCplWz9n65hm+pf7FBhDlVwr
OMsVcMt6yGffnYOZhv/apbRPEUwj+ltkI0RKybAwxnEFDORcKto/MOckClKcbC60
RSAxt7r/R5GOUpCddAPXAI5o9rz6Rd6RsGEgVccnjmYMg/uj0Eb8Ko31GR702uX0
VklDxdH8HCzfkNpgewx7oLktsW1FxTqPsHxfiZPyiEv1uN9pdit+SA==
=UzPn
-END PGP SIGNATURE-


Re: [cryptography] State of the art in block ciphers?

2013-12-07 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Dec 5, 2013, at 12:13 AM, Matthew Orgass  wrote:

> 
>  I recently looked into this and Threefish seems to be the only block cipher 
> I could find that provides major advantages over AES.  The large block sizes 
> and tweak parameter make it a good fit for disk encryption. I don't know how 
> the performance compares to hardware AES.  I haven't so far come across any 
> good reason to start using any block cipher other than AES or Threefish 
> (unless special circumstances are involved).

Thanks. Full disclosure, I'm a Threefish/Skein coauthor.

Performance-wise, software anything is not going to compare against hardware 
anything-else. If you are so desperate for performance that you consider it 
more important than security, then there are lots of good answers for you. XOR, 
for example, is about as fast as you can get, and all you need is a good key 
generation algorithm. AES in hardware (like AES-NI) runs faster than one clock 
per byte if everything's set up correctly. If things aren't set up correctly, 
it runs at about twice software speed. 

In software, Threefish runs at about twice AES speed. There are many, many 
handwaves in my previous statement. Threefish was designed for a 64-bit, 
superscalar architecture with a good handful of registers. At the time, the 
exemplar of such an architecture was an Intel Core 2 CPU. It succeeds at using 
the whole of the CPU so well that Intel uses it internally as a literal burn-in 
test of a CPU. If you have a CPU with heat flow issues, running Threefish on it 
will find them. Often destructively. Notably, it's intentionally an ARX 
construction that uses register flow as a cheap way to get inter-round 
permutations. 

The wide block is a huge advantage. You've noted its use in disk encryption, 
but it's also great just about anywhere else. Block ciphers in general 
have the advantage that they, um, well, encrypt in blocks. They didn't exist at 
all until relatively recently. (If you squint at it the right way, you can 
consider ADFGX to be an early block cipher, if not the earliest, but no doubt 
we can debate it). It's relatively easy to turn a block cipher into a stream 
cipher (counter mode), but relatively hard to turn a stream cipher into a block 
cipher. The wider the block is, the more mixing you get. A one-bit change in a 
block cipher affects the whole block, and as the block's width gets larger, the 
more it approaches All-Or-Nothing Cryptography.
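
The easy direction is easy enough to show in a few lines. Here's a sketch of 
counter mode over a hash-based stand-in for the block cipher (SHA-256 is not a 
block cipher, and this is not a real cipher implementation -- it only gives the 
construction its shape):

```python
import hashlib

BLOCK = 32  # bytes per "block" of our stand-in

def block_encrypt(key: bytes, block_input: bytes) -> bytes:
    # Stand-in for the forward direction of a real block cipher
    # (e.g. AES or Threefish). Counter mode never needs the inverse.
    return hashlib.sha256(key + block_input).digest()

def ctr_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Counter mode: encrypt nonce||counter to make a keystream, then
    XOR it onto the data. Encryption and decryption are the same call."""
    out = bytearray()
    for i in range(0, len(data), BLOCK):
        keystream = block_encrypt(key, nonce + i.to_bytes(8, "big"))
        chunk = data[i:i + BLOCK]
        out.extend(c ^ k for c, k in zip(chunk, keystream))
    return bytes(out)

ct = ctr_xor(b"key", b"nonce123", b"attack at dawn")
assert ctr_xor(b"key", b"nonce123", ct) == b"attack at dawn"
```

Note that only the forward direction of the cipher is ever used, which is 
exactly why the block-to-stream direction is the easy one.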

Tweakable ciphers in general also have huge advantages. You can think of a 
tweak as the generalization of an IV or counter. This is why a tweakable cipher 
is good for disk encryption -- because the LBN of a disk block is 
definitionally not a secret parameter. But once you have a tweakable cipher, 
there are ways that you can re-think chaining modes.

For example, you can re-think counter mode trivially by moving your counter to 
the tweak and now you don't have to worry so much about counter re-use. Yay! 
You can even throw away nonces, at the cost of having to deal with short 
trailing blocks. That's often inconvenient so you can even do something like 
take some static initial data (I am trying not to call it an IV), and encrypt 
that iteratively with an incrementing tweak, XORING the result onto your 
plaintext. Now you have all the convenience of counter mode, and can be pretty 
careless in picking your nonce and counter.
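
That construction can be sketched directly. A hash stands in for the tweakable 
block cipher here (hypothetical stand-in; a real implementation would feed the 
counter into Threefish's actual tweak input rather than hashing it in):

```python
import hashlib

def tweak_encrypt(key: bytes, tweak: int, block: bytes) -> bytes:
    # Stand-in for a tweakable block cipher such as Threefish: the tweak
    # rides alongside the key, not inside the plaintext.
    return hashlib.sha256(key + tweak.to_bytes(16, "big") + block).digest()

def tweak_ctr(key: bytes, initial: bytes, data: bytes) -> bytes:
    """Encrypt static initial data under an incrementing tweak and XOR
    the results onto the plaintext -- the counter lives in the tweak."""
    out = bytearray()
    for i in range(0, len(data), 32):
        keystream = tweak_encrypt(key, i // 32, initial)
        out.extend(c ^ k for c, k in zip(data[i:i + 32], keystream))
    return bytes(out)
```

As with ordinary counter mode, the same function decrypts, but here the 
position information never touches the block input at all.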

It's also pretty easy to extend this basic idea (a tweak is a generalized 
IV/nonce) and re-think any mode you care to in the tweakable world.

Now, an obvious (to me) disadvantage of Threefish per se is that it not only 
has a wide block, but a wide key. Some people might consider this an advantage, 
and really, I'm happy to lose this argument. It's my cipher, after all. But 
from an engineering standpoint, it can be inconvenient to have to transport a 
wide key around. The wider your block is, the more inconvenient that will be. 
On the other hand, there's an easy solution to this inconvenience -- a KDF. 
Heck, run your short key through Skein, and then feed that to your Threefish 
operation and poof, Alice is your auntie.
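
In code, that KDF step is about one line. Python's hashlib has no Skein, so 
BLAKE2b stands in here as the hypothetical KDF; conveniently, its maximum 
digest size is exactly the 64 bytes a Threefish-512 key needs:

```python
import hashlib

def derive_threefish_key(short_key: bytes) -> bytes:
    """Stretch a short key to the 64 bytes Threefish-512 expects.
    BLAKE2b is a stand-in for Skein, which hashlib does not ship;
    the personalization string is an arbitrary domain separator."""
    return hashlib.blake2b(short_key, digest_size=64,
                           person=b"tf512-key").digest()

wide_key = derive_threefish_key(b"a much shorter secret")
assert len(wide_key) == 64  # ready to feed to a Threefish-512 operation
```

In the real design you'd use Skein itself for this, which pairs naturally with 
Threefish since Skein is built from it.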

The other obvious disadvantage (to me) of Threefish is that the tweak is only 
128 bits wide. If the tweak were full-width, then it would be trivial to do 
what I handwaved above -- produce obvious tweak-based chaining modes that were 
trivially as secure as the underlying tweakable cipher. You could always hold 
your nose and just truncate to 128 bits and show 128-ish bits of security, but 
that's really unappealing at the least.

However, we could also just re-think chaining modes, as well. I am at present 
very fond of McOE mode, which is an authenticated mode. It was developed by a 
team at University of Weimar that includes my Skein/Threefish co-author, Stefan 
Lucks. The obvious search should find their paper. They designed it to work 
either with an AES-like cipher or a Threefish-like cipher, an

Re: [cryptography] the spell is broken

2013-10-03 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Oct 3, 2013, at 7:13 AM, Jeffrey Goldberg  wrote:

Jeff,

You might call it "security theatre," but I call it (among other things) 
"protest." I have also called it "trust," "conscience," and other things 
including "emotional." I'm willing to call it "marketing" in the sense that 
marketing often means non-technical. I disagree with "security theatre" because 
in my opinion security theatre is *empty* or *mere* trust-building, but I don't 
fault you for being upset. I don't blame you for venting in my direction, 
either. I will, however, repeat that I believe this is something gentlepersons 
can disagree on. A decision that's right for me might not be right for you and 
vice-versa.

Since the AES competition, NIST has been taking a world-wide role in crypto 
standards leadership. Overall, it's been a good thing; one could have one's 
disagreements with a number of things (and I do), but it's been a good 
*standards* process.

A good standard, however, is not necessarily the *best*, it's merely agreed 
upon. A standard that is everyone's second choice is better than a standard 
that is anyone's first choice. I don't think there are any problems with AES, 
but I think Twofish is a better choice. During the AES competition, the OpenPGP 
community as a whole, and I and my PGP colleagues put Twofish into OpenPGP 
*independently* of the then-unselected AES. It was thus our vote for it. When 
Phil, Alan, and I were putting ZRTP together, we put in Twofish as an option 
(RFC 6189, section 5.1.3). Thus in my opinion, if you know my long-standing 
opinions on ciphers, this shouldn't be a surprise. I think Twofish is a better 
algorithm than Rijndael.

ZRTP also has in it an option for using Skein's one-pass MAC instead of 
HMAC-SHA1. Why? Because we think it's more secure in addition to being a lot 
faster, which is important in an isochronous protocol. 

Silent Phone already has Twofish in it, and is already using Skein-MAC.

In Silent Text, we went far more to the "one true ciphersuite" philosophy. I 
think that Iang's writings on that are brilliant. 

As a cryptographer, I agree, but as an engineer, I want options. I view those 
options as a form of preparedness. One True Suite works until that suite is no 
longer true, and then you're left hanging.

To be fair, there are few options in ZRTP -- it's only AES or Twofish and 
SHA1-HMAC or Skein-MAC, so the selection matrix is small when compared to 
OpenPGP. We have One True Elliptic Curve -- P-384, and options for AES-CCM in 
either 128 or 256 bits and paired with SHA-256 or SHA-512 as hash and HMAC as 
appropriate. There's a third option, AES-256 paired with Skein/Skein-MAC, which 
I don't think is in the code, merely defined as a cipher suite. I can't 
remember. So we have to add Twofish there, but it's in Silent Phone now.

Now let me go back to my comment about standards. Standards are not about 
what's *best*, they're about what's *agreed*, and part of what's agreed on is 
that they're good enough. When one is part of a standards regime, one 
sublimates one's personal opinions to the collective good of the standard. That 
collective good of the standard is also "security theatre" in the sense that 
one uses it because it's the thing one uses to be part of the community.

I think Twofish is better than AES. I believe that Skein is better than SHA-2. 
I also believe in the value of standards.

The problem one faces with the BULLRUN documents gives a decision tree. The 
first question is whether you think they're credible. If you don't think 
BULLRUN is credible, then there's an easy conclusion -- stay the course. If you 
think it is credible, then the next decision is whether you think that the NIST 
standards are flawed, either intentionally or unintentionally; in short, was 
BULLRUN *successful*. If you think they're flawed, it's easy; you move away 
from them.

The hard decision is the one that comes next -- I can state it dramatically as 
"Do you stand with the NSA or not?" which is an obnoxious way to put it, as 
there are few of us who would say, "Yes, I stand with the NSA." You can phrase 
it less dramatically as standing with NIST, or even less dramatically as 
standing with "the standard." You can even state it as whether you believe 
BULLRUN was successful, or lots of other ways.

Moreover, it's not all-or-nothing. Bernstein and Lange have been arguing that 
the NIST curves are flawed since before Snowden. Lots of people have been 
advocating moving to Curve25519. I want a 384-or-better curve because my One 
True Curve has been P-384.

If I'm going to move away from the NIST/NSA curve (which seems wise), what 
about everything else? Conveniently, I happen to have alternates for AES and 
SHA-2 in my back pocket, where they've been *alternates* in my crypto going 
back years. They're even in part of the software, sublimated to the goodness of 
the standard. The work is merely pulling them to the foref

Re: [cryptography] the spell is broken

2013-10-02 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Oct 2, 2013, at 12:26 PM, coderman  wrote:

> On Wed, Oct 2, 2013 at 10:38 AM, Jared Hunter  wrote:
>> Aside from the curve change (and even there), this strikes me as a marketing 
>> message rather than an important technical choice. The message is "we react 
>> to a deeper class of threat than our users understand."
> 
> 
> it is simpler than that.  to signal integrity, and provide assurance,
> it is common not just to avoid impropriety, but to avoid the
> _appearance_ of impropriety.
> 
> this change, while not materially affecting security (the weakest link
> in SilentCircle was never the crypto) succeeds in conveying the
> message of integrity as paramount.
> 
> so yes, a marketing message, but a simple one. i have no problem with
> this as long as they're not implying that AES or SHA-2 are broken in
> some respect.

Thank you very much for that assessment.

I'm not implying at all that AES or SHA-2 are broken. If P-384 is broken, I 
believe the root cause is more that it's old than that it was backdoored. 

But it doesn't matter what I think. This is a trust issue.

A friend of mine offered this analogy -- what if it was leaked that the 
government replaced all of a vaccine with salt water because some nasty jihadis 
get vaccinated. This is serious and pretty horrifying.

If you're a responsible doctor, and source your vaccines from the same place, 
even if you test them yourself you're stuck proving a negative and in a place 
where stating the negative can look like you're part of the conspiracy.

I see this as a way out of the madness. Yes, it's "marketing" if by marketing 
you mean non-technical. By pushing this out, we're letting people who believe 
there's a problem have a reasonable alternative. 

If we, the crypto community, decide that the P-384+AES+SHA2 cipher suite is 
just fine, we can walk the decision back. It's just a software change.

Let me also add that I wouldn't fault anyone for deciding differently. We, the 
crypto community, need to work together with security and respecting each 
other's decisions even if we make different decisions and do different things. 
I respect the alternate decision, to stay the course.

Jon




-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFSTJzTsTedWZOD3gYRAtsxAJ9CPoZjv+shNwID/ip+9KOcWK/JrQCeKuNv
rZmdU8syRIb+6KmX3xqEHt8=
=W3/0
-END PGP SIGNATURE-


Re: [cryptography] [liberationtech] Random number generation being influenced - rumors

2013-09-09 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Sep 8, 2013, at 10:10 PM, coderman  wrote:

> and so forth and so on, to no effect.  the lines have been drawn, and
> nothing will convince Intel to release raw access to the entropy
> source.

I have to disagree with you. Lots of us have told Intel that we really need to 
see the raw bits, and lots of us have gotten informal feedback that we'll see 
that in a future chip.

In the meantime, don't use it if you don't like it!

Better, however, would be to continue using whatever software RNG you're using, 
and reseed it with whatever you're doing now and throw an RDRAND reading in. It 
won't hurt anything no matter how badly it's broken and helps against any 
number of things. Heck, I've done that with TPM RNGs that I knew were of 
limited quality.
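
A sketch of that mix-in approach, with os.urandom standing in for the RDRAND 
(or TPM) reading, since reading RDRAND itself takes a compiler intrinsic or 
assembly:

```python
import hashlib
import os

class Pool:
    """Toy reseeding sketch: the state absorbs every input through
    SHA-256, so one bad (or even malicious) source can't subtract the
    entropy the other sources contributed."""
    def __init__(self, seed: bytes):
        self.state = hashlib.sha256(seed).digest()

    def reseed(self, *sources: bytes) -> None:
        h = hashlib.sha256(self.state)
        for s in sources:
            h.update(s)
        self.state = h.digest()

    def read(self, n: int) -> bytes:
        # Output generation is elided; a real PRG would use a proper
        # extract/expand construction here.
        self.reseed(b"output-step")
        return self.state[:n]

pool = Pool(os.urandom(32))
rdrand_reading = os.urandom(8)  # stand-in for an actual RDRAND value
pool.reseed(os.urandom(32), rdrand_reading)
```

The point of the construction is exactly the one above: throwing in a reading 
of unknown quality can't hurt, because the hash never lets it cancel what's 
already in the pool.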

Once Intel better documents the RNG and we have ways to look at the entropy 
source, then we might use it more. Until then, it's somewhere between a toy and 
a curiosity.

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFSLcg1sTedWZOD3gYRAqbnAJ9uqS5CONA5vWYheiTrsE5C5BDXGgCeM/l/
qprr/56jYSuasPBWiRdqDHs=
=HEOP
-END PGP SIGNATURE-


Re: [cryptography] Reply to Zooko (in Markdown)

2013-08-29 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Aug 29, 2013, at 3:26 PM, zooko  wrote:

> On Sat, Aug 24, 2013 at 09:18:33PM +0300, ianG wrote:
>> 
>> I'm not convinced that the US feds can at this stage order the 
>> backdooring of software, carte blanche.  Is there any evidence of 
>> that?
>> 
>> (I suspect that all their powers in this area are from pressure and 
>> horse trading.  E.g., the export of cryptomunitions needs a 
>> licence...)
> 
> I don't know. I asked a lawyer a few days ago -- a person who is, as far as I
> can tell, one of the leading experts in this field. Their answer was that
> nobody knows.

I've spoken to my own lawyers and gotten their opinions. My comments on things 
reflect my knowledge.

> So I don't think the question of "To whom is my service provider vulnerable?"
> is the right question. You can't really know the answer, so it doesn't help 
> you
> much to wonder about it. The right question is "Am I vulnerable to my service
> provider?". The answer, as far as Silent Circle's current products go, is
> "Yes.".

You are, of course, entitled to your own opinion, but I disagree. I say the 
answer is no -- or perhaps more fully, no more than yours is.

> 
> (Kudos to Jon for saying something sensical in that last one!)
> 

Thank you.

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFSH9OGsTedWZOD3gYRAmpvAJ94g0k2MFpyYe/e0+3Y8l5G7yna9wCgvI4n
Q8oVvYZSSVGqewclSiV4WJ8=
=e1/k
-END PGP SIGNATURE-


Re: [cryptography] no-keyring public

2013-08-24 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

I like it, myself.

It's very similar (as Greg Rose noted) to IBE, and thus pretty much what I did 
in:

http://middleware.internet2.edu/pki05/proceedings/callas-conventional_ibe.pdf

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFSGRrusTedWZOD3gYRAvoRAJ9c7J81UGjGCDNf/DBIl7Rc5LEo6QCfQ0e6
OYVleaa37jCKuUOIlzwpsNI=
=uOk4
-END PGP SIGNATURE-


Re: [cryptography] Reply to Zooko (in Markdown)

2013-08-17 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Aug 17, 2013, at 11:00 AM, Ali-Reza Anghaie  wrote:

> On Sat, Aug 17, 2013 at 1:50 PM, Jon Callas  wrote:
>> I hope I don't sound like a broken record, but a smart attacker isn't going
>> to attack there, anyway. A smart attacker doesn't break crypto, or suborn
>> releases. They do traffic analysis and make custom malware. Really. Go look
>> at what Snowden is telling us. That is precisely what all the bad guys are
>> doing. Verification is important, but that's not where the attacks come from
>> (ignoring the notable exceptions, of course).
> 
> Part of the problem is that most people can't even wrap their heads
> around what a State or non-State Tier 1 Actor would even look like.
> They bully, kill leaders, deny resources, .. heck, they kill ~users~
> to dissuade use of a given tool.
> 
> Then on the flip side "we" think about design and architectural
> aspects that don't even ever get the chance to be used against ~any~
> adversary because we force too much philosophy down into a hole that
> may have just one device, maybe just an iPhone - and limited to
> connectivity to even use it.
> 
> I've called this the problem of "Western Sensibilities" where we seem
> to forget the economics and geopolitics of the rest of the world.
> 
> Before getting heads wrapped around all these poles that are pretty
> exclusive to the "haves" - go out to truly hostile territory and live
> like a "have not" and try to build up the OPSEC routine you want,
> complete with FOSS only and full audits, and work from the field that
> way. It's non-trivial to say the least - even if you've done it a
> hundred times from a hundred different American and European venues.

I've had the privilege on several occasions to talk to people who really do 
this stuff. A couple of things really stuck with me:

* "Don't patronize us. We know what we're doing, we know what we're up 
against." The guy who told me this had his brother murdered horribly. His 
tradecraft was basic and elegant.

* Simple, usable countermeasures are best because they have to be used by the 
sort of person who decided yesterday that they're not going to take it any 
more. They're newly-minted heroes who are a threat to themselves and others if they 
screw up what they're doing. We asked them what they'd like most and the answer 
was SSL on websites. This was after Diginotar and we'd been talking about 
advanced threats, so we were a bit taken aback. They explained that the biggest 
problems are people putting stuff on websites as well as mistakes like making 
calendar entries for times and places of meetings. 

That put a fine point on the admonition not to patronize them. Heck, the 
adversaries don't have to crack anything sophisticated when they can just sniff 
CalDAV.

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFSD/qksTedWZOD3gYRAsj7AKCXuWr60RLPvsFXVtHzDGZUOS/fuwCgvK6m
6X311tAwXg+lYZD2TAOZAm0=
=C0O6
-END PGP SIGNATURE-


Re: [cryptography] open letter to Phil Zimmermann and Jon Callas of Silent Circle, re: Silent Mail shutdown

2013-08-17 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Aug 17, 2013, at 10:41 AM, ianG  wrote:

> Apologies, ack -- I noticed that in your post.
> 
> (And I think for crypto/security products, the BSD-licence variant is more 
> important for getting it out there than any OSI grumbles.)

Thanks. I agree with your comments in other parts of those notes that I removed 
about issues with open versus closed source. I often wish I didn't believe in 
open source, because the people doing closed source get much less flak than we 
do.

> Ah ok.  Will they be writing an audit report?  Something that will give us 
> trust that more people are sticking their name to it?

I get regular audit reports, and have since last fall. :-)

I haven't been putting them out because it felt like argument from authority. 
Hey, don't audit this yourself, trust these guys!

Moreover, those reports are guidance we have from an independent party on what 
to do next. I want those to be raw and unvarnished. If they're going to get 
varnished, I lose guidance and I also lose speed. A report that's made for the 
public is definitionally sanitized. I don't want to encourage sanitizing.

It's a hard problem. I understand what you want, but my goal is to provide a 
good service, not a good report.

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: iso-8859-1

wj8DBQFSD7+7sTedWZOD3gYRAtF4AJ4+feoP9wGq6s1Zni9ZhS6aiJx1YwCgwOiy
GHaj1lPMi8gBm3XDSvorr9U=
=HWhT
-END PGP SIGNATURE-


Re: [cryptography] Reply to Zooko (in Markdown)

2013-08-17 Thread Jon Callas

On Aug 17, 2013, at 12:49 AM, Bryan Bishop  wrote:

> On Sat, Aug 17, 2013 at 1:04 AM, Jon Callas  wrote:
> It's very hard, even with controlled releases, to get an exact byte-for-byte 
> recompile of an app. Some compilers make this impossible because they 
> randomize the branch prediction and other parts of code generation. Even when 
> the compiler isn't making it literally impossible, without an exact copy of 
> the exact tool chain with the same linkers, libraries, and system, the code 
> won't be byte-for-byte the same. Worst of all, smart development shops use 
> the *oldest* possible tool chain, not the newest one because tool sets are 
> designed for forwards-compatibility (apps built with old tools run on the 
> newest OS) rather than backwards-compatibility (apps built with the new tools 
> run on older OSes). Code reliability almost requires using tool chains that 
> are trailing-edge.
> 
> Would providing (signed) build vm images solve the problem of distributing 
> your toolchain?

Maybe. The obvious counterexample is a compiler that doesn't deterministically 
generate code, but there's lots and lots of hair in there, including potential 
problems in distributing the tool chain itself: copyrighted tools, libraries, 
etc.

But let's not rathole on that, and get to brass tacks.

I *cannot* provide an argument of security that can be verified on its own. 
This is Gödel's second incompleteness theorem. A set of statements S cannot be 
proved consistent on its own. (Yes, that's a minor handwave.)

All is not lost, however. We can say, "Meh, good enough" and the problem is 
solved. Someone else can construct a *verifier* that is some set of policies 
(I'm using the word "policy" but it could be a program) that verifies the 
software. However, the verifier can only be verified by a set of policies that 
are constructed to verify it. The only escape is to decide at some point, "meh, 
good enough."

I brought Ken Thompson into it because he actually constructed a rootkit that 
would evade detection and described it in his Turing Award lecture. It's not 
*just* philosophy and theoretical computer science. Thompson flat-out says that 
at some point you have to trust the people who wrote the software, because if 
they want to hide things in the code, they can.

I hope I don't sound like a broken record, but a smart attacker isn't going to 
attack there, anyway. A smart attacker doesn't break crypto, or suborn 
releases. They do traffic analysis and make custom malware. Really. Go look at 
what Snowden is telling us. That is precisely what all the bad guys are doing. 
Verification is important, but that's not where the attacks come from (ignoring 
the notable exceptions, of course).

One of my tasks is to get better source releases out there. However, I also 
have to prioritize it with other tasks, including actual software improvements. 
We're working on a release that will tie together some new anti-surveillance 
code along with a better source release. We're testing the new source release 
process with some people not in our organization, as well. It will get better; 
it *is* getting better.

Jon



PGP.sig
Description: PGP signature


Re: [cryptography] open letter to Phil Zimmermann and Jon Callas of Silent Circle, re: Silent Mail shutdown

2013-08-17 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Aug 17, 2013, at 2:41 AM, ianG  wrote:

> So back to Silent Circle.  One known way to achieve some control over their 
> closed source replacement vulnerability is to let an auditor into their inner 
> circle, so to speak.

One correction of fact:

Our source is not closed source. It's up on GitHub and has a non-commercial 
BSD variant license, which I know isn't OSI, but anyone who wants to build, 
use, and even distribute their verified version is free to do so.

Secondly, we have auditors in the mix. We are customers of Leviathan Security 
and their "virtual security officer" program. They do regular code audits, 
network audits, and are helping us create a software development lifecycle.

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFSD64VsTedWZOD3gYRAp5iAKDFiDEn9MyTMscvsuznSY5jS83SpACg41F3
WL8vRZBFo747yv4C1DfwFeA=
=FYfS
-END PGP SIGNATURE-


[cryptography] Reply to Zooko (in Markdown)

2013-08-16 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Also at http://silentcircle.wordpress.com/2013/08/17/reply-to-zooko/


# Reply to Zooko

(My friend and colleague, [Zooko 
Wilcox-O'Hearn](https://leastauthority.com/blog/author/zooko-wilcox-ohearn.html)
 wrote an open letter to me and Phil [on his blog at 
LeastAuthority.com](https://leastauthority.com/blog/open_letter_silent_circle.html).
 Despite this appearing on Silent Circle's blog, I am speaking mostly for 
myself, only slightly for Silent Circle, and not at all for Phil.)

Zooko,

Thank you for writing and your kind words. Thank you even more for being a 
customer. We're a startup and without customers, we'll be out of business. I 
think that everyone who believes in privacy should support with their 
pocketbook every privacy-friendly service they can afford to. It means a lot to 
me that you're voting with your pocketbook for my service.

Congratulations on your new release of [LeastAuthority's 
S4](https://leastauthority.com) and 
[Tahoe-LAFS](https://tahoe-lafs.org/trac/tahoe-lafs). Just as you are a fan of 
my work, I am an admirer of your work on Tahoe-LAFS and consider it one of the 
best security innovations on the planet.

I understand your concerns, and share them. One of the highest priority tasks 
that we're working on is to get our source releases better organized so that 
they can effectively be built from [what we have on 
GitHub](https://github.com/SilentCircle/). It's suboptimal now. Getting the 
source releases is harder than one might think. We're a startup and are pulled 
in many directions. We're overworked and understaffed. Even in the old days at 
PGP, producing effective source releases took years of effort to get down pat. 
It often took us four to six weeks to get the sources out even when delivering 
one or two releases per year.

The world of app development makes this harder. We're trying to streamline our 
processes so that we can get a release out about every six weeks. We're not 
there, either.

However, even when we have source code to be an automated part of our software 
releases, I'm afraid you're going to be disappointed about how verifiable they 
are. 

It's very hard, even with controlled releases, to get an exact byte-for-byte 
recompile of an app. Some compilers make this impossible because they randomize 
the branch prediction and other parts of code generation. Even when the 
compiler isn't making it literally impossible, without an exact copy of the 
exact tool chain with the same linkers, libraries, and system, the code won't 
be byte-for-byte the same. Worst of all, smart development shops use the 
*oldest* possible tool chain, not the newest one because tool sets are designed 
for forwards-compatibility (apps built with old tools run on the newest OS) 
rather than backwards-compatibility (apps built with the new tools run on older 
OSes). Code reliability almost requires using tool chains that are 
trailing-edge.
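The check itself is trivial; everything upstream of it is the hard part. A 
minimal sketch in Python (function names hypothetical) of what such a verifier 
would actually compare:

```python
import hashlib

def digest(path):
    """SHA-256 of a build artifact, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def builds_match(official, rebuilt):
    # Passes only if compiler, linker, libraries, and even embedded
    # timestamps all agreed -- which is exactly what rarely happens.
    return digest(official) == digest(rebuilt)
```

One stray timestamp or randomized branch-prediction choice anywhere in the 
toolchain flips the comparison to false, which is the whole problem.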

The problems run even deeper than the raw practicality. Twenty-nine years ago 
this month, in the August 1984 issue of "Communications of the ACM" (Vol. 27, 
No. 8) Ken Thompson's famous Turing Award lecture, "Reflections on Trusting 
Trust" was published. You can find a facsimile of the magazine article at 
 and a 
text-searchable copy on Thompson's own site, 
.

For those unfamiliar with the Turing Award, it is the most prestigious award a 
computer scientist can win, sometimes called the "Nobel Prize" of computing. 
The site for the award is at .

In Thompson's lecture, he describes a hack that he and Dennis Ritchie did in a 
version of UNIX in which they created a backdoor to UNIX login that allowed 
them to get access to any UNIX system. They also created a self-replicating 
program that would compile their backdoor into new versions of UNIX portably. 
Quite possibly, their hack existed in the wild until UNIX was recoded from the 
ground up with BSD and GCC.
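A toy model of the shape of that attack, with all names and string matching 
invented purely for illustration (Thompson's real version hid in the C 
compiler's binary code generation and reproduced itself byte for byte):

```python
# Invented placeholder "sources"; the point is only the two rules.

def compile_clean(source):
    return f"binary<{source[:20]}>"

def trojaned_compile(source):
    if "check_password" in source:        # compiling the login program
        return compile_clean(source) + " [+backdoor]"
    if "def compile" in source:           # compiling a (clean!) compiler
        return compile_clean(source) + " [+self-replicating trojan]"
    return compile_clean(source)

# Even a freshly audited, perfectly clean compiler source yields a
# trojaned compiler binary, which will in turn trojan the next login:
assert "trojan" in trojaned_compile("def compile(src): ...")
```

Once the trojaned binary exists, the malicious logic can be deleted from every 
source file and it still survives, which is exactly Thompson's point.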

In his summation, Thompson says:

The moral is obvious. You can't trust code that you did not totally
create yourself. (Especially code from companies that employ people
like me.) No amount of source-level verification or scrutiny will
protect you from using untrusted code. In demonstrating the
possibility of this kind of attack, I picked on the C compiler. I
could have picked on any program-handling program such as an
assembler, a loader, or even hardware microcode. As the level of
program gets lower, these bugs will be harder and harder to detect.
A well installed microcode bug will be almost impossible to detect.

Thompson's words reach out across three decades of computer science, and yet 
they echo Descartes from three centuries prior to Thompson. In Descartes's 1641 
"Meditations," he proposes the thought experiment of an "evil demon" who 
deceives us by simula

Re: [cryptography] evidence for threat modelling -- street-sold hardware has been compromised

2013-07-30 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Jul 30, 2013, at 4:07 AM, ianG  wrote:

> It might be important to get this into the record for threat modelling.  The 
> suggestion that normally-purchased hardware has been compromised by the 
> bogeyman is often poo-pooed, and paying attention to this is often thought to 
> be too black-helicopterish to be serious.  E.g., recent discussions on the 
> possibility of perversion of on-chip RNGs.
> 
> This doesn't tell us how big the threat is, but it does raise it to the level 
> of 'evidenced'.

Evidence of what, though?

The rumor isn't a new one. A bunch of government agencies dropped ThinkPads 
from approved lists when they were sold from IBM to Lenovo, and that was pure 
ooo-scary-Chinese stuff, without any actual evidence. It's reasonable enough, 
and jibes with their general mistrust of Huawei, etc. It was a pre-emptive move 
away from ThinkPads.

That mistrust ranges from the reasonable to the quasi-reasonable to whatever. I 
can understand completely removing ThinkPads from fast track approval to 
needing testing etc. once they were sold to Lenovo in 2005. This sounds like 
nothing but rumor mongering based on that.

Evidence would be something like a Black Hat preso.

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: windows-1252

wj8DBQFR98MAsTedWZOD3gYRAsssAJoCqOCNwDLrIGlk0IQqj2kOL+XQTwCg7BZc
tkFk68doeFMPtaLSCDomeX0=
=Gy/J
-END PGP SIGNATURE-


Re: [cryptography] post-PRISM boom in secure communications (WAS skype backdoor confirmation)

2013-06-30 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Jun 30, 2013, at 12:44 AM, James A. Donald  wrote:

> Silent Circle expects end users to manage their own keys, which is of course 
> the only way for end users to be genuinely secure. Everything else is snake 
> oil, or rapidly turns into snake oil in practice.  (Yes, Cryptocat,  I am 
> looking at you)
> 
> However, everyone has found it hard to enable end users to manage keys.  User 
> interface varies from hostile, to unbearably hostile.
> 
> Silent Circle publish end users public keys, which would seem to create the 
> potential for a man in the middle attack.
> 
> I would like to see a review and evaluation of Silent Circle's key management.

This isn't quite correct. You have the gist of it, though.

Silent Phone uses ZRTP, which is ephemeral DH with hash commitments for 
continuity, in the style of SSH. The short authentication string is there for 
explicit MITM protection. There's no explicit public key.

Silent Text uses SCIMP, which is also an EDH+hash commitment protocol, and also 
has no explicit public keys. The problem there is that unlike a voice protocol, 
where you can use a voice recitation of a short authentication string, there's 
no implicit second channel in a text protocol. We're working on improvements 
there.

There's a SCIMP paper up on silentcircle.com. Please look at it.
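As a rough illustration of the short-authentication-string idea mentioned 
above (this is not ZRTP's actual key derivation; the word list and truncation 
below are stand-ins):

```python
import hashlib

# Stand-in word list; a real SAS scheme uses a fixed published list.
WORDS = ["adroitness", "adviser", "aftermath", "aggregate",
         "alkali", "almighty", "amulet", "amusement"]

def short_auth_string(shared_secret: bytes) -> str:
    # Both endpoints hash the agreed DH secret and read the words
    # aloud to each other. A MITM running two separate DH exchanges
    # produces two different secrets, hence (with high probability)
    # mismatched strings on the two phones.
    h = hashlib.sha256(shared_secret).digest()
    return " ".join(WORDS[b % len(WORDS)] for b in h[:2])
```

The voice channel is what makes this work for a phone call; text has no such 
built-in second channel, which is the gap described above.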

Jon





-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFR0KhvsTedWZOD3gYRAiYEAJ4w96a0qdNjeDRAlii7qaF/dZ1TsACfUVJI
zfGnH862J4muQrTHag9sL48=
=ZqZE
-END PGP SIGNATURE-


Re: [cryptography] ICIJ's project - comment on cryptography & tools

2013-04-04 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Apr 4, 2013, at 6:27 AM, ianG  wrote:

> In a project similar to Wikileaks, ICIJ comments on tools it used to secure 
> its team-based project work:
> 
> "ICIJ’s team of 86 investigative journalists from 46 countries 
> represents one of the biggest cross-border investigative partnerships in 
> journalism history. Unique digital systems supported private document and 
> information sharing, as well as collaborative research. These included a 
> message center hosted in Europe and a U.S.-based secure online search system. 
>  Team members also used a secure, private online bulletin board system to 
> share stories and tips."
> 
> "The project team’s attempts to use encrypted e-mail systems such as 
> PGP (“Pretty Good Privacy”) were abandoned because of complexity and 
> unreliability that slowed down information sharing. Studies have shown that 
> police and government agents – and even terrorists – also struggle to use 
> secure e-mail systems effectively.  Other complex cryptographic systems 
> popular with computer hackers were not considered for the same reasons.  
> While many team members had sophisticated computer knowledge and could use 
> such tools well, many more did not."
> 
> 
> http://www.icij.org/offshore/how-icijs-project-team-analyzed-offshore-files
> 

Thanks!

This is great. It just drives home that usability is all.

Jon



-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: windows-1252

wj8DBQFRXcnAsTedWZOD3gYRAlcgAJ92ntosWo+yaBYd3Q5xhyJ40lOSPQCdGHW/
5eb0jufZJKBXpu4TeYOgWmM=
=p82u
-END PGP SIGNATURE-


Re: [cryptography] Here's What Law Enforcement Can Recover From A Seized iPhone

2013-03-28 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Mar 28, 2013, at 10:27 PM, Jeffrey Goldberg  wrote:

> There are a couple interesting lessons from LocationGate. 

[...]

> The second lesson has to do with the the status of iOS protection classes 
> that can leave things unencrypted even when the phone is locked. There are 
> things that we want our phones to do before they are unlocked with a 
> passcode. 

[...]

> 
> The trick is how to communicate this the people...

[...]

Very well put in all of those.

> What's the line? Never attribute to malice what can be explained by 
> incompetence.

That is the line. And also that stupidity is the second most common element in 
the universe, after hydrogen. (And variants on that.)

> 
> At the same time we are in the business of designing system that will protect 
> people and their data under the assumption that the world is full of hostile 
> agents. As I like to put it, I lock my car not because I think everyone is a 
> crook, but because I know that car thieves do exist.

And in many cases a cheap lock will work because it deters and deflects, not 
because it actually prevents. This doesn't apply so much with information 
security, but I think it does in places.

For example, I think that the most important thing about a password is that it 
not be a dictionary word. If it is one, length doesn't matter. If it isn't, 
length only matters a little, because most attackers just want someone's 
password, not yours. If they do want yours, either spearphishing or malware 
like Zeus is a better bang for the buck. They won't actually bother cracking 
it, they'll go around it.
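A back-of-envelope comparison, with assumed sizes, shows why the dictionary 
test dominates:

```python
import math

# Order-of-magnitude assumptions, not measurements:
DICTIONARY_SIZE = 200_000   # a generous wordlist
CHARSET = 62                # upper + lower case + digits

def guesses(length, dictionary=False):
    """Worst-case guesses an offline attacker needs."""
    return DICTIONARY_SIZE if dictionary else CHARSET ** length

# Any dictionary word, however long, falls faster than 4 random characters,
assert guesses(12, dictionary=True) < guesses(4)
# while 8 random characters give roughly 47-48 bits of search space.
assert 47 < math.log2(guesses(8)) < 48
```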

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFRVTsEsTedWZOD3gYRAhDeAKDYJOTTA9mBBebl4ccMbAbqZQzg9ACdG7A7
XRwwSV8OBtA8JufBO4YsAJ0=
=/Olb
-END PGP SIGNATURE-


Re: [cryptography] Here's What Law Enforcement Can Recover From A Seized iPhone

2013-03-28 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Mar 28, 2013, at 6:59 PM, Jeffrey Walton  wrote:

> On Thu, Mar 28, 2013 at 7:27 PM, Jon Callas  wrote:
>> -BEGIN PGP SIGNED MESSAGE-
>> Hash: SHA1
>> 
>> [Not replied-to cryptopolitics as I'm not on that list -- jdcc]
>> 
>> On Mar 28, 2013, at 3:23 PM, Jeffrey Goldberg  wrote:
>> 
>>>> Do hardware manufacturers and OS vendors have alternate methods? For
>>>> example, what if LE wanted/needed iOS 4's hardware key?
>>> 
>>> You seem to be talking about a single iOS 4 hardware key. But each device
>>> has its own. We don't know if Apple actually has retained copies of that.
>> 
>> I've been involved in these sorts of questions in various companies that 
>> I've worked.
> Somewhat related: are you bound to some sort of non-disclosure with
> Apple? Can you discuss all aspects of the security architecture, or is
> it [loosely] limited to Apple's public positions?

- From being there, Apple's culture and practices are such that everything they 
do is focused on making cool things for the customers. Apple fights for the 
users. The users' belief and faith in Apple saved it from near death. 
Everything there focuses on how it's good for the users. Also remember that 
there are many axes of good for the users. User experience, cost, reliability, 
etc. are part of the total equation along with security. People like you and me 
are not the target; it's more the proverbial "My Mom" sort of user.

Moreover, they're not in it for the money. They're in it for the cool. 
Obviously, one has to be profitable, and obviously high margins are better than 
low ones, but the motivator is the user, and being cool. Ultimately, they do it 
for the person in the mirror, not for the cash.

I believe that Apple is too closed-mouthed about a lot of very, very cool 
things that they do security-wise. But that's their choice, and as a gentleman, 
I don't discuss things that aren't public because I don't blab. NDA or no NDA, 
I just don't blab.


> I regard these as the positive talking points. There's no sleight of
> hand in your arguments, and I believe they are truthful. I expect them
> to be in the marketing literature.
> 
>>>> I suspect Apple has the methods/processes to provide it.
>>> I have no more evidence than you do, but my guess is that they don't, for
>>> the simple reason that if they did that fact would leak out. ...
>> And that's just what I described above. I just wanted to put a sharper point 
>> on it.
>> I don't worry about it because truth will out. ...
> A corporate mantra appears to be 'catch me if you can', 'deny deny
> deny', and then 'turn it over to marketing for a spin'.
> 
> We've seen it in the past with for example, Apple and location data,
> carriers and location data, and Google and wifi spying. No one was
> doing it until they got caught.
> 
> Please forgive my naiveness or my ignorance if I'm seeing things is a
> different light (or shadow).

Well, with locationgate at Apple, that was a series of stupid and unfortunate 
bugs and misfeatures. Heads rolled over it.

- From what I have read of the Google wifi thing, it was also stupid and 
unfortunate. The person who coded it up was a pioneer of wardriving. People 
realized they could do cool things and did them without thinking it through. 
Thinking it through means that there are things to do that are cool if you are 
just a hacker, but not if you are a company. If that had been written up here, 
or submitted at a hacker con, everyone would have cheered -- and basically did, 
since arguably a pre-alpha of that hack was a staple of DefCon contests. The 
superiors of the brilliant hackers didn't know or didn't grok what was going on.

In neither of those cases was anyone trying to spy. In each differently, people 
were building cool features and some combination of bugs and failure to think 
it through led to each of them. It doesn't excuse mistakes, but it does explain 
them. Not every bad thing in the world happens by intent. In fact, most of them 
don't.

> 
> Apple designed the hardware and hold the platform keys. So I'm clear
> and I'm not letting my imagination run too far ahead:
> 
> Apple does not have or use, for example, custom boot loaders signed by
> the platform keys used in diagnostics, for data extraction, etc.
> 
> There are no means to recover a secret from the hardware, such as a
> JTAG interface or a datapath tap. Just because I can't do it, it does
> not mean Apple, a University with EE program, Harris Corporation,
> Cryptography Research, NSA, GC

Re: [cryptography] Here's What Law Enforcement Can Recover From A Seized iPhone

2013-03-28 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Mar 28, 2013, at 5:24 PM, Kevin W. Wall  wrote:

> 
> All excellent, well articulated points. I guess that means that
> RSA Security is an insane company then since that's
> pretty much what they did with the SecurID seeds. Inevitably,
> it cost them a boatload too. We can only hope that Apple
> and others learn from these mistakes.

No, RSA was careless and stupid. It's not the same thing at all.

SecurID seeds are shared secrets and the authenticators need them. They did 
nothing like what we were talking about -- handing them out so the security of 
the device could be compromised. They kept their own crown jewels on some PC on 
their internal network and they were hacked for them.

> 
> OTOH, if Apple thought they could make a hefty profit by
> selling to LEAs or "friendly" governments, that might change
> the equation enough to tempt them. Of course that's doubtful
> though, but stranger things have happened.

Excuse me, but Apple in particular is making annual income in the same ballpark 
as the GDP of Ireland, the Czech Republic, or Israel. They could bail out 
Cyprus with pocket change.

If you want to go all tinfoil hat, you shouldn't be thinking about friendly 
governments buying them off, you should be thinking about *them* buying their 
own country.

Jon
-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: iso-8859-1

wj8DBQFRVPGKsTedWZOD3gYRAmKzAKDkD8/myOnUQjpSQzohZ7i3OqC6QwCeJ69T
e81n4nVL+KTK7g72TLMeHow=
=JqMQ
-END PGP SIGNATURE-


Re: [cryptography] Here's What Law Enforcement Can Recover From A Seized iPhone

2013-03-28 Thread Jon Callas

On Mar 28, 2013, at 4:07 PM, shawn wilson  wrote:

> 
> On Mar 27, 2013 11:38 PM, "Jeffrey Goldberg"  wrote:
> >
> 
> >
> > http://blog.agilebits.com/2012/03/30/the-abcs-of-xry-not-so-simple-passcodes/
> >
> 
> Days? Not sure about the algorithm but both ocl and jtr can be run in 
> parallel and idk why you'd try to crack a password on an arm device anyway 
> (there's a jtr page that compares platforms and arm is god awful slow)
> 
> 

You have to run the password cracker on the device, because it involves mixing 
the hardware key in with the passcode, and that's done in the security chip. 
You can't parallelize it unless you pry the chip apart. I'm not saying it's 
impossible, but it is risky. If you screw that up, you lose totally, as then 
breaking the passcode is breaking AES-256. And if you have about 2^90 memory, 
it's easier than breaking AES-128!

Jon




PGP.sig
Description: PGP signature


Re: [cryptography] Here's What Law Enforcement Can Recover From A Seized iPhone

2013-03-28 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

[Not replied-to cryptopolitics as I'm not on that list -- jdcc]

On Mar 28, 2013, at 3:23 PM, Jeffrey Goldberg  wrote:

>> Do hardware manufacturers and OS vendors have alternate methods? For
>> example, what if LE wanted/needed iOS 4's hardware key?
> 
> You seem to be talking about a single iOS 4 hardware key. But each device
> has its own. We don't know if Apple actually has retained copies of that.

I've been involved in these sorts of questions at various companies where I've 
worked. Let's look at it coolly and rationally.

If you make a bunch of devices with keys burned in them, if you *wanted* to 
retain the keys, you'd have to keep them in some database, protect them, create 
access  controls and procedures so that only the good guys (to your definition) 
got them, and so on. It's expensive.

You're also setting yourself up for a target of blackmail. Once some bad guy 
learns that they have such a thing, they can blackmail you for the keys they 
want lest they reveal that the keys even exist. Those bad guys include 
governments of countries you operate or have suppliers in, mafiosi, etc. Heck, 
once some good guy knows about it, the temptation to break protocol on who gets 
keys when will be too great to resist, and blackmail will happen.

Eventually, so many people know about the keys that it's not a secret. Your 
company loses its reputation, even among the sort of law-and-order types who 
think that it's good for *their* country's LEAs to have those keys because they 
don't want other countries having those keys. Sales plummet. Profits drop. 
There are civil suits, shareholder suits, and most likely criminal charges in 
lots of countries (because while it's not a crime to give keys to their LEAs, 
it's a crime to give them to that other bad country's LEAs). Remember, the only 
difference between lawful access and espionage is whose jurisdiction it is.

On the other hand, if you don't retain the keys it doesn't cost you any money 
and you get to brag about how secure your device is, selling it to customers in 
and out of governments the world over.

Make the mental calculation. Which would a sane company do?

> 
>> I suspect Apple has the methods/processes to provide it.
> 
> I have no more evidence than you do, but my guess is that they don't, for
> the simple reason that if they did that fact would leak out. Secret
> conspiracies (and that's what it would take) grow less plausible
> as a function of the number of people who have to be in on it.
> (Furthermore I suspect that implausibility rises super-linearly with
> the number of people in on a conspiracy.)

And that's just what I described above. I just wanted to put a sharper point on 
it. I don't worry about it because truth will out. Or as Dr. Franklin put it, 
three people can keep a secret if two of them are dead.

> 
>> I think there's much more to it than a simple brute force.
> 
> We know that those brute force techniques exist (there are several vendors
> of "forensic" recovery tools), and we've got very good reasons to believe
> that only a small portion of users go beyond the default 4 digit passcode.
> In case of LEAs, they can easily hold on to the phones for the 20 minutes
> (on average) it takes to brute force them.

The unlocking feature on iOS uses the hardware to spin crypto operations on 
your passcode, so you have to do it on the device (the hardware key is involved 
-- you can't just image the flash) and you get about 10 brute force checks per 
second. For a four-character code, that's about 1000 seconds.

See  for 
many details on what's in iOS specifically.
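The arithmetic above can be sketched directly (the 10-guesses-per-second rate 
is the figure from the discussion, not a measurement):

```python
# Back-of-envelope numbers for on-device passcode search, where the
# hardware-bound key derivation caps the guess rate.
RATE = 10.0  # guesses per second, enforced by on-device key mixing

def worst_case_seconds(alphabet, length):
    return alphabet ** length / RATE

assert worst_case_seconds(10, 4) == 1000.0   # 4-digit PIN: under 17 minutes
assert worst_case_seconds(36, 6) > 2e8       # 6-char alphanumeric: ~7 years
```

The jump from minutes to years is why moving beyond the default 4-digit 
passcode matters so much more than which forensic tool the examiner owns.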

Also, surprisingly often, if the authorities ask someone to unlock the phone, 
people comply. 

> 
> So I don't see why you suspect that there is some other way that only
> Apple (or other relevant vendor) and the police know about.

Yeah, me either. We know that there are countries that have special national 
features in devices made by hardware makers that are owned by that country's 
government, but they're very careful to keep them within their own borders, for 
all the obvious reasons. It just looks bad and could lead to losing contracts 
in other countries.

Jon
-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFRVNHisTedWZOD3gYRAnLPAKCA3BW64XmpIlJJL8vMIwEZ9qBQzwCcDQiJ
OvnvTSUXUdELynnYxnT0lEA=
=JuD+
-END PGP SIGNATURE-


Re: [cryptography] Key Checksums (BATON, et al)

2013-03-28 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Mar 28, 2013, at 1:21 PM, ianG  wrote:

> 
> Correct me if I'm wrong, but the parity bits in DES guard the key, which 
> doesn't need correcting?  And the block which does need correcting has no 
> space for parity bits?

"Guard" is perhaps a bit strong. They're just parity bits. 

In those days, people bought parity memory, and it was worth it. As Steve says, 
hardware errors that would just happen were pretty common. 

Now, there is a little more to it than that -- remember that when Lucifer 
became DES, it was knocked down from a 64-bit key to a 56-bit key. When they 
did that, they chose to knock one bit off of each octet (note that I'm saying 
octet, not byte, because also in those days it was not presumed that "bytes" 
had eight bits) rather than have 56 packed bits.

If you do it that way, using the orphaned bits as parity is a pretty reasonable 
use for them. 
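The parity scheme is easy to sketch (a minimal illustration of the per-octet 
check, not a full DES key schedule):

```python
def has_odd_parity(octet: int) -> bool:
    # Each DES key octet carries 7 key bits plus one parity bit
    # (conventionally the low-order bit), set so the octet has an
    # odd number of 1 bits overall.
    return bin(octet).count("1") % 2 == 1

def fix_parity(octet: int) -> int:
    octet &= 0xFE                  # clear the parity bit
    return octet if has_odd_parity(octet) else octet | 1

key = bytes(fix_parity(b) for b in b"8bytekey")
assert all(has_odd_parity(b) for b in key)
```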

> 
> Layering was the "big idea" of the ISO 7 layer model.  From memory this first 
> started appearing in standards committees around 1984 or so?  So likely it 
> was developed as a concept in the decade before then -- late 1970s to early 
> 1980s.

Earlier than that. Arguably, the full seven layers are still aspirational; the 
word "conceptual" was used for a long, long time. For the bottom four layers, 
it's pretty easy to know what goes where. But what makes a protocol belong in 
5, 6, or 7 is subject to debate.

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFRVKvZsTedWZOD3gYRAsIWAKCFLl335xfo5ivgyqSAOk+PbMY5rgCeMcvd
wdXEKz5QaHIzaKwDo5uXlHg=
=SgaG
-END PGP SIGNATURE-


Re: [cryptography] why did OTR succeed in IM?

2013-03-23 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Mar 23, 2013, at 6:36 AM, Ben Laurie  wrote:

> On 23 March 2013 09:25, ianG  wrote:
>> Someone on another list asked an interesting question:
>> 
>> Why did OTR succeed in IM systems, where OpenPGP and x.509 did not?
> 
> Because Adium built it in?
> 

Yeah. And it just worked. It took me two hours to find a Jabber client that 
actually worked (Psi) and get Psi working with OpenPGP support, and even then 
it was just weird, from a UX perspective.

But there's also one other thing, and that is that there was no other real 
competitor. So:

* Greenfield advantage
* Better UX
* Better out-of-the-box experience.

Jon




-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFRTeWPsTedWZOD3gYRAgcxAJ9RLtQdYAsdluIKa/+hyBLDfCIVjwCg2bIq
pZT24itMJrs0CHuTSIeVm3o=
=WS8Z
-END PGP SIGNATURE-


Re: [cryptography] Workshop on Real-World Cryptography

2013-03-03 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Mar 3, 2013, at 7:05 PM, Patrick Pelletier  wrote:

> 
> This article surprised me, because it could almost be read as an argument 
> against AES (or even against block ciphers in general).  Which seems to 
> contradict the common cryptographic wisdom of "just use AES and be done with 
> it."
> 
> Besides the argument about AES having timing side-channels in #9, the room 
> 101 section at the end suggests we should do away with not only CBC, but also 
> AES-GCM, which is commonly touted as the solution to CBC's woes.  (He admits 
> it was his most controversial point, and I'm curious how it was received when 
> the talk was given.)  But I believe that if we rule out both CBC and AES-GCM 
> ciphersuites in TLS, that leaves us with only RC4.  (And indeed, 
> unsurprisingly given the author, RC4 seems to be what Google's sites prefer.)

Sadly, it's more complex than that. There are a bunch of rules of thumb that 
are independent of any particular cipher. Here's a few:

* Stream ciphers are typically a seeded PRNG that XORs the pseudo-random stream 
(colloquially called a keystream, but I think would be better called an 
r-stream) onto the plaintext. Everything from Lorenz to GCM works this way. 
This means that known plaintext means known keystream. That means that if you 
reuse the keystream, then there's a cipher break and it's independent of the 
cipher construction or key size. So they are very bad to use on jobs like 
encrypting disk blocks.
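The keystream-reuse break described above can be sketched in a few lines. This uses a toy SHA-256-counter keystream as an illustrative stand-in (it is not any particular cipher, just the generic stream construction): known plaintext for one message hands the attacker the keystream, which then decrypts anything else encrypted under the same key.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy PRNG-style keystream: SHA-256 over key || counter.
    # Illustration of the generic construction only -- not a vetted cipher.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"same key, reused twice"
p1 = b"ATTACK AT DAWN!!"
p2 = b"RETREAT AT DUSK!"
c1 = xor(p1, keystream(key, len(p1)))
c2 = xor(p2, keystream(key, len(p2)))

# Known plaintext p1 recovers the keystream, which decrypts c2 outright,
# no matter what cipher or key size produced the keystream.
recovered = xor(c2, xor(c1, p1))
print(recovered)  # b'RETREAT AT DUSK!'
```

Note that the attack never touches the keystream generator itself, which is why key reuse is fatal independent of the cipher.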

* Block ciphers need chaining modes to be effective, otherwise you can get a 
codebook built up. This is why ECB is suboptimal. Every chaining mode has its 
own plusses and minuses. CBC has weaknesses when you use it in a data stream, 
as opposed to a data block. The recent SSL attacks are attacks on the chaining 
mode more than on the cipher. Don't use CBC for a data stream. Counter mode 
turns a block cipher into a stream cipher and makes it good for streams, but 
then it gets all the drawbacks of stream ciphers. If you forget that counter 
mode is no longer a block cipher but a stream cipher, you can hurt yourself. 
But similarly, we've learned that CBC is tetchy when used in a data stream.
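The codebook problem, and how chaining hides it, can be sketched as follows. The "block cipher" here is a keyed hash truncated to 16 bytes, a hypothetical stand-in (it is not invertible, so it only illustrates the leakage, not decryption):

```python
import hashlib

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # Stand-in for a 16-byte block cipher (not invertible; leakage demo only).
    return hashlib.sha256(key + block).digest()[:16]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"k"
blocks = [b"SIXTEEN BYTE BLK", b"SIXTEEN BYTE BLK"]  # repeated plaintext block

# ECB: equal plaintext blocks encrypt to equal ciphertext blocks --
# an observer builds up a codebook of repeats.
ecb = [toy_block_encrypt(key, b) for b in blocks]
print(ecb[0] == ecb[1])  # True

# CBC: XORing each block with the previous ciphertext hides the repetition.
iv = b"\x00" * 16
cbc, prev = [], iv
for b in blocks:
    c = toy_block_encrypt(key, xor(b, prev))
    cbc.append(c)
    prev = c
print(cbc[0] == cbc[1])  # False
```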

CFB mode is kinda part stream cipher and part block cipher. It's CBC mode's 
poor relation for no good reason. There are many cases where a CBC weakness 
(particularly one that boils down to a padding attack) could be fixed by using 
CFB mode. People don't, though. There are plenty of places 
to use it -- but also look at the Katz-Schneier attack against OpenPGP, that 
was essentially an attack on CFB mode. Ironically, the easiest way to mitigate 
that attack is to compress your data before encrypting.

* Every cipher and system is going to have weak points. There are ones worth 
worrying about and ones not worth worrying about. There are even ones worth 
arguing over or even deciding that gentlepersons can disagree. There's a very 
old saying, "there ain't a lock that can't be picked" and it's true of crypto, 
too.

If you start hyperventilating about too many things, you *will* just throw your 
hands up in the air. Side channels are important. Pay attention to them. But if 
you start thinking too hard and expect perfect security, you won't do anything, 
and plaintext is always worse than ciphertext. That sounds obvious, but you 
would be surprised how hard it is for people to internalize that.

You can use PKCS#1 properly, if you know what you're doing. You can screw up 
GCM if you don't. (Personally, I don't like GCM. I think it's too tetchy. But 
I'm pretty blasé about PKCS#1, because I'm used to poring over it to make sure 
it's done right.)

* There are many crypto problems that good engineering can paper over. There 
are many that don't really show up in the real world. There are others that 
manifest themselves for whatever reason. Engineering is hard. Don't panic.

* There is a common thing that people do that I call "engineering from 
ignorance" as opposed to "engineering from knowledge." For example, if you jump 
from AES or RC4 because of what you know about it to a cipher that hasn't been 
analyzed, you are engineering from ignorance. You're jumping from the devil you 
know to the devil you don't know. People like to do that, especially ones who 
want to live in a perfect world where ciphers have no drawbacks and there's no 
friction.

> 
> It seems like we've been told for ages that RC4 is old and busted, and that 
> AES is the one-size-fits-all algorithm, and yet recent developments like 
> BEAST and Lucky 13 seem to be pushing us back into the arms of RC4 and away 
> from AES.

What do you mean "we"? 

RC4 got a bad rep because it has some weaknesses and because a lot of people 
didn't realize that you never send a stream cipher to do a block cipher's job. 
It has some other issues, like that its construction makes it hard to 
accelerate. For a cipher of its age, it's not bad, really, assuming 

Re: [cryptography] Which CA sells the most malware-signing certs?

2013-02-18 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Feb 18, 2013, at 7:07 AM, Peter Gutmann  wrote:

> I've just done a quick tally of the certs posted to
> http://www.ccssforum.org/malware-certificates.php, a.k.a. "Digital
> Certificates Used by Malware".  Looks like Verisign (and its sub-brand Thawte)
> are the malware-authors' CA of choice, selling more certs used to sign malware
> than all other CAs combined.  GeoTrust comes second, and everything below that
> is in the noise.  GoDaddy, the most popular CA, barely rates.  Other CAs
> who've sold their certs to malware authors include ACNLB, Alpha SSL (which
> isn't supposed to sell code-signing certificates at all as far as I can tell),
> Certum, CyberTrust, DigiCert, GeoTrust, GlobalSign, GoDaddy, Thawte,
> StarField, TrustCenter, VeriSign, and WoSign.  Everyone's favourite whipping-
> boy CAs CNNIC and TurkTrust don't feature at all.
> 
> Caveats: These are malware certs submitted by volunteers, so they're not a
> comprehensive sample.  The site tracks malware-signing certs and not criminal-
> website certs, for which the stats could be quite different.

Interesting, but I have a raised eyebrow.

As Andy Steingruebl pointed out, there are a lot of malware certs that are 
stolen, so this data needs to be normalized against market share. Similarly 
relevant would be the CAs with significantly fewer certs there than market 
share would indicate. My former employer, Entrust, has zero certs in that 
database. What does that mean? Anything?

Why pick on the CAs at all? Frankly, the real problem with signed malware is 
that the *platforms* have the policy that equates a signature with reputation. 
That's the thing that to me is mind-bogglingly daft. It's the equivalent of the 
TSA wanting a government issued ID, because as we all know, terrorists can't 
get ID.

If you separate signatures from reputation, then anti-malware scanners can 
detect malware by a database of known malware signatures, and then infer 
upwards from a piece of malware to a key owned by or suborned by a malware 
author. They could conveniently kill malware by code signature or signing cert, 
as appropriate. They could even go beyond malware to disable things like known 
buggy or exploitable versions of software. I don't see why they aren't doing 
that now. They don't even need the platform makers to play along.

An alliance of the platforms and the anti-malware people would make it 
unnecessary to even have a CA-issued code signing cert.

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFRIrpGsTedWZOD3gYRAs9gAKDtpTwIOjAIRCxfhcDubT2i/4whXACg6BHa
Mrh87nc4QUybQUCxAbLX1/Y=
=kgfC
-END PGP SIGNATURE-


Re: [cryptography] "Meet the groundbreaking new encryption app set to revolutionize privacy..."

2013-02-08 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

I am separating this from my previous as I went into a rant.

As we were designing Silent Text, we talked to a lot of people about what they 
needed. I don't remember who told me this anecdote, but this person went over 
to a colleague's office after they'd been texting to just talk. They walked 
into the colleague's office and noticed their phone open with a conversation 
plainly visible with someone else. A third party who was their mutual colleague 
was texting about that meeting.

In short: Alice goes to Bob's office for a meeting and sees texts from Charlie 
about that meeting, including comments about Alice.

There wasn't anything untoward about the texting. No insults about Alice or 
anything, but there was an obvious privacy loss here. What if it *had* 
included an intemperate comment about our Alice? Alice said nothing about it to 
Bob, but I got an earful. That earful included the opinion that the threat of 
accidental disclosure of messages within a group of people is greater than 
either the messages "being plucked out of the air" or seizure and forensic 
groveling over the device. Alice's opinion was that when people have a secure 
communications channel, they loosen up and say things that are more dramatic 
than they would be otherwise. It's not that they're more honest, they're less 
honest. They're exaggerated to the point of hyperbolic at times. Alice said 
that she knew that she'd texted some things to Bob that she really wouldn't 
want the person she'd said them about to see them. They were said quickly, in 
frustration, and so on. It's not that they'd be taken out of context, it's 
 that they'd be taken *in* context.

It's interesting what underlies the story: Alice suddenly saw Bob not as an ally 
in snark, but a threat -- the sort of person who leaves their phone unlocked on 
their desk. Bob, of course, would say something like: if the texts had been 
potentially offensive, he'd have locked his phone. This explanation would only 
convince Alice that Bob is *really* not to be trusted with snark.

This is incredibly perceptive, that the greatest security threat is not the 
threat from outside, it's the threat from inside. It is exactly Douglas Adams's 
point about the babelfish: by removing barriers to communication, it 
created more and bloodier wars than anything else.

That's where "Burn Notice" came from. It's a safety net so that when Charlie 
texts Bob, "I'm tired of Alice always..." it goes away.

What I find amusing is the reaction to it all around. There's a huge 
manic-depressive, bimodal reaction. Lots of people get ahold of this and 
they're like girls who've gotten ahold of makeup for the first time. ZOMG! You 
mean my eyelids can be PURPLE and SPARKLY? This is the same thing that happens 
when people discover font libraries or text-to-speech systems. For a couple of 
days that someone gets the new app, there's nothing but text messages that are 
self-destructing, purple, sparkly eyelids with font-laden Tourette's Syndrome 
with the Mission Impossible theme song playing in the background. (Note, if you 
are using Silent Text, you can't actually make the text purple, nor sparkly, 
nor change fonts. You need to put all of that in a PDF or an animated GIF -- 
and you will. This is a metaphor, not a requirements document.)

The next thing that happens is that they are so impressed with some 
particularly inspired bit of self-destructing childishness that they take a 
screen shot. As they gaze at the screen shot, or sometimes just as they take 
the screen shot, light dawns. Oh. You mean... Oh. Then the depressive phase 
kicks in.

Back in the dark ages, PGP had the "For Your Eyes Only" feature. This is pretty 
much the ancestor of Burn Notice. Simultaneously useful and worthless. It's 
useful because it signals to your partner that this is not only secret but 
sensitive and does something to stop accidental disclosure. It is utterly 
ineffective against a hostile partner for many of the same reasons. We did all 
sorts of silly things with FYEO that included an anti-TEMPEST/Van Eck font, and 
other things. Silent Text actually has an FYEO feature that isn't exposed, 
thank heavens.

I mention all of that because once you're in the depressive phase, it's easy to 
go down the same rathole we did with FYEO. I spent time researching if you can 
prevent screen shots on iOS (you can't). I did this while telling people that 
it was dumb because I can take a picture of my iPhone with my iPad. I held up 
my phone to video chat and said, "Here, see this? This is what you can do!"

Sanity prevailed, but I think that fifteen years of FYEO helped a lot. When you 
stare into self-destructing messages, trying to figure out how to make them really 
go away flawlessly, they stare back. You will end up trying to figure out how 
to do a destructive two-phase commit, what class libraries need to be patched 
so those that non-mutable strings inherit 

Re: [cryptography] "Meet the groundbreaking new encryption app set to revolutionize privacy..."

2013-02-08 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Thanks for your comments, Ian. I think they're spot on.

At the time that the so-called Arab Spring was going on, I was invited to a 
confab where there were a bunch of activists and it's always interesting to 
talk to people who are on the ground. One of the things that struck me was 
their commentary on how we can help them.

A thing that struck me was one person who said, "Don't patronize us. We know 
what we're doing, we're the ones risking our lives." Actually, I lied. That 
person said, "don't fucking patronize us" so as to make the point stronger. One 
example this person gave was that they talked to people providing some social 
meet-up service and they wanted that service to use SSL. They got a lecture how 
SSL was flawed and that's why they weren't doing it. In my opinion, this was 
just an excuse -- they didn't want to do SSL for whatever reason (very likely 
just the cost and annoyance of the certs), and the imperfection was an excuse. 
The activists saw it as being patronizing and were very, very angry. They had 
people using this service, and it would be safer with SSL. Period.

This resonates with me because of a number of my own peeves. I have called this 
the "security cliff" at times. The gist is that it's a long way from no 
security to the top -- what we'd all agree on as adequate security. The cliff 
is the attitude that you can't stop in the middle. If you're not going to go 
all the way to the top, then you might as well not bother. So people don't 
bother.

This effect is also the same thing as the best being the enemy of the good, and 
so on. We're all guilty of it. It's one of my major peeves about security, and 
I sometimes fall into the trap of effectively arguing against security because 
something isn't perfect. Every one of us has at one time said that some 
imperfect security is worse than nothing because it might lull people into 
thinking it's perfect -- or something like that. It's a great rhetorical 
flourish when one is arguing against some bit of snake oil or cargo-cult 
security. Those things really exist and we have to argue against them. However, 
this is precisely being patronizing to the people who really use them to 
protect themselves.

Note how post-Diginotar, no one is arguing any more for SSL Everywhere. Nothing 
helps the surveillance state more than blunting security everywhere.

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFRFVFhsTedWZOD3gYRAjX5AKCw+SBcR1TDlDuPorgri2makt30wACgs3iI
2f+SwEqjbAVyPhf9SH67Aa8=
=tB7/
-END PGP SIGNATURE-


Re: [cryptography] "Meet the groundbreaking new encryption app set to revolutionize privacy..."

2013-02-06 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Feb 6, 2013, at 3:35 PM, Jeffrey Walton wrote:

> On Wed, Feb 6, 2013 at 7:17 AM, Moti  wrote:
>> Interesting read.
>> Mostly because the people behind this project.
>> http://www.slate.com/articles/technology/future_tense/2013/02/silent_circle_s_latest_app_democratizes_encryption_governments_won_t_be.html
> 
> No offense to folks like Mr. Zimmermann, but I'm very suspect of his
> claims. I still remember the antithesis of the claims reported at
> http://www.wired.com/threatlevel/2007/11/encrypted-e-mai/.
> 
> I'm also suspect of "... the sender of the file can set it [the
> program?] on a timer so that it will automatically “burn” - deleting
> it [encrypted file] from both devices after a set period of, say,
> seven minutes." Apple does not allow arbitrary background processing -
> it's usually limited to about 20 minutes. So the process probably won't
> run on schedule or it will likely be prematurely terminated. In
> addition, Flash Drives and SSDs are notoriously difficult to wipe an
> unencrypted secret.
> 
> Perhaps a properly scoped PenTest with published results would allay my
> suspicions. It would be really bad if people died: "... a handful of
> human rights reporters in Afghanistan, Jordan, and South Sudan have
> tried Silent Text’s data transfer capability out, using it to send
> photos, voice recordings, videos, and PDFs securely."

No offense is taken. You don't even need a pen test. I'll tell you how it works.

There's no magic there. Every message that we send has metadata on it that is a 
timeout. The timer starts when you get the message. So if I send you a seven 
minute timeout while you're on an airplane, the seven minutes starts when you 
receive the message.

And you are correct, the iOS app model doesn't allow background tasks, so if 
you switch away from the app for an hour, the delete doesn't happen until you 
switch back to the app. Until Apple lets us do something in the background, 
we're stuck with that limitation. It's that simple. We hope to do better on 
Android. And if someone from Apple happens to be listening in, we'd love to be 
able to schedule some deletions.
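The timeout semantics described above can be sketched as follows. This is a hypothetical model, not Silent Text's implementation: the names (`BurnMessage`, `burn_after_s`, `deliver`) are invented for illustration. The key point is that the burn duration travels as sender-set metadata, but the clock starts at receipt, so time spent in transit (say, on an airplane) doesn't count.

```python
import time

class BurnMessage:
    """Hypothetical sketch of a message with a sender-set burn timeout."""

    def __init__(self, body: bytes, burn_after_s: float):
        self.body = body
        self.burn_after_s = burn_after_s  # metadata set by the sender
        self.received_at = None

    def deliver(self):
        self.received_at = time.monotonic()  # timer starts on receipt

    def is_burned(self, now: float = None) -> bool:
        if self.received_at is None:
            return False  # still in transit: the timer hasn't started
        if now is None:
            now = time.monotonic()
        return now - self.received_at >= self.burn_after_s

m = BurnMessage(b"I'm tired of Alice always...", burn_after_s=7 * 60)
assert not m.is_burned()                         # in transit: no countdown
m.deliver()
assert not m.is_burned()                         # just arrived
assert m.is_burned(now=m.received_at + 7 * 60)   # seven minutes later: gone
```

The deferred-deletion caveat in the message above maps onto this model naturally: `is_burned` can be true long before the app gets scheduled to actually delete the body.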

Deleting the things, however, is trivial. This is a place that iOS shines. 
Every file is encrypted with a unique key and if you delete the file, it is 
cryptographically erased. You're correct that unencrypted secrets *are* 
notoriously difficult to wipe from flash. Fortunately for us, all the flash on 
iOS is encrypted and the crypto management is easy to use.
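The cryptographic-erase idea can be sketched in miniature. This is an illustrative model, not iOS's actual data-protection machinery: each "file" gets its own random key, the medium holds only ciphertext, and destroying the key *is* the erase, even if the underlying cells are never overwritten. The toy keystream here is a SHA-256 counter construction standing in for a real cipher.

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream (SHA-256 over key || counter); stand-in for a real cipher.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

file_keys = {}  # per-file keys (kept separate from the medium)
flash = {}      # the "flash" holds only ciphertext

def write_file(name: str, data: bytes):
    file_keys[name] = secrets.token_bytes(32)  # unique key per file
    flash[name] = xor(data, keystream(file_keys[name], len(data)))

def read_file(name: str) -> bytes:
    return xor(flash[name], keystream(file_keys[name], len(flash[name])))

def crypto_erase(name: str):
    # Destroying the key is the erase; the ciphertext left on flash is
    # noise, even though the cells were never overwritten.
    del file_keys[name]

write_file("note.txt", b"burn after reading")
print(read_file("note.txt"))  # b'burn after reading'
crypto_erase("note.txt")
# flash["note.txt"] still exists, but without the key it is unrecoverable.
```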

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: windows-1252

wj8DBQFRE1VKsTedWZOD3gYRAvfHAJ0dd9tSABRZkJxtdM4QbcI+d/jQqACgnPN7
nZ0rsFPcGCU9KNQEqSu70HU=
=nsyj
-END PGP SIGNATURE-


Re: [cryptography] yet another certificate MITM attack

2013-01-12 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Jan 12, 2013, at 1:27 AM, ianG wrote:

> Oh, I see.  So basically they are breaking the implied promise of the https 
> component of the URL.
> 
> In words, if one sticks https at the front of the URL, we are instructing the 
> browser as our agent to connect securely with the server using SSL, and to 
> check the certs are in sync.
> 
> The browser is deciding it has a better idea, and is redirecting that URL to 
> a cloud server somewhere.
> 
> (I'm still just trying to understand the model.  Yes, I'm surprised, I had 
> never previously heard of this.)

I suppose you can look at it as "breaking the implied promise." You can also 
look at it as a service.

Many of these systems work in an environment where connectivity is very 
expensive. In such an environment, saving money by having someone filter your 
HTTP comes with the cost that you have to trust them not to do bad things with 
your data.

But if you get into a cab, you're trusting them not to drive you into oncoming 
traffic. If that threat bothers you, don't take a cab. Every time you eat in a 
restaurant, you're trusting them to have reasonable food safety practices and 
not spit on your food. If that bothers you, don't do that.

> 
> 
> 
>> That can be converted pictures, edits to the HTML proper, and so on.
>> 
>> The security characteristics are a mixed bag. They can send smaller 
>> pictures, scan for malware, but obviously they can't process your SSL 
>> connections. So they send the URL to the cloud server, make the SSL 
>> connection, and then send you the optimized page over SSL.
> 
> One could interpret the browser as being a combined service between the 
> client on the phone, and the cloud support services, sure.
> 
> I think this interpretation would be unusual to any ordinary user.  At a 
> contractual level, it would also need to be agreed by both ends.  We can 
> easily ensure the end-users' agreement by means of the phone agreement, but 
> it is less easy to imply the banks' agreement.

In some parts of the world and under some conditions, it's *usual*. The network 
is bad and expensive. It's really easy for us rich Westerners who can afford 
data roaming plans and travel SIMs to go into high dudgeon over it. I share 
your disdain, but my disdain is similar to my disdain for payday check cashing 
places etc. I don't approve. I understand, but I don't approve.

> 
> And, if a security case were to result in a bank being held for damages, it 
> could easily expand to Nokia.  Given the complexity of modern day online 
> banking sites (that's a kind description) I can't see how they could be agile 
> enough to avoid making mistakes.

Sure. Nokia is taking a risk, as is Opera (who supply that browser). That risk 
is mitigated by a click-through license that no one reads, but heck, someday 
some judge is going to hack up a hairball on click-throughs.

> 
> Yes, ok, it's not an attack if there isn't an attacker.  Or more generally, 
> is it an attack when the attack is done by self?  "We have met the enemy, and 
> he is us."

Exactly, and the answer is no. It's a service voluntarily offered and 
subscribed to (for some suitable definition of the word "voluntary").

> 
> So more properly, it might be a breach-of-contract issue, where the contract 
> to provide a browser that does the 'right thing' has been breached (in the 
> view of the outraged).
> 
> Nokia will argue that their contract is clearly expressed, they can do this 
> and they claim so in their contract.  OK.
> 
> Question remains -- what to make of a vendor that does tricksy things with 
> the implied secure browsing contract?

Well, that's like a short-term lender who does something tricksy with the 
interest rate. There's a big smear from accepted to dodgy to unfair to evil. 

> 
> If Nokia can do this, can the other vendors?  Why can't Firefox and Chrome 
> start clouding the https connection?

They could, sure. As I pointed out, Google Reader is almost the same sort of 
thing, but is an RSS reader. I have quibbles with them, and my quibbles are 
actually the opposite. Amazon Silk does pretty much the same thing as the 
Nokia/Opera thing. A lot of pixels have been spilt over it. I don't use Silk, 
but I don't think Amazon are evil for offering it. I don't think the people who 
use it are either stupid or dupes. It's just not my thing.

(The quibble I have is over partial security. Lots of people label partial 
security as being worse than no security. I believe that partial security is 
always better than no security.)

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFQ8b6MsTedWZOD3gYRAvfNAKDU1sQjOqV+8SRzHWzg1sBYbGZ+tACgoFhi
78lRhcT0rG+0afgTRktaII4=
=TPRD
-END PGP SIGNATURE-

Re: [cryptography] yet another certificate MITM attack

2013-01-11 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Jan 10, 2013, at 4:47 PM, Peter Gutmann wrote:

> Jon Callas  writes:
> 
>> Others have said pretty much the same in this thread; this isn't an MITM 
>> attack, it's a proxy browsing service.
> 
> Exactly.  Cellular providers have been doing this for ages, it's hardly news.
> 
> (Well, OK, given how surprised people seem to be, perhaps it should be news 
> in 
> order to make it more widely known :-).

Yes. I wouldn't use such a service, and it's why I installed Firefox and Chrome 
on my Fire. I'm not going use Silk. I admire the technology, but it's not for 
me.

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFQ8GtBsTedWZOD3gYRArZAAKDMS6s0vPmZ2Gfg3UHzurfDRoAecACfVmsz
BDhSNZIO/eXyi5wdJxOhFRw=
=534L
-END PGP SIGNATURE-


Re: [cryptography] yet another certificate MITM attack

2013-01-10 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Others have said pretty much the same in this thread; this isn't an MITM 
attack, it's a proxy browsing service.

There are a number of "optimized" browsers around. Opera Mini/Mobile, Amazon 
Silk for the Kindle Fire, and likely others. Lots of old "WAP" proxies did 
pretty much the same thing. The Nokia one is essentially Opera.

These optimized browsers take your URL, process it on their server and then 
send you back an "optimized" page. That can be converted pictures, edits to the 
HTML proper, and so on.

The security characteristics are a mixed bag. They can send smaller pictures, 
scan for malware, but obviously they can't process your SSL connections. So 
they send the URL to the cloud server, make the SSL connection, and then send 
you the optimized page over SSL.

Some of these browsers let you turn off the "optimizations" for SSL pages. The 
Amazon Silk browser does. 

You can find information about Opera at:



Here are articles with various concerns about Silk:





They're not doing certificate hinkiness. They are straightforward cloud 
services, or perhaps more formally proxy services. Heck, Google Reader is more 
or less the same thing, itself, albeit as an RSS reader than a web browser.

If one wants to get upset about them, there's plenty to grumble over. There's 
the explicit security concerns, concerns about tracking, concerns about 
misrepresentation to the users about what's really going on, and so on. The 
meta concern that smart people like us are even discussing it is also a 
security concern.

But they provide services that some people find valuable. I don't use them, but 
I wouldn't even call them a MITM, myself. When we say "MITM" we're eliding the 
word "attack." It's not an attack.

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: windows-1252

wj8DBQFQ71XksTedWZOD3gYRAoShAKDyXR3LPirRscaxA1RDTPQFrjl/jgCgpiMF
TMyJCoC77oZ9uaaWWomVuEg=
=f2UH
-END PGP SIGNATURE-


Re: [cryptography] How much does it cost to start a root CA ?

2013-01-05 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

I'm really glad you asked this question. It gives me to tell a story I've 
wanted to tell for some time. I know the answer to your question because I've 
done it.

Some years ago, PGP Corporation toyed off and on with the idea of becoming a 
CA. We looked at ways to get there through the side door, like buying the 
assets of some company that was going out of business, and managed to be too 
little, too late.

So after a lot of dithering, we started a project to create a CA from scratch. 
I led the project and it had a budget of US$250K. I code-named the project 
Casablanca. Partially because Casablanca begins and ends with a CA, but mostly 
because I really like the phrase, "I am shocked, shocked that PGP is issuing 
X.509 certificates." 

The process for setting up a CA is straightforward and exacting. You have to 
have physical and logical controls on things, dual-authentication and 
separation of duties on just about everything, but it's straightforward. You 
have to write a lot of documents, create a lot of procedures, and have all of 
that audited. You have to get audited regularly and often as you start out, and 
then the audits taper off after you show that you're running a tight ship. 

The main thing you're looking to do is to pass the WebTrust audit and 
associated practices that the platforms will require you to do. Microsoft has 
the most mature process. They have a set of rules and guidelines. If you follow 
them, you're in. One of those, by the way, is that you have to be a retail CA, 
as opposed to an internal one or a government one. It's best to work with 
Microsoft first, and once you're in their root program move to the others. They 
are fair, disciplined, and helpful. Most of all, once you've gone through all 
that, it's easier to get into the other important root stores.

If you go into this business with the attitude that you're doing a job that 
protects the Internet at large, defends the public trust, and so on, then 
you'll find the requirements completely reasonable and easy to do. 

Now that $250K that I spent got an offline root CA and an intermediate online 
CA. The intermediate was not capable of supporting workloads that would make 
you a major business. You need a data center after that, that supports the 
workloads that your business requires. But of course, you can grow that with 
your customer workload, and you can buy the datacenter space you need.

The costs got split out to about 40% hardware, etc. and 60% people. It does not 
include the people costs of the internal PGP personnel who worked on it. I 
raided part time help from around the company. It took about fourteen months 
from start to end.

PGP bought an existing company, TrustCenter. TrustCenter was the remaining end 
of GeoTrust (spun out of Equifax) that Verisign did not buy. The plan was that the 
PGP-branded Casablanca roots would be put into the TrustCenter machinery and 
datacenters, and then you have a major CA. That got interrupted by Symantec 
buying PGP and then buying Verisign. Casablanca is now rolled up into their 
Norton CA business along with Verisign and Thawte, GeoTrust, etc.

There are rumors, which you've read here about how there are lots of 
underhanded obstacles in the way of becoming a CA. My experience is that the 
only underhanded part of the industry is that no one in it dispels the rumors 
that there are underhanded obstacles in your path. This is pretty much the 
first time I have, so I suppose I'm as guilty as anyone else.

Furthermore, there are lots of overblown rumors about the CA/Browser Forum. You 
don't have to be a Forum member to be a CA. If you plan to issue EV 
certificates, you have to follow the EV guidelines which are produced by the 
CA/Browser Forum, but that is because the platforms won't put your EV root in 
their stores unless you do. You don't have to be a member of the Forum to be a 
CA. As a matter of fact, there are a large number of CAs that are not members.

The situation is similar to Internet protocols and the IETF. If you want to 
make routers, you don't have to be a member of the IETF. You *will* have to 
follow IETF documents, but you don't have to participate. Obviously, there are 
advantages in participating, but there are also costs.

I was involved in the CA/Browser Forum for a few years, first with Apple (on 
the browser end) and then with Entrust (on the CA end). I heard the stories 
about how it's a cartel, etc. At PGP, we had no plans to be members because we 
had no interest in being part of a cartel. It was a huge disappointment to be 
there and find out that it isn't a cartel at all, it's a volunteer organization 
that handles lots of the rough edges of web PKI with the same combination of 
spurts of efficiency and spurts of fecklessness that you find in just about any 
organization that tries to get a bunch of organizations with different goals to 
work together.

Presently, the Forum is reorganizi

Re: [cryptography] Tigerspike claims world first with Karacell for mobile security

2012-12-27 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Dec 27, 2012, at 13:46, Jeffrey Walton  wrote:

> On Thu, Dec 27, 2012 at 1:35 PM, Ben Laurie  wrote:
>> On Thu, Dec 27, 2012 at 9:18 AM, Russell Leidich  wrote:
>>> there are plenty of Googleable papers showing the Counter Mode is weak
>>> relative to (conventional) cipher-block-chaining (CBC) AES.
>> 
>> Really? For example?
> I believe CTR mode is especially sensitive to key/nonce reuse. But you
> don't see the problem until you look at messages over time and space.
> Confer: CTR mode uses a predictable counter, while CBC mode uses a
> random (not unique) IV.
> 
> I could be wrong since I'm working from memory (it sucks getting old).
> I'd need to get into the literature to give you anything useful
> (citable).

Not really, and kinda sorta at the same time. 

Counter mode is a stream cipher. The general construction of a stream cipher 
where you generate a key stream and stamp the key stream onto plaintext (XOR 
being the usual stamping function) is vulnerable to key reuse. If you reuse a 
key, then you can use known plaintext to extract the key stream and then 
decrypt other messages. Even one time pads are vulnerable to this (hence the 
necessity to use them only once).
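The keystream-recovery attack described above is easy to demonstrate. In this 
sketch the keystream is just random bytes standing in for CTR output under a 
reused key/nonce (the messages and names are mine, purely for illustration):

```python
import os

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# A reused keystream stands in for any stream cipher (CTR included)
# encrypting two messages under the same key and nonce.
keystream = os.urandom(32)

p1 = b"attack at dawn, hold the east"
p2 = b"retreat at dusk via the river"

c1 = xor(p1, keystream)
c2 = xor(p2, keystream)

# Attacker knows p1 (known plaintext) and sees both ciphertexts.
recovered_stream = xor(c1, p1)       # the keystream falls right out
recovered_p2 = xor(c2, recovered_stream)
assert recovered_p2 == p2
```

The same XOR relation (c1 XOR c2 = p1 XOR p2) is what makes even partial 
known plaintext so damaging: every known plaintext byte exposes a keystream 
byte at the same position.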

Counter mode is not more vulnerable than any other stream cipher to the fatal 
error of key reuse. So the major quibble is only with the word "especially". 
Don't reuse keys. But on the other hand, if you follow the simple procedure of 
getting your keys from your RNG, it isn't worth worrying about any more. 

If you assume a decent block cipher (and no key reuse), then you could always 
start counter mode from a fixed point (like zero). Breaking counter mode is 
simply a known plaintext attack on the cipher, and a decent cipher ought to be 
resilient against known plaintext and an unknown key. If you make the counter 
be a nonce, it helps things some. Less if it is a public parameter, more if it 
is private, but if it really matters, you need a new block cipher -- or stop 
reusing your keys. It is foolish either to use a nonce as a way to cover for 
errors in the cipher or key selection. That doesn't mean nonces are bad, any 
more than icing is bad on a cake. 

CBC mode has the improvement over counter mode that it uses the plaintext as an 
input variable into the encryption (as does CFB, just differently). You can 
think of CBC as XORing the plaintext onto the counter (or nonce) that is the 
Initialization Vector and then encrypting that. CFB encrypts the nonce like CTR 
and XORs onto plaintext to yield cipher text. Each of them then uses the cipher 
text as the new nonce/counter/whatever.

In either of them, the IV (nonce) ought to be arbitrary, which really only 
means that it should be random, but it isn't a secret. Entangling the plaintext 
into the operation makes key reuse mildly less awful than with a straight 
stream cipher, but it is still a fatal error. If you ever have any doubt about 
how horrible it is to reuse a key, go read or re-read the Venona papers and see 
what a motivated attacker can do to exploit key reuse. 

On the other hand, good nonce management is good hygiene and stream ciphers are 
much more delicate than block ciphers. But bad nonce management, reusing keys, 
and known plaintext are just asking for trouble no matter what. CBC mode has 
its own special vulnerabilities, too. The BEAST attack on SSL exploited those, 
and the "fix" against that is to use RC4 (a stream cipher). 

Anyway, key reuse is just bad. Counter mode with a naive counter is maximally 
bad, but CBC with a naive IV isn't a whole lot better. CBC mode uses any 
unknownness in the plaintext to maximal effect. The bottom line is never reuse 
a key (sorry for keeping repeating this), nonces are good, and there are 
reasons we tend to use block ciphers over stream ciphers these days. And of 
course a skilled programmer can screw up the best cryptography.

Jon
-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFQ3M+osTedWZOD3gYRAl1jAKDmtHpIRptLV2FuMQ2knEKQAfhALACfVgNx
MHyxs+zwONPxgfqF8c9MdGs=
=hehH
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Tigerspike claims world first with Karacell for mobile security

2012-12-26 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

I took a look at it. Amusing. I didn't spend a lot of time on it. Probably not 
more than twice what it took me to write this.

It has an obvious problem with known plaintext. You can work backward from 
known plaintext to get a piece of their "tumbler" and since the tumbler is just 
a big bitstring, work from there to pull out the whole thing.

The encrypted Karacell file format has 64 bits that must decrypt to zero. Since 
encryption is an XOR onto a pseudo-one-time-pad, this leaks 64 bits of the 
tumbler. Similarly, the "checksum" at the end is a bunch of hash blocks of 
their special hash all XORed together. This doesn't work against malicious 
modification; you can cut-and-paste through XOR, etc.
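The failure mode of an XOR-of-blocks "checksum" comes from the linearity of 
XOR; here is a generic illustration (my own toy construction, not Karacell's 
actual format) showing that reordering blocks leaves the checksum unchanged:

```python
import os

def xor_checksum(blocks):
    # XOR all blocks together -- a linear function of the input.
    acc = bytes(len(blocks[0]))
    for b in blocks:
        acc = bytes(x ^ y for x, y in zip(acc, b))
    return acc

blocks = [os.urandom(8) for _ in range(4)]

# Malicious cut-and-paste: swap the first two blocks.
tampered = [blocks[1], blocks[0]] + blocks[2:]

# The "integrity check" can't tell the difference.
assert xor_checksum(tampered) == xor_checksum(blocks)
```

Any attacker edit that preserves the XOR sum (swaps, or XORing the same 
delta into two blocks) passes the check, which is why a linear checksum is 
no defense against malicious modification.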

There are obvious vulnerabilities to linear and differential cryptanalysis. It 
is a lot of XORing on large-ish fixed longterm secrets with only bit-rotating 
through the secrets, and between the vulnerabilities of known plaintext as well 
as the leaks in it, I don't see a lot of long-term strength. I bet that you can 
use known structure of plaintext (like that it's ASCII/UTF8, let alone things 
like known headers on XML files) to start prying bits out of the tumblers and 
you just work backwards. 

But beyond that, it isn't even particularly fast. Since it needs a lot of bit 
extraction and rotations, I doubt it would be as fast as AES on a processor 
with AES-NI instructions. The whole thing is based on doing 16-bit calculations 
and some bit sliding; I don't expect it to be as fast as RC4 or some of the 
fast eSTREAM ciphers.

Obviously, I could be missing something, but there are other errors of art that 
lead me to think there isn't a lot here. For example, if your basic encryption 
system is to take a one-time-pad and try to expand that out to more uses, zero 
constants are errors of art. You should know better. There are similar errors 
like easily deducible parameters that give more known plaintext. The author 
discusses using a text string directly as a key, which is very bad with his 
expansion system. He invented his own "message digest" functions, and they look 
like complete linear functions to me. They're in uncommented C that's light on 
indenting and whitespace. Confirmation bias might be making me miss something, 
but it's not like he made it easy for me.

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFQ225dsTedWZOD3gYRArauAKC5vrbr9HKPd0a0NoXL+eVQq428uQCgiiFE
GFlyVpZAY6w80CBqxXl2qHs=
=gncJ
-END PGP SIGNATURE-


Re: [cryptography] Why using asymmetric crypto like symmetric crypto isn't secure

2012-11-04 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Nov 3, 2012, at 7:03 PM, Peter Gutmann  wrote:

> Jon Callas  writes:
> 
>> Which immediately prompts the question of "what if it's long or secret?" [1]
>> This attack doesn't work on that.
> 
> The "asymmetric-as-symmetric" was proposed about a decade ago as a means of
> protecting against new factorisation attacks, and was deployed as a commercial
> product.  I don't recall them keeping the exponent secret because there wasn't
> any need to... until now that is.  So I think Taral's comment about not using
> crypto in novel ways is quite apropos here, the asymm-as-sym concept only
> protected you against the emergence of novel factorisation attacks (or the use
> of standard factorisation attacks on too-short keys) as long as no-one
> bothered trying to attack the public-key-hiding itself.

Point taken. I'm being too grumpy. 

I think this is a brilliant result because it gives us a "see, see" reference 
to give to people.

I'm big on sneering at proofs of security because they often do not relate to 
real security in the real world in ways that upset me (a guy whose degree is in 
mathematical logic) to my core. If you want the same sort of rigor that math 
has, security is useless.

On the other hand, and Hal Finney drove this home to me many times, they do 
tell you what sort of things you can ignore. 

This one is great because of the way it slaps intuition around.

> 
>> If you believe that the only attack against RSA is factoring the modulus,
>> then you can be seduced into thinking that hiding the modulus makes the
>> attacker's job harder. 
> 
> Yup, and that was the flaw in the reasoning behind the keep-the-public-key-
> secret system.  So this a nice textbook illustration of why not to use crypto
> in novel ways based purely on intuition.

There are all sorts of things people do based on an intuition. Hell, I've done 
them. Sometimes they just present themselves. If I had a protocol that didn't 
expose public keys (suppose they're all wrapped in a secure transfer), I might 
point out that hey, this system has hidden RSA keys. But this points out that 
unless there is a lot of extra work you do, you didn't do squat. It also 
suggests that the conservative engineering approach, which is to say that 
unless you can characterize added security it's just fluff, has new backing in 
fact.

Jon



-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFQluTIsTedWZOD3gYRAvvGAKDAGkbALD3jqLq8kmG7VIXWtJ2sWACfWOwG
DFFKn3LjBEqvpwv4lqHYn04=
=G0xh
-END PGP SIGNATURE-


Re: [cryptography] Why using asymmetric crypto like symmetric crypto isn't secure

2012-11-03 Thread Jon Callas
> In the past there have been a few proposals to use asymmetric cryptosystems,
> typically RSA, like symmetric ones by keeping the public key secret, the idea
> behind this being that if the public key isn't known then there isn't anything
> for an attacker to factor or otherwise attack.  Turns out that doing this
> isn't secure:
> 
>  http://eprint.iacr.org/2012/588
> 
>  Breaking Public Keys - How to Determine an Unknown RSA Public Modulus
>  Hans-Joachim Knobloch
> 
>  [...] We show that if the RSA cryptosystem is used in such a symmetric
>  application, it is possible to determine the public RSA modulus if the
>  public exponent is known and short, such as 3 or F4=65537, and two or more
>  plaintext/ciphertext (or, if RSA is used for signing, signed
>  value/signature) pairs are known.

Great paper, however, the conclusions here and in replies are not quite right. 
The paper itself says,

it is possible to determine the public RSA modulus if the public exponent is 
known and short, such as 3 or F4=65537, 


Which immediately prompts the question of "what if it's long or secret?" [1] 
This attack doesn't work on that.

What it tells you is that if for some strange reason, you are going to keep the 
public key secret, you need to make the exponent part of the secret. That's the 
real, real lesson here -- an RSA key has an exponent and a modulus and unless 
the exponent is secret, the key isn't secret. And of course secret doesn't mean 
the usual ones just put in a cabinet.

And for us logic weenies, he does not show that a secret public key is 
insecure. He shows that there is no added security for secret public keys where 
the exponent is known and short. Those keys are just as secure as they would be 
if they had known public keys (which could be not at all).

The danger is not using a public key algorithm in a novel way, it's using it in 
a novel way and thinking that your intuition is correct, without thinking 
through the consequences of your actions.

If you believe that the only attack against RSA is factoring the modulus, then 
you can be seduced into thinking that hiding the modulus makes the attacker's 
job harder. The brilliance of this paper is that it concisely shows that unless 
you take care in selecting an exponent, the modulus leaks easily. 
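The gcd trick behind that leak can be sketched with toy-sized primes (my own 
illustrative numbers; the paper also covers signatures and other details). 
Each plaintext/ciphertext pair gives a known multiple of N, and the gcd of 
two such multiples reveals N itself:

```python
import math

# Toy RSA parameters -- far too small for real use, chosen only so the
# arithmetic is visible. Only e is assumed known (and short).
p = 32416190071
q = 32416187567
N = p * q          # the "secret" public modulus
e = 3

m1, m2 = 123456789, 987654321
c1 = pow(m1, e, N)
c2 = pow(m2, e, N)

# Since c = m^e mod N, the value m^e - c is a multiple of N.
# The gcd of two such values is N times (at most) a small cofactor.
g = math.gcd(m1**e - c1, m2**e - c2)
assert g % N == 0
```

With e small, m^e is cheap to compute over the integers; with a long secret 
exponent the attacker can't even form these multiples, which is exactly the 
point made above.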

Obviously, a secret public key isn't *less* secure. (The reductio ad absurdum 
is left as an exercise for the reader.) It must be as secure or greater. But if 
it's greater, by how much and how would you know? If you can't answer that 
question, or at least handwave in the direction of an answer, you can't claim 
any added security.

If you don't have a lower bound on the improved security of that tweak, then 
you should consider it to be zero. This is why although it's still left open as 
to whether a truly secret public key adds security, we should assume there's no 
added security.

The engineering dope-slapping that needs to happen is over getting distracted. 
Security systems are designed to meet certain assumptions. Changing the 
assumptions changes the result. Public-key cryptosystems are designed in such a 
way that the public key is a public parameter. They are not designed to have 
added security when the public key is secret. This paper shows a case in which 
there is no added security, and as a matter of fact, the modulus leaks from the 
ciphertext.

If you want to make the public key secret, you have to do more work and there's 
no indication of how much added security there is -- it could be zero. No one 
has ever done a keygen with any work done into considering the care you need to 
make the exponent be a secret parameter. On the contrary, it's usually a 
quasi-constant.

All that added work could be put somewhere else, and as we all know there's 
plenty of ways to induce bugs by doing the extra work. Therefore, in the words 
of Elvis Costello, don't get cute. If you use reasonable parameters in 
off-the-shelf subsystems, you work just fine. Getting cute at best adds in some 
undefinable bit of good-feeling, which isn't the same thing as security.

Jon

[1] Operationally, long or secret will be long *and* secret because there are 
no commonly used long exponents, and all the common exponents are short. 
Phrased another way, the short exponents are easily iterated over.

PGP.sig
Description: PGP signature


Re: [cryptography] hashed passwords, iteration counts, and PBKDF2

2012-10-31 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Oct 31, 2012, at 1:58 PM, travis+ml-rbcryptogra...@subspacefield.org wrote:

> * PGP Signed by an unknown key
> 
> Thinking out loud;
> 
> One reason why PBKDF2 requires the original password is so that you don't 
> repeatedly
> hash the same thing, and end up a "short cycle", where e.g. hash(x) = x.  At 
> that
> point, repeated iterations don't do anything.
> 
> I just realized, you don't necessarily need to put the original password in; 
> you
> could just hash something else that varies to keep it out of a short cycle; 
> for
> example, the round number.
> 
> This would allow you to update an iteration count post-facto without knowing 
> the
> original password.  Would it break any security goals?

Almost certainly not. There aren't proofs of security, but I can wave my hand 
at some.

This basic technique is something that a number of modern (SHA-3 etc.) hash 
functions do. It's more or less what Skein does.

Consider what you're doing as creating a hash function with a compression 
function that is your base hash function (which is likely to actually be a 
keyed HMAC), and then you chain it with a counter that provides uniqueness per 
iteration.

Skein takes the Threefish tweakable cipher as its compression function and uses 
a counter and other stuff in the tweak to create the UBI chaining mode which 
has per-chunk uniqueness to get some security guarantees.

There's a handwave. If you are indeed using HMAC with a small quantity of 
smarts, you can almost certainly chain the HMAC proofs into a proof of security.

It's trivially no weaker than the base PRF/compression-function, which if it's 
an HMAC is not bad, security-wise. I can think of some ways to screw it up, but 
I think those imply a drastic weakness in either the underlying base hash 
function or HMAC itself. Even those can probably be papered over with a 
Luby-Rackoff argument that enough rounds covers all sins. If you're doing a few 
tens of thousands of rounds (which is just a good idea with PBKDF2), I'm sure 
that you can end up with a security floor that is much greater than the entropy 
in the password itself (which is going to have lots of suck -- you're lucky to 
*ever* get over 32 bits, and only an insane person would be much over 64, and 
even those are likely to be illusory).

In short, it sounds okay to me. I'm sure you can screw it up if you try, but it 
sounds okay to me.

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFQkaC1sTedWZOD3gYRAtSmAKCuQSeeeq2uwuVDx9S7T/6wQquW7QCeJwH0
Tox5gJds6vvt/PmIY7GwkbE=
=6f0G
-END PGP SIGNATURE-


Re: [cryptography] DKIM: Who cares?

2012-10-24 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

As someone who is one of the DKIM authors, I can but roll my eyes and shrug.

It's an interesting, intentional facet of DKIM that any given key being used 
only has to last as long as it takes the email to go from the sender's domain 
to the receiver's.

You could set things up so there's one key per message and take them down as 
the message is used. That's a lot of trouble, but you *could* do that.

However, RFC 4871 says on the subject:

3.3.3.  Key Sizes

   Selecting appropriate key sizes is a trade-off between cost,
   performance, and risk.  Since short RSA keys more easily succumb to
   off-line attacks, signers MUST use RSA keys of at least 1024 bits for
   long-lived keys.  Verifiers MUST be able to validate signatures with
   keys ranging from 512 bits to 2048 bits, and they MAY be able to
   validate signatures with larger keys.  Verifier policies may use the
   length of the signing key as one metric for determining whether a
   signature is acceptable.

   Factors that should influence the key size choice include the
   following:

   o  The practical constraint that large (e.g., 4096 bit) keys may not
  fit within a 512-byte DNS UDP response packet

   o  The security constraint that keys smaller than 1024 bits are
  subject to off-line attacks

   o  Larger keys impose higher CPU costs to verify and sign email

   o  Keys can be replaced on a regular basis, thus their lifetime can
  be relatively short

   o  The security goals of this specification are modest compared to
  typical goals of other systems that employ digital signatures

   See [RFC3766] for further discussion on selecting key sizes.

Note the weasel-words "long-lived." I think that the people caught out in this 
were risking things -- but let's also note that the length of exposure is the 
TTL of the DNS entries.

Jon



-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFQiL2lsTedWZOD3gYRAou1AJ0W4HQMn/pfT00nvQcJB+B8MqUVXQCdGL9R
PxLZSoy7Qeax8ABpvdTc214=
=phnF
-END PGP SIGNATURE-


Re: [cryptography] Client certificate crypto with a twist

2012-10-10 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Oct 10, 2012, at 6:52 AM, Jonathan Katz wrote:

> Looking at this just from the point of view of client-server authentication, 
> how is this any better than having the website generate a cryptographically 
> strong "password" at sign-up time, and then having the client store it in the 
> password cache of their browser?
> 
> Note that both solutions suffer from the same drawback: it becomes more 
> difficult for a user to log on from different computers.

An excellent point, Jonathan.

I also wonder why there has to be any certification at all?

Right now, web sites store a user name and a representation of a password. 
(Note that a password, a hash of a password, etc. are all representations of 
that password.)

Why not store a representation of a *key* (a hash is a representation of a key) 
and then prove possession of the key? It doesn't need to be certified. I can 
store that key on as many computers as needed via a keychain or something like 
it.

Of course, one could have that key be part of a certificate for the times that 
that is necessary. In the general case, it doesn't need to be certified though. 
If all I'm doing is creating an account on your server, you don't need to 
certify my key. You might want to certify my account in some way (like an email 
round trip, etc.) but why propagate that to the key?

Certification is a very nice hammer. It drives a lot of nails. I don't think 
it's needed here.

I'll also add that SSH authentication by key does what I'm describing. You 
attach a key to the account and then prove possession of the key. There are of 
course, many ways to do this that don't use the SSH mechanism, but SSH follows 
the general sketch.
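One way to realize "attach a key to the account and prove possession of it" 
with nothing but standard-library primitives (a sketch of the idea, not the 
SSH protocol; all names here are mine):

```python
import hmac
import hashlib
import secrets

# Sign-up: the client generates a high-entropy key and the server
# records it against the account. No certification involved.
account_key = secrets.token_bytes(32)
server_db = {"alice": account_key}

# Login: the server issues a fresh challenge; the client answers
# with an HMAC over it, proving possession without revealing the key.
challenge = secrets.token_bytes(16)
client_response = hmac.new(account_key, challenge, hashlib.sha256).digest()

expected = hmac.new(server_db["alice"], challenge, hashlib.sha256).digest()
assert hmac.compare_digest(client_response, expected)
```

An asymmetric variant (server stores only a public key and verifies a 
signature, as SSH does) avoids keeping a secret-equivalent on the server, at 
the cost of public-key operations per login.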

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFQdYZJsTedWZOD3gYRAtZoAKDninZlPwGSqBuXZyqwja9m+q5aIgCdH0jc
E2TRwEeFt83Iu0u9NmvF2VM=
=YiNK
-END PGP SIGNATURE-


Re: [cryptography] How to safely produce web pages from multiple sources?

2012-08-28 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Aug 28, 2012, at 6:33 PM, James A. Donald wrote:

> Ѕuppose your web page inсorporates some сontent from another url, a not 
> altogether trusted url.  Let us сall this other url Malloс.  You, the owner 
> of the website and the author of the main part of the web page are Bob, the 
> browser is being viewed by Carol, and you inсorporate сontent from Malloс 
> that you hope is innoсent, but may not be.
> 
> How does Bob make sure his web page сannot have its seсrets leaked, nor сan 
> the сontent that Bob intends to сontrol be сontrolled by Malloс, so that 
> Malloс сannot man-in-the-middle, сannot spy on, nor сhange, the сonversation 
> between Bob and Carol, сannot lead Carol to think Bob said something 
> different from that whiсh he intended to say, nor lead Bob to think that 
> Carol сliсked on something other than that whiсh she сliсked on?

In the abstraсt сase, you сan't.

You сan сanoniсalize Malloс into something that stops many, possibly all 
syntaсtiс attaсks. If you took HTML, for example, and turned all the brokets 
into spaсes, you'd stop any syntaсtiс HTML attaсks. But you've now produсed a 
new doсument that Carol might interpret inсorreсtly.

In many сases, a semantiс attaсk сould be сonstruсted by doing something like 
сreating an HTML сomment that onсe it had its сommentness stripped from it, 
would be meaningful to Carol.

This says nothing about other semantiс attaсks, too, like homographs. We ran 
into this thing with PGP and many ways that people сan play games, like the 
string "РGР". I leave sorting that out as an exerсise for the reader.

Okay, I have no patienсe with that sort of thing, myself. The string has two 
Cyrilliс "ER" сharaсters and one Latin "GEE." I played the same trick with a 
number of other characters in this message, globally replacing the Latin letter 
with the Cyrillic that looks a lot like it -- including in the quote of your 
text. I apologize to anyone who doesn't do Unicode well.

You can, however, solve many, many useful subsets of the general case. If you 
try to solve the general case, let me warn that there lies madness.

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: utf-8

wj8DBQFQPYKLsTedWZOD3gYRAmB2AKDtNcrN0nVtPAYxyNSjF8K63JCNSgCfWAAk
2b8uNngpo1Vc29PynzaJhg8=
=cJGM
-END PGP SIGNATURE-


Re: [cryptography] any reason PBKDF2 shouldn't be used for storing hashed passwords?

2012-08-15 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Aug 15, 2012, at 4:50 PM, travis+ml-rbcryptogra...@subspacefield.org wrote:

> * PGP Signed by an unknown key
> 
> Any reason PBKDF2 shouldn't be used for (storing) hashed passwords?
> 

My recommendation is that you should use it. It's even got a NIST document, now:

http://csrc.nist.gov/publications/nistpubs/800-132/nist-sp800-132.pdf

To be the most rigorous, use PBKDF2-HMAC-SHA[12]. It doesn't matter a lot which 
hash function you're using if you're doing the HMAC version. The major 
difference will be the number of iterations. SHA2 is slower than SHA1, so 
you'll use fewer iterations. SHA512 is faster on a 64-bit processor than 
SHA256, which puts a small wrench in things.

Use lots of iterations. Calibrate them against real time -- enough for 100ms or 
more, for example, rather than a fixed count. If you're worried, then add more 
iterations.
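The calibration advice can be sketched with Python's hashlib (the doubling 
search and the 100 ms target are illustrative choices, not a vetted 
benchmark):

```python
import hashlib
import os
import time

def calibrate_pbkdf2(target_seconds: float = 0.1, start: int = 10_000) -> int:
    # Double the iteration count until one derivation takes at least
    # ~100 ms of wall-clock time on this machine.
    salt = os.urandom(16)
    iters = start
    while True:
        t0 = time.perf_counter()
        hashlib.pbkdf2_hmac("sha256", b"calibration password", salt, iters)
        if time.perf_counter() - t0 >= target_seconds:
            return iters
        iters *= 2

iterations = calibrate_pbkdf2()

# Store salt, iteration count, and derived key together so the count
# can grow over time without breaking old records.
salt = os.urandom(16)
stored = (salt, iterations,
          hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, iterations))
```

Storing the iteration count alongside the salt is what lets you raise the 
count as hardware gets faster.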

Jon



-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFQLDuusTedWZOD3gYRAt0+AKC0jAKZS40IDBdYelX19y5pQ6zS5gCgpYhI
dYokIg8zciE7iY5NrXVWkwc=
=pSLW
-END PGP SIGNATURE-


Re: [cryptography] cryptanalysis of 923-bit ECC?

2012-06-22 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Jun 22, 2012, at 11:20 AM, Samuel Neves wrote:

> 
> Not exactly. If the target is ~80-bit security, ~160-bit elliptic curves are 
> still fine, even for pairing-based crypto. The failure there was the choice 
> of the particular *field* and *curve parameters*. Namely, choosing both the 
> characteristic (3) and the embedding degree (6) to be small left it open to 
> faster attacks.

Yeah, but we're all supposed to retire 80-bit crypto.

I'm well aware of my own lackadaisicalness in this regard (to wit, the 1024-bit 
DSA key that this message is signed with). That doesn't make the point invalid, 
it only means that I am a sinner, too.

I'm interested in knowing what the equivalent values for uprating are, and the 
rationales for them.

If ~1000 bit pairing is equivalent to 80 bits, what's equivalent to 128?

Jon



-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: iso-8859-1

wj8DBQFP5N6CsTedWZOD3gYRAlBZAKDf1Yl6Z9sw7HY2kZYSJos8QAaa8ACfYFEO
6UmICgYZia5H9rw2b9IVTM8=
=SUPa
-END PGP SIGNATURE-


Re: [cryptography] cryptanalysis of 923-bit ECC?

2012-06-22 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Jun 22, 2012, at 2:01 AM, James A. Donald wrote:

> On 2012-06-22 6:21 PM, James A. Donald wrote:
>>> Is this merely a case where 973 bits is equivalent to ~60 bits symmetric?
> 
> As I, not an authority, understand this result, this result is not "oops, 
> pairing based cryptography is broken"
> 
> It is "oops, pairing based cryptography requires elliptic curves over a 
> slightly larger field than elliptic curve based cryptography does"

Indeed. So kudos to the Fujitsu guys, and we make the curves bigger. Even 77 
bits is really too small for serious work.

Does anyone know what the ratio is for equivalences, either before or after?

The usual rule of thumb is 2x bits for symmetric security equivalence on hashes 
and normal ECC, with integer public keys being 1024 maps to 80 symmetric, 2048 
to 112, and 3K to 128.

What creates the 953 -> 153 relation? Then of course there's the obvious 153 
halved, but do we know at all how we'd compensate for the new result?

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFP5LFxsTedWZOD3gYRAi2oAKDTs9aRZVTc2IoFlaKPbEJw9pd6jACeOSqe
WMl+TXGl/i+KHfW9p88dxHA=
=0+9/
-END PGP SIGNATURE-


Re: [cryptography] cryptanalysis of 923-bit ECC?

2012-06-20 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Jun 20, 2012, at 8:35 AM, Matthew Green wrote:

> I'm definitely /not/ an ECC expert, but this is a pairing-friendly curve, 
> which means it's vulnerable to a type of attack where EC group elements can 
> be mapped into a field (using a bilinear map), then attacked using an 
> efficient field-based solver. (Coppersmith's).
> 
> NIST curves don't have this property. In fact, they're specifically chosen so 
> that there's no efficiently-computable pairing.
> 
> Moreover, it seems that this particular pairing-friendly curve is 
> particularly tractable. The attack they used has an estimated running time of 
> 2^53 steps. While the 'steps' here aren't directly analogous to the 
> operations you'd use to brute-force a symmetric cryptosystem, it gives a 
> rough estimate of the symmetric-equivalent key size.
> 
> (Apologies to any real ECC experts whose work I've mangled here… :)

Thanks, anyway, as things seem to be detail-lite where I'm getting them.

Do we have anyone who can speak authoritatively on this? I am also not at all 
an expert on pairing-friendly curves.

Is this merely a case where 973 bits is equivalent to ~60 bits symmetric? If 
so, what's equivalent to AES-128 and 256? Is there something inherently weak in 
pairing-friendly curves, like there are in p^n curves?

I have no idea what this result *means* and would love to know. 

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: windows-1252

wj8DBQFP4jy5sTedWZOD3gYRAoL9AJ9iVVSj1RY3SCLQCo8WJutsRq4IEwCfYUdZ
xzcsltQaPQZELJ0joMs7UjU=
=l3BW
-END PGP SIGNATURE-


Re: [cryptography] non-decryptable encryption

2012-06-19 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Jun 19, 2012, at 12:09 AM, Jon Callas wrote:

> * PGP Signed: 06/19/2012 at 12:09:46 AM
> 
> I am reminded of an article my dear old friend, Martin Minow, did in 
> Cryptologia ages ago. He wrote the article I think for the April 1984 issue. 
> It might not have been 1984, but it was definitely April.

1986. Cryptologia, Volume 10, Issue 2, 1986. The article is entitled "NO 
TITLE". The first page is available here:

http://www.tandfonline.com/doi/abs/10.1080/0161-118691860912

but sadly the rest of it is behind a paywall that wants $43 for the issue (or 
the whole volume for $58, such a bargain).

Jon



-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFP4CoEsTedWZOD3gYRAouxAKDSMxRISY7BgZ7aLZ8TxCbm2uX+9gCg8T8E
J/rdgBl2nIaHES8X2nWp0QY=
=LZvI
-END PGP SIGNATURE-


Re: [cryptography] non-decryptable encryption

2012-06-19 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

I am reminded of an article my dear old friend, Martin Minow, did in 
Cryptologia ages ago. He wrote the article I think for the April 1984 issue. It 
might not have been 1984, but it was definitely April.

In it, he described a cryptosystem in which you set the key to be the same as 
the plaintext and then XOR them together. There is a two-fold beauty to this. 

First that you have full information-theoretic security on the scheme. It is 
every bit as secure as a one-time pad without the restrictions of a one-time 
pad as to randomness of the keys and so on. 

The second wonderful property is that the ciphertext is compressible. Usually 
cipher text is not compressible, but in this case it is. Moreover, it is 
*maximally* compressible. The ciphertext can be compressed to a single bit and 
the ciphertext length recovered after key distribution.
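The scheme is easy to reproduce (Python, purely to make the joke concrete):

```python
# Minow's cryptosystem: set the key equal to the plaintext, then XOR.
plaintext = b"ATTACK AT DAWN"
key = plaintext
ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))

# Information-theoretically secure: the ciphertext reveals nothing
# (being all zeros), and it is indeed "maximally compressible" --
# nothing but the length need ever be transmitted.
assert ciphertext == b"\x00" * len(plaintext)
```

The catch, of course, is the key-distribution problem, which is left in 
exactly the state the reader deserves.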

I think that non-decryptable encryption really needs to cite Minow's pioneering 
work.

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFP4CW6sTedWZOD3gYRAgW8AKCpdVUpa1CpDpn5F6ZB4hezweGa9gCgz/62
m2eb/GnTagRxb6O0ct0a2oQ=
=Gwp3
-END PGP SIGNATURE-


Re: [cryptography] Intel RNG

2012-06-18 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Jun 18, 2012, at 4:12 PM, Marsh Ray wrote:

> 
> 150 clocks (Intel's figure) implies 18.75 clocks per byte.
> 

That's not bad at all. It's in the neighborhood of what I remember my DRBG 
running at with AES-NI. Faster, but not by a lot. However, I was getting the 
full 16 bytes out of the AES operation, and RDRAND is doing 64 bits at a time, 
right?

> 
> Note that Skein 512 in pure software costs only about 6.25 clocks per byte. 
> Three times faster! If RDRAND were entered in the SHA-3 contest, it would 
> rank in the bottom third of the remaining contestants.
> http://bench.cr.yp.to/results-sha3.html

As much as it warms my heart to hear you say that, it's not a fair comparison. 
A DRBG has to do a lot of other stuff, too. The DRBG is an interesting beast 
and a subject of a whole different conversation.
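For reference, the cycles-per-byte comparison in this exchange reduces to 
simple arithmetic (assumptions: 150 clocks per RDRAND invocation returning 64 
bits, and the quoted eBASH figure of ~6.25 clocks per byte for Skein-512):

```python
# Throughput figures from the thread, reduced to arithmetic.
rdrand_clocks = 150
rdrand_bytes = 64 // 8                 # one 64-bit result per RDRAND call
rdrand_cpb = rdrand_clocks / rdrand_bytes
assert rdrand_cpb == 18.75             # clocks per byte

skein_cpb = 6.25                       # Skein-512 in pure software
assert rdrand_cpb / skein_cpb == 3.0   # "Three times faster!"
```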

Jon




Re: [cryptography] Intel RNG

2012-06-18 Thread Jon Callas


On Jun 18, 2012, at 9:46 PM, Marsh Ray wrote:

>> This results in the creation of a black-box or component approach.
>> Because of this and perhaps only because of this, block algorithms and
>> hashes have become the staples of crypto work. Public key crypto and
>> HMACs less so. Anything crazier isn't worth discussing.
> 
> I don't get it. Why can't we have effective test vectors for HMACs and public 
> key algorithms?
> 

We do. FIPS 140 CAVS tests are a damned good set of vectors. The complaints I 
have about them are that there are too many, and that some are of questionable 
benefit (the so-called "Monte Carlo" tests, for one), rather than that there 
are too few.

There are even test vectors for the DRBGs. They give you entropy inputs and 
everything and look at your output.

Jon




Re: [cryptography] Intel RNG

2012-06-18 Thread Jon Callas

On Jun 18, 2012, at 9:03 PM, Matthew Green wrote:

> On Jun 18, 2012, at 4:21 PM, Jon Callas wrote:
> 
>> Reviewers don't want a review published that shows they gave a pass on a 
>> crap system. Producing a crap product hurts business more than any thing in 
>> the world. Reviews are products. If a professional organization gives a pass 
>> on something that turned out to be bad, it can (and has) destroyed the 
>> organization.
> 
> 
> I would really love to hear some examples from the security world. 
> 
> I'm not being skeptical: I really would like to know if any professional 
> security evaluation firm has suffered meaningful, lasting harm as a result of 
> having approved a product that was later broken.
> 
> I can think of several /counterexamples/, a few in particular from the 
> satellite TV world. But not the reverse.
> 
> Anyone?

The canonical example I was thinking of was Arthur Andersen, which doesn't meet 
your definition, I'm sure.

But we'll never get to requiring security reviews if we don't start off seeing 
them as desirable.

Jon





Re: [cryptography] Intel RNG

2012-06-18 Thread Jon Callas


On Jun 18, 2012, at 11:15 AM, Jack Lloyd wrote:

> On Mon, Jun 18, 2012 at 10:20:35AM -0700, Jon Callas wrote:
>> On Jun 18, 2012, at 5:26 AM, Matthew Green wrote:
>> 
>>> The fact that something occurs routinely doesn't actually make it a good 
>>> idea. I've seen stuff in FIPS 140 evaluations that makes my skin crawl. 
>>> 
>>> This is CRI, so I'm fairly confident nobody is cutting corners. But that 
>>> doesn't mean the practice is a good one. 
>> 
>> I don't understand.
>> 
>> A company makes a cryptographic widget that is inherently hard to
>> test or validate. They hire a respected outside firm to do a
>> review. What's wrong with that? I recommend that everyone do
>> that.
> 
> When the vendor of the product is paying for the review, _especially_
> when the main point of the review is that it be publicly released, the
> incentives are all pointed away from looking too hard at the
> product. The vendor wants a good review to tout, and the reviewer
> wants to get paid (and wants repeat business).

Not precisely.

Reviewers don't want a review published that shows they gave a pass on a crap 
system. Producing a crap product hurts business more than anything in the 
world. Reviews are products. If a professional organization gives a pass on 
something that turned out to be bad, it can (and has) destroyed the 
organization.

The reviewer is actually in a win-win situation. No matter what the result is, 
they win. But ironically, or perhaps perversely, a bad review is better for 
them than a good review. The reviewer gains far more from a bad review.

Any positive review is not only lacking in the titillation that comes from 
slagging something, but you can't prove something is secure. When you give a 
good review, you lay the groundwork for the next people to come along and find 
something you missed -- and I guarantee it, you missed something. There's no 
system in the world with zero bugs.

Of course there are perverse incentives in reviews. That's why when you read 
*any* review, you have to have your brain turned on and see past the marketing 
hype and get to the substance. Ignore the sizzle, look at the steak.

> 
> I have seen cases where a FIPS 140 review found serious issues, and
> when informed the vendor kicked and screamed and threatened to take
> their business elsewhere if the problem did not 'go away'. In the
> cases I am aware of, the vendor was told to suck it and fix their
> product, but I would not be so certain that there haven't been at
> least a few cases where the reviewer decided to let something slide. I
> would also imagine in some of these cases the reviewer lost business
> when the vendor moved to a more compliant (or simply less careful)
> FIPS evaluator for future reviews.

I agree with you completely, but that's somewhere between irrelevant and a 
straw man.

FIPS 140 is exasperating because of the way it is bi-modal in many, many 
things. NIST themselves are cranky about calling it a "validation" as opposed 
to a "certification" because they recognize such problems themselves.

However, this paper is not a FIPS 140 evaluation. Anything one can say positive 
or negative about FIPS 140 is at best tangential to this paper. I just searched 
the paper for the string "FIPS" and there are six occurrences of that word in 
the paper. One reference discusses how a bum RNG can blow up DSA/ECDSA (FIPS 
186). The other five are in this paragraph:

In addition to the operational modes, the RNG supports a FIPS
mode, which can be enabled and disabled independently of the
operational modes. FIPS mode sets additional restrictions on how
the RNG operates and can be configured, and is intended to
facilitate FIPS-140 certification. In first generation parts, FIPS
mode and the XOR circuit will be disabled. Later parts will have
FIPS mode enabled. CRI does not believe that these differences in
configuration materially impact the security of the RNG. (See
Section 3.2.2 for details.)

So while we can have a bitch-fest about FIPS-140 (and I have, can, do, and will 
bitch about it), it's orthogonal to the discussion.

It appears that you're suggesting the syllogism:

FIPS 140 does not demonstrate security well.
This RNG has FIPS 140.
Therefore, this RNG is not secure.

Or perhaps a conclusion of "Therefore, this paper does not demonstrate the 
security of the RNG" which is less provocative.

What they're actually saying is that they don't think that FIPSing the RNG will 
"materially impact the security of the RNG" -- which if you think about it, is 
pretty faint praise.


> 
> I am not in any way suggesting that CRI would hide we

Re: [cryptography] Intel RNG

2012-06-18 Thread Jon Callas


On Jun 18, 2012, at 5:26 AM, Matthew Green wrote:

> The fact that something occurs routinely doesn't actually make it a good 
> idea. I've seen stuff in FIPS 140 evaluations that makes my skin crawl. 
> 
> This is CRI, so I'm fairly confident nobody is cutting corners. But that 
> doesn't mean the practice is a good one. 

I don't understand.

A company makes a cryptographic widget that is inherently hard to test or 
validate. They hire a respected outside firm to do a review. What's wrong with 
that? I recommend that everyone do that. Un-reviewed crypto is a bane.

Is it the fact that they released their results that bothers you? Or perhaps 
that there may have been problems that CRI found that got fixed?

These also all sound like good things to me.

Jon





Re: [cryptography] Master Password

2012-05-31 Thread Jon Callas

On May 30, 2012, at 12:59 PM, Nico Williams wrote:

> 
> Are you saying that PBKDFs are just so much cargo cult now?

No. PBKDF2 is what I suggest, actually. C.F. my entirely too long missive to 
Maarten that I just sent.

Jon




Re: [cryptography] Master Password

2012-05-31 Thread Jon Callas

On May 30, 2012, at 4:28 AM, Maarten Billemont wrote:

> If I understand your point correctly, you're telling me that while scrypt 
> might delay brute-force attacks on a user's master password, it's not 
> terribly useful a defense against someone building a rainbow table.  
> Furthermore, you're of the opinion that the delay that scrypt introduces 
> isn't very valuable and I should just simplify the solution with a hash 
> function that's better trusted and more reliable.
> 
> Tests on my local machine (a MacBook Pro) indicate that scrypt can generate 
> 10 hashes per second with its current configuration while SHA-1 can generate 
> about 1570733.  This doesn't quite seem like a "trivial" delay, assuming 
> rainbow tables are off the... table.  Though I certainly wish to see and 
> understand your point of view.

My real advice, as in what I would do (and have done) is to run PBKDF2 with 
something like SHA1 or SHA-256 HMAC and an absurd number of iterations, enough 
to take one to two seconds on your MBP, which would be longer on ARM. There is 
a good reason to pick SHA1 here over SHA256 and that is that the time 
differential will be more predictable.
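That advice can be sketched with Python's hashlib (a sketch only: the probe 
size, target time, and password are illustrative, not recommended values):

```python
# PBKDF2-HMAC-SHA1 with the iteration count calibrated so one derivation
# takes roughly the target time on the machine at hand.
import hashlib
import os
import time

def calibrate_iterations(target_seconds: float = 1.0, probe: int = 100_000) -> int:
    """Scale the iteration count from a timed probe run; never below the probe."""
    start = time.perf_counter()
    hashlib.pbkdf2_hmac("sha1", b"probe", b"salt", probe)
    elapsed = time.perf_counter() - start
    return max(probe, int(probe * target_seconds / elapsed))

def derive_key(password: str, salt: bytes, iterations: int) -> bytes:
    # Defaults to the hash's output size: 20 bytes for SHA-1.
    return hashlib.pbkdf2_hmac("sha1", password.encode(), salt, iterations)

salt = os.urandom(16)
key = derive_key("correct horse battery staple", salt, calibrate_iterations())
assert len(key) == 20
```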

Let me digress for a moment. It's a truism of security that complexity is the 
enemy of security and simple things are more secure. We just had such a 
discussion on this list in the last week. However, saying that is kinda like 
saying that you should love your mother because she loves you. Each of those is 
something it's impossible to disagree with -- how can you be against simplicity 
or loving your mother? But neither of them is actionable. The truism about my 
mother doesn't say how much I should spend on her birthday present. It doesn't 
tell me if I should come home early from work so I can spend more time with her 
or spend more time at work so I can be successful and she'll be proud of me. 
They are each true, but each meaningless, since I can use the principle to 
defend either A or ~A.

Having said that, the sort of simplicity that I strive for and that I like is 
something that I call understandability. It isn't code size, or anything like 
that, it's how well I can not only understand it, but intuit it. It is similar 
to the mathematical concept of elegance in that it is an aesthetic principle. 
Like loving my mother, I might do something today and its opposite tomorrow 
because it is aesthetic within its context.

I've read the scrypt paper maybe a half-dozen times. As a matter of fact, I 
just went and read it again while writing this note. Each time I read it, I nod 
as I follow along and when I get to the end of the paper, I'm not sure I 
understand it any more. I remain unconvinced. I think it complex. I think it 
inelegant. It fails my understandability test. This is not rational, and I know 
that; this is why I said that this is something that gentlepersons can disagree 
on. I don't think that because *I* don't like it that *you* shouldn't like it. 
I also mean no disrespect to Colin Percival, who is jaw-droppingly brilliant. I 
read his paper and say "Wow" even as I remain unconvinced.

I also start to poke at it in some odd ways, mentally. I have a friend who 
builds custom supercomputers. These things have thousands of CPUs and tens of 
terabytes of memory. Would scrypt hold up to its claims in such an environment? 
I don't know, and my eyebrow is raised. 

Let us suppose that someone were to spend billions of dollars making a 
supercomputing site out in the desert somewhere. Would scrypt stand up to the 
analytic creativity that they show? I don't know. Moreover, I am irrationally 
skeptical; I believe that it would not, and I have no rational reason for it.

Lastly, I fixate on Table 1 of the scrypt paper, on page 14. Estimated cost of 
hardware to crack a password in 1 year. In the middle row-section we see an 
estimate for a 10 character password. It would take (according to the paper) 
$160M to make a computer that would break 100ms of PBKDF2-HMAC-SHA256. The 
comparison is against 64ms of scrypt and a cost estimate of $43B. In the next 
row-section down, it gives a comparison of 5s of PBKDF2 for $8.3B versus 3.8s 
of scrypt for $175T.

PBKDF2 is understandable. It's simple. In my head, I can reach into my mental 
box of cryptographic Lego and pull out a couple SHA blocks, snap them to an 
HMAC crossbar, and then wrap the thing in a PBKDF2 loop and see the whole thing 
in my head. It's understandable. I *believe* Colin Percival's number that 100ms 
of iteration will cost $160M (assuming 2009 hardware costs, at standard 
temperature and pressure) to break, and I think "Wow. That's good enough." And 
if it isn't -- we can up it to 200ms, and handwave out to $300M hardware cost. 
I can also mentally adjust those against using GPUs and other accelerators 
because it's understandable.

In contrast, I can't get a mental model of scrypt. It is mentally complex and 
because of that comp

Re: [cryptography] Master Password

2012-05-30 Thread Jon Callas

Your algorithm is basically okay, but there are a couple of errors you've made, 
things you and I will disagree over, and one flaw that I consider to wreck the 
whole thing. But all of the problems are correctable, easily. If I have not 
understood something, or read it too quickly and gotten confused, I apologize 
in advance.

Let me walk through a reduction of your system:

(1) You take the master password and run it through a 512-bit hash function, 
producing master binary secret.

You pick scrypt for your hash function, because you think burning time and 
space adds to security. I do not. This is a place where gentlepersons can 
disagree, and I really don't expect to convince you that SHA-512 or Skein would 
be better options. I'm convinced that I know why you're doing it, and it would 
be a waste of both our times to go further. We just disagree.

At the end of it, it hardly matters because if an attacker wishes to construct 
a rainbow table, the correct way to do it is to assemble a list of likely 
passwords and just go from there. It will take longer if they use scrypt than 
with a real hash function, but once it's done it is done. They have the rainbow 
table.

This isn't a flaw, it just is. The goal of your system requires that you have a 
master secret. But security-wise, there's no win here other than burning some 
cycles in a way that the attacker can trivially replicate.

Security-wise, I'm quite certain that scrypt isn't nearly as secure as any real 
hash function you'd pick, but I'm just whining. We know that the security rests 
on the password. If they pick "puppies" as their password, it really doesn't 
matter what hash function you run it through. Almost certainly, there is not 
enough security in their password for the choice of function to matter.

Let's call the parameters P for password and M for the master key.

(2) You take M and construct site-specific keys. We'll call the site name S, 
your counter C, and the site keys K_s.

You compute a given site-specific key, K_s, with:

K_s = SHA1(S + M + C), where "+" is the function that concatenates a null byte 
and then the second string.

Strictly speaking, you really ought to do it in the order M + C + S, because 
that's more collision-resistant. It's good practice when computing a keyed hash 
to hash the key first. In reality, it probably doesn't matter, but it *does* 
save you lots of debates with people like me.

You also want to hash in the length of S, because that's also more 
collision-resistant.

So you really want it to be K_s = SHA1(M + C + S.length + S), but those are the 
only real security problems I can see. The ordering is a nit, and omitting the 
length is only a problem if the hash function is broken. Speaking of which, why 
not use a non-broken hash function, like SHA256, or SHA512 or SHA512/160, if 
the output size matters to you? Given that you're using scrypt, why not use a 
better hash function, even if it is slower? But that's also a nit.

The real problem you have, however, is in the counter. First of all a counter 
is not a salt. A salt is an arbitrary non-security parameter. But arbitrary 
means random, just not secret. A counter is a counter.

The counter has two problems. One is that it doesn't add to the security of 
the system. The other is that it makes the system utterly useless. 

An attacker can easily brute force through the site keys by just running a 
counter. That's why I say it doesn't add to the security. Even if you used 
scrypt here, too, it wouldn't matter much. It's still easy to brute force.

However, this completely ruins the system for the end user. The end user can't 
just remember their master password and the site name. They have to remember 
what *order* they did it in too, which ruins everything. You can't sync this 
across devices, you have to keep track of orders, and so on. You need to remove 
the counter.

I understand why you did it. You did it because that way two people with the 
same master password on the same site aren't going to have the same password -- 
which would give away the master password to the other one, as well as leaking 
to an attacker that you picked a lousy password, even if they don't immediately 
know what it is (cue the rainbow tables). Ironically, this is kinda like the 
recent RSA/GCD thing in that your security oops can be made into a disaster if 
someone else makes the same oops.

You can fix this one by substituting the person's site username for the 
counter. In this revision, you have K_s = HASH(M + U.length + U + S.length + 
S). That's a much better construction.
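The revised construction can be sketched as follows. SHA-256 stands in for the 
unspecified HASH (a non-broken choice, per the note above); the null-byte 
separator follows the earlier definition of "+", while the 4-byte big-endian 
length encoding is my own assumption:

```python
# K_s = HASH(M + U.length + U + S.length + S): username replaces the counter,
# so two users with the same master password on the same site get different
# site keys, with nothing order-dependent to remember.
import hashlib
import struct

def site_key(master: bytes, username: str, site: str) -> bytes:
    u = username.encode()
    s = site.encode()
    h = hashlib.sha256()
    h.update(master)                               # hash the key first
    h.update(b"\x00" + struct.pack(">I", len(u)))  # U.length
    h.update(b"\x00" + u)                          # U
    h.update(b"\x00" + struct.pack(">I", len(s)))  # S.length
    h.update(b"\x00" + s)                          # S
    return h.digest()

# Same master, same site, different usernames: unrelated site keys.
assert site_key(b"master", "alice", "example.com") != site_key(b"master", "bob", "example.com")
```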

(3) You run the site key, K_s, through some pretty printer. I really didn't 
read this and really don't care. It doesn't matter to me. TL;DR.

All in all, it's cute. I like it enough to write this note. I advise that you:

* Pick a real hash function or two. We can debate scrypt, but you can do better 
than SHA1. Even a second scrypt is better tha

Re: [cryptography] can the German government read PGP and ssh traffic?

2012-05-25 Thread Jon Callas

My money would be on a combination of traffic analysis and targeted malware. We 
know that the Germans have been pioneering using targeted malware against 
Skype. Once you've done that, you can pick apart anything else. Just a simple 
matter of coding.

Jon




Re: [cryptography] Is this as ominous as it sounds like? (It SOUNDS ominous as Hell - but maybe it isn't)

2012-05-07 Thread Jon Callas

On May 7, 2012, at 2:36 PM, The Fungi wrote:

> On 2012-05-07 12:53:14 -0400 (-0400), Randall Webmail wrote:
> [...]
>> The Internet Kill Switch; With Global Wiretapping Capability?
> [...]
> 
> I consider myself a paranoid nut, but that's a little far fetched
> even for me. I'd take the time to write up everything that's off
> base with it (from the perspective of working for an ISP and DNS
> host), but it looks like someone already beat me to the punch:
> 
>   
> http://metabunk.org/threads/561-Debunked-Markmonitor-com-The-Internet-Kill-Switch-Wiretapping
> 
> A fun flight of fancy nevertheless.

Indeed, and that's a great debunking.

MarkMonitor is a registrar, just like Network Solutions, GoDaddy, and countless 
others. They offer other services just like all of those other registrars.

This isn't paranoid, it's just ignorant. A truly *paranoid* person would think 
that because this is a big whoop-de-do, that it's actually a viral marketing 
campaign by MarkMonitor to publicize a number of their big-name clients and get 
name recognition. I mean really, the gist of this is:

ZOMG! In the last few months Google, MSN/Hotmail, Yahoo, Apple, Wikimedia, 
Nokia, and Ubuntu have all become customers of MarkMonitor! See a pattern? (Cue 
dramatic chords, Duh-Dunh-DUUUH!) 

The paranoid person would note that the overblown hysteria is going to be 
debunked, and we're left with the customer list of big name companies that 
wouldn't let their name be used as references, no how, no way. 

This is far more innovative than a press release, gets MarkMonitor interesting 
press, and they can just deny it away.

This is a much better conspiracy theory. The other people are pikers, just 
pikers, I tell you.

Jon








Re: [cryptography] Forensic snoops: It doesn't take a Genius to break into an iPhone

2012-04-10 Thread Jon Callas

On Apr 10, 2012, at 10:32 AM, Natanael wrote:

> Just FYI, there's been claims that these guys faked it. But on the other 
> hand, there ARE other tools that can extract data from iPhones so you can 
> bruteforce the encryption later.
> 

I'm pretty certain they faked it. The question is how they faked it. They may 
have faked it in a quasi-defensible way.

It takes ~1000 seconds to brute force a four-digit PIN, because the hardware 
calibrates each iteration to ~100ms (and it must be done on the device itself, 
because there's a hardware key that's part of the calculation, and if you don't 
want to destroy the device, you do it on the device). That's 16 2/3 minutes.
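The timing reduces to simple arithmetic (the ~100 ms per attempt is the 
calibration figure above):

```python
# 10,000 four-digit PINs at ~100 ms per on-device attempt.
pins = 10 ** 4
ms_per_guess = 100
total_seconds = pins * ms_per_guess / 1000
assert total_seconds == 1000.0                 # ~16 2/3 minutes, worst case
average_seconds = total_seconds / 2            # expected case: half the keyspace
assert round(average_seconds / 60, 2) == 8.33  # the "8 1/3 minutes" figure
```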

If you then say that well, you can get one on average in 8 1/3 minutes, that 
has merit, but we've definitely wandered into marketing. If you note that some 
large percentage of PINs start with a zero or one, that average pulls down, 
particularly since you'll do everything starting with a one in ~100 seconds, 
and really, part of the human factors of pincodes is that a frighteningly large 
number of them are under 1231. 

If you're selling a forensic toolkit, it is not untrue that you could do it in 
a few minutes on average. It's not what I'd call responsible, though. It 
implies that the best pincode is  or perhaps 9989 (no triple-repeated 
digit). :-)

Jon






Re: [cryptography] Anyone seen this CA before?

2012-03-31 Thread Jon Callas


On Mar 31, 2012, at 7:38 PM, Marsh Ray wrote:

> 
> Has anyone seen this CA before?
> 
> Sounds like an interesting business model, even if the site design looks a 
> bit anachronistic.
> 
> http://print-a-cert.com/
> 

That's hilarious! I love it! But I see some security problems:

* The host name and organization name are not centered properly. This permits 
easier forgery by rogue actors and I'm sure is in violation of Baseline 
Requirements.

* SSL 3.0 ought to be upgraded to TLS 1.1. There's plenty of reason to go to 
TLS 1.1, and plenty to stick there.

* The font used for the key id does not differentiate well between 'I' 
(capital-eye) and 'l' (lowercase-ell). Learn the lesson of Bob Marley and don't 
shoot the serif.

* There is no expiration date on the certificate. You'll hate your CRLs if you 
do that.

Jon





Re: [cryptography] Key escrow 2012

2012-03-29 Thread Jon Callas


On Mar 29, 2012, at 2:48 PM, mhey...@gmail.com wrote:

> On Tue, Mar 27, 2012 at 1:17 PM, Nico Williams  wrote:
>> On Tue, Mar 27, 2012 at 5:18 AM, Darren J Moffat
>>> 
>>> For example an escrow system for ensuring you can decrypt data written by
>>> one of your employees on your companies devices when the employee forgets or
>>> looses their key material.
>> 
>> Well, the context was specifically the U.S. government wanting key
>> escrow.
>> 
> Hmm - these are not mutually exclusive.
> 
> Back in the mid to late 90s, the last time the U.S. government
> required key escrow for international commerce with larger key sizes,
> they allowed key escrow systems that were controlled completely by the
> company. Specifically, they allowed Trusted Information System's
> RecoverKey product (I worked on this one, still have the shirt, and am
> not aware of any other similar products available at the time - PGP's
> came later and was more onerous to use).
> 
> RecoverKey simply wrapped a session key in a corporate public key
> appended to the same session key wrapped with the user's public key.
> If the U.S. Government wanted access to the data, the only thing they
> got was the session key after supplying the key blob and a warrant to
> the corporation in question. The U.S. government even allowed us to
> sell RecoverKey internationally to corporations that kept their
> RecoverKey data recovery centers offshore but agreed to keep them in a
> friendly country.

I'd have to disagree with you on much of that.

The US Government never required key escrow for international commerce. 
Encrypted data was never restricted; what was restricted was the export of 
software and the like. If you were of a mind to think that the only way to get 
cryptographic software was from the US, then you'd think this might be somewhat 
effective. In reality, the idea was absurd from the get-go because encrypted 
data was never restricted.

The people who wanted to push key escrow never had a good way to explain to 
anyone why they'd want it. They never had a good carrot, either, for it. At one 
point, they tried to sugar-coat it by offering fast-tracks on export for it, 
but Commerce granted export easily. Furthermore, Commerce's own rules 
progressed so fast with so many exemptions that it was all obviated before it 
could be developed.

Amusingly, I ended up having TIS's RecoverKey under my bailiwick because 
Network Associates bought PGPi and then TIS. The revenues from it were so small 
that I don't think they even covered marketing material like that shirt you 
had. In a very real sense, it didn't exist as anything more than a 
proof-of-concept that proved the concept was silly.

Also, there wasn't a PGP system. The PGP "additional decryption key" is really 
what we'd call a "data leak prevention" hook today, but that term didn't exist 
then. Certainly, lots of cypherpunks called it that at the time, but the 
government types who were talking up the concept blasted it as merely a way to 
mock (using that very word) the concept.

Jon







Re: [cryptography] [info] The NSA Is Building the Country’s Biggest Spy Center (Watch What You Say)

2012-03-25 Thread Jon Callas


On Mar 25, 2012, at 1:22 PM, coderman wrote:

> now they pay to side step crypto entirely:
> 
> iOS up to $250,000
> Chrome or IE up to $200,000
> Firefox or Safari up to $150,000
> Windows up to $120,000
> MS Word up to $100,000
> Flash or Java up to $100,000
> Android up to $60,000
> OSX up to $50,000
> 
> via 
> http://www.forbes.com/sites/andygreenberg/2012/03/23/shopping-for-zero-days-an-price-list-for-hackers-secret-software-exploits/
> 
> plenty of weak links between you and privacy...

This is precisely the point I've made: the budget way to break crypto is to buy 
a zero-day. And if you're going to build a huge computer center, you'd be 
better off building fuzzers than key crackers.

Jon





Re: [cryptography] RSA Moduli (NetLock Minositett Kozjegyzoi Certificate)

2012-03-23 Thread Jon Callas


On Mar 23, 2012, at 6:39 AM, Peter Gutmann wrote:

> Jon Callas  writes:
>> On Mar 23, 2012, at 6:03 AM, Peter Gutmann wrote:
>>> Jeffrey Walton  writes:
>>>> Is there any benefit to using an exponent that factors? I always thought 
>>>> low
>>>> hamming weights and primality were the desired attributes for public
>>>> exponents. And I'm not sure about primality.
>>> 
>>> Seeing a CA put a key like this in a cert is a bit like walking down the
>>> street and noticing someone coming towards you wearing their underpants on
>>> their head, there's nothing inherently bad about this but you do tend to 
>>> want
>>> to cross the street to make sure that you avoid them.
>> 
>> But Peter, CAs don't *precisely* put keys into certs. CAs certify a key that
>> the key creator wants to have in their cert.
> 
> This is a self-signed cert from the CA, so the key creator was the CA.

So it's like issuing yourself an Artistic License card with a color printer and 
laminator. :-) Good for lots of laughs.

Jon




Re: [cryptography] RSA Moduli (NetLock Minositett Kozjegyzoi Certificate)

2012-03-23 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Mar 23, 2012, at 6:03 AM, Peter Gutmann wrote:

> Jeffrey Walton  writes:
> 
>> Is there any benefit to using an exponent that factors? I always thought low
>> hamming weights and primality were the desired attributes for public
>> exponents. And I'm not sure about primality.
> 
> Seeing a CA put a key like this in a cert is a bit like walking down the
> street and noticing someone coming towards you wearing their underpants on
> their head, there's nothing inherently bad about this but you do tend to want
> to cross the street to make sure that you avoid them.

But Peter, CAs don't *precisely* put keys into certs. CAs certify a key that 
the key creator wants to have in their cert.

It's far more like someone coming into the DMV with a colander on their head 
and saying they're a Pastafarian and this is their religious headdress. If you 
refuse to let them wear the colander, it's likely worse than if you do; and 
really, it's their problem at the end of the day.

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFPbHp9sTedWZOD3gYRAn+jAKCpMrt8HeaY7SueljFDSFZjlvaVnQCeOW0J
FEHY8ekvvkN3bCWYrONi7Mw=
=Apj2
-END PGP SIGNATURE-


Re: [cryptography] [info] The NSA Is Building the Country's Biggest Spy Center (Watch What You Say)

2012-03-22 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Mar 22, 2012, at 10:02 AM, Marsh Ray wrote:

> 
> 
> Or it could be complete BS.
> 

"The race is not always to the swift, nor the battle to the strong, but that's 
the way to bet."
  -- Damon Runyon.

Jon



-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFPa2cfsTedWZOD3gYRAvxtAJ9wVuVfkJVV3cn+NpTpN+8sxxUEIwCeKEvo
4a7DfTy0flJyn96s49GBcyM=
=re6+
-END PGP SIGNATURE-


Re: [cryptography] [info] The NSA Is Building the Country’s Biggest Spy Center (Watch What You Say)

2012-03-19 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Mar 18, 2012, at 6:38 PM, Randall Webmail wrote:

> From: "ianG" 
> 
>> ... So after a lot of colour, it is not clear if they can break AES. 
>> Yet.  OK.  But that is their plan.  And they think they can do it, 
>> within their foreseeable future.  Maybe soon.  Or maybe they can, and 
>> they've managed to get their own agency to at least believe it's in the 
>> future, not now.  Or maybe they can at 128, but not larger?
> 
> I suppose we've all seen the "proofs" that brute-forcing PGP would take a 
> supercomputer the size of the planet longer than the age of the universe to 
> accomplish.   Was the math faulty in those proofs, or is it true, and the NSA 
> is just empire-building?

They aren't "proofs" in the sense of rigorous mathematics, but they're 
arguments.

There's nothing wrong with the math, but they have certain assumptions. If they 
know something that we don't -- for example, presume they've solved the 
algebraic equation that is AES, then that would lead to a different set of math.

Frankly, I think that Jonathan Thornburg has a better line on it -- it's much 
more efficient to develop a theory of how to break passphrases. I can much 
better see how a large computing engine could help with that.

Let me handwave a bit. Suppose using scrapings from social networks, web 
surfing, etc., you come up with a model of your opponent and can compute in a 
week the 2^30 most likely passphrases they'd use. You now have a much simpler 
task now, one that should take anything from minutes to a couple weeks to do.
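A toy sketch of that attack shape (the candidate list, salt, and KDF parameters are all hypothetical, and PBKDF2 stands in for whatever KDF the target actually uses): walk the ranked candidates and test each derived key against something verifiable, such as a MAC over a known header.

```python
import hashlib
import hmac

def derive_key(passphrase: bytes, salt: bytes) -> bytes:
    # Stand-in KDF; iteration count kept low for the demo.
    return hashlib.pbkdf2_hmac("sha256", passphrase, salt, 10_000)

# The "victim" side: a key derived from a guessable passphrase, plus
# something the attacker can test against (a MAC over a known header).
salt = b"demo-salt"
target_mac = hmac.new(derive_key(b"correct horse", salt),
                      b"known header", "sha256").digest()

# The attacker side: a ranked list of likely passphrases
# (hypothetical output of the profiling model described above).
candidates = [b"password", b"letmein", b"correct horse", b"tr0ub4dor&3"]
recovered = next(
    (c for c in candidates
     if hmac.compare_digest(
         hmac.new(derive_key(c, salt), b"known header", "sha256").digest(),
         target_mac)),
    None,
)
```

Once the list is ranked well, the cost is a handful of KDF evaluations rather than a brute-force search of the keyspace.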

Also note that if Alice is talking to Bob, you can likely get the message by 
attacking either Alice or Bob.

But really, I wouldn't do the crypto at all. I would just go for traffic 
analysis. And huge supercomputers would help with that. Good traffic analysis 
makes crypto irrelevant.

Jon



-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFPZ0IPsTedWZOD3gYRAu+AAKCFt+37HykwnA2RX4UlkWbH8nAf8gCg3pp1
P5uo+X/fMXp0oIhNtI0ct3s=
=0Qv9
-END PGP SIGNATURE-


Re: [cryptography] Number of hash function preimages

2012-03-11 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Mar 10, 2012, at 5:24 PM, Eitan Adler wrote:

> On Sat, Mar 10, 2012 at 7:28 PM, Jon Callas  wrote:
>>> 
>>> 2) Is it known if every (valid) digest has always more than one
>>> preimage? To state it otherwise: Does a collision exist for every
>>> message? (i.e., is the set h^{-1}(x) larger than 1 for every x in the
>>> image of h?).
>>> 
>> 
>> Sure, by the pigeonhole principle. Consider a hash with 256-bit (32-byte) 
>> output. For messages larger than 32 bytes, by the pigeonhole principle, 
>> there must be collisions. For messages smaller than 32 bytes, they might 
>> collide on themselves but don't have to, but there will be some large 
>> message that collides with it.
> 
> I think you are misunderstanding the question (or at the very least I
> am). The pigeonhole principle only shows that there exist collisions
> not that collisions exist for every element in the codomain.
> Think about the function over the natural numbers:
> f(x) = {
> 1, if x = 0
> 2, if x > 0
> 3, if x < 0
> }
> 
> While there exist collisions within N it isn't true that every element
> in the co-domain has a collision.

You're right, I misunderstood the question.

Your example gets to what I was saying about a lot being there in the ellipsis 
on hash functions. 

Let's take your function -- which is a hash function, it's just not generally 
useful -- and if we define its codomain to be [1..3], then yes. But if its 
codomain is [0..3] (i.e. it's a two-bit hash function), then no because it 
never returns a zero. 

It's my intuition that a hash function that's made up of a block cipher and a 
chaining mode is going to do that when operating on natural sizes. For example, 
I expect that Skein512-512 is both surjective and well-behaved on the 
co-domain. It's an ARX block cipher with simple chaining between the blocks. 

However, Skein512-1024 (Skein512 with 1024 bit output) is obviously not 
surjective (512 bits of state yield 1024 bits of output), but I'd expect the 
output codomain to be evenly covered. Also note that not only is it a fine 
output for an RSA-1024 signature, but arguably better than a smaller output.

For smaller output of an unnatural size (e.g. 511 bits), I'd also expect it to 
cover the codomain, and I think it would have to.
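For toy output sizes that expectation is cheap to check empirically. A sketch (truncated SHA-256 stands in for Skein, which isn't in the Python standard library):

```python
import hashlib

def h8(msg: bytes) -> int:
    # Toy 8-bit hash: first byte of SHA-256 (illustrative stand-in).
    return hashlib.sha256(msg).digest()[0]

# Hash 10,000 distinct inputs and see how much of the 256-value
# codomain gets hit. Each value is missed with probability
# (255/256)**10000, roughly e**-39, so coverage should be total.
outputs = {h8(str(i).encode()) for i in range(10_000)}
coverage = len(outputs) / 256
```

This only works at toy sizes, of course; for a real 512-bit codomain no such enumeration is possible, which is why the question stays at the level of intuition.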

I think you'd have to look at other constructions on a case-by-case basis. If 
we look again at my trivial modification of a hash function that makes it not 
return zero but a one instead, it's not surjective, it doesn't evenly cover its 
codomain, and yet for any practical purpose it's a fine hash function. 

For some purposes, it's even more secure than the original. Consider using it 
as a KDF for a cipher for which zero is a weak key (like DES). By not returning 
a weak key, it's more secure than the base function. That's interesting in that 
a flaw in it being an ideal hash function makes it actually superior as a KDF. 

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFPXGlXsTedWZOD3gYRAq3+AJwK2l3SNm84mvjdqvAzZV2+bWbmpQCgtsfc
SHd+g57nXlOylLOLUsekgCQ=
=3rTZ
-END PGP SIGNATURE-


Re: [cryptography] Number of hash function preimages

2012-03-10 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Mar 9, 2012, at 3:25 AM, Florian Weingarten wrote:

> Hello list,
> 
> first, excuse me if my questions are obvious (or irrelevant).

No, they're interesting and subtle.

> 
> I am interested in the following questions. Let h be a cryptographic
> hash function (let's say SHA1, SHA256, MD5, ...).

There's a lot to put in that ellipsis. 

> 
> 1) Is h known to be surjective (or known to not be surjective)? (i.e.,
> is the set h^{-1}(x) non-empty for every x in the codomain of h?)

No. I would bet that the standard ones are all surjective, but I don't know 
that it's ever been demonstrated for any given hash function. The main property 
we want from a hash function is that is is one-way, and demonstrating that a 
one-way function is or isn't surjective flirts with that, at least. Some will 
be easy (modulo is one way and surjective, for example), others will be harder. 
When you add in other desirable hash function properties such as being 
reasonably collision-free, it becomes harder to show.

However, if you show that a hash function is a combination of surjective 
functions that all preserve surjectivity, I think it's an easy proof.

All the ones that use a block cipher and a chaining mode are likely easy to 
prove. 

> 
> 2) Is it known if every (valid) digest has always more than one
> preimage? To state it otherwise: Does a collision exist for every
> message? (i.e., is the set h^{-1}(x) larger than 1 for every x in the
> image of h?).
> 

Sure, by the pigeonhole principle. Consider a hash with 256-bit (32-byte) 
output. For messages larger than 32 bytes, by the pigeonhole principle, there 
must be collisions. For messages smaller than 32 bytes, they might collide on 
themselves but don't have to, but there will be some large message that 
collides with it.
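The pigeonhole argument can be made concrete with a toy-sized digest. A sketch, using SHA-256 truncated to 16 bits as a stand-in for a real hash (the truncation is purely illustrative): hashing 2^16 + 1 distinct messages into 2^16 possible outputs must produce a collision.

```python
import hashlib

def h16(msg: bytes) -> bytes:
    # Toy 16-bit hash: SHA-256 truncated to 2 bytes (illustrative only).
    return hashlib.sha256(msg).digest()[:2]

def find_collision():
    seen = {}
    # 2**16 + 1 distinct inputs into 2**16 possible outputs:
    # the pigeonhole principle guarantees two of them collide.
    for i in range(2**16 + 1):
        m = str(i).encode()
        d = h16(m)
        if d in seen:
            return seen[d], m
        seen[d] = m

m1, m2 = find_collision()
```

In practice the birthday bound finds a collision after a few hundred inputs, long before the pigeonhole limit forces one.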

> 3) Are there any cryptographic protocols or other applications where the
> answers to 1) or 2) are actually relevant?

Very likely not.

Let's construct a trivial non-surjective hash function. Start with H, and 
construct H' that for any message that produces a hash of 0, we emit 1 instead. 
It's therefore not surjective since it can't emit a zero. 

It isn't a *useful* non-surjectivity because we don't usefully know a preimage 
of a zero or a one. 
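A sketch of the H' construction, with SHA-256 standing in for H:

```python
import hashlib

def H(msg: bytes) -> int:
    # Stand-in for a real hash: SHA-256 read as an integer.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big")

def H_prime(msg: bytes) -> int:
    # H': identical to H except an output of 0 is remapped to 1,
    # so H' provably never emits 0 -- it is not surjective.
    d = H(msg)
    return 1 if d == 0 else d

# The non-surjectivity is useless to an attacker: exhibiting it
# would require a preimage of 0 under H, which nobody knows.
```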

But now let's construct H'' that emits H(M2) when calculating H(M1). This is 
just like H', but with different constants. The difference here is that we have 
artificially created a collision between M1 and M2 instead of a preimage of 0 
and a preimage of 1, which we don't know in advance. Is this a useful 
collision? That's a philosophical question. I'd say no, myself, but I'd 
understand why someone said yes, I'd merely disagree with them.

That's why I say very likely not, instead of just no.

Jon



-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFPW/HEsTedWZOD3gYRAhvWAJ4rL6Zxp9eCUpxqDEYPQTLxKQu0VwCeJqHG
IVoDJYQIMASPi03Hl19LxXE=
=68//
-END PGP SIGNATURE-


Re: [cryptography] US Appeals Court upholds right not to decrypt a drive

2012-02-26 Thread Jon Callas

On Feb 25, 2012, at 6:35 PM, James A. Donald wrote:

> Jon Callas  writes:
> > > I've spoken to law enforcement and border control people
> > > in a country that is not the US, who told me that yeah,
> > > they know all about TrueCrypt and their assumption is that
> > > *everyone* who has TrueCrypt has a hidden volume and if
> > > they find TrueCrypt they just get straight to getting the
> > > second password. They said, "We know about that trick, and
> > > we're not stupid."
> 
> They may assume that - but they cannot prove it.

You're assuming that they operate with the same security model that you do.

Your security model presupposes US law, to start with. I can see that in the 
glib comment asking if I'd ever heard of "innocent until proven guilty" -- 
which is a US principle. It is one that I not only have heard of, but think it 
is a pretty darn good idea, too!

Nonetheless, it does not exist everywhere in the world, and I said this was not 
the US. In fact the very reason I said it wasn't the US was because I wanted to 
point out that objections to the story based upon US law are irrelevant. 
Moreover, innocent until proven guilty is interpreted differently depending on 
what sort of case there is. The term *proven* is context-dependent. There are 
different ways they prove, different burdens of proof. "Beyond reasonable 
doubt" and "clear and convincing evidence" are two used in criminal cases in 
the US. "Preponderance of evidence" is usually used in civil cases.

None of these are "plausible deniability." As I said before, this is a term of 
spycraft and statecraft. Usually it's used to describe how a powerful entity 
like a nation state can defend itself against attacks by less-powerful 
entities. There are forms of torture that are popular because they leave no 
marks on the victim and therefore give the state plausible deniability. 
Bureaucracies also use this technique to spread blame or leave the blame with 
some other person. 

In a number of cases involving spectacularly failed companies, the CEO has 
tried to stick someone else with the blame through plausible denial. Or perhaps 
the family and associates of a fraudster use a form of plausible denial to 
avoid conviction or trial. (I am not saying that using plausible denial means you're 
guilty -- it only means you don't have a better defense.) It works sometimes 
and doesn't work others. It didn't work for Bernie Ebbers, for example. 
Plausible denial combined with a lack of evidence works really well, but it's 
not a legal principle at all.

Most people who use the term "plausible denial," particularly us crypto people, 
would be better served to say "reasonable doubt." It's a better marketing term 
at the very least.

But anyway, back to deniable encryption and what is a language-theoretic issue.

If your security model includes technical issues and policy issues, but your 
attacker has different policies, then your security might fail for 
language-theoretic reasons.

To a border control person (and that's who I was talking about), Truecrypt is 
the same thing as a suitcase with a false bottom. Technically, we'd say that it 
is a container that (assuming it works correctly) *might* have a secret 
compartment, and that one that does have a secret compartment is 
information-theoretically indistinguishable from one that does not. But if you 
read the previous sentence to a border control person, 
they might hear, "...it is a container ... that ... has a secret compartment." 

The difference is policy, not technical. If their security model includes the 
policy that there's no reason to have a suitcase with a false bottom except to 
put something in it, then how you make a denial becomes everything.

If your denial is "don't be ridiculous, I *know* you guys can spot hidden 
volumes and that's why I'd never use one -- I use it because I'm cheap" then 
you're doing well. If your denial is, "you can't prove there's a hidden volume 
there" then you're not doing so well.

My point is that there are security models out there that know about hidden 
volumes and have their own defenses against them. I used the word "defenses" 
intentionally. They are border control people. Their model considers a hidden 
volume to be an attack, not a defense. They have developed their own defenses 
against smuggling that take hidden volumes into account.

> Evidently in the case of
> http://www.ca11.uscourts.gov/opinions/ops/201112268.pdf They
> were totally unable to get information out of John Doe
> 
> For the entire case turned on the fact that John Doe never
> admitted the existence of the hidden drive, and forensics were
> ent

Re: [cryptography] US Appeals Court upholds right not to decrypt a drive

2012-02-26 Thread Jon Callas

On Feb 25, 2012, at 3:18 PM, Kevin W. Wall wrote:

> On Sat, Feb 25, 2012 at 2:50 AM, Jon Callas  wrote:
> 
> [snip]
> 
>> But to get to the specifics here, I've spoken to law enforcement and
>> border control people in a country that is not the US, who told me
>> that yeah, they know all about TrueCrypt and their assumption is
>> that *everyone* who has TrueCrypt has a hidden volume and if they
>> find TrueCrypt they just get straight to getting the second password.
>> They said, "We know about that trick, and we're not stupid."
> 
> Well, they'd be wrong with that assumption then.

Only from your point of view. From their point of view, the user is the one 
with wrong assumptions.

Remember what I said -- they're law enforcement and border control. In their 
world, Truecrypt is the same thing as a suitcase with a hidden compartment. 
When someone crosses a border (or they get to perform a search), hidden 
compartments aren't exempt. They get to search them. 

Also to them, Truecrypt is a suitcase that advertises a hidden compartment, and 
that's pretty useless, in their world.

> 
>> I asked them about the case where someone has TrueCrypt but doesn't
>> have a hidden volume, what would happen to someone doesn't have one?
>> Their response was, "Why would you do a dumb thing like that? The whole
>> point of TrueCrypt is to have a hidden volume, and I suppose if you
>> don't have one, you'll be sitting in a room by yourself for a long
>> time. We're not *stupid*."
> 
> That's good to know then. I never had anything *that* secret to protect,
> so never bothered to create a hidden volume. I just wanted a good, cheap
> encrypted volume solution where I could keep my tax records and other
> sensitive personal info. And if law enforcement ever requested the password
> for that, I wouldn't hesitate to hand it over if they had the proper
> subpoena / court order. But I'd be SOL when then went looking for a second
> hidden volume simply because one doesn't exist. Guess if I ever go out of
> the country with my laptop, I'd just better securely wipe that partion.

Or just put something in it that you can show. 

Jon



Re: [cryptography] US Appeals Court upholds right not to decrypt a drive

2012-02-24 Thread Jon Callas

On Feb 24, 2012, at 5:43 PM, James A. Donald wrote:

> Truecrypt supports an inner and outer encrypted volume, encryption hidden 
> inside encryption, the intended usage being that you reveal the outer 
> encrypted volume, and refuse to admit the existence of the inner hidden 
> volume.
> 
> To summarize the judgment:  Plausibile deniability, or even not very 
> plausible deniability, means you don't have to produce the key for the inner 
> volume.  The government first has to *prove* that the inner volume exists, 
> and contains something hot.  Only then can it demand the key for the inner 
> volume.
> 
> Defendant revealed, or forensics discovered, the outer volume, which was 
> completely empty.  (Bad idea - you should have something there for plausible 
> deniability, such as legal but mildly embarrassing pornography, and a 
> complete operating system for managing your private business documents, 
> protected by a password that forensics can crack with a dictionary attack)
> 
> Forensics felt that with FIVE TERABYTES of seemingly empty truecrypt drives, 
> there had to be an inner volume, but a strong odor of rat is no substitute 
> for proof.
> 
> (Does there exist FIVE TERABYTES of child pornography in the entire world?)
> 
> Despite forensics suspicions, no one, except the defendant, knows whether 
> there is an inner volume or not, and so the Judge invoked the following 
> precedent.
> 
> http://www.ca11.uscourts.gov/opinions/ops/201112268.pdf
> 
> That producing the key is protected if "conceding the existence, possession, 
> and control of the documents tended to incriminate" the defendant.
> 
> The Judge concluded that in order to compel production of the key, the 
> government has to first prove that specific identified documents exist, and 
> are in the possession and control of the defendant, for example the 
> government would have to prove that the encrypted inner volume existed, was 
> controlled by the defendant, and that he had stored on it a movie called 
> "Lolita does LA", which the police department wanted to watch.

There is no such thing as plausible deniability in a legal context.

Plausible deniability is a term that comes from conspiracy theorists (and like 
many things contains a kernel of truth) to describe a political technique where 
everyone knows what happened but the people who did it just assert that it 
can't be proven, along with a wink and a nudge.

But to get to the specifics here, I've spoken to law enforcement and border 
control people in a country that is not the US, who told me that yeah, they 
know all about TrueCrypt and their assumption is that *everyone* who has 
TrueCrypt has a hidden volume and if they find TrueCrypt they just get straight 
to getting the second password. They said, "We know about that trick, and we're 
not stupid."

I asked them about the case where someone has TrueCrypt but doesn't have a 
hidden volume, what would happen to someone who doesn't have one? Their response 
was, "Why would you do a dumb thing like that? The whole point of TrueCrypt is 
to have a hidden volume, and I suppose if you don't have one, you'll be sitting 
in a room by yourself for a long time. We're not *stupid*."

Jon




Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-18 Thread Jon Callas
It was (2), they didn't wait.

Come on -- every one of these devices is some distribution of Linux that comes 
with a stripped-down kernel and Busybox. It's got stripped-down startup, and no 
one thought that it couldn't have enough entropy. These are *network* people, 
not crypto people, and the distribution didn't have a module to handle 
initial-boot entropy generation.

Period, that's it. It's not malice, it's not even stupidity, it's just 
ignorance.

The answer to "what were they thinking?" is almost always "they weren't."
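The failure mode is easy to simulate. A sketch: pretend each device seeds its generator from a low-entropy boot state with only a thousand possible values, a stand-in for first-boot key generation before any real entropy has accumulated.

```python
import random

def device_keygen(boot_state: int) -> int:
    # Each "device" seeds its PRNG from a low-entropy boot value
    # (only 1,000 possible states) -- a stand-in for generating RSA
    # primes at first boot with an unseeded entropy pool.
    rng = random.Random(boot_state % 1000)
    return rng.getrandbits(64)

# 5,000 devices but at most 1,000 distinct seeds: at least 4,000
# of the keys are repeats across the fleet.
keys = {device_keygen(s) for s in range(5000)}
```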

Jon



Re: [cryptography] Applications should be the ones [GishPuppy]

2012-02-17 Thread Jon Callas

On Feb 17, 2012, at 12:14 PM, Jack Lloyd wrote:

> On Fri, Feb 17, 2012 at 11:33:15AM -0800, Jon Callas wrote:
> 
>> Really?
>> 
>> Let's suppose I've completely compromised your /dev/random and I
>> know the bits coming out. If you pull bits out of it and put them
>> into any PRNG, how is that not just Bits' = F(Bits) ? Unless F is a
>> secret function, I just compute Bits' myself. If F is a secret
>> function than the security is exactly the secrecy of F.  Jon
> 
> Sorry, perhaps I wasn't clear that my reference was to having
> additional entropy gathering code is also useful on platforms with a
> /dev/random, because your PRNG output is
>  F(Bits from /dev/random || Bits from somewhere else).
> 
> So I suppose in some sense this coincides with your second case, as
> one could view the above as F(Bits from /dev/random) where F is keyed
> with an input chosen from a non-uniform distribution, and certainly I
> concur that if you know or can easily guess both the entire output of
> /dev/random and the complete results of whatever ad-hoc system
> specific entropy gathering is available then you could in fact also
> guess the PRNG output. And I concur that if you know the /dev/random
> output then the security of the PRNG would rest entirely on the
> conditional entropy of the ad-hoc polling -- which is precisely my
> point as to why it is a useful approach, because it requires two
> things to fail instead of just one.
> 
> Additionally there is a more plausible case than you know exactly what
> bits my /dev/random will produce, which is that you know something
> about the probability distribution of the output that distinguishes it
> from uniform random. In that case, even F(Bits) could be useful if you
> are compressing down in size (eg transforming 2*N bits of input into N
> bits of key material).

Okay, I get it. That is *precisely* the case where you'd want to seed a local 
PRNG. The local PRNG is likely a good compression function as well. 
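A sketch of that mixing (the ad-hoc source shown is deliberately weak on its own; the assumption is only that SHA-256 is a decent combining and compression function):

```python
import hashlib
import os
import time

def mixed_seed() -> bytes:
    # Mix the OS RNG with ad-hoc locally gathered bits. An attacker
    # must predict BOTH inputs to predict the output.
    os_bits = os.urandom(32)                            # primary source
    adhoc = f"{time.time_ns()}:{os.getpid()}".encode()  # weak on its own
    return hashlib.sha256(os_bits + adhoc).digest()

s1, s2 = mixed_seed(), mixed_seed()
```

If /dev/(u)random is biased or compromised, the output's unpredictability rests on the conditional entropy of the ad-hoc input, which is exactly the two-things-must-fail property under discussion.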

Jon



Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-17 Thread Jon Callas

On Feb 17, 2012, at 12:41 PM, Nico Williams wrote:

> On Fri, Feb 17, 2012 at 2:39 PM, Thierry Moreau
>  wrote:
>> If your /dev/urandom never blocks the requesting task irrespective of the
>> random bytes usage, then maybe your /dev/random is not as secure as it might
>> be (unless you have an high speed entropy source, but what is "high speed"
>> in this context?)
> 
> I'd like for /dev/urandom to block, but only early in boot.  Once
> enough entropy has been gathered for it to start it should never
> block.  One way to achieve this is to block boot progress early enough
> in booting by reading from /dev/random, thus there'd be no need for
> /dev/urandom to ever block.

I can understand why you might want that, but that would be wrong with a 
capital W. The whole *point* of /dev/urandom is that it doesn't block. If you 
want blocking behavior, you should be calling /dev/random. The correct solution 
is to have early-stage boot code call /dev/random if it wants blocking behavior.

(Note that I have completely ignored an argument of why blocking is rarely a 
good idea, which is the reason people call /dev/urandom. No one said software 
engineering was easy.)

Jon



Re: [cryptography] Applications should be the ones [GishPuppy]

2012-02-17 Thread Jon Callas

On Feb 17, 2012, at 4:55 AM, Jack Lloyd wrote:

> On Thu, Feb 16, 2012 at 09:41:04PM -0600, Nico Williams wrote:
> 
>> developers agree).  I can understand *portable* applications (and
>> libraries) having entropy gathering code on the argument that they may
>> need to run on operating systems that don't have a decent entropy
>> provider.
> 
> Another good reason to do this is resiliance - an application that
> takes some bits from /dev/(u)random if it's there, but also tries
> other approaches to gather entropy, and mixes them into a (secure)
> PRNG, will continue to be safe even if a bug in the /dev/random
> implementation (or side channel in the kernel that leaks pool bits,
> etc) causes the conditional entropy of what it is producing to be
> lower than perfect. I'm sure at some point we'll see a fiasco on the
> order of the Debian OpenSSL problem with /dev/random in a major
> distribution.

Really?

Let's suppose I've completely compromised your /dev/random and I know the bits 
coming out. If you pull bits out of it and put them into any PRNG, how is that 
not just Bits' = F(Bits) ? Unless F is a secret function, I just compute Bits' 
myself. If F is a secret function then the security is exactly the secrecy of F.
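The argument can be made concrete. A sketch, with a hash-counter construction standing in for the public function F (illustrative only, not a vetted DRBG):

```python
import hashlib

def prng(seed: bytes, n: int) -> bytes:
    # Deterministic expander F: hash-counter construction.
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(seed + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

# Suppose /dev/random is compromised and the attacker knows its output.
compromised_bits = b"attacker knows these bits"
victim_key = prng(compromised_bits, 32)
# F is public, so the attacker computes exactly the same key.
attacker_key = prng(compromised_bits, 32)
```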

Jon



Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-16 Thread Jon Callas

On 16 Feb, 2012, at 3:30 AM, Bodo Moeller wrote:

> On Thu, Feb 16, 2012 at 12:05 PM, Werner Koch  wrote:
>  
> You are right that RFC4880 does not demand that the key expiration date
> is put into a hashed subpacket.  But not doing so would be stupid.
> 
> I call it a "protocol failure", you call it "stupid", but Jon calls it a 
> "feature" (http://article.gmane.org/gmane.ietf.openpgp/4557/).

That's not what I said. Or perhaps not what I meant.

I think it is indeed a feature that the expiry is a part of the certification, 
not part an intrinsic property of the key material. That permits you to do very 
cool things like rolling certification lifetimes.

Putting that into an unhashed packet is stupid, as Werner said.

Jon



Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-14 Thread Jon Callas

On 14 Feb, 2012, at 5:58 PM, Steven Bellovin wrote:

> The practical import is unclear, since there's (as far as is known) no
> way to predict or control who has a bad key.
> 
> To me, the interesting question is how to distribute the results.  That
> is, how can you safely tell people "you have a bad key", without letting
> bad guys probe your oracle.  I suspect that the right way to do it is to
> require someone to sign a hash of a random challenge, thereby proving
> ownership of the private key, before you'll tell them if the
> corresponding public key is in your database.

Yeah, but if you're a bad guy, you can download the EFF's SSL Observatory and 
just construct your own oracle. It's a lot like rainbow tables in that once you 
learn the utility of the trick, you just replicate the results. If you 
implement something like the Certificate Transparency, you have an 
authenticated database of authoritative data to replicate the oracle with.

Waving my hand and making software magically appear, I'd combine Certificate 
Transparency and such an oracle, and compute the status of the key 
as part of the certificate logs and proofs.
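The flaw such an oracle probes is cheap to demonstrate at toy scale (tiny, purely illustrative primes): when two moduli share a prime, a plain pairwise GCD recovers it and factors both, which is why a public corpus like the Observatory doubles as an oracle.

```python
from math import gcd

# Toy RSA moduli; n1 and n2 accidentally share the prime 101.
n1 = 101 * 113
n2 = 101 * 127
n3 = 131 * 137  # healthy modulus, shares nothing

def shared_factors(moduli):
    # Pairwise GCDs: any result > 1 is a common prime factor,
    # which fully factors both moduli involved.
    hits = []
    for i in range(len(moduli)):
        for j in range(i + 1, len(moduli)):
            g = gcd(moduli[i], moduli[j])
            if g > 1:
                hits.append((i, j, g))
    return hits
```

At scale one would use a product-tree batch GCD rather than the quadratic pairwise loop, but the principle is the same.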

Jon



Re: [cryptography] how many MITM-enabling sub-roots chain up to public-facing CAs ?

2012-02-14 Thread Jon Callas

On Feb 14, 2012, at 7:42 AM, ianG wrote:

> On 14/02/12 21:40 PM, Ralph Holz wrote:
>> Ian,
>> 
>> Actually, we thought about asking Mozilla directly and in public: how
>> many such CAs are known to them?
> 
> It appears their thoughts were "none."
> 
> Of course there have been many claims in the past.   But the Mozilla CA desk 
> is frequently surrounded by buzzing small black helicopters so it all becomes 
> noise.

I've asked about this, too, and the *documented* evidence of this happening is 
exactly that -- zero.

I believe it happens. People I trust have told me, whispered in my ear, and 
assured me that someone they know has told them about it, but there's 
documented evidence of it zero times.

I'd accept a screen shot of a cert display or other things as evidence, myself, 
despite those being quite forgeable, at this point.

Their thoughts of it being none are reasonably agnostic on it.

Those who have evidence need to start sharing.

Jon




Re: [cryptography] Proving knowledge of a message with a given SHA-1 without disclosing it?

2012-02-01 Thread Jon Callas

On Feb 1, 2012, at 1:49 AM, Francois Grieu wrote:

> The talk does not give much details, and I failed to locate any article
> with a similar claim.
> I would find that result truly remarkable, and it is against my intuition.
> 
> Any info on the Hal Finney protocol, or a protocol giving a similar
> result, or the (in)feasibility of such a protocol?

As I remember Hal's protocol, it requires about eight megabytes of data to be 
transferred back and forth to prove that you know the SHA1 hash. It's not so 
much to be obviously absurd, but not efficient enough to be something you'd 
want to do often.

Jon



Re: [cryptography] Well, that's depressing. Now what?

2012-01-30 Thread Jon Callas
Noon, 

When we say something is snake oil, it is a colloquialism that means not that 
the technology is unworkable, but that the claims are unjustified. 

For example, Vitamin C is not snake oil. But the claim that Vitamin C will cure 
cancer is. 

I agree with you that QKD -- and all Quantum Information Science -- is an 
exciting area of research. I in no way think that research money should be 
denied to them and I hope they come up with something cool and practical. 

But the answer to your question asking for QKD products that are not snake oil 
is the null set. There aren't any. 

This isn't because the theory or technology is crap. On the contrary, there are 
a number of interesting QKD systems built and deployed. They are snake oil 
because of the absurd claims that the cheerleaders make. They are doing 
something not unlike dropping some cancer cells into a test tube of ascorbic 
acid and then saying that someday soon Vitamin C will replace all cancer drugs. 

Among the preposterous claims made about QKD, there are:

* QKD is perfect security. There is no such thing as perfect security. Really, 
this alone ought to make QKD supporters blush. It's shooting snakes in a barrel. 

There are some practical aspects of this obviousness that are perhaps less 
obvious. Even assuming the theoretic correctness of QKD, there is essentially no 
engineering knowledge of how to assure that classes of systems have no practical 
problems, let alone that individual samples are free of manufacturing flaws. We 
don't know how to test a deployment, nor verify that a running system is running 
correctly. In contrast, 
we actually know a lot about the warts in a mathematical crypto system. The 
pissing and moaning that folks like us regularly give about crypto is an 
indication that the discipline is reasonably well-defined. We know enough to 
know a lot about what we don't know. 

* QKD will replace mathematical cryptography. Even backing this off to "could" 
as we've all pointed out, the economics of the situation will always favor the 
math. Take the very same dedicated glass fiber they put the QKD system on and 
replace it with an IPSec tunnel. It's cheaper. Ian makes this economic argument 
quite strongly. It is hard to see the circumstance when one would use QKD even 
working as advertised. I think this drives some of the absurd claims I mention 
above, and that itself tends towards snake oil. 

* A combination of ignorance and arrogance. QKD is so caught up in the tech 
that it ignores the security. For example, the problem of denial of service 
is elided away. The most magical thing about QKD is that a potential 
eavesdropper causes the bits to melt away like the smile of a Cheshire Cat. But 
what if your attacker thinks that disruption is good enough? 

QKD addresses only the problem of information in motion. It is only 
communications security, not storage security. (Which is another reason that 
the claim that QKD can replace math is so herpetoleogenous.) Even in COMSEC, 
there are difficulties of authenticity, group communications, routing, and so 
on. Cryptography is not just point-to-point communications between trusted 
endpoints. 

Compare this with what's going on in particle physics and cosmology, such as 
the search for the Higgs Boson and (separately) dark matter. There is 
excitement and drama that one only sees a few times a century. Last month 
supersymmetry seemed on the outs; this month it's back in again, depending on 
what the data says. The quest for dark matter is so all over the place that you 
know this is real science. 

To repeat myself from my previous missive, QKD proponents seem to think 
that disagreement means a lack of understanding, or hostility to the 
proposition, or perhaps even hostility to the very idea of scientific 
research. These are themselves the speech patterns of proponents of snake oil, 
and beyond that, of things I'll just call "fringe" science. When people play gotcha 
over language and explain away experiments, it contributes to the funny smell. 

I hope this helps explain our harrumphing. 

Jon 


Re: [cryptography] Well, that's depressing. Now what?

2012-01-27 Thread Jon Callas

On Jan 27, 2012, at 5:22 PM, Noon Silk wrote:

> 
> So why didn't one of these "real world" people point this out, to
> researchers? It's a bit too easy to claim something as obvious when
> someone just told you.

There are any number of us who have been quantum skeptics for years, and the 
responses that have come back to us have been essentially that the fact that we 
were skeptical showed ipso facto that we didn't know what we were talking 
about. The quantum folks have just insisted that doubting quantum cryptography 
was like doubting evolution or gravity.

Nonetheless, as prettily fragrant as the schadenfreude is this evening, I'm not 
sure I buy this paper, either. I'm immediately reminded of Clarke's First Law. 
(Not the technology and magic one, but one about elderly and distinguished 
scientists making predictions.)

The quantum crypto people have earned contempt from us math people by 
high-handedly dismissing any operational concerns, by fake competition -- 
insisting on the false dilemma that quantum and mathematical techniques are 
product and technological competitors, and even in the very *word* 
"cryptography." Quantum cryptography is not cryptography. It is an amazing bit 
of physics. In the last few years, they've backed off to "quantum key 
distribution," but "quantum *secrecy*" would be more accurate, less snake 
oil, and far cooler than either term.

Heck, just this week, an article "Quantum mechanics enables perfectly secure 
cloud computing" showed up on physorg.com at 
.
 It manages to put the same snake oil into the very headline by using the word 
"perfect." It's been a relatively few days since I read something else where 
they were claiming that devices to do quantum crypto to mobile devices are 
around the corner, unironically including the trusted third party in the middle 
that acts as a key router. That one's perfect, too.

I can hardly wait to see the rebuttals to this paper.

Jon



Re: [cryptography] Password non-similarity?

2011-12-27 Thread Jon Callas

On Dec 27, 2011, at 12:54 PM, Jeffrey Walton wrote:

> Hi All,
> 
> We're bouncing around ways to enforce non-similarity in passwords over
> time: password1 is too similar too password2 (and similar to
> password3, etc).
> 
> I'm not sure its possible with one way functions and block cipher residues.
> 
> Has anyone ever implemented a system to enforce non-similarity business rules?

What's your goal? Is it to make passwords really, really unguessable by any 
means other than brute force?

If it is, then I agree with Steve Bellovin. A Bloom filter can minimize the 
usability of your system and cause the greatest likelihood that your users will 
forget their new password and need it reset. After a while, they'll give up and 
go somewhere else.
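For what it's worth, the Bloom-filter idea is easy to sketch. This toy version is mine, not Steve's; the bit-array size and probe count are arbitrary illustration, and a real deployment would size them for its false-positive budget:

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: k hash probes into an m-bit array.
    Membership tests can give false positives, never false negatives."""
    def __init__(self, m_bits=1 << 16, k=4):
        self.m = m_bits
        self.k = k
        self.bits = bytearray(m_bits // 8)

    def _probes(self, item):
        # Derive k bit positions from SHA-256 of a counter plus the item.
        for i in range(self.k):
            h = hashlib.sha256(b"%d:" % i + item.encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._probes(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._probes(item))

# Remember previously used passwords without storing them in the clear.
history = BloomFilter()
history.add("password1")
print("password1" in history)    # True: it's in the history
print("password987" in history)  # almost certainly False
```

Note that the filter stores only hashed bit positions, so the old passwords themselves never sit in a readable list; the price is an occasional false "you've used that before."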

If you want to speed up the process, be sure to track down all the databases of 
passwords from hacked web sites and compare the user's new passwords to those. 
I also recommend using some regexps in there, too. Be sure to make sure that it 
doesn't follow with something like [0-9\-!@#$%^&*,./]{1,12} as well, because 
that's easily hacked. Any password in those databases is equivalent to the null 
string, for security purposes and they really need a 12-character password for 
top-notch security anyway. Be sure also to keep the last hundred used 
passwords. With a policy using advanced similarity testing, it will be easier 
for someone to change it a half-dozen times than create a new one, and you want 
to prevent that. 

Be sure also to force a password change for everyone any time there's a 
personnel change in your IT organization, because those people could have 
walked off with everyone's password.

Do your goals include the user experience at all? Are these people your 
customers? Are you in an industry where you have competitors? If so, maybe you 
don't want to do any enforced password changes. First of all, every time they 
get annoyed, there's a chance they'll go to a competitor. Second of all, 
there are very good reasons that many security people (including me) think that 
enforced password changes actually lower security. Whether they do or not, a 
change sends the message that the user's own personal security is better than 
yours. I suppose one could debate that one, too, but I'm on that side, myself.

On the other hand, if they're your employees, screw them. I mean, they should 
just be thankful to have a job at all, and the increased help tickets -- 
particularly ones that can be quickly resolved, like lost passwords -- make 
your metrics better.

Jon



Re: [cryptography] How are expired code-signing certs revoked?

2011-12-18 Thread Jon Callas

On Dec 18, 2011, at 10:19 AM, M.R. wrote:

> On 2011-12-07 16:31, Jon Callas wrote:
>> There are many things about code signing that I don't think I understand.
> 
> same here.
> 
> But I do understand something about the code creation, dissemination
> and the trust between code creator and code user ("primary parties"),
> and the role of the operating system vendor (a "tertiary party") as
> an intermediary between the code creator and the code user.
> 
> With that said, I propose that "code signing" and then enforcing some
> kind of "use sanctioning" protocol by the operating system vendor is
> an idiotic idea, and fortunately one that has been proven as completely
> impractical and ill-aligned with the interest of the two primary parties, and 
> thus continually rejected in practice.
> 
> What should be "signed" and "trusted" (or not trusted) is not the code,
> but the channel by which the code is distributed.

Which is precisely what can't be done, in the general case.

It's really, really, doable in the singular case. If the channel signs the code 
(which is what Apple does on the App Store), then sure, Alice is your auntie. 

But when developer D has code they sign *themselves* with a cert given from 
signatory S, and delivered to marketplace M, you end up with some sort of 
DSM-defined insanity. There's no responsibility anywhere. The worst, though, is 
to go to the signer and say, "This is another fine mess you've gotten me into, 
Stanley."

Jon


Re: [cryptography] How are expired code-signing certs revoked?

2011-12-11 Thread Jon Callas
On 10 Dec, 2011, at 11:58 PM, Peter Gutmann wrote:

> Jon Callas  writes:
> 
>> If someone actually built such combination of OS and marketplace, it would
>> work for the users very well, but developers would squawk about it. Properly
>> done, it could drop malware rates to close to nil.
> 
> Oh, developers would do more than squawk about it.  Both Java and .NET
> actually support the capability-based security that you mentioned, but it's so
> painful to use that it's either turned off by default (.NET's 'trust
> level="Full"') or was turned off after massive developer backlash (Java).
> Even the very minimal capabilities used by Android are failing because of the
> dancing bunnies and confused deputy problems, and because developers request
> as close to any/any as they can get just in case (exacerbating the confused
> deputy problem).
> 
> (One of the nice things about Android is that it's fairly easy to decompile
> and analyse the code, so there have been all sorts of papers published on its
> capability-based security mechanisms using this technique.  It's serving as a
> nice real-world empirical evaluation of failure modes of capability-based
> security systems.  I'm sure someone could get a good thesis out of it at some
> point).
> 
>> Properly done, it could drop malware rates to close to nil.
> 
> Objection, tautology: Properly done, any (malware-related) security measure
> would drop malware rates close to nil.  The problem is doing it properly...
> 

Yes, doing it properly is the key and I'll assert that Apple is doing a pretty 
good approximation of it. They are doing more or less what I described -- good 
coding enforcement backed up with digital signatures. There are plenty of 
people squawking about it. I know developers who've thrown up their hands and 
there is plenty of grumpiness I've heard. Some of it is reasonable grumpiness, too.

But the end result for the users is that the malware rate is close to zero. The 
system is by no means perfect, and has side-effects. But the times when 
something slipped through the net are so few that they're notable still. (And 
some of the malware has been kinda charming, like the flashlight app that had a 
hidden SOCKS proxy that let people use it for tethering.) More importantly, the 
system does not throw things at the users that they're incapable of handling, 
like the Android way of just informing you what capabilities an app needs. 
People can and do just hand devices to their kids and let them use them with no 
ill effects.

Jon




Re: [cryptography] How are expired code-signing certs revoked?

2011-12-10 Thread Jon Callas

On 9 Dec, 2011, at 9:15 PM, Peter Gutmann wrote:

> Jon Callas  writes:
> 
>> If it were hard to get signing certs, then we as a community of developers
>> would demonize the practice as having to get a license to code.
> 
> WHQL is a good analogy for the situations with certificates, it has to be made
> inclusive enough that people aren't unfairly excluded, but exclusive enough
> that it provides a guarantee of quality.  Pick any one of those two.
> 
> (I have a much longer analysis of this, a bit too much to post here, but
> there's a long history of vendors gaming WHQL and the certifiers looking the
> other way, just as there is with browser vendors looking the other way when a
> CA screws up, although in the case of hardware vendors the action is
> deliberate rather than accidental).

Sure, and that's why the assurance system and the signatures have to be tied 
together and the incentives have to be aligned. In a software market where the 
app store itself is doing the validation, doing the enforcement, signing the 
code, and taking the responsibility for both delivering the software and 
backfilling the inevitable errors, you'll see the *system* lower malware. But 
even in that, it's the system that's doing it, not digital signatures. The 
signatures are merely the wax seals. The quality system has to be built to 
create and deliver quality. That is the sine qua non of this whole thing.

I think we agree that trying to build quality by giving certificates to 
developers is a fantasy at best.

Jon



Re: [cryptography] How are expired code-signing certs revoked?

2011-12-10 Thread Jon Callas

On 9 Dec, 2011, at 2:08 PM, Steven Bellovin wrote:

> 
> On Dec 9, 2011, at 3:46:18 PM, Jon Callas wrote:
> 
>> 
>> On 8 Dec, 2011, at 8:27 PM, Peter Gutmann wrote:
>> 
>>> In any case getting signing certs really isn't hard at all.  I once managed 
>>> it 
>>> in under a minute (knowing which Google search term to enter to find caches 
>>> of 
>>> Zeus stolen keys helps :-).  That's as an outsider, if you're working 
>>> inside 
>>> the malware ecosystem you'd probably get them in bulk from whoever's 
>>> dealing 
>>> in them (single botnets have been reported with thousands of stolen keys 
>>> and 
>>> certs in their data stores, so it's not like the bad guys are going to run 
>>> out 
>>> of them in a hurry).
>>> 
>>> Unlike credit cards and bank accounts and whatnot we don't have price 
>>> figures 
>>> for stolen certs, but I suspect it's not that much.
>> 
>> If it were hard to get signing certs, then we as a community of developers 
>> would demonize the practice as having to get a license to code.
>> 
> Peter is talking about stolen certs, which for most parts of the development
> community aren't a prerequisite...  But there's an interesting dilemma here
> if we insist on all code being signed.
> 
> Assume that a code-signing cert costs {$,€,£,zorkmid}1/year.  Everyone but
> large companies would scream.  Now assume the cost is {$,€,£,zorkmid}.01/year
> or even free.  At that price, it's a nuisance factor, and would be issued via
> a simple web interface.  Simple web interfaces are scriptable (and we all know
> the limits of captchas), which means that malware could include a "get a cert"
> routine for the next, mutated generation of itself.  In fact, they're largely
> price-insensitive, since they'd be programmed with a stash of stolen credit
> cards

Well said, Steve, and that's largely my point. Any code signing system that 
wants to survive the scrutiny of people like us has to essentially hand out 
certs for free. (And there are regimes that in fact do that, and work well.)

Therefore, you must assume that any given cert might have been stolen or simply 
issued to a bad person. The sketch I gave previously shows how code-signing is 
useful even with zero trust in the signing certs. You detect malware by 
signatures, hashes, and keys, and *then* by content scanning, and build up 
whitelists and blacklists. It isn't perfect, but it works better than mere 
content scanning alone.
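That layered check might look something like this sketch. The key IDs and lists here are invented for illustration; a real system would feed them from telemetry and stolen-cert reports:

```python
import hashlib

# Hypothetical example lists -- not real data.
BLACKLISTED_KEYS = {"key-stolen-from-botnet"}
WHITELISTED_HASHES = {hashlib.sha256(b"trusted build v1.0").hexdigest()}
BLACKLISTED_HASHES = {hashlib.sha256(b"known malware sample").hexdigest()}

def classify(binary, signer_key_id):
    """Cheap checks first (signer identity, hash lists); fall back to
    slower content scanning only for unknowns."""
    digest = hashlib.sha256(binary).hexdigest()
    if signer_key_id in BLACKLISTED_KEYS or digest in BLACKLISTED_HASHES:
        return "block"
    if digest in WHITELISTED_HASHES:
        return "allow"
    return "content-scan"

print(classify(b"trusted build v1.0", "dev-key-A"))     # allow
print(classify(b"anything", "key-stolen-from-botnet"))  # block
print(classify(b"new upload", "dev-key-B"))             # content-scan
```

The point of the ordering is exactly the zero-trust argument above: the signature and hash layers are fast and cheap even when you trust no cert, and content scanning is the expensive last resort, not the only line of defense.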

Any code-signing system that assigns worth to the developers based upon 
certificate issuance is effectively agreeing with those who think that 
evil-doing can be stopped by ID cards and an attendant ban on 
anonymity/pseudonymity.

There is, however, a way to surpass the loosie-goosy signature system. If the 
software marketplace itself were to issue signatures, then you'd have a way to 
get an improvement. You'd really want to back up the mere digital signature 
with an assurance system. If the marketplace enforced code reviews, and backed 
up the code reviews with a capability-based OS so that they could enforce some 
practices (for example, you could keep a Disgruntled Birds game from turning on 
the microphone and camera with capabilities and a code review), then you would 
expect such a software marketplace to have a dramatically lower rate of 
malware. 

They'd have that lower malware rate because in that system, the digital 
signature is an assurance mark on top of an actual assurance system. The debate 
that we're having boils down to wringing our hands over how to make an 
assurance mark that assures quality with no underlying assurance evaluation and 
enforcement.

If someone actually built such a combination of OS and marketplace, it would work 
for the users very well, but developers would squawk about it. Properly done, 
it could drop malware rates to close to nil.

But anyway, you're absolutely right, and that's really my point. A code signing 
system that operates on its own has to assume that certs get lost, stolen, or 
handed out to bad (or misguided) people, and have that as part of its threat 
model.

Jon



Re: [cryptography] How are expired code-signing certs revoked?

2011-12-09 Thread Jon Callas

On 7 Dec, 2011, at 1:32 PM, Peter Gutmann wrote:

>  writes:
> 
>> Another wrinkle, at least as a logic problem, would be whether you can revoke
>> the signing cert for a CRL and what, exactly, would that mean
> 
> That's actually a known problem (at least to PKI people).  So what you're
> really asking is whether a self-signed root cert can revoke itself, since a
> lower-level cert can always be revoked by a higher-level one:
> 
>  The handling of CA root certificates is particularly problematic because
>  there's no effective way to replace or revoke them.  Consider what would be
>  required to revoke a CA root certificate.  These are self-signed, which
>  means that the certificate would be revoking itself.  In the presence of
>  such a revocation applications can react in one of three ways: they can
>  accept the CRL that revokes the certificate as valid and revoke it, they can
>  reject the CRL as invalid because it was signed by a revoked certificate, or
>  they can crash (and some applications will indeed crash in this situation).
>  Since revocation of a self-signed certificate is the PKI version of
>  Epimenides' paradox "All Cretans are liars" and PKI applications are unlikely
>  to be coded to deal with self-referential paradoxes, crashing is a perfectly
>  valid response.

Maybe this is syntactically true, or even code-wise true, but this sounds 
crazed.

OpenPGP has the same problem, since all users are CAs, and revocation has to 
come from a cert itself (or a delegated revoker).

If you have a certificate issue a revocation for itself, there is an obvious, 
correct interpretation. That interpretation is what Michael Heyman said, and 
what OpenPGP does. That certificate is revoked and any subordinate certificates 
are also implicitly revoked. It's also like making a CRL for everything you 
issued.
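The interpretation is mechanical enough to sketch. In this toy model (the chain and names are illustrative), a cert is treated as revoked if it, or anything above it, is revoked -- so a cert revoking itself implicitly revokes everything it issued:

```python
# Hypothetical toy chain: issuer -> certs it issued.
ISSUED = {
    "root": ["intermediate"],
    "intermediate": ["leaf-a", "leaf-b"],
}
REVOKED = {"intermediate"}  # the intermediate revoked itself

def effectively_revoked(cert, issued=ISSUED, revoked=REVOKED):
    """A cert is revoked if it, or any issuer above it, is revoked."""
    if cert in revoked:
        return True
    for issuer, subjects in issued.items():
        if cert in subjects:
            return effectively_revoked(issuer, issued, revoked)
    return False

print(effectively_revoked("leaf-a"))  # True: its issuer revoked itself
print(effectively_revoked("root"))    # False: nothing above it is revoked
```

There's no self-referential paradox in this reading: the revocation statement was validly signed while the key was still trusted, and from then on the cert and its subtree are simply dead.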

If a software implementation did any of the other things, like crash, it's 
pretty obviously a bug. If a developer defended crashing, or accepting any 
relevant certs, on the grounds that the revocation is not well-formed 
first-order logic, we'd yell at that developer.

Jon




Re: [cryptography] How are expired code-signing certs revoked?

2011-12-09 Thread Jon Callas

On 8 Dec, 2011, at 8:27 PM, Peter Gutmann wrote:

> In any case getting signing certs really isn't hard at all.  I once managed 
> it 
> in under a minute (knowing which Google search term to enter to find caches 
> of 
> Zeus stolen keys helps :-).  That's as an outsider, if you're working inside 
> the malware ecosystem you'd probably get them in bulk from whoever's dealing 
> in them (single botnets have been reported with thousands of stolen keys and 
> certs in their data stores, so it's not like the bad guys are going to run 
> out 
> of them in a hurry).
> 
> Unlike credit cards and bank accounts and whatnot we don't have price figures 
> for stolen certs, but I suspect it's not that much.

If it were hard to get signing certs, then we as a community of developers 
would demonize the practice as having to get a license to code.

Jon



Re: [cryptography] How are expired code-signing certs revoked?

2011-12-07 Thread Jon Callas

On 7 Dec, 2011, at 11:34 AM, ianG wrote:

> 
> Right, but it's getting closer to the truth.  Here is the missing link.
> 
> Revocation's purpose is one and only one thing:  to backstop the liability to 
> the CA.

I understand what you're saying, but I don't agree.

CAs have always punted liability. At one point, SSL certs came with a huge 
disclaimer in them in ASCII disclaiming all liability. Any CA that accepts 
liability is daft. I mean -- why would you do that? Every software license in 
the world has a liability statement in it that essentially says they don't even 
guarantee that the software contains either ones or zeroes. Why would 
certificates be any different?

I don't think it really exists, not the way it gets thrown around as a term. 
Liability is a just a bogeyman -- don't go into the woods alone at night, 
because the liability will get you!

Jon



Re: [cryptography] How are expired code-signing certs revoked?

2011-12-07 Thread Jon Callas
> Originally, public key systems were said to deliver this property of 
> 'nonrepudiation', meaning a digital signature could effectively authenticate 
> the intent of the party associated with the private key. However, today such 
> a large percentage of endpoint systems (on which the private keys are held) 
> are infected with info-stealing malware that most everyone has plausible 
> deniability about what is signed with their private keys. (Exceptions being 
> perhaps hardware systems that have not been hacked yet and "trust" vendors 
> whose organizations are specifically built on their expertise at handling 
> private keys.)
> 
> So current revocation schemes attempt to preserve nonrepudiation in an 
> attempt to make digital signatures more like binding ink signatures on a 
> contract.
> 
> But automated systems checking for signatures are usually authenticating 
> server certs or validating signed code for execution. In these cases, we 
> definitely need the party who has been compromised to be able to repudiate 
> the evil things that have been been signed by their private key.
> 
> So it seems to me that PKI systems were designed with some sort of 
> leagalistic contract-binding model in mind, when in turns out in practice 
> that security (even of ecommerce transactions) depends more on an efficient 
> repudiation mechanism than the prevention of it!

Marsh, you've hit on a few good points.

The main one is that one of the original purposes of digital signatures is to 
make it possible to sign a contract between parties that are not physically 
present. That actually works quite well. But there's been mission creep into 
absurdity and that happened nearly immediately in the development of digital 
signatures.

Nonrepudiation is one of these. I think that the very idea of nonrepudiation 
goes back to Leibniz, who thought we could get rid of judges and solve disputes 
with, "Gentlemen, let us calculate!" That isn't going to happen, and we only 
have to wave towards Messrs. Russell, Whitehead, Goedel, and Turing (Hi, guys!) 
and move on.

Nonrepudiation is a somewhat daft belief. Let me give a gedankenexperiment. 
Suppose Alice phones up Bob and says, "Hey, Bob, I just noticed that you have a 
digital signature from me. Well, ummm, I didn't do it. I have no idea how that 
could have happened, but it wasn't me." Nonrepudiation is the belief that the 
probability that Alice is telling the truth is less than 2^{-128}, assuming a 
3K RSA key or a 256-bit ECDSA key, either one with SHA-256. Moreover, if that 
signature was made with a 521-bit ECDSA key and SHA-512, then the probability 
she's telling the truth goes down to 2^{-256}.

I don't know about you, but I think that the chance that Alice was hacked is 
greater than 1 in 2^128. In fact, I'm willing to believe that the probability 
that somehow space aliens, or Alice has an unknown evil twin, or some mad 
scientist has invented a cloning ray is greater than one in 2^128. Ironically, 
as the key size goes up, then Alice gets even better excuses. If we used a 
1k-bit ECDSA key and a 1024-bit hash, then new reasonable excuses for Alice 
suggest themselves, like that perhaps she *considered* signing but didn't in 
this universe, but in a nearby universe (under the many-worlds interpretation 
of quantum mechanics, which all the cool kids believe in this week) she did, 
and that signature from a nearby universe somehow leaked over. 

This absurd-excuse paradox means that if you *really* believe in 
non-repudiation, you need not only to avoid keys that are too small, but too 
large.
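If you want to see the arithmetic made explicit, here it is; the one-in-a-million figure for Alice's endpoint being hacked is a deliberately generous guess of mine, not data:

```python
from fractions import Fraction

# Forgery probability implied by "nonrepudiation" at two security levels.
p_forge_128 = Fraction(1, 2**128)  # 3K RSA / 256-bit ECDSA with SHA-256
p_forge_256 = Fraction(1, 2**256)  # 521-bit ECDSA with SHA-512

# A (generously low) guess at the chance Alice was simply hacked.
p_hacked = Fraction(1, 1_000_000)

# The "I was hacked" explanation dwarfs the cryptographic bound...
print(p_hacked > p_forge_128)      # True
# ...and bigger keys only make Alice's excuses relatively stronger.
print(p_forge_256 < p_forge_128)   # True
```

Exact rational arithmetic makes the comparison unambiguous: no plausible estimate of real-world compromise comes within dozens of orders of magnitude of 2^{-128}.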

Now, in the real world, Alice might repudiate the signature, but pay Bob 
anyway. Or Bob might just accept Alice's excuse because there are reasonable 
chances something odd happened (like Alice got hacked). Or Bob might take Alice 
to court, where a judge or jury would assess a constellation of things 
including the reasonableness of the contract, Alice and Bob's individual 
reputations, and also some defaults (a five-dollar charge might be presumed to 
be disputable, and a million-dollar property purchase assumed to not be 
disputable).

We got to this problem through some reasonable and unreasonable natural human 
things. We inherently distrust new technologies. There was a time when you 
couldn't fax a legal document. Then we got used to it. Today, most places will 
accept an emailed PDF of a scan of a document, but not all. There are a few 
amusing situations where you take a scan, print it, then fax the paper and it's 
a legal document, but not that PDF itself, either digitally signed or not.

Nonrepudiation is really an argument that this math combined with some rituals 
makes bits as good as a fax.

Intent is another good point. Contract law and practice has intent wired 
through it all over the place. Trust is also a huge can of worms, as well as 
possibly not even being definable.

If we step back, though, this is similar to the code-signing discussion in that 
there'

Re: [cryptography] How are expired code-signing certs revoked?

2011-12-07 Thread Jon Callas
>> 
>> I think it is a policy question. If I were making a software development 
>> system that used certificates with both expiration dates and revocation, I 
>> would check both revocation and expiry. I might consider it either a warning 
>> or an error, or have it be an error that could be overridden. After all, how 
>> can you test that the revocation system on the back end works unless you can 
>> generate revoked software?
> 
> I'm not sure what you mean.

By policy, I mean that you decide what it's supposed to mean, which is what you 
get to at the end of this. 

But the rest of the paragraph reflects that I am a systems developer, and if I am 
trying to debug why something that is revoked is (or isn't) working when it 
shouldn't (or should), then I have to create things that are error conditions.

I also once worked on a secure microprocessor, and there were many ways to 
permanently kill it. 

>> 
>> On a consumer-level system, I might refuse to install or run revoked 
>> software; that seems completely reasonable. Refusing to install or run 
>> expired software is problematic -- the thought of creating a system that 
>> refuses to work after a certain date is pretty creepy, and the workaround is 
>> to set the clock back. 
> 
> Yup.  In fact, it's more than creepy, it's an open invitation to Certain 
> Software Vendors to *enforce* the notion that you just rent software.

I know I wouldn't buy such a system.

>> 
>> But really, it's a policy question that needs to be answered by the creators 
>> of the system, not the crypto/PKI people. We can easily create mechanism, 
>> but it's impossible to create one-size-fits-all policy.
>> 
> Right now, I'm speaking abstractly.  I'm not concerned with current PKIs or 
> pkis or business models or what have you.  If you'd prefer, I'll rephrase my 
> question like this: Assume that there is some benefit to digitally-signed 
> code.  (Note carefully that I'm not interested in how the recipient gets the 
> corresponding public key -- we've already had our "PKI is evil discussion" 
> for the year.)  Given that there is a non-trivial probability that the 
> private signing key will be compromised, what are the desired semantics once 
> the user learns this.  (Again, I'm saying nothing about how the user learns 
> it -- CRLs or OSCP or magic elves are all (a) possible and (b) irrelevant.)  
> If the answer is "it depends", on what does it depend?  Whose choice is it?  
> 
> Let's figure out what we're trying to accomplish; after that, we can try to 
> figure out how to do it.

I think that's the central problem we're dealing with. There are scads of 
mechanism and little policy.

I also don't think we're going to agree on what policy should be, except within 
limited contexts.

Jon



Re: [cryptography] How are expired code-signing certs revoked?

2011-12-07 Thread Jon Callas

On 7 Dec, 2011, at 8:52 AM, Steven Bellovin wrote:

> 
> On Dec 7, 2011, at 11:31:23 AM, Jon Callas wrote:
>> 
>> 
>> But really, I think that code signing is a great thing, it's just being done 
>> wrong because some people seem to think that spooky action at a distance 
>> works with bits.
> 
> 
> The question at hand is this: what is the meaning of expiration or revocation
> of a code-signing certificate?  That I can't sign new code?  That only affects
> the good guys.  That I can't install code that was really signed before the
> operative date?  How can I tell when it was actually signed?  That I can't
> rely on it after the specified date?  That would require continual resigning
> of code.  That seems to be the best answer, but the practical difficulties
> are immense.

I want to say that the answer is "mu" because you can't actually revoke a 
certificate. That's not satisfying, though.

I think it is a policy question. If I were making a software development system 
that used certificates with both expiration dates and revocation, I would check 
both revocation and expiry. I might consider it either a warning or an error, 
or have it be an error that could be overridden. After all, how can you test 
that the revocation system on the back end works unless you can generate 
revoked software?

On a consumer-level system, I might refuse to install or run revoked software; 
that seems completely reasonable. Refusing to install or run expired software 
is problematic -- the thought of creating a system that refuses to work after a 
certain date is pretty creepy, and the workaround is to set the clock back. 

But really, it's a policy question that needs to be answered by the creators of 
the system, not the crypto/PKI people. We can easily create mechanism, but it's 
impossible to create one-size-fits-all policy.
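The mechanism/policy split can be made concrete. Here is a minimal sketch, with all names and the policy table hypothetical, of how an installer might map certificate state to an action under the two policies described above (developer tooling vs. a consumer device); it is an illustration of the argument, not anyone's actual implementation:

```c
#include <assert.h>

/* Certificate state as reported by the back end (hypothetical). */
enum cert_state { CERT_OK, CERT_EXPIRED, CERT_REVOKED };

/* Who is running the installer: a developer tool or a consumer device. */
enum policy { POLICY_DEVELOPER, POLICY_CONSUMER };

enum action { ACTION_ALLOW, ACTION_WARN, ACTION_DENY };

/* The mechanism is identical in both cases -- check expiry and
 * revocation -- but what we *do* with the answer is pure policy. */
static enum action check_signature(enum cert_state state, enum policy pol)
{
    switch (state) {
    case CERT_OK:
        return ACTION_ALLOW;
    case CERT_EXPIRED:
        /* A device that bricks itself on a date is creepy, and the
         * workaround is setting the clock back; warn instead. */
        return ACTION_WARN;
    case CERT_REVOKED:
        /* Developers need to produce revoked builds to test the
         * back end, so for them it is an overridable warning. */
        return pol == POLICY_DEVELOPER ? ACTION_WARN : ACTION_DENY;
    }
    return ACTION_DENY;
}
```

The point of the sketch is that the table inside the switch is the policy, and nothing in the crypto machinery dictates what it should contain.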

Jon



Re: [cryptography] How are expired code-signing certs revoked?

2011-12-07 Thread Jon Callas
There are many things about code signing that I don't think I understand.

I think that code-signing is a good thing: all things being equal, code should 
be signed.

However, there seem to be strange, mystical beliefs about it.

As an example, there's the notion that if you have signed code and you revoke 
the signing key (whatever revoke means, and whatever a key is) then the 
software will automagically stop working, as if there's some sort of quantum 
entanglement between the bits of the code and the bits of the key, and 
invalidating the key therefore invalidates the code.

This seems to me to be daft -- I don't see how this *could* work in a general 
case against an attacker who doesn't want that code to stop working (and that 
attacker could be either a malware writer or the owner of the computer). I can 
see plenty of special cases where it works, but it is fundamentally not 
reliable and a security system that wants to stop malware or whatever by 
revoking keys is even less reliable because we now have three or four parties 
(malware writer, machine owner, certifier, anti-virus maker).

It also seems to me that discussions on this list hit this situation from two 
strange directions. One is the general sneering at the daft belief. The other 
is continuing to discuss it. I don't care who is using it (even effectively); 
we're all smart enough to know both that DRM cannot work, and yet there are 
users of it that are happy with it. Whatever.

Slightly tangential to this is a discussion of expiration of signing keys. In 
reality, they don't expire. Unless you make a device that can be 
permanently broken by setting the clock forward (which is certainly possible, 
merely not desirable), then expiry can be hacked around. The rough edge of what 
happens to code that expires while it is executing generalizes out to a set of 
other problems that just show that in fact, you can't really expire a code 
signing key any more than you can revoke it -- that is to say there are many 
edge conditions in which it works, and many of these are useful to some people 
in some circumstances, but in the general case, it doesn't and cannot work.

But that doesn't mean that code signing is a bad thing. On the contrary, code 
signing is very useful because you can use the key, the signature, or the hash 
as a way to detect malware and form a blacklist, as well as detect software 
that should be whitelisted.

Simply stated, an anti-malware scanner can detect (and remove) a specific piece 
of malware by the simple technique of comparing its signature to a blacklist. 
It can compare a single object's hash to a list of hashes and that only 
requires the scanner to hash the code object; this catches the simple case of 
malware that is merely re-signed with a new key. It also permits it to do more 
complex operations than a simple hash (like hashing pieces, or hash at 
different times) to identify a piece of malware. It can also use the key to 
detect whole classes of malware (or good-ware).
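The blacklist lookup itself is the easy part once you have a digest. A minimal sketch follows; the hex digests are placeholders, and a real scanner would first hash the code object itself (e.g. with SHA-256 via a crypto library) and would use a proper set structure rather than a linear scan:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical blacklist of known-bad code-object digests (hex strings). */
static const char *blacklist[] = {
    "d41d8cd98f00b204e9800998ecf8427e",
    "9e107d9d372bb6826bd81d3542a419d6",
};

/* Return 1 if the given digest appears on the blacklist, else 0. */
static int is_blacklisted(const char *digest)
{
    size_t i;
    for (i = 0; i < sizeof(blacklist) / sizeof(blacklist[0]); i++) {
        if (strcmp(digest, blacklist[i]) == 0)
            return 1;
    }
    return 0;
}
```

The same shape works for a whitelist of known goods, or keyed on the signing key rather than the object hash to catch whole classes at once.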

Code signing is good because it gives the anti-malware people a set of tools 
that augment what they have with some easy, fast, effective ways to categorize 
software as known goods or known bads. 

But that's it -- you don't get the spooky action at a distance aspects that 
some people think you can do with revocation. You get something close, if you 
feed the blacklist/whitelist information to whatever the code-scanner is. 
Nonetheless, this answers how you deal with signed malware (once it's known to 
be malware, you stop it via signature), or bogus 512-bit signing keys (just 
declare anything signed by such a key to be treated either as malware or as 
unsigned).

So am I missing something? I feel like I'm confused about this discussion 
because *of* *course* you can't revoke a key and have that magically transmit 
to software. Perhaps some people believe that daft notion and have built 
systems that assume that this is true. So what? Maybe it works for them. The 
places where it doesn't work aren't even interesting. Perhaps observing when 
this daft notion meets the real world is helpful as an object lesson. Perhaps 
it works for *them* but not *us*.

But really, I think that code signing is a great thing, it's just being done 
wrong because some people seem to think that spooky action at a distance works 
with bits.

Jon



Re: [cryptography] really sub-CAs for MitM deep packet inspectors? (Re: Auditable CAs)

2011-12-06 Thread Jon Callas

On 6 Dec, 2011, at 3:43 AM, ianG wrote:

> The promise of PKI in secure browsing is that it addresses the MITM.  That's 
> it, in a nutshell.  If that promise is not true, then we might as well use 
> something else.

Is it?

I thought that the purpose of a certificate was to authenticate the server to 
the client. This is a small, but important difference. If you properly 
authenticate the server, then (one hopes) that we've tacitly eliminated both an 
impersonation attack and a MiTM (an MiTM is merely a real-time, two-way 
impersonation).

The problem is that we're authenticating the server by naming, and there are 
many entities with a reason to lie about names. There are legitimate and 
illegitimate reasons to lie about names, and while we know that it's going on, 
we don't have a characterization of what reality even *is*.

We're seeing this in this very discussion. I also want to see proof that this 
is going on. I know it is, but I want to see it. These bogus certs are a lot 
like dark matter -- we know they're there, but we have little direct 
observation of them.

Jon



Re: [cryptography] Digest comparison algorithm

2011-12-03 Thread Jon Callas

On Dec 2, 2011, at 7:27 PM, Marsh Ray wrote:

> On 12/01/2011 05:31 PM, Jon Callas wrote:
>> 
>>  for (i = 0; i < min(digest.length, secret.length); i++) {
>>      failure |= (digest[i] != hash[i]);   // Check each byte for non-match
>>  }
>> 
>>  return failure == 0;   // return true if we didn't fail. Yeah, confusing.
> 
> Again, the problem with this is a sufficiently smart compiler may optimize 
> this into a shortcut loop termination at the first mismatch. Or so I hear.
> 
> I'd look closely at DJB's library.

I'd doubt it, but more of a reason to use the XOR trick, or even just inline 
everything.

Jon



[cryptography] No one bothers cracking the crypto (real life edition)

2011-12-01 Thread Jon Callas
http://pauldotcom.com/2011/11/cracking-md5-passwords-with-bo.html

"BozoCrack is a depressingly effective MD5 password hash cracker with almost 
zero CPU/GPU load. Instead of rainbow tables, dictionaries, or brute force, 
BozoCrack simply finds the plaintext password. Specifically, it googles the MD5 
hash and hopes the plaintext appears somewhere on the first page of results.

It works way better than it ever should."



Re: [cryptography] Newbie Question

2011-12-01 Thread Jon Callas

On Dec 1, 2011, at 8:43 PM, Randall Webmail wrote:

> From: "ianG" 
> 
> >It does store certs.  It just takes above & beyond to get at them.  
> Unknown whether it stores certs that you reject.
> 
> I spend a lot of time in hotels, and it is VERY common for me to get one of 
> those popups complaining about certificates when I connect to the hotel WiFi.
> 
> I am an almost-complete greenie WRT crypto, which is why I'm here to learn.
> 
> What is the proper thing to do when one of those things pops up?   (It is NOT 
> a rare event).
> 
> I use the "https everywhere" firefox extension on my OSX laptop.   I do not 
> access my bank accounts on public WiFi, but I really don't have a choice but 
> to access webmail and gmail.  What should I do when I get one of those cert 
> warnings?

Click "Cancel" and then try again.

The usual reason for the message is that some network client has bumped up 
against the captive portal and gotten back either a network error or a plain 
HTTP response where an SSL handshake was expected -- a completely 
protocol-illegal answer. The client then interprets it as an SSL error when 
it's really nothing but the captive portal.

But you want to click cancel, because if there's someone who wants to hack you, 
that's how they'd do it.

Jon




Re: [cryptography] Digest comparison algorithm

2011-12-01 Thread Jon Callas
On Dec 1, 2011, at 3:53 PM, Alfonso De Gregorio wrote:

> 
> If the attacker has direct control over the challenge/digest, the side
> channel may turn to be observable. The attacker could query adaptively
> the authentication server and exploit the timing information to
> recover the hashed secret - gaining access. If the hash is not salted,
> a secret preimage can be found with a TMTO attack.
> 

Potentially yes, indeed. But the logic that you use to prevent that might also 
have timing issues.

If I were writing in C, I might do something slightly evil like just compare 16 
bytes regardless, but that could cause problems in a language like Java, which 
might throw an exception if the challenge is short. There's the additional 
problem that unless you compare an algorithm ID, too, there's the chance that 
you'd get a cross-hash collision (one where the first 16 bytes of a SHA-256 
hash match the MD5). I didn't even address the question of why MD5 was being 
used for this without an HMAC, as I took that as a constraint.

It also occurred to me that there are architectures where 
comparison/subtraction isn't constant time (a negative result takes an extra 
micro-op) and if you're really anal, you should use an idiom like:

failure |= x ^ y;

and compare to zero at the end. You could even do this with sizes larger than a 
byte, if you can somehow cast a byte array into something larger, and then just 
inline the whole thing.

So it's really a more involved question than it appears at first blush -- and 
that's why crypto is hard!

Jon


