Re: [cryptography] Lavabit's and Snowden's Solos

2016-03-06 Thread Jeffrey Goldberg
On 2016-03-05, at 5:17 AM, John Young <j...@pipeline.com> wrote:

> Lavabit's brief for Apple has the gutsiest skin in the game, going solo,
> no joining a pack.

It is indeed the LavaBit case that terrifies me. And while I and other
people who work at AgileBits made personal statements, there was no
company response. In contrast, we did send out one tweet about
Apple/FBI. https://twitter.com/1Password/status/700059313599983616

(I wasn’t available to help with the wording of that, so yeah, I know
that it actually isn’t about encryption.)

For what it is worth, that tweet went out well before the ...

> […] the fattest of strutting corporate cats […]

started defending Apple.

Again, I wasn’t available for that decision making, but I wouldn’t
be surprised if we were more “courageous” here than with LavaBit,
exactly because there is more safety in standing with a big, popular,
influential corporate giant than in making a strong public
declaration about LavaBit.

So as much as you might wish to condemn our selectiveness here, you
should also look at this more positively. There was a lot of unexpressed
horror about LavaBit that is now being expressed as a more convenient
opportunity has come along.

I also think that it is because of LavaBit that we have all been watching
out for the next case. We lost one round, and we certainly weren’t going to
let the next one go down without a fight.

I think the Feds made a tactical error in picking on Apple at this time.
They should have done more LavaBit-esque things against smaller entities
to establish more cases. On the other hand, the “perfect” terrorism case
fell in their laps, so they jumped on that one.

Cheers,

-j

--
Jeffrey Goldberg
Chief Defender Against the Dark Arts @ AgileBits


smime.p7s
Description: S/MIME cryptographic signature
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Hi all, would like your feedback on something

2015-12-29 Thread Jeffrey Goldberg
On Dec 23, 2015, at 2:18 AM, Brian Hankey  wrote:
> 
> I sent a long winded reply that has been stuck in moderation for a couple of 
> days

I believe that this is because you are sending email with a text/html part.
Most mailing lists will reject such things.

>> Ah, so you want the user to remember something specific for each site.

> No… remember a rule that can be used to transform the name of the site in 
> some way.

So the attacker only needs to discover this rule once. And the rule is going to
be the same sort of thing that crackers already use. You’d be surprised at
how many passwords in the LinkedIn dataset were things like Iink3d1n.

Crackers would barely need to change their transforms to go after that.

And once they’ve cracked one of them, they will know your rule for all of them.
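To make that concrete, here is a sketch (an illustrative rule set of my own devising, not the scheme under discussion) that enumerates every password a handful of coin-flip transform rules can produce for one site:

```python
from itertools import product

LEET = str.maketrans("aeio", "4310")  # the classic substitutions crackers try first

def variants(site):
    """Enumerate passwords producible by five coin-flip transform rules.

    Each boolean toggles one rule (drop vowels, leet substitution,
    append TLD, uppercase, reverse), so the whole scheme adds at most
    2**5 = 32 guesses per site -- trivial for any cracker to cover.
    """
    name, _, tld = site.partition(".")
    for drop_vowels, leet, keep_tld, upper, rev in product([False, True], repeat=5):
        s = name
        if drop_vowels:
            s = "".join(c for c in s if c not in "aeiou")
        if leet:
            s = s.translate(LEET)
        if keep_tld:
            s += tld
        if upper:
            s = s.upper()
        if rev:
            s = s[::-1]
        yield s

guesses = set(variants("linkedin.com"))
print(len(guesses))                 # at most 32 distinct guesses
print("l1nk3d1n" in guesses)        # the leet variant is in there
```

Once one cracked password reveals which toggles the user picked, every other site's password follows immediately.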

>>> Not a master password and not simply “site name” could be as follows:  
>>> Perform a transform on the site name, one that is easy to remember but hard 
>>> to guess.  It seems to me there could really be a lot of variations here 
>>> still.
>> 
>>> Char for letter substitutions, exclude vowels, double vowels, exclude 
>>> consonants, double consonants, cases, do you include or not include the 
>>> subdomain, do you include or not include the top level domain, do you 
>>> include the (.), do you append anything to it and so on.
>> 
>> Each one of those decisions is roughly one bit (except for “what you 
>> append”). So you’ve got 8 bits in there, if you are equally likely to choose 
>> each alternative (say, flip a coin for each decision).
>> 
>> I am going to guess that if people are expected to remember scores of these, 
>> they will make the same decisions for each. So the individual remembered 
>> passwords unique for each site will not be independent of each other. (And 
>> are highly guessable).
>> 
>> Every single one of these transformations is part of the standard rule sets 
>> that come with password cracking tools such as John the Ripper or Hashcat.
>> 
> 
> Yea that’s true. I am vaguely familiar with Ripper. But there are a lot of 
> rules you can combine.  If you are doing the same for every site you can 
> combine many “rules” and have it not be that hard to remember. Are you sure 
> the possible combinations do not provide more than 8 bits?  

Please take a look at something I wrote a while back:

  
https://blog.agilebits.com/2011/08/10/better-master-passwords-the-geek-edition/

>> 
>> Except that if one of those constructed hashes is captured (as plaintext), 
>> then someone running a cracker against it can figure out what your 
>> remembered secret is. From that, they can make very very good guesses about 
>> your system for constructing those.
> 
> I am not sure I understand how they could do that unless they know what 
> system I used to create the hashed passwords in the first place, which seems 
> like a pretty big assumption to me.  Since we’ve already decided that a 
> password system like this is too cumbersome and inconvenient to become mass 
> market, why would a random person assume that I used such a system in the 
> first place?

Using a password generation scheme that is unlike what other people use does 
offer you some protection against non-targeted attacks. But if your scheme only 
works as long as few people use it, then you shouldn’t be advocating it.

And as for more targeted attacks, you’ve already described your scheme in a 
public place.

I like to take a Kantian approach to password generation schemes: they should
remain good even if lots of people use them. Offering advice that becomes bad if
people actually follow it isn’t really good advice, is it?

> Because remember - my password if stored in plain text is a hash of four 
> other hashes that were stuck together and hashed.  

And remember: that is just a one-round pre-hash to an attacker. You seem to
think that prehashing your password endows it with magical powers. So as I
said,

>> Even if a breached site hashes their passwords, a cracker who suspects that 
>> you are using your system will just tune their hash cracker to first run 
>> through your hashing and then the site’s.
>> 
>> That is, if you have 
>> 
>> user knows P, a low entropy password for site i
>> 
>> User: prehash := Hc(P)   // Hc is client’s hashing scheme
>> User: Send prehash to server as password
>> Server: h := Hs(prehash)// Hs is server’s hashing scheme
>> 
>> then a password cracker just needs to run their guesses, P’, through 
>> Hs(Hc(P’)) == h
>> 
>> It really isn’t any more trouble than what they already do in password 
>> hashing. Your remembered constant PIN and your very low entropy remembered 
>> site-specific password remain easily crackable. And once one is cracked, the 
>> PIN is revealed and large parts of the user’s “memorable scheme” are revealed.
>> 
>> The only advantage of the prehash is if a site stores passwords unhashed 
>> (which happens). In the paper that I pointed to, they do use PBKDF2 in the 
>> client hashing, which helps some in that case.

Re: [cryptography] Hi all, would like your feedback on something

2015-12-20 Thread Jeffrey Goldberg
On 2015-12-20, at 4:33 AM, Brian Hankey  wrote:

> Let me make sure that I have been clear about what I propose,

Thank you. I may very well have entirely misunderstood what your system did, as
reading a bunch of PHP and JavaScript embedded within some HTML doesn’t really
communicate things clearly.


> because this is as much about how to easily remember the unique passwords as 
> it is about the amateurish demo we made…  You have four inputs to this 
> algorithm:
> 
> 1) String 1: This can be anything but I propose an easy way to remember 
> something that is unique.

Ah, so you want the user to remember something specific for each site.

> Not a master password and not simply “site name” could be as follows:  
> Perform a transform on the site name, one that is easy to remember but hard 
> to guess.  It seems to me there could really be a lot of variations here 
> still.

>  Char for letter substitutions, exclude vowels, double vowels, exclude 
> consonants, double consonants, cases, do you include or not include the 
> subdomain, do you include or not include the top level domain, do you include 
> the (.), do you append anything to it and so on.

Each one of those decisions is roughly one bit (except for “what you append”). 
So you’ve got 8 bits in there, if you are equally likely to choose each 
alternative (say, flip a coin for each decision).

I am going to guess that if people are expected to remember scores of these, 
they will make the same decisions for each. So the individual remembered 
passwords unique for each site will not be independent of each other. (And are 
highly guessable).

Every single one of these transformations is part of the standard rule sets 
that come with password cracking tools such as John the Ripper or Hashcat.


> You could even add more and still be easy to remember… add D0G to the end if 
> the service starts with a vowel and C@T if it begins with a consonant, nothing 
> if it’s a number.  If we talk about www.gmail.com we could get something like:
> MoClIaMgWwW  or gmAIl or LiaMG or WWW.gMaIl.COM and so on and so forth. 
> Perhaps even running *just* this through a hash and then ensuring an upper a 
> lower and a special char would be sufficient alone to be much more secure 
> than any average password?

Except that if one of those constructed hashes is captured (as plaintext), then 
someone running a cracker against it can figure out what your remembered secret 
is. From that, they can make very very good guesses about your system for 
constructing those.

Even if a breached site hashes their passwords, a cracker who suspects that you 
are using your system will just tune their hash cracker to first run through 
your hashing and then the site’s.

That is, if you have 

user knows P, a low entropy password for site i

User: prehash := Hc(P)   // Hc is client’s hashing scheme
User: Send prehash to server as password
Server: h := Hs(prehash)// Hs is server’s hashing scheme

then a password cracker just needs to run their guesses, P’, through Hs(Hc(P’)) 
== h

It really isn’t any more trouble than what they already do in password hashing. 
Your remembered constant PIN and your very low entropy remembered site-specific 
password remain easily crackable. And once one is cracked, the PIN is revealed 
and large parts of the user’s “memorable scheme” are revealed.
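A sketch of the attack, with plain SHA-256 standing in for both Hc and Hs (real deployments differ, but the composition is the point):

```python
import hashlib

def Hc(p: str) -> str:
    """Client's prehash (stand-in: a single SHA-256 round)."""
    return hashlib.sha256(p.encode()).hexdigest()

def Hs(prehash: str) -> str:
    """Server's password hash (stand-in: SHA-256 again; real sites vary)."""
    return hashlib.sha256(prehash.encode()).hexdigest()

# What the server ends up storing after the user registers with low-entropy P:
P = "gmAIl1010!"
h = Hs(Hc(P))

# The cracker simply composes the two hash steps over each guess P'.
# The extra client-side round adds almost nothing to the attack cost.
wordlist = ["password1", "gmail1234", "gmAIl1010!", "LiaMG42"]
recovered = next(p for p in wordlist if Hs(Hc(p)) == h)
print(recovered)  # "gmAIl1010!"
```

The guess list still only has to cover the low-entropy remembered secret, exactly as in ordinary password cracking.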

The only advantage of the prehash is if a site stores passwords unhashed (which 
happens). In the paper that I pointed to, they do use PBKDF2 in the client 
hashing, which helps some in that case.


> 2) String 2: This can be anything also but I propose some kind of a number 
> (or PIN) that could be the same or could have some additional trick added to 
> it such as 1010 + (1 if site starts with vowel or 2 if site starts with 
> consonant). You can be as fancy or simple here as you like but there are 
> still things you can do to make it not the same every time while still easy 
> to remember. Balance between convenience and security as you wish.

Again, I think the same objection applies.


> 3) Special char (optional): Everybody can remember one favorite special char 
> easily. (Perhaps this field could be eliminated though, read further).

Being generous, let’s say that people are equally likely to pick among 10 special 
characters (I’ll bet that “*” and “!” will actually show up more often). That 
gives you another 3 bits. And again, once discovered once, it is discovered for 
all passwords.


> 4) Version: defaults to 1 but nothing would stop a user from starting at any 
> other number such as a meaningful date or some other obvious thing. The point 
> here is that by incrementing the number you can deal with forced password 
> changes but keep the rest the same. Probably the least convenient of part of 
> this whole scheme but that’s one of the many reasons I sought out some input. 
>  In any event I use a LOT of different services and don’t find this to be so 
> common an occurrence, perhaps 

Re: [cryptography] LastPass have been hacked, so it seems.

2015-06-16 Thread Jeffrey Goldberg
[Disclosure: I work for AgileBits, the makers of 1Password]

On 2015-06-16, at 10:53 AM, John R. Levine jo...@iecc.com wrote:

> Are there any password managers that let the user specify where to store a
> remote copy of the passwords (FTP server, scp, Dropbox, whatever) while
> keeping the crypto and the master password on the end devices?

With 1Password the answer is technically “yes”, but in practice it is more of
“sort of”.

If you are just using 1Password on desktop machines, then you can sync however
you wish using anything that will look like a filesystem.

But when you need to sync with 1Password on mobile devices the choices are
reduced because 1Password doesn’t get to see a normal filesystem. For “cloud”
based synching, there is Dropbox and iCloud on iOS and Dropbox on Android.

However, there is a local “wifi sync” mechanism that lets you sync between
desktop and mobile over a local wifi network.

> Seems to me that would limit the cloudy trust problem while still addressing
> the very real problem of a zillion accounts used from multiple devices.

Genuine efficient and reliable sync is hard. We’ve worked so that as much sync
and conflict resolution as possible can happen on fully encrypted data, so that
the slow part can be done even while 1Password is locked. But some conflict
resolution has to wait until the user unlocks 1Password.

At any rate, we never have any of your data in any form whatsoever. Our goal has
been “we can’t lose, use, or abuse” data that we don’t have.

However, to make syncing work smoothly, we do end up strongly encouraging the
use of Dropbox. At the same time, we’ve designed 1Password with the
expectation that attackers will capture your encrypted data one way or another,
and that sync services (and your own hard drives) can be compromised.

I should point out that while we get some very nice security properties by not
being a service you log into (your master password is only ever used for
encryption), it does mean that we can’t offer some of the flexibility that
something like LastPass can.

Cheers,

-j





Re: [cryptography] Unbreakable crypto?

2015-03-22 Thread Jeffrey Goldberg
On 21 Mar 2015, at 22:24, Lee wrote:

> On 3/21/15, Jeffrey Goldberg jeff...@goldmark.org wrote:
>> (1) the file isn't secret
>
> But the fact that I'm using it as my one-time pad is.  Why isn't that
> good enough?

As others have already answered, your key is knowledge of which
publicly available file to use as the pad. But for a OTP to have
the security that an OTP offers, the key must be as long as the message
itself. Your key is much shorter.

Just as with using a PRNG to generate a pad, you are using a short
key to generate/identify a long pad. Your system can be no more
secure than the size of your key. (The size of what must be kept
secret.) Remember, you aren't keeping the file secret; you are keeping
the name of the file secret. So it is a short key.
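A back-of-the-envelope calculation makes the point; the billion-file figure is an assumed (and generous) bound on how many public files an attacker might have to try:

```python
import math

# The real key is "which public file is the pad". Even granting the
# attacker must consider a billion candidate files, that key is tiny:
candidate_files = 10**9
key_bits = math.log2(candidate_files)      # about 30 bits of secret

# A genuine OTP for a 1 MB message needs a key as long as the message:
pad_bits = 1_000_000 * 8                   # 8,000,000 bits of key material
print(round(key_bits, 1), pad_bits)
```

Thirty-odd bits versus eight million: the "pad" is doing none of the work; the short, guessable key is carrying all of it.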

>> (2) the file isn't random.
>
> Right.  An ISO file is a bad choice - too many zeros & machine code
> isn't very random.  But what about something like an MP3, OGV or some
> other compressed file?

Again, no. If you want the security properties offered by an OTP,
the pad/key must be truly random. So if you need a pad that is
a million bytes (eight million bits) long, then the particular pad
you use must be no more likely than any other string of eight million
bits.

>> I'm sorry to pick on you, but you've illustrated a point I tried to make
>> earlier. The OTP is a simple idea that is remarkably easy for people to
>> misunderstand.
>
> It doesn't feel like you're picking on me - I appreciate the feedback :)

Great.

A point I've been making is that the OTP (and other systems) are brittle.
By this I mean that if you don't follow the rules to letter you can end
up with a system that is extremely weak. A small variation on the protocol
can lead to catastrophic results.

Any simulation of a OTP that isn't a OTP itself will not have the security
properties of an OTP. And any simulation that is not designed very carefully
will end up being far weaker than the actual cryptographic systems we have
today.

So remember, one of the requirements of a OTP is that the key itself (the
stuff that you need to keep secret) must be as long as the message. When I
say that the key must be kept secret, I mean the key/pad itself. Not the 
identity of the key/pad.

Another property is that the key/pad must be truly random. Appearing random
is not enough. It must truly be random.

And yet a third requirement is that the pad never be reused.

Break any of those rules, and you not only no longer have a OTP,
but you probably have something that is easily broken.
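For contrast, here is a minimal sketch of an OTP that does follow the rules. (One caveat: `secrets` draws from the OS CSPRNG, which is only an approximation of a truly random pad; a real OTP would use a true entropy source.)

```python
import secrets

def otp_encrypt(message: bytes) -> tuple[bytes, bytes]:
    """One-time pad by the rules: the pad is exactly as long as the
    message, comes from the strongest randomness the OS offers, and
    must be used once and then destroyed."""
    pad = secrets.token_bytes(len(message))          # full-length key material
    ciphertext = bytes(m ^ p for m, p in zip(message, pad))
    return ciphertext, pad

def otp_decrypt(ciphertext: bytes, pad: bytes) -> bytes:
    # XOR is its own inverse, so decryption is the same operation.
    return bytes(c ^ p for c, p in zip(ciphertext, pad))

ct, pad = otp_encrypt(b"attack at dawn")
assert otp_decrypt(ct, pad) == b"attack at dawn"
```

Note what this costs: the secret you must transport and protect (the pad) is as large as everything you will ever send, which is exactly why the OTP is rarely practical.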

There are good crypto systems in use which generate pseudo-random
pads from keys that are 128 (or 256) bits in length. But these are
– at best – no better than the length of their keys.

Cheers,

-j



Re: [cryptography] Unbreakable crypto?

2015-03-22 Thread Jeffrey Goldberg

On 22 Mar 2015, at 9:48, Michael Kjörling wrote:

> On 22 Mar 2015 09:36 -0500, from jeff...@goldmark.org (Jeffrey
> Goldberg):
>
>> There are good crypto systems in use which generate pseudo-random
>> pads from keys that are 128 (or 256) bits in length. But these are
>> – at best – no better than the length of their keys.
>
> Which is, admittedly, _quite good enough_ for almost any _practical_
> purpose that an individual is likely to face.


Oh, absolutely. I am perfectly happy with 128 bit keys.

Indeed, I'm very much on record in defending 128 bit keys in
the face of customer demand for 256 bits.

 
https://blog.agilebits.com/2013/03/09/guess-why-were-moving-to-256-bit-aes-keys/


I was just trying to distinguish between perfect secrecy and
everything else (without going into any discussion of asymptotic
security). I think that people who first learn about the OTP
are infatuated with perfect secrecy, and fail to see what is really
involved.

Although I sympathize with Greg Rose's lament that we are beating
a long dead horse, I think that it is worthwhile to try to understand
why so many people seem to learn (something) about the OTP and then
badly reinvent stream ciphers. And I want to kill off the meme that
is popular in some circles that the only unbreakable cipher is the
OTP.

And so I see it as a teaching moment. Thus, if I may repeat
what others have said, I too recommend Dan Boneh’s online
Cryptography course to Lee and anyone else who doesn’t immediately
see why we all so emphatically screamed “No” to these OTP modifications.

Cheers,

-j


Re: [cryptography] Unbreakable crypto?

2015-03-21 Thread Jeffrey Goldberg
[Apologies for quoting badly]

No!  A thousand times no. 

(1) the file isn't secret
(2) the file isn't random. 

I'm sorry to pick on you, but you've illustrated a point I tried to make 
earlier. The OTP is a simple idea that is remarkably easy for people to 
misunderstand. 



Sent from my iPhone

On Mar 21, 2015, at 3:13 PM, Lee ler...@gmail.com wrote:

> On 3/20/15, Michael Kjörling mich...@kjorling.se wrote:
>> On 20 Mar 2015 15:11 -0400, from kevinsisco61...@gmail.com (Kevin):
>>> I was tempted by the promise of software to run a one-time pad on my
>>> machine.  I am a fool and I fall upon my own sword.
>>
>> An unauthenticated one-time pad is trivial to implement; it's
>> literally a few lines of code in any reasonably modern language, and a
>> handful of lines of code in less modern ones.
>>
>> The hard part, as has been pointed out in this thread, is to generate
>> and handle the _pad_.
>
> Would a commonly available large binary file make a good one-time pad?
> Something like ubuntu-14.10-desktop-amd64.iso maybe..
>
> Regards,
> Lee




Re: [cryptography] Unbreakable crypto?

2015-03-20 Thread Jeffrey Goldberg
On 2015-03-20, at 1:24 PM, stef s...@ctrlc.hu wrote:
> On Fri, Mar 20, 2015 at 06:12:31PM +, Dave Howe wrote:
>> Or a reasonably clever and trolling satire on snakeoil products. :)
>
> the less optimistic alternative is this being a well-crafted water-holing site
> targeted at the members of this mailing-list.

Hi Stef,

I believe I’ve also seen this raised on sci.crypt, which is
spectacularly easy to troll.

I really WANT to believe it is a deliberate troll-like thing. But
the sad fact of the matter is that a huge number of people who
learn a little about the OTP think that they can create unbreakable
crypto, and they end up

(1) Using a crappy PRNG.
(2) Seeding/keying their crappy PRNG badly.
(3) Failing to notice/address the malleability of these things.
(4) Reusing the key/pad.

So whether a troll or not, that is the kind of snake oil that people
sincerely produce.

I like using the OTP as an example of how brittle some schemes are. Doing
things “slightly” wrong can lead to dramatic reductions in security.

Cheers,

-j




Re: [cryptography] Javascript Password Hashing: Scrypt with WebCrypto API?

2015-03-13 Thread Jeffrey Goldberg
On Mar 13, 2015, at 8:43 AM, Solar Designer so...@openwall.com wrote:

> On Thu, Mar 12, 2015 at 10:57:47AM -0600, Jeffrey Goldberg wrote:
>> 2. Use SHA-512 in PBKDF2
>>
>> This will make PBKDF2 resistant to GPU based cracking efforts.
>> Note that this is resistance to attacks using current, off-the-shelf,
>> hardware. It is only a short term solution.
>
> I think this wording is too strong.  While I did and I continue to
> advocate SHA-512 over SHA-256 for this reason (when someone insists on
> PBKDF2 or the like anyway), the gap with recent attack implementations
> is narrower than it used to be.

Ah, so the term of this “short term solution” is already expiring.

> For sha512crypt vs. sha256crypt, it's
> down to ~2x:
>
> https://hashcat.net/misc/p130_img/changes_v130.png

Interesting. Thank you for that, Solar.

> And scrypt even at fairly low settings is likely somewhat stronger (or
> rather not-as-weak) against GPU attacks than PBKDF2-HMAC-SHA-512 at
> comparable low running time.  Not at settings as low as Litecoin's 128 KB
> with r=1, but at settings like 2 MB with r=8, which is affordable in
> JavaScript.

OK. So I guess we return to the original question: does anyone know of
a scrypt implementation in JavaScript?

> BTW, given the wide availability of scrypt altcoin ASICs, some of which
> can handle higher N (this is known) but likely not higher r (this is a
> plausible guess, given the incentive model for those ASICs), and given
> the effect r has on scrypt speeds on GPU, I recommend that scrypt
> paper's recommended r=8 (rather than altcoins' typical r=1) be used.
> That's even when the original reason for using r=8 (reducing the
> frequency and thus performance impact of TLB misses, and allowing for
> some prefetching) does not apply, like it mostly does not with
> JavaScript.

Thanks!
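To make the “2 MB with r=8” arithmetic concrete: scrypt’s working set is roughly 128·N·r bytes, so 2 MB with r=8 corresponds to N=2¹¹. A sketch using Python’s hashlib purely for illustration (the salt shown is a placeholder; use a real random per-user salt):

```python
import hashlib

N, r, p = 2**11, 8, 1
assert 128 * N * r == 2 * 1024 * 1024      # the ~2 MB working set discussed above

key = hashlib.scrypt(b"correct horse battery staple",
                     salt=b"per-user-random-salt",   # placeholder salt
                     n=N, r=r, p=p,
                     maxmem=64 * 1024 * 1024, dklen=32)
print(key.hex())
```

The same N with r=1 (the altcoin-style setting) would use only 256 KB, which is part of why r=8 is the better choice for password hashing.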
 
> (Of course, someone may produce more capable scrypt ASICs.)


Indeed. As I said, in this race the attacker has more to gain from Moore’s
Law than the defender.

Cheers,

-j


Re: [cryptography] SRP 6a + storage of password's related material strength?

2015-03-13 Thread Jeffrey Goldberg
On Mar 13, 2015, at 3:25 AM, Fabio Pietrosanti (naif) - lists 
li...@infosecurity.ch wrote:

> SRP is a very cool authentication protocol, not yet widely deployed, but
> with very interesting properties.

Indeed it is.

> I'm wondering how strong is considered the storage of the password's
> related material strength?

As others have said, these are separate properties. SRP is independent
of the KDF. It does not solve or address the problem of password cracking.

> I mean, from a passive/offline brute forcing perspective, how can be
> compared scrypt vs. SRP's server-side storage of passwords?

As others have said, this is like comparing AES with PBKDF2. They
address different problems.

> Does anyone ever considered that kind of problem?

Yes. I have, but nothing written up yet.

One (of several) advantages of SRP is that the password is never
sent as plaintext to the server. This removes the server’s opportunity
to capture the password, and so makes it harder for the server to
“be evil”.

So this may still be a worthwhile thing for you to pursue, even if it doesn’t
change the fact that you are storing stuff that needs to be kept secret
because it can be cracked.

Also note that if you are delivering the SRP routines to the client
in a web browser, then this gains you nothing, as a compromised
server could just deliver malicious JavaScript. That is, your delivery
system is vulnerable to the same attacks that you are trying to
defend against by using SRP.

> Because SRP protocol is cool, but i'm really wondering if the default
> methods are strong enough against brute forcing.

Forgive the repetition of what I and others have said: SRP has nothing
to say about brute forcing.

Cheers,

-j




Re: [cryptography] Javascript Password Hashing: Scrypt with WebCrypto API?

2015-03-12 Thread Jeffrey Goldberg
Leaving aside the “Crypto in JS delivered over the web: Don’t do it”, I will 
offer
a couple of suggestions.

> at GlobaLeaks we're undergoing implementation of client-side encryption
> with server-side storage of PGP Private keys.

I understand why you are looking for ways to make this less scary. As you
probably know, this makes your servers a very juicy target.

> Obviously the hashing to be used for storing such PGP private keys has
> to be strong enough, with valuable key-stretching approach.

Yes. Though I’m not sure that hashing will be enough.

> We're now considering using Scrypt with some finely tuned parameters,
> but we've concern regarding it's performance in the browser as a JS
> implementation.

Yes. That is going to be a problem. (I will offer an alternative approach
below.)

> PBKDF2 is available from WebCrypto API

I don’t know what your time-line is, but I believe that only
Chrome Canary actually implements this at the moment.


> and, as far as i read and
> understand but i'm not that low-level-crypto expert, is used internally
> to scrypt.

Although scrypt makes some use of PBKDF2, you won’t be able to
simply build scrypt out of PBKDF2, nor will you be able to build
it from the WebCrypto primitives alone.

> Does anyone know of any scrypt implementation that try to leverage the
> WebCrypto API?

Even if you need the whole client-side crypto delivered in the browser,
I don’t think that you will find scrypt in JS useful, as the performance means
that you will not be able to set parameters in a way that will thwart the
kinds of attackers that you can expect.

So I’m going to make multiple proposals that can be adopted independently
of each other.

1. Split password hashing between server and client.

Have the client do as many PBKDF2 rounds as you can get away with, and then
use the result of that as an input to your use of scrypt server side.
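A sketch of that split (round counts and scrypt parameters are illustrative assumptions, not recommendations; the server-side half is shown with Python’s hashlib even though the client half would be JS in practice):

```python
import hashlib, os

def client_stretch(password: str, salt: bytes) -> bytes:
    # Client side: as many PBKDF2-HMAC-SHA512 rounds as the browser
    # tolerates. 100_000 here is an illustrative figure.
    return hashlib.pbkdf2_hmac("sha512", password.encode(), salt, 100_000)

def server_stretch(prehash: bytes, salt: bytes) -> bytes:
    # Server side: the client's output, never the raw password,
    # feeds scrypt before anything is stored.
    return hashlib.scrypt(prehash, salt=salt, n=2**14, r=8, p=1,
                          maxmem=64 * 1024 * 1024, dklen=32)

salt = os.urandom(16)
stored = server_stretch(client_stretch("hunter2", salt), salt)
```

The split means an attacker who captures the stored value must pay both costs per guess, and a server that logs its inputs still never sees the raw password.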

2. Use SHA-512 in PBKDF2

This will make PBKDF2 resistant to GPU based cracking efforts.
Note that this is resistance to attacks using current, off-the-shelf, 
hardware. It is only a short term solution.

3. Use a second factor.

Client side, you can combine the processing of the user’s password
with some data from some second factor (stored in a file on a USB
device or the like). Of course if they lose that data, they will be locked
out forever.

This is really the thing that will make it impossible for attackers
who get copies of your stored data to be able to decrypt what you
have stored.

A couple of notes:

Things like PBKDF2 and scrypt will never protect you from
well-resourced attackers. This is because the costs to the
defender and the attacker are of the same order, and so the advantage
goes to whoever can throw more resources at the task.
As computing gets cheaper, the advantage shifts towards the attacker.
This is unlike other kinds of security parameters, where the work needed
by the attacker rises exponentially compared to the polynomial cost to
the defender.

You are correct to want client-side crypto, but because you are delivering
the crypto from the web, you are not providing the benefits of client side
crypto. Someone who gains control of the server you deliver the JS from
or gains control in transmission can deliver a malicious client to the user
and capture everything they need.

This isn’t about JavaScript, but it is about how easy it is for an attacker
(or you) to provide a malicious client to your users.

Cheers,

-j






[cryptography] OT: SORBS is not censorship [Was: John Gilmore: Cryptography list is censoring my emails]

2015-01-01 Thread Jeffrey Goldberg
On Dec 31, 2014, at 5:16 AM, John Young j...@pipeline.com wrote:

> http://cryptome.org/2014/12/gilmore-crypto-censored.htm

I would say that I am sorry for this off-topic rant, but if I were sufficiently
sorry, I wouldn’t be sending it.

I used to be a postmaster for a medium-sized university and
worked as a consultant helping small and medium sized organizations
set up their email. I’ve even managed to piss off enough spammers to
have been targeted for attacks by them. (This is all long ago.)

A DNS-based Real Time Blocking List (RBL), like sorbs.net, does not do any
censorship or blocking itself. It merely publishes a list of IP addresses
based on published criteria. Individual email administrators set up their
systems to consult that list and take action as they see fit.
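Mechanically, “consulting that list” is just a DNS lookup: the mail server reverses the IP’s octets and queries under the list’s zone; getting an answer back means “listed”, and what to do about it remains entirely the admin’s choice. A sketch of the name construction (zone name shown for illustration):

```python
def dnsbl_query_name(ip: str, zone: str = "dnsbl.sorbs.net") -> str:
    # Reverse the octets and append the DNSBL zone; the mail server then
    # does an ordinary A-record lookup on the resulting name.
    return ".".join(reversed(ip.split("."))) + "." + zone

print(dnsbl_query_name("209.237.225.253"))
# 253.225.237.209.dnsbl.sorbs.net
```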

So if 209.237.225.253 is listed in SORBS (unless it is listed in error, which
can also happen), then it meets at least some of the criteria for listing. These
can include spam being repeatedly sent from it (with the owner’s consent or
otherwise), or hosting pages advertised by spam.

Typically an attempt has been made to contact the system or network
admin.

Now postmas...@example.com may choose to not accept SMTP connections
from such parts of the network. Alice is free to send or support spam
on her part of the network, and Bob is free to refuse to accept SMTP connections
from Alice’s bit of the network.

Now some sites and networks offer “bullet proof hosting”. That is, the network
admins and hosting providers there simply send abuse reports to /dev/null. As
a consequence a great deal of net abuse comes from such portions of the network.

A postmaster using SORBS, as Bob does, is no more engaging in censorship than running a firewall is.

If Alice thinks that her network is listed incorrectly (does not meet the SORBS
listed criteria) there is a process for reporting that to SORBS.

If she thinks that the SORBS listing criteria should not be used for blocking
SMTP connections in general, then she can contact Bob, who chose to
block based on SORBS listing. In this case Bob is the administrator of
mail1.piermont.com.

If Alice thinks that an exception should be made for her system or email, she
can ask Bob to whitelist her otherwise blacklisted net.

Cheers,

-j



Re: [cryptography] Is KeyWrap (RFC 3394) vulnerable to CCAs?

2014-12-24 Thread Jeffrey Goldberg
Following up on my own question:

 On Dec 24, 2014, at 3:44 PM, Jeffrey Goldberg jeff...@goldmark.org wrote:
 
 My big question is whether use of Key Wrap (RFC 3394) is recommended or not.

If I want provable security, then I should use a generic AEAD construction, but
there is nothing known to be wrong with Key Wrap.

 My intuition is that the integrity check (see section 2.2.3 of 
 http://www.ietf.org/rfc/rfc3394.txt )
 does more harm than good in providing necessary integrity checks.

My intuition was wrong. This is designed to prevent adaptive CCAs. (Though I 
still don’t fully
understand how).

 I assume that this has been discussed somewhere, but my Google-fu is failing 
 me today.
 Pointers to the literature would be welcome.

And the exact paper has already been written:

@incollection{rogaway2006provable,
  title={A provable-security treatment of the key-wrap problem},
  author={Rogaway, Phillip and Shrimpton, Thomas},
  booktitle={Advances in Cryptology-EUROCRYPT 2006},
  pages={373--390},
  year={2006},
  publisher={Springer}
}

As I see it from that paper, the advantages of a key-wrap scheme over a generic
AEAD scheme are that:

(a) it may be lighter weight in computation and size of ciphertext;
(b) it defends against “IV misuse”;
(c) RFC 3394 has been around for a while and is widely available.
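Point (a) is easy to see in practice. A sketch using the RFC 3394 implementation in the pyca/cryptography package (assumes that third-party library is installed):

```python
import os

from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

kek = os.urandom(16)   # key-encryption key (AES-128 here)
dek = os.urandom(32)   # the key we want to protect

wrapped = aes_key_wrap(kek, dek)
# RFC 3394 adds only a 64-bit integrity-check value, so the overhead is
# 8 bytes -- smaller than a generic AEAD's nonce plus 16-byte tag.
assert len(wrapped) == len(dek) + 8

# Unwrapping verifies the integrity check and recovers the key
# (a corrupted ciphertext raises InvalidUnwrap instead).
assert aes_key_unwrap(kek, wrapped) == dek
```

Note also that no nonce is supplied at all, which is what makes the scheme immune to "IV misuse" by construction.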

Cheers,

-j

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] OneRNG kickstarter project looking for donations

2014-12-15 Thread Jeffrey Goldberg
On 2014-12-15, at 1:18 PM, ianG i...@iang.org wrote:

 https://www.kickstarter.com/projects/moonbaseotago/onerng-an-open-source-entropy-generator


Although I’ve got some quibbles with the description, I was more than happy to 
back this.

Before I get to those quibbles, I will talk a bit out why I enthusiastically am 
backing this project.

I work for a company that makes a consumer-oriented password manager. We need 
to generate a number of cryptographic keys, and on OS X and Windows we rely on 
the CSPRNGs provided by those OSes. (We do our own version of HKDF when 
generating master keys, but still are using the OSes’ CSPRNGs.)

After BULLRUN, we took a look at all of the crypto that we use with an eye to 
whether there was a possibility of it having a backdoor or being deliberately 
weakened. The only primitives that we were using were AES and SHA-2, and so 
remained confident that neither the algorithms nor the implementations could be 
backdoored in a way that could remain undetected. (Because of how we use these, 
things like timing attacks and other side-channel attacks are not relevant.)

The exception, of course, is with the system CSPRNGs. It is just hard to know 
that they are behaving as advertised. Perhaps when I ask for 16 random bytes, I’m 
only getting 64 bits of entropy. (Of course the system can’t be too biased 
without that being eventually detected).

Anyway, so I love the idea of having something like this. I can combine data 
from this sort of device with data from system’s CSPRNGs (possibly using HKDF 
or even a simple XOR) and be guaranteed something that is at least as strong as 
the strongest of the two. (I might have to look at what kinds of processes 
might be able to snoop on data retrieved from the USB device in userland.)
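The simple-XOR variant of that combination can be sketched in a few lines. Here `device_bytes` stands in for bytes read from the USB device; the device and its read interface are hypothetical in this sketch:

```python
import os

def mixed_random(n: int, device_bytes: bytes) -> bytes:
    """XOR n bytes from the system CSPRNG with n bytes from a second
    source. If the two sources are independent, the result is at least
    as hard to predict as the output of the stronger source alone."""
    if len(device_bytes) != n:
        raise ValueError("need exactly n bytes from the device")
    return bytes(a ^ b for a, b in zip(os.urandom(n), device_bytes))

# e.g. key = mixed_random(32, onerng.read(32))   # "onerng" is hypothetical
```

The independence assumption is doing the work here: XORing correlated sources gives no such guarantee, which is why an extractor like HKDF is the more conservative combiner.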


Now some minor quibbles of presentation.

 What we do know is that the NSA has corrupted some of the random number 
 generators in the OpenSSL software we all use to access the internet,

To my knowledge it is only one PRNG, and while “one” can be considered “some”, 
it is a bit misleading. But more importantly, that one never actually got used 
in OpenSSL. It turns out that there was an implementation bug that rendered 
Dual_EC_DRBG completely unusable in OpenSSL. Because it was such a poor choice 
to use anyway, nobody even noticed this until people started to test it after 
the BULLRUN disclosures.

As far as anyone knows, it seems like only the users of RSA Inc’s BSafe crypto 
library were ever actually subject to the sabotage.

 and has paid some large crypto vendors millions of dollars to make their 
 software less secure.

Again, we have the instance of the deal with RSA Inc to make Dual_EC_DRBG the 
default in BSafe. While there may be other such deals that we don’t know 
anything about, that is the one in which there is a smoking gun (and bloody 
hands, and fingerprints). I find it deliciously ironic that many (most?) of 
RSA Inc.’s customers are those doing military contracting for the US.

I’m not at all trying to say, “well, it was just that once”. After all, what 
we’ve learned from this is what the NSA is willing to do to subvert 
cryptographic tools. And we know from BULLRUN about the existence of “working 
with our industry partners”, but we are left frustratingly blind as to what 
that actually means.

So I fully agree that what the BULLRUN revelations mean is that the government 
never actually surrendered at the end of the Crypto Wars. Instead they 
pretended to, but went on fighting underground.

 Some people say that they also intercept hardware during shipping to install 
 spyware.

Although I believe that such intercepts and implants do happen, I react badly 
to “Some people say …”. It’s the kind of phrase that, at least in the US, is 
followed by things like “… Obama is plotting to outlaw Christianity”. “Some 
people say …” is used all too often to start rumors without ever being accountable.

I would replace “Some people say” in your notice with “There is reason to 
believe”. (There is reason to believe.)

Again, I am fully supportive of the goals and the reasons for this project. I 
just have quibbles about the text that I have probably gone on about too much.

Cheers,

-j

smime.p7s
Description: S/MIME cryptographic signature
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Question About Best Practices for Personal File Encryption

2014-08-16 Thread Jeffrey Goldberg
On 2014-08-16, at 4:51 PM, David I. Emery d...@dieconsulting.com wrote:

 On Sat, Aug 16, 2014 at 04:21:53PM -0500, Christopher Nielsen wrote:
 The comment about Apple is simply false. Apple does not have a key to
 FileVault2 unless you escrow your key with them. I know this because a dear
 friend recently passed, and his family was not able to gain access to his
 encrypted drives through Apple.
 
   You may be right or may not, but I certainly have to think that
 if there is a backdoor password to Filevault2 it is quite likely that
 Apple would not choose to disclose that fact to just some random user
 who had lost files due to forgotten passwords.

Right. We don’t know whether Apple escrows the key in the absence of
people asking them to, but we do know that they do offer to store a
“recovery” key when someone sets up FileVault2.

So an instance of Apple being able to help someone recover their FileVault2
data proves absolutely nothing.

I have spoken to people who specialize in forensics recovery for Apple
products and who have close relations to Apple. Those conversations lead
me to believe that there is no backdoor that they are aware of. Of course,
if there were, they would not reveal that information to me.

I do think, however, that if there are such backdoors, it would have
to be known to only a very small number of people. Too many of the people
who work on Apple security would blow the whistle. So it would have to
be introduced in such a way that most of the people who actually develop
these tools are unaware of the backdoors. It’s certainly possible, but 
it does shift the balance of plausibility.

Cheers,

-j


___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] How big a speedup through storage?

2014-06-20 Thread Jeffrey Goldberg
On 2014-06-19, at 10:42 PM, Lodewijk andré de la porte l...@odewijk.nl wrote:

 With common algorithms, how much would a LOT of storage help?

Well, with an unimaginable amount of storage it is possible to shave a few bits 
off of AES. 

As {Bogdanov, Andrey and Khovratovich, Dmitry and Rechberger, Christian} say in 
Biclique Cryptanalysis of the Full AES (ASIACRYPT 2011) [PDF at 
http://research.microsoft.com/en-us/projects/cryptanalysis/aesbc.pdf ]

“This approach for 8-round AES-128 yields a key recovery with computational 
complexity about 2^125.34, data complexity 2^88, memory complexity 2^8, and 
success probability 1.”

It’s that 2^88 that requires a LOT of storage. I’m not sure if that 2^88 is in 
bits or AES blocks, but let’s assume bits. Facebook is said to store about 2^62 
bits, so we are looking at something 2^26 times larger than Facebook’s data 
storage.

 I know this one organization that seems to be building an omnious observation 
 storage facility,

Any (reliable) estimates on how big?

Cheers,

-j


___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] How big a speedup through storage?

2014-06-20 Thread Jeffrey Goldberg
On 2014-06-20, at 4:30 PM, grarpamp grarp...@gmail.com wrote:

 On Fri, Jun 20, 2014 at 3:23 PM, Jeffrey Goldberg jeff...@goldmark.org
 wrote:

 As {Bogdanov, Andrey and Khovratovich, Dmitry and Rechberger, Christian}
 say in Biclique Cryptanalysis of the Full AES (ASIACRYPT 2011) [PDF at
 http://research.microsoft.com/en-us/projects/cryptanalysis/aesbc.pdf ]

 This approach for 8-round AES-128 yields a key recovery with computational
 complexity about 2^125.34, data complexity 2^88, memory complexity 2^8, and
 success probability 1.”

8 rounds, lot more to go.

It looks like I quoted from the wrong part of the paper. Take a look at Table
1. They claim that for 10-round AES-128, the data is 2^88 (and now that I
re-scanned the paper, these are plaintext–ciphertext pairs, so 32 bytes per pair),
computations are 2^126.18 and memory is 2^8.

It’s not clear to me whether all of those 2^88 plaintext-ciphertext pairs need
to be stored. So this might not actually be a storage issue. Just a boatload
of oracle calls.

(I hope it is clear that I do not think of this as anything like a practical
threat to AES. I had just remembered this paper, with its enormous data
requirements, when I saw the original question.)

 Any (reliable) estimates on how big?

 $10M in drives at consumer pricing will get you a raw 177PB, or 236PB at
 double the space and power. Or $1B for 17EB. Budget is an issue.

As always, let’s go with the high estimate in the hands of the attacker. We
are still far far short of the storage requirements for this particular attack
(and all for less than a 2-bit gain).

So I think that it is safe to say that all that data storage is not an attempt
to use the particular attack I cited.
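The size of the shortfall is a quick back-of-the-envelope calculation, using the figures above: 2^88 pairs at 32 bytes each, against the roughly 17 EB that $1B buys.

```python
import math

pairs = 2 ** 88                 # plaintext-ciphertext pairs from the paper
bytes_needed = pairs * 32       # 32 bytes per pair -> 2^93 bytes
budget_bytes = 17 * 10 ** 18    # ~17 EB for $1B at consumer drive pricing

shortfall = bytes_needed / budget_bytes
print(f"needed: 2^{bytes_needed.bit_length() - 1} bytes")
print(f"shortfall: about 2^{math.log2(shortfall):.0f} times the $1B estimate")
```

That is, even the $1B attacker is short by a factor of roughly half a billion.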

Cheers,

-j

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Request - PKI/CA History Lesson - the definition of trust

2014-05-05 Thread Jeffrey Goldberg
On 2014-05-05, at 1:12 PM, pjklau...@gmail.com pjklau...@gmail.com wrote:

 -Original Message-
 From: Jeffrey Goldberg [mailto:jeff...@goldmark.org] 

 Just because you are talking to the right IP address doesn't mean
 you are talking the right host.
 
 You're right yes ( I did forget :), but if a DNS can somehow guarantee a
 correct hostname-IPAddress mapping, then it can also guarantee a correct
 hostname-public key ( or self signed certificate) mapping.

Ah. OK. Thanks for spelling that out for me. Now it makes sense.

Cheers,

-j


___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Request - PKI/CA History Lesson - the definition of trust

2014-05-04 Thread Jeffrey Goldberg
On 2014-05-03, at 3:22 AM, pjklau...@gmail.com pjklau...@gmail.com wrote:

 Frankly, if we could trust in DNS, we would not need to trust in
 web-PKIX [2] - since the one is just the bandaid for the other.

Have you forgotten that routing can be subverted?

Just because you are talking to the right IP address doesn’t mean
you are talking the right host.

Cheers,

-j

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Request - PKI/CA History Lesson

2014-05-01 Thread Jeffrey Goldberg

On 2014-05-01, at 8:49 PM, ianG i...@iang.org wrote:

 On 1/05/2014 02:54 am, Jeffrey Goldberg wrote:
 On 2014-04-30, at 6:36 AM, ianG i...@iang.org wrote:

 OK. So let me back-pedal on “Ann trusts her browser to maintain a list of
 trustworthy CAs” and replace that with “Ann trusts her browser to do
 the right thing”.
 
 Right, with that caveat about choice.

I think that we are in fierce agreement. At first
I didn’t understand the significance of your insistence
on *choice*, but I see it now. More below.

 In this context, we would claim that users b-trust because they know
 they can switch.  With browsers they cannot switch.
 
 Their choice is to transmit private information using their browsers.
 Their choice is to not participate in e-commerce.

 Right, there is always in economics some form of substitute.  But
 actually we've probably moved beyond that as a society.

 I would say that e-commerce is utility grade now, so it isn't a
 choice you can really call a choice in competition terms.

I agree that the behavior in b-trust must be about “choice behavior”
in that Ann behaves one way instead of another.

But I don’t think that we should require some minimal threshold of choice
before we can call the behavior b-trust. As long as there is some
non-zero amount of choice the behavior (in these cases) will exhibit
a non-zero amount of trust.

For me the sentence, “I had little choice but to trust X” is perfectly
coherent.

Is it possible that you are letting your righteous anger at what
browser vendors have done interfere with how you are defining “trust”?

 All I’m asking is that we consider the people we are asking to
 “b-trust” the system. Can we build a system that is b-trustworthy
 for the mass of individuals who are not going to make c-trust
 judgements.
 
 
 Right, this is the question, how do we do that?
 
 That is what Certificate Transparency and Perspectives seek to do, as
 well as other thoughts.  First they make the c-trust available by
 setting up alternate groups and paths. Then the c-trusters develop their
 followings of b-trusters.

I agree with that last bit. In a sense, if people see that experts trust
the system they will too. But how will this play out with Certificate
Transparency for most users? What do they actually need to know and do
to follow some c-trusters?

 There likely needs to be a group of c-trusters in the middle
 that mediate the trust of the b-trusters.

And how will that work without putting unrealistic expectations on
the vast majority of users? How do they pick which c-trusters to trust?

 I think that we have a higher chance of success if we use a language that
 can talk about agents who do not have a deep or accurate understanding of
 why a system is supposed to work. And so, I think that, with some refinement,
 my notion of b-trust is worthwhile.
 
 
 Yes it could be.  It might not be applicable to web-PKI because the
 vendors confuse X (“do the right thing by users”) with X′ (“maintain a good
 CA list”).

I’m confused. (Perhaps by the vendors?)

Cheers,

-j
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Request - PKI/CA History Lesson

2014-04-30 Thread Jeffrey Goldberg
On 2014-04-30, at 6:36 AM, ianG i...@iang.org wrote:

 On 30/04/2014 02:57 am, Jeffrey Goldberg wrote:
 I have been using “trust” in a sort of behavioral way. For the sake of the
 next few sentences, I’m going to introduce some terrible terminology. 
 “b-trust”
 is my “behavioral trust”, which will be defined in terms of “c-trust” 
 (“cognitive”).
 
 So let’s say that A c-trusts B wrt to X when A is confident that B will act 
 in
 way X. (Cut me some slack on “act”). A “b-trusts” B wrt to X when she behaves
 as if she c-trusts B wrt to X.
 
 So when I say that users trust their browsers to maintain a list of 
 trustworthy
 CAs, I am speaking of “b-trust”.  They may have no conscious idea or
 understanding what they are actually trusting or why it is (or isn’t)
 worthy of their trust. But they *behave* this way.

 Right, but this is very dangerous.  You have migrated the meaning of X
 in the conversation.

 Users trust their browsers to do the right thing by security.
 
 Browsers trust their CAs to do the right thing by their ID verification.
 
 This does not mean that users trust their browsers to maintain a list of
 trustworthy CAs.

OK. So let me back-pedal on “Ann trusts her browser to maintain a list of
trustworthy CAs” and replace that with “Ann trusts her browser to do
the right thing”.

 Trusting the browsers to do the right thing also includes the
 possibility that the browsers throw the lot out and start again.  Or
 drop some CAs from the list, which they only do with small weak players
 that won't sue them.

I am not saying that her trust is justified.

 Also, one has to again refer to the nature of trust.  It's a
 choice-based decision.  Trust is always founded on an ultimate choice.
 
 In this context, we would claim that users b-trust because they know
 they can switch.  With browsers they cannot switch.

Their choice is to transmit private information using their browsers.
Sure, you and I know that what protects most people from credit card
fraud has nothing to do with browsers, and all has to do with
regulations of the liability. But I’ve encountered plenty of people over
the last few weeks who have said that they will stop using their
credit cards on-line because of heartbleed, while they continue to use
entirely unprotected email for things that are genuinely sensitive.

Their choice is to not participate in e-commerce.

 There isn't a
 browser that will offer a different model (they cartelised in 1994,
 basically).  And there isn't a browser vendor that will take user input
 on this question.

Again, I’m not saying that the trust that the browser will do the right
thing is justified.

But the ordinary users isn’t going to curate their own list of CAs. Even
the extraordinary user is only going to tinker with these under rare
circumstances. (Indeed, a bug in Apple’s trust chain logic was only
discovered when individual users chose to distrust DigiNotar and found
that things didn’t work as documented.)

 So there is no choice for the user.

Again, people can choose to not participate in the system. It is
a choice that has a high cost. But because the alternative to trusting
the browser to do the right thing is costly, it means that people will
lower their threshold. 

 Which is where it gets more dangerous:  we can frame the question to
 gain the answer we want; but who are we framing the result for ?

All I’m asking is that we consider the people we are asking to
“b-trust” the system. Can we build a system that is b-trustworthy
for the mass of individuals who are not going to make c-trust
judgements.

 I see that you’ve written on financial cryptography. Well, think about how 
 conventional currency works. For all its problems, currency works, and it is 
 a system that requires “trust”. But only a negligible fraction of the people 
 who successfully use the system do so through c-trust.
 
 Right.  Now add in hyperinflation to the mix;  how many people really
 trust their governments to not hyperinflate?  Only ones with no
 collective history of it.

Again, the point of that example was not to claim that people always
put the appropriate amount of b-trust into a system, but simply that
on a day-to-day basis we all behave “as if” we trust things in the
c-trust sense without that understanding.

Or are you saying that we should insist that everyone develop a full
and proper understanding of the system before using it? 

I think we should recognize that the vast majority of humanity are
not as intensely curious about the particular things that we in this
discussion are curious about. I also think that they shouldn’t be
excluded from secure communication because of that.


 Right, we're certainly in the world we are in.  However, the problem
 with this particular world is that it uses a language that is
 'constructed' to appear to require this particular solution.  In order
 to find better solutions we have to unconstruct the constructions in the
 language, so as to see what else is possible.

I

Re: [cryptography] Request - PKI/CA History Lesson

2014-04-29 Thread Jeffrey Goldberg
On 2014-04-28, at 5:00 PM, James A. Donald jam...@echeque.com wrote:

 Cannot outsource trust. Ann usually knows more about Bob than a distant 
 authority does.

So should Ann verify the fingerprints of Amazon, and Paypal herself? How do you 
see that working assuming that Ann is an “ordinary user”?

This is exactly the kind of thing I was complaining about in my earlier 
comment. There are burdens that we cannot push onto the user.

People do trust their browsers and OSes to maintain a list of trustworthy CAs. 
Sure, we might have the occasional case where some people manually remove or 
add a CA. But for the most part, we’ve outsourced trust to the browser vendors, 
who have outsourced trust to various CAs, etc.

I am not saying that the system isn’t fraught with serious problems. I’m saying 
that at least it tries to work for ordinary users.

  A certificate authority does not certify that Bob is trustworthy, but that 
 his name is Bob.

Yes, of course. Back in the before time (1990s), I had feared that this was 
going to be a big problem: that people would take “trust the authenticity” of a 
message to be “trust the veracity” of the message. But as it turns out, we 
haven’t seen a substantially higher proportion of fraud of this nature than in 
meatspace. I think it is because reputations are now so fragile.

Cheers,

-j
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Request - PKI/CA History Lesson

2014-04-29 Thread Jeffrey Goldberg
Hi Ian,

I will just respond to one of the many excellent points you’ve made.

On 2014-04-29, at 12:12 PM, ianG i...@iang.org wrote:

 On 29/04/2014 17:14 pm, Jeffrey Goldberg wrote:
 People do trust their browsers and OSes to maintain a list of trustworthy 
 CAs.
 
 No they don't.  Again, you are taking the words from the sold-model.

I will explain my words below.

 People don't have a clue what a trustworthy CA is, in general.

I emphatically agree with you. I hadn’t meant to imply otherwise.

I have been using “trust” in a sort of behavioral way. For the sake of the
next few sentences, I’m going to introduce some terrible terminology. “b-trust” 
is my “behavioral trust”, which will be defined in terms of “c-trust” (“cognitive”).

So let’s say that A c-trusts B wrt to X when A is confident that B will act in 
way X. (Cut me some slack on “act”). A “b-trusts” B wrt to X when she behaves 
as if she c-trusts B wrt to X.

So when I say that users trust their browsers to maintain a list of trustworthy 
CAs, I am speaking of “b-trust”.  They may have no conscious idea or 
understanding what they are actually trusting or why it is (or isn’t) worthy of 
their trust. But they *behave* this way.

A vampire bat may b-trust that its roost-mates will give it a warm meal if 
necessary. Life is filled with such trust relations even where there is no 
c-trust. 

 (c.f., the *real meaning of trust* being a human decision to take a risk
 on available information.)

Which is what I am talking about. And I’m talking about it because it is what 
matters for
human behavior. And I want a system that works for humans.

I see that you’ve written on financial cryptography. Well, think about how 
conventional currency works. For all its problems, currency works, and it is a 
system that requires “trust”. But only a negligible fraction of the people who 
successfully use the system do so through c-trust.

It may well be that all of the problems with TLS are because the system is 
trying to work for agents who don’t understand how the system works. But, as I 
said at the beginning, that is the world we are living in.

Cheers,

-j


___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Request - PKI/CA History Lesson

2014-04-25 Thread Jeffrey Goldberg
On 2014-04-25, at 4:09 AM, Peter Gutmann pgut...@cs.auckland.ac.nz wrote:

 http://www.cs.auckland.ac.nz/~pgut001/pubs/book.pdf

In which Peter says:

 The major lesson that we’ve learned from the history of security 
 (un-)usability is that technical solutions like PKI and access control don’t 
 align too well with user conceptual models

Exactly. If, for example, a user needs to understand the distinction between 
“trust as an introducer” versus “trust the identity of” in order to behave 
securely, then the system is going to fail.

Or as I’ve said in

http://blog.agilebits.com/2012/07/03/check-out-my-debit-card-or-why-people-make-bad-security-choices/

 when we observe people systematically behaving insecurely, we have to ask not 
 “how can people be so stupid” but instead “how is the system leading them to 
 behave insecurely.”
 
I hated X.509 when it was first being introduced, and much preferred PGP’s “Web 
of Trust”. I still hate X.509 for all of the usual reasons, but I now have much 
more sympathy for the design choices. It fails at its goal of not demanding the 
unrealistic from ordinary users, but at least it attempts to do so.

Cheers,

-j
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] If not StartSSL, the next best CA for individuals?

2014-04-12 Thread Jeffrey Goldberg
On 2014-04-12, at 12:40 PM, Eric Mill e...@konklone.com wrote:

 (Setting aside how awful the CA system is generally…)

I try to limit my use of profanity in writing, so have to
put that aside.

 Even if not free, I'm looking to recommend[3] something priced
 attractively for individuals and non-commercial uses. The friendlier
 the interface, and the more reliable and principled the customer
 service, the better.

I like GlobalSign.

  https://www.globalsign.com/

They are well priced for the small customer,
every interaction I’ve had with them has been great, and they’ve
been saying all the right things.

 
http://blog.globalsignblog.com/blog/important-security-advisory-blog-heartbleed-bug

They also had a really nice statement about transparency back in September,
but I can’t find it now.

I have not systematically (or even unsystematically) reviewed various CAs.
Once I found one that I liked, I stopped looking around.


Cheers,

-j
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] crypto mdoel based on cardiorespiratory coupling

2014-04-09 Thread Jeffrey Goldberg
On 2014-04-09, at 7:17 PM, travis+ml-rbcryptogra...@subspacefield.org wrote:

 http://threatpost.com/crypto-model-based-on-human-cardiorespiratory-coupling/105284
 
 This is nonsense, right?

Yep.

  Unbounded in the sense of relying on secrecy of the unbounded number of 
 algorithms?

The distinction between algorithm and parameter (along with other things) seems 
muddled.

I commented on it in a few posts in sci.crypt. Here are trimmed highlights.

Jeffrey Goldberg wrote in Message-ID:   bqe4cnft6k...@mid.individual.net:

 […]the 60 item bibliography of their paper cites only one source in 
 cryptography (and that is on quantum key exchange).
 
 Somehow the first sentence of the paper doesn't inspire confidence either:
 
 It is often the case that great scientific and technological discoveries are 
 …
 
 […]
 What I see as I glance over this paper is that people who have been caught up 
 in the fadish understanding of chaos theory see that they get PRNGs out of 
 their dynamical systems (true enough).
 
 But quite emphatically, the PRNGs that you get from most of these non-linear 
 dynamical systems are not cryptographically appropriate. Indeed, there are 
 tests that can distinguish whether a random sequence is likely to be from 
 such a system. If I understand correctly, even their noise filtering 
 component depends on exactly that technology.


Cheers,

-j


signature.asc
Description: Message signed with OpenPGP using GPGMail
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] good key stretching practice?

2013-12-28 Thread Jeffrey Goldberg
On Dec 28, 2013, at 2:01 PM, Kevin kevinsisco61...@gmail.com wrote:

 Hello list.  What is the best key stretching method that can be used?

Best for what?

If you are trying to stretch from a password to a key and wish to add some 
resistance to password cracking then currently your “mainstream” choices are 
scrypt, PBKDF2, and bcrypt. None of those are perfect, but each will do. PBKDF2 
is the best established, but it is also the most quirky. If you want to play at 
the bleeding edge of this, you can look at what has been proposed as part of the 
Password Hashing Competition. 

  https://password-hashing.net

If you don’t need a “slow” hash, then perhaps something like HKDF is right for 
your particular needs.

  http://tools.ietf.org/html/rfc5869

But without having a better sense of what you are trying to achieve, nobody can 
be confident that they are recommending the right thing to you.
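For the common password-to-key case, PBKDF2 is available in the Python standard library. A sketch (the password and iteration count are illustrative; tune the count so one derivation takes tens of milliseconds on your target hardware):

```python
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)   # random per-password salt, stored alongside the hash

# PBKDF2-HMAC-SHA256: the iteration count is the "stretching" knob that
# slows down offline guessing.
key = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000, dklen=32)
assert len(key) == 32

# The same password + salt + parameters always yield the same key,
# which is what makes later verification (or key re-derivation) possible.
assert key == hashlib.pbkdf2_hmac("sha256", password, salt, 200_000, dklen=32)
```

If no slow hash is needed, `hashlib` plus `hmac` can likewise be used to build the extract-and-expand steps of HKDF, but that is a different tool for a different problem.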

Cheers,

-j



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Password Blacklist that includes Adobe's Motherload?

2013-11-15 Thread Jeffrey Goldberg
On 2013-11-15, at 1:33 AM, Jeffrey Goldberg jeff...@goldmark.org wrote:

 So if we find (and I haven’t correlated what I’m
 working on with actual passwords, so now this is hypothetical)
 that ioxG6CatHBw appears for the last block of the encryption
 of “password1”, then we know that that is the encryption of “1”
 plus padding.

Let me spell that out with real data.

Jeremi Gosney of The Stricture Group has worked out that

  2fca9b003de39778d23e6fe47a8c787c

corresponds to “password1”, based on the techniques I described. (There
were 28350 instances of it in the data.) As that is a nine character
password, ending in “1” we now know that d23e6fe47a8c787c is the
encryption of “1” + padding.

So a quick (well nothing is quick with data this size) awk gives me

$ awk '$3 ~ /d23e6fe47a8c787c/ { sum += $2; ++count}; END {print sum, count}' 
password-ranked.txt 
1835669 927185

So we’ve got about 900K distinct nine character passwords that end with
“1” and these are used for about 1.8 million accounts. (I said “about”
because my matching would also hit 17, 25, … character passwords ending
in 1.)

I should point out that the kind of stuff I’m describing here was done
first (as far as I know) and more systematically by Steve Thomas,
https://twitter.com/Sc00bzT Indeed, he is the one who spotted that this
was ECB encrypted with a 64 bit block size. Adobe later confirmed that
it was 3DES. (Though if they are lying and it is actually just DES,
then hunting for the key might be worthwhile.)

At any rate, despite knowing that there are 56 million distinct
passwords in that dump, we don’t know what most of them are. So
this can’t be used to create a blacklist.

Cheers,

-j


Re: [cryptography] Password Blacklist that includes Adobe's Motherload?

2013-11-14 Thread Jeffrey Goldberg
On 2013-11-13, at 8:13 PM, Jeffrey Walton noloa...@gmail.com wrote:

 Is anyone aware of a blacklist that includes those 150 million records
 from Adobe's latest breach?

You are aware that these haven’t all been decrypted? (Or is there some
news I’ve missed?)

The passwords were encrypted, unsalted, using 3DES in ECB mode. But
the actual encryption key is unknown.

So the way that passwords have been “decrypted” is on a case-by-case basis.
For example, if we have, say, 100,000 users using the same password, and
one of them credibly ‘fesses up to what their password was, then we 
know what that password was for all of those users.

These identifications are reinforced by the fact that many of the accounts
included password hints, often ones that simply give away the password.

We also can work out what some of the more popular passwords are by comparing
with other breaches. For example, if al...@example.com is known to have used
the password snoopy1 in both the Sony and LinkedIn breaches, and gives
the same hint in the Adobe data, that is a big clue. If we find a few dozen
other reusers that way, we can say with high confidence what that particular
password is.

The ECB mode and small block size of 3DES have also been helpful. Suppose
we have about 6700 people corresponding to this password

6682 /NpNslkFN4nioxG6CatHBw==

and 3402 corresponding to this one

3402 /FkacZU/hWrioxG6CatHBw==

Even with the base64 encoding, you can see that the second block of each
of those ciphertexts is the same: both end in ioxG6CatHBw==.

(I really should convert the base64 to hex.)
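That conversion is a two-liner; here is a sketch (Python, purely for illustration) applied to the two base64 strings above:

```python
import base64

def blocks_hex(b64_ct):
    """Decode a base64 ciphertext and split it into the 8-byte (64-bit)
    ECB blocks, rendered as hex so that shared blocks line up exactly."""
    raw = base64.b64decode(b64_ct)
    return [raw[i:i + 8].hex() for i in range(0, len(raw), 8)]

a = blocks_hex("/NpNslkFN4nioxG6CatHBw==")
b = blocks_hex("/FkacZU/hWrioxG6CatHBw==")
# The second (final) blocks are identical, so the two passwords share
# their tail; the first blocks differ, so the passwords differ early on.
assert a[1] == b[1] and a[0] != b[0]
```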

So if we find (and I haven’t correlated what I’m
working on with actual passwords, so now this is hypothetical)
that ioxG6CatHBw appears for the last block of the encryption
of “password1”, then we know that that is the encryption of “1”
plus padding.

Even a cursory glance at the data is enough to see penguins.

My project is on the relative frequency of passwords, so I’m not
actually trying to figure out the plaintexts.

Several people have noticed that the popularity of passwords
resembles a power law distribution. David Malone and colleagues
have specifically looked at this.


@article{MaloneMaher11:CoRR,
Author = {Malone, David and Maher, Kevin},
Journal = {CoRR},
Title = {Investigating the Distribution of Password Choices},
Volume = {abs/1104.3722},
Year = {2011}}

And I’ve seen similar in my own work. The “problem” is that if
the power law distribution holds up with a “big” exponent (near or above 1)
then that would indicate a situation where popularity contributes to 
popularity.

So I want the resemblance to a power law distribution to be
superficial. There are other distributions that can look similar.
Either that, or I want an explanation for why the popularity of
a password would make it more attractive to others. Are people
really influenced in their password choices by others?

I think that high password reuse might account for some of the
power-law-ish distribution, but I haven’t quite worked that out.
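To make “holds up with a big exponent” concrete, here is a crude log–log fit on synthetic Zipf-distributed counts. (A least-squares fit on log–log data is a rough diagnostic, not a rigorous power-law estimator.)

```python
import math

def powerlaw_exponent(freqs):
    """Least-squares slope of log(frequency) vs log(rank): a crude check
    of how power-law-like a ranked password frequency list looks."""
    pts = [(math.log(r), math.log(f))
           for r, f in enumerate(sorted(freqs, reverse=True), start=1)]
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    slope = (sum((x - mx) * (y - my) for x, y in pts)
             / sum((x - mx) ** 2 for x, _ in pts))
    return -slope  # exponent s in freq proportional to rank**(-s)

# Synthetic Zipf counts with exponent 1: the fit recovers s near 1,
# the worrying "near or above 1" regime described above.
counts = [round(1e6 / r) for r in range(1, 1000)]
print(powerlaw_exponent(counts))
```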

At any rate, this data dump is perfect for me. I’ve only just begun
working on it, but unsalted ECB encrypted passwords allow me to count
frequencies.

 I tried finding a list and was not successful.

There isn’t a list of these decrypted. Jeremi Gosney has, in
collaboration with others, worked out what the passwords are for
the 100 most frequent.  Troy Hunt is doing some excellent work
on correlating with other breaches.

Cheers,

-j

-- 
Jeffrey Goldberg
Chief Defender Against the Dark Arts @ AgileBits
http://agilebits.com


Re: [cryptography] coderman's keys

2013-11-01 Thread Jeffrey Goldberg
On 2013-10-31, at 11:11 PM, coderman coder...@gmail.com wrote:

 On Thu, Oct 31, 2013 at 7:55 PM, coderman coder...@gmail.com wrote:
 my contempt for email is well known and reinforced by choice of provider.
 
 there are myriad rebuttals to email as private channel, of which i
 agree fully.  however, if you pass muster, i can be reached via secure
 email.  yes your default client will balk.  this is a feature not a
 bug...  you must be this high to ride...
 
 
 still no successful encrypted responses.

-BEGIN PGP MESSAGE-
Comment: GPGTools - http://gpgtools.org

hQMOAyheCGO7e/dQEAv+MonJWg7wyFrbCTJrQ7k4TeG6ue99TGvhZVXouiNS3o4e
joZKdq6G7DcnkBrOWbqr6dGoqPUk07HxD4SpxyNV/mm0ns0EjmPiS5AecYAu7Pul
YSY2LG7feo5gJdbCheb4l6WqEr+w2/3m14TePwH6pX31l9qaBiWJdpgDBymMVDPA
0mx8AyKp5Evwa1P+R3DVn8P8wQJYbtlhCBlgMwyfQMGnoxRuiivhjxT3gL6PcKQY
Zt1S7QTR0QTq45GxNfSuzeZpf/VdsYX1EffHkeDwMV4pzqSaSBOnY5/L+uv/ZI7G
x8pBB85xeM7C4NqjdH0fhm9aKeTh6lhn2Ano5xx04HHmj/tCwNPtsH7gChkBs9ud
qe8NZmBj+RfKMzwUoSbYxdCLAbc8jziSeweOl9nehgmtfVFCUiEZRi9rt6K2kpll
luhGSH7OnXrm+SgTLX8MQc7W+O0ZuOJhkuHabcgl+X5Ig0XiO04FHFwdhXTC3vIz
n8YX/vufZSCDu3lsVXhbDACUIoqGEwwY8wJkxCy5NDZpK+r3D+j5jiEzzNdJ8gGH
ki9MEIBtD0vfxmjEeeHuTrIKBQPeWygFB8n+sTUw76rx77Fe7b+VvM7YGIpfXf99
IUuVbDt9XYG6Xw+pLEn+l7OEPKkuJVvyew72oWUEIErH9afAs+/LRp/GFu2QN/DK
3/Tx+/5SFnzVraYEOWDIYrWB8WCEt9+m8tvl05kU/NNW/yRCOnu454LMp1jBzahd
9/Et37Ak1qKJabBL4iw1p/M8RYHbO5K8083XvS6rGc5M6k0iYyKIwmdfeq8+S/+h
x4eZiBCxoE1aMXG3qPZqRl/Z/awJj7cT2YzhX67cVz5DrJJzVUefs6zyclMbBnON
ahCpB4D8ll7jy7Iq8cP3v9d+xp+JAqErEIyrdxHrbWwIf+ogKgMwK9H6D7WYyIji
lhbTWeUvptooCILO108vRgtxkHMIZ/bpeRjhsIwgqER3C0G+3QYveAlxtqZ8HrQN
ZH269bJiVmFTH6GBSMtJTEOFAg4DZhibeqJD/S4QB/9hZut5POE/6gWRV9YmJd8G
jjEjbxxhgMZVm4KJDhoMS/b3/UZbdnlx9G6WHech6u/SEI3QQ+fqC8AUIWZfmPZw
r+4y71J42TKBuATwAoyw9ooA66aFP+M2bWYehurBhbU00dT+6bxq74ggfJaFgn7v
b6Cr6cgODrNlmnxK4Ly95qwHgA5Yt+bbtANhbo/G8W17i6uFxvABu+t+38n6wQPH
XXDspJVpcW8NCezyHyd9YLkd2Xx+c2iDWQMGvpdVhVmeJ3ITbU2I5bLBAT6MrN01
CnZ0+hYp8ZMCMshfDMFW260nJ6ijVsPBX4LFsSftsNYPitAD4lMNJ73oikXSjHyW
B/9C1tbCVTWaS3CMhBPUfWGQGKbFDKtt4jkj64KGkqEMRrnH0KXnfwCK0VDL0XBi
WGCvgYHO8N0iqdbge5xDUrfCHTvUv18U8xWaDkzk9Mqp52Idui2DpDEasCCAEUpV
EAICDV8tGQZivGoQmQP6K8Pp/05xrm8kDv1xZWjG6wdO5g71aY0KMZqryoJAc68W
aXfKfgvokcjQqteQNc+uLPc03WBob5dnwMJqOUQiMIjnKuFRvzoGumm3zQGlZI50
4W0gI9PRLNl4jQJxbGYF2Iv398pMmsbLdC37cx4D5HvHecPlcH5LD0l/Yt/zplar
yJiN6gubLtbuSCU0TF3th+7HycAgSJYrW2KzuNWl0QTJwfLJkH+kfbVY7gTB7gkA
ZXlUW/Cyzv58A6W5SxjF6OiRmTsmxvP2SWpO1+9uU4gosYJn8qQ7gcHVYTqEjtBH
4XdvdFwDuNISK8IGuqGXOFlbnlTRBmvCYCooAvt+vmj0zl55tzUXhmpOVImY2JKf
yQns38JEmSM/dTdlR5zJrcrCUFiSNghGSwLTAFwbQfGRU2P4emZYQ2BMxo4NfF2f
XLfynU3muDjG6DhI/ha9JovovXEwT7B1tckoAP2Ns0KO3V8CPBC3tOtZhQETjiuK
1Psu3NE=
=ENte
-END PGP MESSAGE-

 let's try an experiment: one bitcoin (~200$USD) to whoever
 successfully encrypts a message to my key.

That’s a serious sweetener. So I assume that I have misunderstood
something about this challenge.




Re: [cryptography] the spell is broken

2013-10-04 Thread Jeffrey Goldberg
On 2013-10-04, at 4:24 AM, Alan Braggins alan.bragg...@gmail.com wrote:

 Surely that's precisely because they (and SSL/TLS generally) _don't_
 have a One True Suite, they have a pick a suite, any suite approach?

And for those of us having to choose between preferring BEAST and RC4
for our webservers, it doesn’t look like we are really seeing the expected
benefits of “negotiate a suite”.  I’m not trying to use this to condemn the
approach; it’s a single example. But it’s a BIG single example.




Re: [cryptography] the spell is broken

2013-10-04 Thread Jeffrey Goldberg
On 2013-10-04, at 5:19 PM, Nico Williams n...@cryptonector.com wrote:

 There's a lesson here.  I'll make it two for now:
 
 a) algorithm agility *does* matter; those who say it's ETOOHARD should
 do some penitence;

Mea culpa! (Actually, I never spoke up on this before.)

But I do think that difficulty of implementation matters enormously
in what gets adopted. There are plenty of application developers who
will respond to too high demands with, “ah, I don’t need all of that
stuff; I’ll write my own based on Enigma.”

ETOOHARD is an errno that has a lot of impact on a lot of software
that people use, and so should be given some respect.

 b) algorithm agility is useless if you don't have algorithms to choose
 from, or if the ones you have are all in the same family”.

Yep.

And even though that was the excuse for including Dual_EC_DRBG among the
other DRBGs, it doesn’t take away from what you say.

I would add a third.

c) The set of suites needs to be maintained over time, with a clear way to
signal deprecation and to bring new things in. If we are stuck with the
same set of suites that we had 15 years ago, everything in there may age
badly.
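Point (c) can be modelled as a registry where deprecation is explicit, so negotiation prefers current suites and refuses retired ones. The suite names and statuses below are made up for illustration; this is not a real TLS registry.

```python
# Toy suite registry with an explicit lifecycle, so negotiation can
# prefer current suites and refuse retired ones outright.
SUITES = {
    "AES128-GCM-SHA256": "current",
    "AES128-CBC-SHA":    "deprecated",  # still allowed, discouraged
    "RC4-SHA":           "retired",     # never negotiated
}

def negotiate(client_offer):
    ranked = [s for s in client_offer if SUITES.get(s) == "current"]
    ranked += [s for s in client_offer if SUITES.get(s) == "deprecated"]
    if not ranked:
        raise ValueError("no acceptable common suite")
    return ranked[0]

print(negotiate(["RC4-SHA", "AES128-CBC-SHA", "AES128-GCM-SHA256"]))
# -> AES128-GCM-SHA256
```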

Cheers,

-j



Re: [cryptography] cryptographic agility (was: Re: the spell is broken)

2013-10-04 Thread Jeffrey Goldberg
On 2013-10-04, at 10:46 PM, Patrick Pelletier c...@funwithsoftware.org wrote:

 On 10/4/13 3:19 PM, Nico Williams wrote:
 
 b) algorithm agility is useless if you don't have algorithms to choose
 from, or if the ones you have are all in the same family.
 
 Yes, I think that's where TLS failed.  TLS supports four block ciphers with a 
 128-bit block size (AES, Camellia, SEED, and ARIA) without (as far as I'm 
 aware) any clear tradeoff between them.

The AES “failure” in TLS is a CBC padding failure. Any block cipher would have 
“failed” in exactly the same way.

So you might be right in general, but this is not a useful example for 
illustrating your point about different kinds of block ciphers.

Cheers,

-j




Re: [cryptography] the spell is broken

2013-10-03 Thread Jeffrey Goldberg
On 2013-10-03, at 1:28 PM, James A. Donald jam...@echeque.com wrote:

 On 2013-10-04 00:13, Jeffrey Goldberg wrote:
 So unless you and Silent Circle have information that the rest of us don’t 
 about AES and SHA-2, I’m actually pissed off at this action. It puts more 
 pressure on us to follow suit, even though such a move would be pure 
 security theater.
 
 You have to get off the NIST curves.  If getting of the NIST curves, might as 
 well get off AES and SHA-2 as well.

Fair point. As we aren’t doing any public key stuff, we don’t need to hunt down
new curves or go back to DH or anything like that. And as you say, if you are
changing something, it isn’t too hard to change other things at the same time.

But (and given that my previous message got MIME-mangled, I’ll repeat some 
points) the thought that Jon and Silent Circle are putting into curve 
replacement looks much more serious than the thought going into AES and SHA-2 
replacement, which reek of security theater.

I’ll grant that a priori any SHA-3 finalist will be an improvement on SHA-2, so 
really it’s just the AES move that reeks of security theater. If you are going 
to drop in a replacement for AES (same blocksize, same key sizes) then you 
should look at this as an opportunity to find the best replacement possible. 
Maybe AES with increased rounds and improved key schedule. That would have the 
advantage of taking advantage of a lot of existing hardware. Or maybe there are 
better alternatives. But picking Twofish out of a hat just seems like security 
isn’t the issue, but perception.


 If you are not using the NIST curves, the need to change is less urgent.

Agreed, but for me the “less urgent” is “next to nil”. (Beyond the existing 
reasons for moving away from SHA-2.) But fine, I acknowledge your point, and 
perhaps I’m just whining because I’m lazy and this would be a difficult change 
to implement.

Cheers,

-j




Re: [cryptography] the spell is broken

2013-10-03 Thread Jeffrey Goldberg
Jon, first of all thank you for your extremely thoughtful note.

I suspect that we will find that we don’t actually disagree about much, and 
also my previous rant was driven by the general anger and frustration that all 
of us are experiencing. That is, I may have been misdirecting my anger at the 
whole situation toward you, a fellow victim.

On 2013-10-03, at 4:31 PM, Jon Callas j...@callas.org wrote:

 You might call it security theatre, but I call it (among other things) 
 protest.

I would put it more strongly than that. I think that NIST needs to be punished. 
Even if Dual_EC_DRBG were their only lapse, any entity that has allowed 
themselves to be used that way should be forced to exit the business of being 
involved in making recommendations on cryptography. I don’t have to think that 
they are bad people or even that they could have prevented what happened. But I 
think there needs to be an unambiguous signal to every other (potential) 
standards body about what happens if you even think of allowing for the 
sabotage of crypto.

I imagine that everyone is looking at public protocols for picking curves now. 
Everyone is looking at how every step in the establishment of a recommendation 
can be made provably transparent. That is all a good thing, and it does require 
that NIST pay dearly. But it isn’t a trust issue. I don’t “trust” NIST less 
than I trust any other standards body. They need to be put out of the crypto 
business as a signal and deterrent to others, not because they are 
inherently less trustworthy.

But not using AES is a protest that hurts only ourselves. It doesn’t punish 
where punishment is needed.

 I have also called it trust, conscience, and other things including 
 emotional. I'm willing to call it marketing in the sense that marketing 
 often means non-technical.

Agreed.

 I disagree with security theatre because in my opinion security theatre is 
 *empty* or *mere* trust-building,

I still think the term is appropriate, and indeed I think that your sentence 
about conscience and emotions actually reinforces my claim that it is theater. 
But I think that it is largely a definitional question which isn’t worth 
pursuing. I’m using the term in a slightly different way than you are.

 but I don't fault you for being upset. I don't blame you for venting in my 
 direction, either. I will, however, repeat that I believe this is something 
 gentlepersons can disagree on. A decision that's right for me might not be 
 right for you and vice-versa.

Absolutely! Although I still stand by my “security theater” statement, I think 
I also mean it less pejoratively than it came across. Anyone (including me and 
the company that I work for) who has moved to 256 bit symmetric keys is 
engaging in “security theater” in my sense of the word. It’s nothing to be 
particularly proud of, but it doesn’t make us the TSA either.

 
 Since the AES competition, NIST has been taking a world-wide role in crypto 
 standards leadership.

Yep. And (sadly) that has to go. As I said, they need to pay a heavy price so 
that it is absolutely clear that some behaviors are beyond the pale.

 A good standard, however, is not necessarily the *best*, it's merely agreed 
 upon.

That’s true.


  I think Twofish is a better algorithm than Rijndael.

OK. I was flat out wrong. I was ignorant of your longstanding view of ciphers. 
I’m not competent to really have an opinion about whether your judgement is 
correct there, but that isn’t relevant. I thought Twofish was pulled out of a 
hat. I was wrong. And I also apologize for accusing you of pulling Twofish out 
of a hat.

 ZRTP also has in it an option for using Skein's one-pass MAC instead of 
 HMAC-SHA1. Why? Because we think it's more secure in addition to being a lot 
 faster, which is important in an isochronous protocol. 

I agree that if you are changing ciphersuites, it’s as good a time as any to 
move to a SHA-3 candidate. And as there are some questions that need to be 
answered about official SHA-3, I’m happy with Skein. Again, I’m not competent to judge 
the relative merits of SHA-3 candidates.

 Silent Phone already has Twofish in it, and is already using Skein-MAC.

Ah. So yes, we are in very different starting places. Your choice seems very 
reasonable.

 In Silent Text, we went far more to the one true ciphersuite philosophy. I 
 think that Iang's writings on that are brilliant. 
 
 As a cryptographer, I agree, but as an engineer, I want options.

I think I am in a different position. I’m neither an engineer nor a 
cryptographer. I’m the guy who can kinda sorta read bits of the cryptography 
literature and advise the engineers on what to do with respect to using these 
tools. And what we decide affects the security of a very large number of users. 
So for me, the “one true ciphersuite” notion was ideal. I could pay attention 
and follow the consensus advice.  You may be competent to, say, pick Skein over 
Blake for some particular purpose, but I’m not. 

Re: [cryptography] [Cryptography] are ECDSA curves provably not cooked? (Re: RSA equivalent key length/strength)

2013-10-01 Thread Jeffrey Goldberg
On 2013-10-01, at 12:54 PM, Tony Arcieri basc...@gmail.com wrote:

 I wouldn't put it past them to intentionally weaken the NIST curves.

This is what has changed. Previously, I believed that they *wouldn’t* try to do 
something like that. Now we need to review things in terms of capability.

 That said, my gut feeling is they probably didn’t.

My exceedingly untrained intuition conforms to yours. But we do need to 
evaluate whether there are non-implausible mathematical and procedural 
mechanisms by which they could have. So the question for me is how implausible 
is it for there to be whole families of weak curves known to the NSA. I simply 
don’t understand the math well enough to even begin to approach that question, 
but …

If the NSA had the capability to pick weak curves while covering their tracks 
in such a way, why wouldn’t they have pulled the same trick with Dual_EC_DRBG? 
If they could have made the selection of P and Q appear random, it seems that 
they would have.  I know that this isn’t the identical situation, but again my 
(untrained) intuition suggests that there are meaningful similarities in ways 
they could (or couldn’t) cover their tracks.


Cheers,

-- 
Jeffrey Goldberg



Re: [cryptography] [Cryptography] are ECDSA curves provably not cooked? (Re: RSA equivalent key length/strength)

2013-10-01 Thread Jeffrey Goldberg
On 2013-10-01, at 3:10 PM, Tony Arcieri basc...@gmail.com wrote:

 On Tue, Oct 1, 2013 at 12:00 PM, Jeffrey Goldberg jeff...@goldmark.org 
 wrote:
 If the NSA had the capability to pick weak curves while covering their tracks 
 in such a way, why wouldn’t they have pulled the same trick with Dual_EC_DRBG?
 
 <tinfoilhat>They wanted us to think they were incompetent, so we would expect 
 that Dual_EC_DRBG was their failed attempt to tamper with a cryptographic 
 standard, and so we would overlook the more sinister and subtle attempts to 
 tamper with the NIST curves</tinfoilhat> 

Well of course I’d thought of that. (I think the difference between the tinfoil 
hat crowd and the rest of us is not in what we can imagine. If we can’t imagine 
things like that, then we aren’t doing our jobs. I think the difference is 
which of our imaginings we consider to be meaningfully plausible.)

Anyway, my “answer” to that is that it would be far far better for them to 
conceal that they were sabotaging standards at all. After all, they’d earned a 
great deal of trust and respect for helping to make standards better. So unless 
they anticipated something like the Snowden leaks and were playing a very long 
(and risky) game,… it just doesn’t pan out.

Either way -- and to reiterate what we’ve all learned -- they are willing to 
sabotage at least some standards. We can’t ignore that fact when looking at 
standards and the standards process.

Cheers,

-j

-- 
Jeffrey Goldberg



Re: [cryptography] PBKDF2 + current GPU or ASIC farms = game over for passwords (Re: TLS2)

2013-09-30 Thread Jeffrey Goldberg
On 2013-09-30, at 10:43 AM, Adam Back a...@cypherspace.org wrote:

 On Mon, Sep 30, 2013 at 02:34:27PM +0100, Wasa wrote:
 On 30/09/13 10:47, Adam Back wrote:
   PBKDF2 + current GPU or ASIC farms = game over for passwords.
 
 what about stronger pwd-based key exchange like SRP and JPAKE?

Well, SRP most certainly isn’t a solution to this problem. SRP requires a 
shared secret, so the attacker doesn’t even need to “crack a hash” after 
getting hold of a server’s password database. I don’t know enough about JPAKE 
to comment.

Of course SRP can be used in a way to ensure that the shared secret is never 
reused among services, but I don’t actually know how SRP is used in practice.

 of even more concern an attacker who steals the whole
 database of user verifiers from the server can grind passwords against it. 
 There is a new such server hashed password db attack disclosed or hushed up
 every few weeks.

They are far more common than that. See

  
http://blog.passwordresearch.com/2013/01/passwords-found-in-wild-for-december.html

Undiscovered breaches are probably much more common than hushed up breaches, 
which in turn are more common that disclosed breaches.

 You know GPUs are pretty good at computing scrypt.

I’ve been told by those who develop password cracking tools that (current) GPUs 
have a hard time with SHA512. So for the moment anyway, something like 
PBKDF2-HMAC-SHA512 should bring down the attacker-defender ratio. But this is 
hardly a long term solution and is focused on the specific architectures that 
exist today.

However, it is what I am advocating as a temporary measure until we have 
something usable out of the Password Hashing Competition. The PHC is intended 
to find (spur the development of) a successor to PBKDF2 and scrypt.

 https://password-hashing.net

 But there is a caveat which is the client/server imbalance is related to the
 difference in CPU power between mobile devices and server GPU or ASIC farms.

Yep. I work for a company that produces a password manager that is used on 
mobile devices. The attacker will have much more to bring to the fight. All we can do 
is try to make the best use of the 500ms we think our users will put up with 
for key derivation. At the moment, that's PBKDF2-HMAC-SHA512 with the number of 
iterations initially calibrated to 500ms on the device where the data was 
created.

But we are stuck with this asymmetry between attacker and defender, and have to 
design with it very much in mind.
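A minimal sketch of that calibration, assuming the 500 ms budget above. The salts are placeholders, and real code would average several timing probes rather than trust one.

```python
import hashlib
import time

def calibrate_pbkdf2(target_ms=500, probe_iters=20_000):
    """Estimate a PBKDF2-HMAC-SHA512 iteration count costing roughly
    target_ms on *this* device. One noisy probe; a sketch only."""
    t0 = time.perf_counter()
    hashlib.pbkdf2_hmac("sha512", b"probe password", b"probe-salt", probe_iters)
    elapsed_ms = (time.perf_counter() - t0) * 1000
    return max(probe_iters, int(probe_iters * target_ms / elapsed_ms))

iterations = calibrate_pbkdf2()
key = hashlib.pbkdf2_hmac("sha512", b"master password", b"per-user-salt",
                          iterations, dklen=32)
print(iterations, len(key))
```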

 Anyway and all that because we are seemingly alergic to using client side
 keys which kill the password problem dead.

I’m hardly allergic to that. Back in the 90s, I thought that this really would 
solve the password problem. I worked briefly on trying to initiate a project to 
develop a scheme where UK universities would sign client certs for members of 
the university. But, among other things, X.509 is a bitch.

So I'm not so much allergic as pessimistic. For a long time I thought the 
password problem would be solved within the next few years. I’ve long seen 
client keys as the solution of the future. My fear is that it will remain the 
solution of the future. (Of course, given the business I’m in, my pessimism may 
be self-serving.)

(Sure, a solution to the password problem would eliminate the need for the 
product that contributes to my livelihood, so maybe my pessimism is 
self-interested. But back when a chunk of my income was from fighting spam, I 
longed for the day there would be no need for those services.)

 For people with smart phones to
 hand all the time eg something like oneid.com (*) ...(*) disclaimer I 
 designed the crypto as a consultant to oneid.

Cool. I will take a look. And even in my pessimism, I don’t see passwords (even 
with great password management tools) sticking around forever. It’s just that 
I’ve learned over time that, like unencrypted email, they have a disturbing 
staying power.

Cheers,

-j





Re: [cryptography] One Time Pad Cryptanalysis

2013-09-26 Thread Jeffrey Goldberg
On 2013-09-26, at 1:49 PM, Michael Rogers mich...@briarproject.org wrote:

 Reuse of pads is also disastrous - VENONA made […]

Forgive me for taking this opportunity to repeat an earlier rant, but your 
message provides the perfect example.

When a one-time pad is operated perfectly, it provides perfect secrecy; but 
once it is operated with small deviations from perfection, it provides 
terrible security. Things that approximate the OTP in operation do not 
approximate it in security. This is a very good reason to steer people away 
from it.

This is an example of why we need to pay attention to how easy it is to screw 
things up and how badly things fail. For example, CBC mode will degrade 
proportionally with how poorly IVs are selected. CTR, on the other hand, can 
degrade catastrophically with poor nonces.
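The CTR failure mode is easy to demonstrate with a toy keystream. The hash-based generator below is a stand-in for a real CTR-mode cipher, just enough to show why nonce reuse is catastrophic:

```python
import hashlib

def toy_ctr_keystream(key, nonce, length):
    """Hash-based stand-in for a CTR keystream (not a real cipher)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key, nonce, plaintext):
    ks = toy_ctr_keystream(key, nonce, len(plaintext))
    return bytes(a ^ b for a, b in zip(plaintext, ks))

p1, p2 = b"attack at dawn!!", b"retreat at noon!"
c1 = encrypt(b"k" * 32, b"same-nonce", p1)
c2 = encrypt(b"k" * 32, b"same-nonce", p2)  # nonce reuse
xor = bytes(a ^ b for a, b in zip(c1, c2))
# The keystream cancels: an attacker learns p1 XOR p2 with no key at all.
assert xor == bytes(a ^ b for a, b in zip(p1, p2))
```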

Another example is that we prefer ciphers which are not vulnerable to related 
key attacks even though we expect good system design to not use related keys in 
the first place.

I’m suggesting that when offering advice to application developers on what 
sorts of systems to use, we should explicitly consider how easy it is for them 
to screw it up and how bad things get when they do.

Cheers,

-j





Re: [cryptography] Fatal flaw in Taiwanese smart card RNG

2013-09-16 Thread Jeffrey Goldberg
On 2013-09-16, at 11:56 AM, Seth David Schoen sch...@loyalty.org wrote:

 Well, there's a distinction between RNGs that have been maliciously
 designed and RNGs that are just extremely poor

This has been something that I’ve been trying to learn more about in the past 
week or so. And if this message isn’t really appropriate for this list, please 
suggest alternatives.

Roughly, I’m trying to figure out how bad the (alleged) RNGs that come with 
Apple or Microsoft operating systems could be without that badness being 
discovered. I’m not talking about the spectacularly awful RNG apparently used 
for the Taiwanese Citizen Digital Certificate cards. Previously I’d only thought 
about this in terms of accidental badness; it’s hard to get these things right. 
But more recently, I’ve had to think in terms of malicious implementation.

Now when I say “I’ve had to think about it”, that means little more than me 
just thinking about it. I don’t have the skills or training necessary to 
actually make much progress in my thinking.

My primary concern is with symmetric key generation. For my application, I’m 
not looking at streams, and I’m only generating a small number of keys. So 
really the question is: if I grab 32 bytes from, say, Apple’s 
SecRandomCopyBytes(), how unsuitable is that to use directly as a key? I 
*think* an equivalent way of asking this question is: do we have an estimate of 
how big a statistical distance there could be between these RNGs and a 
uniform distribution without it being detected by public research?

I believe that I can compensate for a non-negligible SD from uniform by just 
getting more data than the target key length and using something like HKDF(). 
But is the same approach appropriate if I consider that the RNG may be 
maliciously bad, but still not bad enough to have been detected?

It seems that one way to look for a large SD from uniform is to just fetch 
lots of data and look for collisions. (And without having to look for 
collisions among factors of public keys, as we have direct access to the output 
of the RNG.) But I’m not sure whether that is sufficient to demonstrate that I 
am safe using these RNGs as I do.
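To make the limitation concrete: a birthday-style collision test is only feasible when the output is truncated far below key length. A sketch (sample sizes chosen purely for illustration):

```python
import os

def count_collisions(draws, nbytes=4):
    """Draw `draws` samples of `nbytes` from the OS RNG and count repeats.
    With 4-byte outputs the birthday bound predicts a collision around
    2**16 draws; for full 32-byte keys the space is far too large to
    probe this way, which is exactly the worry discussed above."""
    seen, collisions = set(), 0
    for _ in range(draws):
        s = os.urandom(nbytes)
        if s in seen:
            collisions += 1
        seen.add(s)
    return collisions

# Expected collisions for n draws of b bytes: roughly n**2 / (2 * 256**b).
print(count_collisions(100_000))
```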

 It sounds like such extremely poor RNGs are getting used in the wild
 quite a bit, and these problems might well be detected by more
 systematic and widespread use of these researchers' techniques. It's

 true that a maliciously designed RNG would not be detected this way.

If we had access to the private keys (the factors) generated, would checking 
for collisions be sufficient for identifying a maliciously designed RNG? I have 
no clear intuition on whether it would, and I really need to not trust my 
intuitions anyway.

I’m interested in this both practically (so I can take appropriate defensive 
measures in app development) and also because I find this stuff fascinating. 
But I should acknowledge that my linear algebra sucks. I was able to fully 
understand everything in this research until the talk of a matrix as a 
generator for a lattice.



Re: [cryptography] no-keyring public

2013-08-24 Thread Jeffrey Goldberg
Szervusz Kristián.

On August 24, 2013 at 11:29:57 AM, Krisztián Pintér (pinte...@gmail.com) wrote:
 so the usual thing is to create a key pair, store the private key encripted 
 with a password. we automatically get a two factor authentication, we have a 
 know and a have. 

Yep. We need both the private key file and the password to decrypt it. I’ve 
called this “one and a half factor” at times.

 how about this? stretch the password with some KDF, derive a seed to a PRNG, 
 and use the PRNG to create the the key pair. if the algorithm is fixed, it will 
 end up with the same keypair every time. voila, no-keyring password-only public 
 key cryptography. 

I’m not sure why this would be preferable to simply storing the 
password-protected private key in a public place. It has the identical benefits 
in that the user doesn’t need to maintain and copy their private key from place 
to place, and it shares the same basic problem (you need a very good KDF and 
password), but it introduces other problems:

1. In your system the KDF for creating the seed to the PRNG can’t be salted. 
And so two people with the same password will end up with the same key pair. 
(You could store the salt in some public place, but if you are going to do 
that, you might as well store the encrypted private key.)

2. You can’t change your password without changing your key pair. (Though 
password changes don’t do a lot of good with the current system either.)

3. Key generation is slow and complex, presenting a greater opportunity for 
side channel attacks.

4. This means that we can never improve key generation. The particular 
heuristics that are used now, with the identical parameters, are things 
that we will be stuck with.

5. Key generation is slow (as you mentioned)
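A minimal sketch of the seed-derivation step in the proposal (iteration count made up), mainly to make problem 1 concrete:

```python
import hashlib

def keypair_seed(password: bytes) -> bytes:
    """password -> KDF -> PRNG seed, as proposed. The salt must be a
    fixed constant (empty here), since the user carries nothing but the
    password -- which is exactly problem 1 in the list above."""
    return hashlib.pbkdf2_hmac("sha256", password, b"", 200_000)

# Deterministic: the same password always yields the same seed, so a
# deterministic key-generation routine reproduces the same key pair
# on every device -- and two users with the same password collide.
assert keypair_seed(b"hunter2") == keypair_seed(b"hunter2")
assert keypair_seed(b"hunter2") != keypair_seed(b"hunter3")
```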

If your goal is to not have to have people keep track of their private key 
files, I’m not sure that this is a good way to do that. (Though I recently 
encountered this problem. I didn’t have my private keys on my “travel” laptop. 
I thought I’d saved them in my password manager, but it turns out I’d only 
saved the public keys.)

Szia,

Jeff


Re: [cryptography] no-keyring public

2013-08-24 Thread Jeffrey Goldberg
On August 24, 2013 at 1:41:27 PM, Ben Laurie (b...@links.org) wrote:

On 24 August 2013 19:14, Krisztián Pintér pinte...@gmail.com wrote:

 1. In your system the KDF for creating the seed to the PRNG can’t be
 salted.

nope, it can't be.

Can it not? A distributed store for salts seems possible...
OK, “can’t” was too strong a word. But it appears to me that any mechanism 
for delivering the salts might as well just deliver the encrypted private key. 
And such a system would undermine the original intent (as I understand it) of 
the proposal.

That is, if I understand the original intent, it is that the user doesn’t 
need to carry their (encrypted) private key with them. All they ever need to 
know is their password. If they need to know both their password and their 
salt, then either

(1) the salt gets distributed to them when they need it, or

(2) they need to carry the salt with them

In either case, there is no advantage (unless I’ve missed some point) to 
distributing/managing just the salt rather than distributing/managing the 
encrypted private key.

Cheers,

-j


Re: [cryptography] best practices for hostname validation when using JSSE

2013-08-09 Thread Jeffrey Goldberg
On Aug 9, 2013, at 1:49 PM, Tim Dierks t...@dierks.org wrote:

 the easiest thing to do is make sure the cert chains up to a root you trust 
 (ideally not system-installed roots, because nobody knows how deep the sewage 
 flows there

I recently had the opportunity to participate (as a relatively silent observer) 
in a conversation among people who did have an inkling of how deep the sewage 
flows there. I had known things were bad, but I had no idea how bad.

Let’s just say I whole-heartedly endorse the idea of pinning your own roots in 
applications.


Cheers,

-j


Re: [cryptography] Grover's Algo Beaten?

2013-07-28 Thread Jeffrey Goldberg
On Jul 27, 2013, at 9:29 PM, Russell Leidich pke...@gmail.com wrote:

 Is this to be taken seriously...
 
 Massachusetts Institute of Technology professor Seth Lloyd claims to have 
 developed a quantum search algo which can search 2^N (presumably unsorted) 
 records in O(N) time.

Grover’s original paper included a proof that his result was near a lower bound.
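For context, the standard counting (this is textbook Grover plus the BBBV lower 
bound, not anything from Lloyd's paper):

```latex
% Grover finds a marked record among M = 2^N unsorted records using about
T_{\mathrm{Grover}} \approx \frac{\pi}{4}\sqrt{M} = \frac{\pi}{4}\,2^{N/2}
% oracle queries, and the BBBV lower bound shows that any quantum search
% needs \Omega(\sqrt{M}) queries. A claimed O(N) = O(\log M) search would
% therefore be exponentially below that lower bound.
```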

I don’t understand QM well enough (my linear algebra sucks) to have understood 
the proof sufficiently to see clearly what assumptions it relies on.

Cheers,

-j


[cryptography] iMessage backdoor? [Was: skype backdoor confirmation]

2013-05-17 Thread Jeffrey Goldberg
On 2013-05-16, at 3:50 PM, william yager will.ya...@gmail.com wrote:

 I'm curious how, when I add a new device to my iMessage account, all my old 
 IMs show up in the chat history on the *new* device. If my understanding is 
 correct, it appears that someone who possesses a cleartext copy of the 
 messages is re-encrypting them with the new device's public key.

Off the top of my head, I can't think of a plausible explanation other than 
Apple keeping a copy of those messages around in either plaintext or in a form 
that they can decrypt on their own.

Cheers,

-j




Re: [cryptography] OT: Apple deluged by police demands to decrypt iPhones

2013-05-10 Thread Jeffrey Goldberg
On 2013-05-10, at 8:56 PM, Jeffrey Walton noloa...@gmail.com wrote:

 http://news.cnet.com/8301-13578_3-57583843-38/apple-deluged-by-police-demands-to-decrypt-iphones/

Let me highlight the last paragraph of the article:

 It's not clear whether that means Apple has created a backdoor for
 police -- which has been the topic of speculation in the past --
 whether the company has custom hardware that's faster at decryption,
 or whether it simply is more skilled at using the same procedures
 available to the government. Apple declined to discuss its law
 enforcement policies when contacted this week by CNET.

Nothing in anything we've seen suggests that Apple is able to break a device 
passcode faster than, say, Elcomsoft or any other similar tool.

Of course, I don't know that they don't have a backdoor, but it really doesn't 
sound like they are able to do any more than the numerous tools out there, 
which basically jailbreak the device and then install a brute-force cracker on 
it.

Apple appears to have calibrated PBKDF2 on these devices so that each passcode 
guess takes 250ms.  So a 4-digit passcode has a mean break time of about 20 
minutes.  I've recommended that people use a minimum of 6 digits, which takes 
about three days to search exhaustively at that rate. If you have reason to 
believe that someone might put in even more effort, then use an alphanumeric 
passcode.
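The back-of-the-envelope arithmetic behind those figures, assuming the 250ms 
per guess calibration (`crack_hours` is a name introduced here just for 
illustration):

```python
GUESS_SECONDS = 0.25  # ~250 ms per PBKDF2-stretched passcode guess


def crack_hours(digits: int, fraction: float = 0.5) -> float:
    """Hours to find an all-digit passcode after trying `fraction` of the
    10**digits keyspace (0.5 gives the mean, 1.0 the worst case)."""
    return (10 ** digits) * fraction * GUESS_SECONDS / 3600


mean_4 = crack_hours(4)        # ~0.35 h, i.e. about 21 minutes on average
worst_6 = crack_hours(6, 1.0)  # ~69 h, i.e. about three days exhaustive
```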

Apple may be more skilled at keeping the devices powered and running (or even 
overclocking) for extended cracking sessions, so I wouldn't be surprised if 
Apple does better at this than the off-the-shelf tools.

Cheers,

-j







Re: [cryptography] ICIJ's project - comment on cryptography tools

2013-04-08 Thread Jeffrey Goldberg
On Apr 8, 2013, at 7:38 AM, ianG i...@iang.org wrote:

 We all know stories.  DES is now revealed as interfered with, yet for decades 
 we told each other it was just parity bits.  

But it turned out that the interference made it *stronger* against an attack, 
differential cryptanalysis, that only the NSA and IBM knew about at the time. 

If history is a guide, the weaknesses that TLAs insist on are transparent: they 
are about (effective) key size. We have no way to know whether this will 
continue to be the case, but I'd imagine that the gap in knowledge between the 
NSA and the academic community diminishes over time; that makes me think they'd 
be even more reluctant to try to slip in a hidden weakness today than in 1975. 



Re: [cryptography] Here's What Law Enforcement Can Recover From A Seized iPhone

2013-03-28 Thread Jeffrey Goldberg
[Reply-To set to cryptopolitics]

On 2013-03-28, at 12:37 AM, Jeffrey Walton noloa...@gmail.com wrote:

 On Wed, Mar 27, 2013 at 11:37 PM, Jeffrey Goldberg jeff...@goldmark.org 
 wrote:

 ... In the other cases, the phones did have a passcode lock, but
 with 10,000 possible four-digit codes it takes about 40 minutes to run
 through them all given how Apple has calibrated PBKDF2 on these (4 trials
 per second).

 Does rooting and Jailbreaking invalidate evidence collection?

That is the kind of thing that would have to be settled by case law; I don't
know if evidence gathered this way has ever been offered as evidence in
a trial. (Note that a lot can be used against a suspect during an investigation
without ever having to be presented as evidence at trial.)

 Do hardware manufacturers and OS vendors have alternate methods? For
 example, what if LE wanted/needed iOS 4's hardware key?

You seem to be talking about a single iOS 4 hardware key, but each device
has its own. We don't know whether Apple has retained copies of those keys.

 I suspect Apple has the methods/processes to provide it.

I have no more evidence than you do, but my guess is that they don't, for
the simple reason that if they did, that fact would leak out. Secret
conspiracies (and that's what it would take) grow less plausible
as a function of the number of people who have to be in on them.
(Furthermore, I suspect that implausibility rises super-linearly with
the number of people in on a conspiracy.)

 I think there's much more to it than a simple brute force.

We know that those brute force techniques exist (there are several vendors
of forensic recovery tools), and we've got very good reasons to believe
that only a small portion of users go beyond the default 4 digit passcode.
In the case of LEAs, they can easily hold on to a phone for the 20 minutes
(on average) it takes to brute force it.

So I don't see why you suspect that there is some other way that only
Apple (or other relevant vendor) and the police know about.

Cheers,

-j


Re: [cryptography] Here's What Law Enforcement Can Recover From A Seized iPhone

2013-03-28 Thread Jeffrey Goldberg
On 2013-03-28, at 10:42 PM, Jon Callas j...@callas.org wrote:

 On Mar 28, 2013, at 6:59 PM, Jeffrey Walton noloa...@gmail.com wrote:
 
 We've seen it in the past with for example, Apple and location data,

 Well, with locationgate at Apple, that was a series of stupid and unfortunate 
 bugs and misfeatures. Heads rolled over it.

There are a couple of interesting lessons from LocationGate. The scary 
demonstrations were out and circulating before the press and public realized 
that what was cached were the locations of cell towers, not the phone's actual 
location, and that there was a good reason for caching that data. But I suspect 
that the large majority of people who remember it are still under the 
impression that Apple was arbitrarily storing the actual locations of the 
phone for no good reason.

The scare story spread quickly, with the more hyperbolic accounts getting the 
most attention. The corrective analysis probably didn't penetrate as widely.

The second lesson has to do with the status of the iOS data protection classes 
that can leave things unencrypted even when the phone is locked. There are 
things that we want our phones to do before they are unlocked with a passcode: 
we'd like them to know which local WiFi networks they can join, and we'd like 
them to precompute our location so that it is up and ready as soon as we do 
unlock the phone. As a consequence, things like WiFi passwords are not (or at 
least were not) stored in a way that is protected by the device key. The data 
protection classes NSFileProtectionNone and 
NSFileProtectionCompleteUntilFirstUserAuthentication have legitimate uses, but 
they do lead to cases where people may think that some data is protected when 
their device is off or locked when in fact it isn't.

The trick is how to communicate this to people, most of whom do not wish to be 
overwhelmed with information.  There are lots of other things like this 
(encrypted backups and thisDeviceOnly, the 10 seconds after lock before keys 
are erased, etc.) that people really ought to know. The information about these 
isn't secret; Apple publishes it. But it takes some level of sophistication to 
understand, and mostly what it takes is interest.

 In neither of those cases was anyone trying to spy. In each differently, 
 people were building cool features and some combination of bugs and failure 
 to think it through led to each of them. It doesn't excuse mistakes, but it 
 does explain them. Not every bad thing in the world happens by intent. In 
 fact, most of them don't.

What's the line? Never attribute to malice what can be explained by 
incompetence.

At the same time, we are in the business of designing systems that will protect 
people and their data under the assumption that the world is full of hostile 
agents. As I like to put it: I lock my car not because I think everyone is a 
crook, but because I know that car thieves exist.

Cheers,

-j


Re: [cryptography] Here's What Law Enforcement Can Recover From A Seized iPhone

2013-03-27 Thread Jeffrey Goldberg
On Mar 24, 2013, at 5:30 PM, Jeffrey Walton noloa...@gmail.com wrote:

 I wonder how they are doing it when other tools fails.

The article explained how they do it.  In the case they described, the phone 
had no passcode lock, so the data on the phone would not have been encrypted.  
In the other cases, the phones did have a passcode lock, but with 10,000 
possible four-digit codes it takes about 40 minutes to run through them all 
given how Apple has calibrated PBKDF2 on these (4 trials per second). 

I've been recommending that people turn off Simple Passcode on iOS devices 
and move to at least six digits. If your non-simple passcode is all digits, you 
still get the numeric keypad. 

I've written about all of that here,

http://blog.agilebits.com/2012/03/30/the-abcs-of-xry-not-so-simple-passcodes/

from when there were some hyperbolic claims about breaking into iPhones.



Re: [cryptography] Apple Keychain (was Keyspace: client-side encryption for key/value stores)

2013-03-25 Thread Jeffrey Goldberg
[Posted to list only]

On 2013-03-25, at 8:02 AM, Peter Gutmann pgut...@cs.auckland.ac.nz wrote:

 Another nice thing Apple have done, which no-one else has
 managed so far, is to get people to actively use the Keychain API and
 capabilities.

I just looked in my login (default) OS X Keychain for Application Passwords
that aren't from Apple supplied applications. I found 27 distinct applications
used. (I suspect that I also have a bunch of Login Passwords that are tied
to non-Apple applications as well, but don't have a convenient way to count
these).

The first versions of 1Password (the password management software
I'm involved with) used the OS X Keychain for the site passwords we stored.
(There were reasons we moved away from the OS X Keychain, most notably
that MobileMe syncing of keychains wasn't reliable.) It used a Keychain
distinct from the user's login Keychain.

In later versions of 1Password we used the OS X Keychain only for
the purposes that Keyspace seems designed for. We had different components
that needed to talk to each other securely (the code that ran the browser
plug-ins and the main application). So using the OS X Keychain to restrict
some data to specific applications was a good solution for us.

Now, with browser sandboxing and extension requirements, we can't use that
same technique (we can't write pure JavaScript extensions that make use of
the OS X Keychain, so we now use a WebSocket daemon running on localhost),
and we want a solution that works across platforms. So something like Keyspace
may be the sort of thing we will have to rely on. We are also looking at
whitebox cryptography so that at least we will have some theory behind how
good (or bad) our obfuscation is.

Basically, we'd love to have access to something like the OS X Keychain
everywhere. It worked, and we didn't have to develop our own techniques
for managing secrets needed by multiple related applications.

Cheers,

-j

-- 
Jeffrey Goldberg
Chief Defender Against the Dark Arts @ AgileBits
http://agilebits.com


Re: [cryptography] Mirror of NSA Cryptologs

2013-03-22 Thread Jeffrey Goldberg
On 2013-03-20, at 8:04 PM, John Young j...@pipeline.com wrote:

 NSA website seems overloaded. Mirror and index of declassified 
 NSA Cryptologs 1974-1997: 
 
 http://cryptome.org/2013/03/cryptologs/00-cryptolog-index.htm 

I just want to thank you and others for mirroring, OCRing, and
indexing this stuff. I've just been dipping in unsystematically,
but have found it fascinating.

Cheers,

-j


