Re: [cryptography] The Heartbleed Bug is a serious vulnerability in OpenSSL

2014-04-08 Thread Nico Williams
On Mon, Apr 07, 2014 at 11:02:50PM -0700, Edwin Chu wrote:
 I am not openssl expert and here is just my observation.
 [...]

Thanks for this analysis.

Sadly, a variable-sized heartbeat payload was probably necessary, at
least for the DTLS case: for PMTU discovery.

Once more, a lack of an IDL, standard encoding, and tools, has hurt us.
Hand-coded parsers/encoders are disasters waiting to happen.
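Heartbleed itself was exactly this kind of hand-coded-decoder failure: the code trusted the payload-length field inside the heartbeat message rather than the length of the record actually received.  A minimal sketch of the missing check (simplified message layout, not OpenSSL's actual code):

```python
import struct

def parse_heartbeat(record: bytes) -> bytes:
    """Parse a simplified heartbeat message: 1-byte type, 2-byte
    big-endian payload length, payload, then padding."""
    if len(record) < 3:
        raise ValueError("truncated header")
    msg_type, payload_len = struct.unpack("!BH", record[:3])
    # The check that was missing: the *claimed* payload length must
    # fit inside the record we actually received off the wire.
    if 3 + payload_len > len(record):
        raise ValueError("claimed payload exceeds received record")
    return record[3:3 + payload_len]
```

Echoing `payload_len` bytes back without that comparison is what leaked up to 64KB of heap per request.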

The TLS ad-hoc message syntax and encoding are not even adhered to
consistently in all the extensions, so I'm not sure that we could fix
this problem now (in TLS 1.3, say).  There was a thread on the TLS WG
list about this a while back...  Fixing this in 1.3 wouldn't fix the
implementations.  Making tooling available wouldn't either: it's very
difficult to retrofit an IDL compiler into a codebase with hand-coded
coders -- it's so difficult that it may be easier to build codebase-
specific IDL compilers.  Plus waiting for tooling would delay other
important enhancements.

Nico
-- 
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] [Cryptography] The Heartbleed Bug is a serious vulnerability in OpenSSL

2014-04-08 Thread Nico Williams
On Tue, Apr 08, 2014 at 01:12:25PM -0400, Jonathan Thornburg wrote:
 On Tue, Apr 08, 2014 at 11:46:49AM +0100, ianG wrote:
  While everyone's madly rushing around to fix their bits and bobs, I'd
  encourage you all to be alert to any evidence of *damages*, either
  anecdotal or more firm.  By damages, I mean (a) rework needed to
  secure, and (b) actual breach into sites and theft of secrets, etc.,
  leading to (c) theft of property/money/value, etc.
  
 [[...]]
  
  E.g., if we cannot show any damages from this breach, it isn't worth
  spending a penny on it to fix!
 
 This analysis appears to say that it's not worth spending money to
 fix a hole (bug) unless either money has already been spent or damages
 have *already* occurred.  This ignores possible or probable (or even
 certain!) *future* damages if no rework has yet happened.

The first part (gather data) is OK.  The second, I thought, was said
facetiously.  It is flawed, indeed, but it's also true that people have
a hard time weighing intangibles.

I don't know how we can measure anything here.  How do you know if your
private keys were stolen via this bug?  It should be possible to
establish whether key theft was feasible, but establishing whether they
were stolen might require evidence of use of stolen keys, and that might
be very difficult to come by.  We shouldn't wait for evidence of use of
stolen keys!

Nico


[cryptography] Client-side Dual_EC prevalence? (was Re: Extended Random is extended to whom, exactly?)

2014-04-01 Thread Nico Williams
On Mon, Mar 31, 2014 at 12:45 PM, Stephen Farrell
stephen.farr...@cs.tcd.ie wrote:
 The paper [2] also has more about exploiting dual-ec if you
 know a backdoor that I've not yet read really.

 [2] http://dualec.org/

That paper talks about servers.  What is the prevalence of Dual_EC on
the client-side of TLS?

Assuming most TLS usage involves RSA key transport (a fair assumption,
given the well-noted non-use of PFS until recent times), the client's
RNG is more critical than the server's.

I realize that client-side prevalence is harder to measure.  Still,
since Dual_EC was in the Java and SChannel stacks, it seems reasonable
to conclude that client-side Dual_EC penetration was quite high at its
peak, but is that right?

Nico


Re: [cryptography] Compromised Sys Admin Hunters and Tor

2014-03-24 Thread Nico Williams
On Sat, Mar 22, 2014 at 12:59 AM, Stephan Neuhaus
stephan.neuh...@tik.ee.ethz.ch wrote:
 On 2014-03-22, 04:28, Nico Williams wrote:
 Insiders are always your biggest threat.

 I'm always interested in empirical evidence for the things that we
 believe to be true. Do you have any?

[The context was sysadmins, who generally wield a lot of power.]

Anecdotal, yes.  I'm not sure if I'm at liberty to discuss any of the
events of which I have close knowledge, though one of them was in the
news at the time (that is, I'm not sure if I'm at liberty to discuss
the details).  In the largest incident I have close knowledge of, a
laid-off sysadmin left a time bomb in thousands of servers that caused
significant downtime for the business' customers.

And then there's Mr. Snowden...

...and the long line of insiders who spied against their nations,
versus the number of outsiders who made it through whatever
technological barriers were in their way.

Even if you limit yourself to the Internet era, the most famously
damaging attacks I can think of were all insider attacks.  Many were
not "attacks" in the sense of security attacks like buffer overflows,
but rather actions that went beyond legitimate access and badly
damaged a business (Nick Leeson, anyone?).

It stands to reason that insiders who have vast and/or intimate
knowledge, and legitimate access to a business' resources, have a lot
of power to cause damage.  By definition they have more capacity to
cause immediate damage than outsiders.  Whether insiders are the
biggest threat in the sense of probability is, of course, not easy to
predict and largely irrelevant: they are the first threat to protect
against.

I'm not sure that empiricism has any place in this very particular
matter; without the insiders on your side, you stand no chance against
outsiders.  So I'm not sure what you're asking for...  Even if there
were little data as to actual attacks by insiders, that would not mean
that insiders are not a danger, and even if individual insider risk
were empirically far lower than outsider risk, that would not mean
that the total damage an insider could cause is far less than that
which outsiders can cause.

Which isn't to say that outsiders must not be protected against.  Of
course security in depth is critical -- and the right approach.

Nico


Re: [cryptography] Compromised Sys Admin Hunters and Tor

2014-03-21 Thread Nico Williams
On Fri, Mar 21, 2014 at 7:01 AM, John Young j...@pipeline.com wrote:
 Sys admins catch you hunting them and arrange compromises
 to fit your demands so you can crow about how skilled you are.

Insiders are always your biggest threat.

 Then you hire them after being duped as you duped to be hired.

 The lead Tor designer reportedly (via Washington Post) had a
 session with NSA to brief on how to compromise it, although
 compromise was not used nor is the word used by
 gov-com-org-edu.

Er, so?  The NSA could just... read the public docs and source
anyways.  I'd personally love to be able to sit down with NSA
cryptonerds and chat -- if they talked at all I'd learn something.  As
long as there was no coercion anyways.

Nico


Re: [cryptography] pie in sky suites - long lived public key pairs for persistent identity

2014-01-03 Thread Nico Williams
On Fri, Jan 3, 2014 at 1:42 PM, coderman coder...@gmail.com wrote:
 - are you relieved NSA has only a modest effort aimed at keeping an
 eye on quantum cryptanalysis efforts in academia and other nations?

But clearly you must not be.

If you want to assume quantum cryptanalysis then you should only use
ECDH when you can protect the public keys with something like NTRU --
something we think is impervious to quantum cryptanalysis -- and that
only if you must exchange public keys over an insecure network at all.
Once you have that, then IMO the DJB curves look pretty good.  Once you
have session keys you can use AES in any reasonable AEAD mode (by
generic composition with HMAC, with SHA-3, GCM, whatever) if you like
(and I would, provided the implementation is constant-time).
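A runnable sketch of that generic encrypt-then-MAC composition.  To stay standard-library-only, an HMAC-SHA256 counter-mode keystream stands in for AES-CTR here; that substitution is purely for illustration, not a recommendation:

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Stand-in keystream (HMAC-SHA256 in counter mode) so the sketch
    # runs with only the standard library; in a real design this slot
    # is where AES-CTR (or another vetted cipher) goes.
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def seal(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    # Encrypt first...
    nonce = os.urandom(16)
    ct = bytes(p ^ k for p, k in
               zip(plaintext, _keystream(enc_key, nonce, len(plaintext))))
    # ...then MAC the ciphertext (and nonce) with an *independent* key.
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def unseal(enc_key: bytes, mac_key: bytes, sealed: bytes) -> bytes:
    nonce, ct, tag = sealed[:16], sealed[16:-32], sealed[-32:]
    # Verify the MAC in constant time *before* touching the ciphertext.
    want = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, want):
        raise ValueError("bad MAC")
    return bytes(c ^ k for c, k in
                 zip(ct, _keystream(enc_key, nonce, len(ct))))
```

Verifying before decrypting is what makes the composition robust against padding-oracle-style attacks.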

Why do you need working keys?  Mostly for session management reasons
(traffic analysis alert!).  If you can avoid the need to distinguish
between long-term and working keys, and you can physically distribute
public ECDH keys and then keep them secret, then you don't even need
NTRU.

Nico


Re: [cryptography] does the mixer pull or do the collectors push?

2013-11-28 Thread Nico Williams
Power management is an issue.  Therefore entropy collection cannot be
periodic, not with high frequency anyways.

Instead collection must happen as needed and/or opportunistically, and
as much entropy should be collected as possible without increasing
latency by too much.  Opportunistic collection is fine, but
opportunistic pushing into the mixer when there's no demand for CSPRNG
outputs... is not power management friendly.

Therefore:

 - for initial CSPRNG seeding the question is irrelevant: whether
   push or pull, we must wait until we have enough entropy from all
   the sources at hand;

   (since we're blocking, and there may/should be multiple sources,
   some of which may involve slow I/O, async/concurrent polling is
   necessary so that we wait no longer than we must for the slowest
   source)

 - for opportunistic (but not periodic, at least not with short
   periods) mixing-in of new seeds, mixers should consume entropy
   from all pools that have available entropy, without discriminating
   as to origin (since, after all, we're positing a properly-seeded
   CSPRNG).
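The concurrent-polling point for initial seeding can be sketched as follows (a toy illustration; the `sources` callables returning entropy bytes are hypothetical, not a real kernel interface):

```python
import concurrent.futures
import hashlib

def initial_seed(sources, need_bytes=32):
    """Block until every source has contributed, polling them all
    concurrently so the total wait is bounded by the slowest single
    source rather than the sum of all of them."""
    with concurrent.futures.ThreadPoolExecutor(
            max_workers=len(sources)) as pool:
        chunks = list(pool.map(lambda src: src(), sources))
    if sum(len(c) for c in chunks) < need_bytes:
        raise RuntimeError("not enough entropy gathered for initial seed")
    # Mix all contributions, regardless of origin.
    mixer = hashlib.sha256()
    for chunk in chunks:
        mixer.update(chunk)
    return mixer.digest()
```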

Remember, we started this long set of long threads because of a paper
about /dev/urandom robustness.  The assumption is partial compromise,
and the goal is to recover (i.e., cause the attacker's knowledge of
CSPRNG state to become stale) quickly.  Suspend/resume, buses that
allow external device-initiated DMA to all physical memory, ... these
are potential sources of partial compromise.

Anytime there's a suspend event the system should encrypt sensitive
state (e.g., filesystem keys, CSPRNG state), and decrypt it on resume.
Since the encryption key in that case is likely to be a
password-derived (i.e., weak) key, or it may be missing altogether
(the user doesn't want to bother), CSPRNG reseeding on resume is...
highly desirable.

Nico


Re: [cryptography] [Cryptography] Email is unsecurable

2013-11-27 Thread Nico Williams
On Mon, Nov 25, 2013 at 09:51:41PM +, Stephen Farrell wrote:
 New work on improving hop-by-hop security for email and other
 things is getting underway in the IETF. [1] Basically the idea

I see nothing in the proposed charter you linked to about hop-by-hop
security.

I could imagine something like Received headers to document how each
SMTP (and SUBMIT) end-point was authenticated (if they were) along a
mail transfer path.  This would be of some utility, particularly for
*short* paths (MUA -> MSA -> MTA -> mailbox); for longer paths this
loses its utility.

Nico


Re: [cryptography] [Cryptography] Email is unsecurable

2013-11-27 Thread Nico Williams
On Wed, Nov 27, 2013 at 06:02:08PM +, Stephen Farrell wrote:
 On 11/27/2013 05:42 PM, Nico Williams wrote:
  On Mon, Nov 25, 2013 at 09:51:41PM +, Stephen Farrell wrote:
  New work on improving hop-by-hop security for email and other
  things is getting underway in the IETF. [1] Basically the idea
  
  I see nothing in the proposed charter you linked to about hop-by-hop
  security.
 
 Isn't the "Using TLS" part enough? At least for the applications
 listed. Could be worth adding a sentence to the charter though,
 I guess.

Maaayyybe.

  I could imagine something like Received headers to document how each
  SMTP (and SUBMIT) end-point was authenticated (if they were) along a
  mail transfer path.  This would be of some utility, particularly for
  *short* paths (MUA -> MSA -> MTA -> mailbox); for longer paths this
  loses its utility.
 
 Not sure I get the utility there, at least as in scope for
 this proposed WG. Do you mean the receiving MUA would display
 the message differently or something?

Yes.  You get an e-mail from me.  Your edge MTA authenticated my MTA and
my MTA claims to have authenticated me.  Add in a signature by my MSA
over the relevant headers and body and your MUA can display my e-mail as
more authenticated than one that transited a non-secure link (or where
the transfer path was longer).

Note that there'd be a need for using DANE to authenticate MTAs acting
as *clients*.  There'd be no need to prove the use of DANE for
authenticating MTAs acting as servers: if the mail gets to its intended
destination, and the transit path is the shortest possible path, then
we're good to go.  But it'd still be desirable to use DANE for
authenticating MTAs acting as servers: to prevent e-mail falling into
the wrong hands.

Note too that if a path is longer than the absolute shortest possible
path then it's difficult to verify that there was no MITM in the path.
That is, an MTA along the path could have had its DNS MX RR lookups
spoofed so as to transfer my email to you via an MITM (who then gets to
see it).  It's not like a client can prove to a server that the client
used DANE to authenticate the server, so the servers can't protect
against this.  The shortest path will generally be: my MUA -> my MSA,
my MTA -> your MTA, your MTA -> your mailbox.  Internal MTA hops on my
and/or your side are irrelevant and can be noted as such or even not
recorded (assuming our respective internal networks are secure).  Policy
could be used to validate longer-than-shortest paths, but in practice
just the shortest-path approach will suffice.

 There might be an idea there though if some of the hops used
 e.g. anon-DH and someone developed a generic witness protocol
 to help try spot MITM attacks on that, and if the MSA and MTAs
 DKIM-sign messages, then a message header field containing the
 inbound & outbound witness-protocol PDUs that was included in
 the DKIM signature could be good.

If there's just anon-DH it's not terribly useful except as a way to
bootstrap up to using DANE.  If you use DANE then you get the above
property (all hops authenticated == much better than one [or more] hops
not authenticated).

 That sounds like it'd be a bit out of scope for UTA, but if
 that's what you meant (or similar) I'd say a mail to
 apps-discuss on that would be useful.

Right.

 But I don't think we'd want the UTA WG to be the one to
 develop a protocol for how to post-facto spot a MITM on anon-DH
 or other TLS sessions though. (Anyone got suggestions for that
 btw? Probably a different thread though.)

Agreed.

 (And yes, the above would depend on DKIM public key records in
 the non-DNSSEC DNS, so a DANE like thing and DNSSEC would be
 stronger, but given that lots of large and small mail services
 already do DKIM and don't change their keys that often, even
 the non-DNSSEC thing might be good enough.)

I'd prefer the hop-by-hop DANE thing for e-mail.  It makes much more
sense.


Re: [cryptography] [Cryptography] Email is unsecurable

2013-11-27 Thread Nico Williams
On Wed, Nov 27, 2013 at 08:01:19PM +, Stephen Farrell wrote:
 On 11/27/2013 06:58 PM, Nico Williams wrote:
  [...]
 
 I'm not sure detecting the path length in terms of ADMDs is so
 easy, not so useful in terms of MTAs (with all the spam checking

Sure it is!  Nowadays the path should generally be:

sender's MUA    -> sender's MSA
sender's MSA    -> sender's MTA  (this is generally internal, and
                   anyways it can be marked as such)
sender's MTA    -> recipient's MTA
recipient's MTA -> recipient's mailbox

Internal-to-sender/recipient's-infra hops are irrelevant/uninteresting.

Now, a recipient might use a third-party MX, but the recipient's MUA can
check that the MX RRs for the recipient's domain match the unexpected
path element.  (MX RRs can change, which can cause the path validation
results for an e-mail with non-shortest path to change over time.)

 that can go on), nor that the above is really explicable to users.
 We'd need to ask some mail folks.

No need to explain to users.  The MUA either validates the path and
marks the mail as verified to be from the sender, or... not.  It's a
boolean.  Users can be expected to understand a boolean.

Of course, it doesn't help the user that a phisher is authenticated;
phishers and spammers would be the first to implement this, as with
DKIM.  But the MUA can also AND the sender's presence in the recipient's
address book to the expression producing that boolean result.
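A sketch of that boolean (the hop names and the `(link, authenticated)` record format are invented here for illustration; nothing below comes from any spec):

```python
# Expected shortest transfer path, with internal relay hops already
# filtered out of the trace headers.
EXPECTED_PATH = ["MUA->MSA", "MSA->MTA", "MTA->MTA", "MTA->mailbox"]

def display_verified(hops, sender, address_book):
    """hops: (link, authenticated) pairs recovered from trace headers.
    True only if the path is the shortest expected one, every hop was
    authenticated, and the sender is already known to the recipient
    (the AND that blunts authenticated phishers)."""
    links_ok = [link for link, _ in hops] == EXPECTED_PATH
    auth_ok = all(authed for _, authed in hops)
    return links_ok and auth_ok and sender in address_book
```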

 But I like the emerging scheme below a good bit more:-)

:)

 The problem with DANE is the lack of DNSSEC. If we had both [...]

When I refer to DANE, I also mean that DNSSEC must be there.  We're
getting there.

 Otherwise I think we're in agreement and I'll send a pointer
 to this sub-thread to apps-discuss so follow up can happen
 there. (I think you're on that list right?)

I am.  Thanks for sending this there.

Nico


Re: [cryptography] [Cryptography] Email is unsecurable

2013-11-27 Thread Nico Williams

Viktor Dukhovni says that anything like DKIM/SPF is bound to fail.

One problem is confusables: users can't really distinguish them, and
some users can be counted on to do whatever it takes to give their
money to the phisher, no matter what.  In other words, the problem with
e-mail is that strangers can start conversations with you.  (Whereas
with web services you start the conversations with them, which is not
as big a problem.)

Nico


Re: [cryptography] New cipher

2013-11-02 Thread Nico Williams
On Saturday, November 2, 2013, Roth Paxton wrote:

 Check out www.cryptographyuniversal.com


The first few paragraphs are incomprehensible and defensive.  A perfect
sign that reading further is a waste of time.  If the author's paper was
rejected by so-and-so, then telling the world that they're just jealous
and to "just give me a chance"... isn't going to work.


Re: [cryptography] cryptographic agility (was: Re: the spell is broken)

2013-10-05 Thread Nico Williams
On Fri, Oct 4, 2013 at 11:48 PM, Jeffrey Goldberg jeff...@goldmark.org wrote:
 On 2013-10-04, at 10:46 PM, Patrick Pelletier c...@funwithsoftware.org 
 wrote:
 On 10/4/13 3:19 PM, Nico Williams wrote:

 b) algorithm agility is useless if you don't have algorithms to choose
 from, or if the ones you have are all in the same family.

 Yes, I think that's where TLS failed.  TLS supports four block ciphers with 
 a 128-bit block size (AES, Camellia, SEED, and ARIA) without (as far as I'm 
 aware) any clear tradeoff between them.

Well, maybe I was too emphatic.  I didn't mean that a protocol like,
say, TLS, should be born with a large number of ciphersuites.  It
needs to be born with *two* (of each negotiable cryptographic
primitive): to prove algorithm agility works.  Also, none of this
one-integer-to-name-combinations-of-all-algorithms business: key
exchange, authentication, and KDF should all be negotiated separately
from session ciphers (but cipher modes, OTOH, should not be negotiated
separately from ciphers).  The rationale is that a cartesian product
of algorithms in a manual registry (and with small integers!) is not
really manageable.  Some cipher modes could be separated from ciphers,
but there are relatively few combinations of ciphers and cipher modes,
so there's no need to separate them.

 The AES “failure” in TLS is a CBC padding failure. Any block cipher would 
 have “failed” in exactly the same way.

Indeed.  3DES and AES both failed because of CBC IV chaining without
randomization in SSHv2.  Any block cipher would have failed in the
same situation because the failure was the *mode*'s.

Nico


Re: [cryptography] the spell is broken

2013-10-04 Thread Nico Williams
On Fri, Oct 4, 2013 at 4:58 PM, Jeffrey Goldberg jeff...@goldmark.org wrote:
 On 2013-10-04, at 4:24 AM, Alan Braggins alan.bragg...@gmail.com wrote:

 Surely that's precisely because they (and SSL/TLS generally) _don't_
 have a One True Suite, they have a pick a suite, any suite approach?

 And for those of us having to choose between preferring BEAST and RC4
 for our webservers, it doesn’t look like we are really seeing the expected
 benefits of “negotiate a suite”.  I’m not trying to use this to condemn the
 approach; it’s a single example. But it’s a BIG single example.

That's because so many ciphersuites shared the same damned problems.

When we went through the chained-CBC problems in SSHv2, at least we had
CTR modes to fall back on.

There's a lesson here.  I'll make it two for now:

a) algorithm agility *does* matter; those who say it's ETOOHARD should
do some penitence;

b) algorithm agility is useless if you don't have algorithms to choose
from, or if the ones you have are all in the same family.

Nico


Re: [cryptography] the spell is broken

2013-10-04 Thread Nico Williams
On Fri, Oct 4, 2013 at 6:55 PM, Jeffrey Goldberg jeff...@goldmark.org wrote:
 b) algorithm agility is useless if you don't have algorithms to choose
 from, or if the ones you have are all in the same family”.

 Yep.

 And even though that was the excuse for including Dual_EC_DRBG among the
 other DRBGs, it doesn't take away from what you say.

I've never seen this reason given as an excuse for having Dual_EC
(though I can believe it).  I was referring to ciphersuites anyways;
one does not negotiate RNGs, after all!  (But, yes, RNG frameworks
should be pluggable.)

 I would add a third.

 c) The set of suites needs to be maintained over time, with a clear way to
 signal deprecation and to bring new things in. If we are stuck with the
 same set of suites that we had 15 years ago, everything in there may age
 badly.

Legacy is a difficult problem.  We should be less afraid to cut old
things off, but... it always proves too risky, so instead we hobble
along until the risk of continuing to allow very old legacy code to
interop overwhelms the risk of disabling interop with said old code.

Nico


Re: [cryptography] A question about public keys

2013-09-29 Thread Nico Williams
I should add that the ability to distinguish public DH keys from
random is a big deal in some cases.  For example, for EKE: there's a
passive off-line dictionary attack that can reject a large fraction of
possible passwords with each EKE iteration -- if that fraction is 1/2
then after about 20 rounds of EKE you'll have a very high likelihood
of having recovered the user's password.  This example is hinted at in
the Elligator paper (the paper's focus being on privacy protocols).
With Elligator (and randomly setting the one bit that is always zero
in curve25519 public keys), the passive attacker would have to observe
a very large number of EKE rounds before having enough evidence to
reject enough possible passwords (those that yield public keys larger
than 2^255 - 19) to have a good chance of recovering the actual password.
Elligator will be a great advance indeed, when it is available.
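The pruning arithmetic is worth making concrete (a back-of-the-envelope sketch, assuming a 1/2 per-round rejection rate and a million-word dictionary):

```python
def surviving_fraction(reject_fraction, rounds):
    """Fraction of candidate passwords a passive attacker cannot yet
    rule out after observing `rounds` EKE exchanges, when each round
    rejects `reject_fraction` of the remaining candidates."""
    return (1.0 - reject_fraction) ** rounds

# Distinguishable public keys: half the dictionary dies per observed
# round, so ~20 rounds narrow a million candidates down to about one.
candidates_left = 1_000_000 * surviving_fraction(0.5, 20)

# With an Elligator-style encoding, only keys encoding values >= the
# field prime are rejectable, so the per-round rejection fraction is
# astronomically small and the dictionary barely shrinks at all.
```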

Nico


Re: [cryptography] secure deletion on SSDs (Re: Asynchronous forward secrecy encryption)

2013-09-24 Thread Nico Williams
On Tue, Sep 24, 2013 at 12:03:12AM +0200, Adam Back wrote:

[In response to the idea of using encrypted file hashes as part of the
key wrapping procedure...]

 That's not bad (make the decryption dependent on accessibility of the
 entire file) -- nice as a design idea.  But that could be expensive in
 the sense that any time any block in the file changes, you have to
 re-encrypt the encryption key or, more efficiently, the key computed
 from the hash of the file.  Still, you have to re-write the header any
 time there is a block change, and do it atomically or log-recoverably,
 ideally.  Also you have to re-read and hash the whole file to
 re-compute the xor sha(encrypted-file) header.  Well, I guess even
 that is relatively fixable, probably, e.g. a merkle hash of the blocks
 of the file instead, plus a bit of memory caching.

You should want to do COW anyways.  If you use a Merkle hash tree then
the additional hashing is minimized.  You know, like ZFS.
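A minimal sketch of the Merkle-tree idea (not ZFS's actual on-disk layout): each data block is hashed once, and a single-block change only forces re-hashing the O(log n) interior nodes on its path to the root, not re-reading the whole file.

```python
import hashlib

def merkle_root(blocks):
    """Root hash over a list of data blocks; the last node is
    duplicated at odd-sized levels to keep the tree binary."""
    if not blocks:
        return hashlib.sha256(b"").digest()
    level = [hashlib.sha256(b).digest() for b in blocks]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]
```

The root could then feed the key-wrapping step, so decryptability depends on every block, while an update re-hashes only one leaf and its ancestors.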

Still, at the end of the day, if you can recover enough past blocks you
can recover deleted files.  Truly wiping anything requires being able to
at least wipe encryption keys (wrapped or otherwise), and since the
amount of truly wipeable storage is so limited... it's much harder to
support secure file deletion than secure filesystem/device wipe.  What
the OS could do is give you a smallish number of securely wipeable
containers, and you manage the rest from there.

Nico


Re: [cryptography] very little is missing for working BTNS in Openswan

2013-09-13 Thread Nico Williams
On Thu, Sep 12, 2013 at 08:28:56PM -0400, Paul Wouters wrote:

 Stop making crypto harder!

I think you're arguing that active attacks are not a concern.  That's
probably right today w.r.t. PRISMs, and definitely wrong as to coffee
shop wifi.

The threat model is the key.  If you don't care about active attacks,
then you can get BTNS with minimal effort.  This is quite true.

At least some times we need to care about active attacks.

 On Thu, 12 Sep 2013, Nico Williams wrote:
 Note: you don't just want BTNS, you also want RFC5660 -- IPsec
 channels.  You also want to define a channel binding for such channels
 (this is trivial).
 
 This is exactly why BTNS went nowhere. People are trying to combine
 anonymous IPsec with authenticated IPsec. Years dead-locked in channel
 binding and channel upgrades. That's why I gave up on BTNS. See also
 the last bit of my earlier post regarding Opportunistic Encryption.

It's hard to know exactly why BTNS failed, but I can think of:

 - It was decades too late; it (and IPsec channels) should have been
   there from the word go (RFC1825, 1995), and even then it would have
   been too late to compete with TLS given that the latter required zero
   kernel code additions while the former required lots.

 - I only needed it as an optimization for NFS security at a time when
   few customers really cared about deploying secure NFS because Linux
   lacked mature support for it.  It's hard to justify a bunch of work
   on multiple OSes for an optimization to something few customers used
   even if they should have been using it.

 - "Just do it all in user-land" has pretty much won.  Any user-land
   protocol you can think of, from TLS, to DJB's MinimaLT, to -heck-
   even IKE and ESP over UDP, will be easier to implement and deploy
   than anything that requires matching kernel implementations in
   multiple OSes.

   You see this come up *all* the time in Apps WG.  People want SCTP,
   but for various reasons (NAATTTS) they can't, so they resort to
   putting an entire SCTP or SCTP-like stack in user-land and running
   it over UDP.  Heck, there are entire TCP/IP user-land stacks
   designed to go faster than any general-purpose OS kernel's TCP/IP
   stack does.

   Yeah, this is a variant of the first reason.

There's probably other reasons; listing them all might be useful.  These
three were probably enough to doom the project.

The IPsec channel part is not really much more complex than, say,
connected UDP sockets.  But utter simplicity four years ago was
insufficient -- it needed to have been there two decades ago.

Nico


Re: [cryptography] Compositing Ciphers?

2013-09-07 Thread Nico Williams
We have a purely (now mostly) all-symmetric-key protocol: Needham-Schroeder
-- Kerberos.  Guess what: it doesn't scale, not without a strong dose of PK
(and other things).  Worse, its trusted third parties can do more than
MITM/impersonate you like PKI's can: they get to see your session keys
(unless you add PFS, of course).  For PFS you need asymmetric crypto.  To
scale you need asymmetric crypto *and* trusted third parties.  To
communicate at all you need peers to communicate with, peers who can turn
on you, or just plain screw up, or get conned.  Square #1, how well we know
thee.  Symmetric-only crypto isn't the answer, and evidently neither is PK
crypto.  With or without crypto, our problems are human problems.

A combination of PK and symmetric crypto is the best we can do in a
classical world, and transitive trust is the only way to scale to billions
(or even just a few tens of thousands) of people.  All of which means that
there will always be some degree of insecurity, as it always was before the
modern era, and as it has to be.  Because we have free will.  I don't know
what a post-quantum number factoring world will look like... a bit bleak I
guess, at least for a while, but hardly much bleaker than much of the past
one hundred years.

BTW, if it's the PRISMs that animate you: that is the land of politics;
and crypto is not the answer you seek, it's just a tool.   A tool that
might play a bi[tg] part in debates and their outcomes, but still, just a
tool, not a panacea.

[In theory Kerberos with hierarchical and web-of-trust trust could scale.
No one has attempted to scale it past a few .EDUs and a few .MILs.  With
PKINIT and PKCROSS -- bridges to PK[I] -- and trust routing it could
scale, and it'd then have roughly the properties PKI could have / should
have had with OCSP done right (i.e., stapled, and from the get-go).
Kerberos still has a long life ahead of it in corporate and university
networks, I'm fairly certain of that.  But without PK it can't scale to
Internet scale.  I don't think any other all-symmetric-key cryptographic
protocols can do better than Needham-Schroeder.]

Nico


Re: [cryptography] Compositing Ciphers?

2013-09-06 Thread Nico Williams
On Fri, Sep 6, 2013 at 7:27 PM, Jeffrey Walton noloa...@gmail.com wrote:
 I've been thinking about running a fast inner stream cipher (Salsa20
 without a MAC) and wrapping it in AES with an authenticated encryption
 mode (or CBC mode with {HMAC|CMAC}).

My own very subjective opinion is that assuming all of: constant time
implementations, an appropriate cipher mode, proper {key management,
RNG, local end-point security}, then AES is perfectly safe.

Of course, that's a lot of assumptions!  You'll almost certainly fail
at the local end-point security part.  Long before your choice of
ciphers is attacked your systems/protocols will have succumbed to
other, cheaper attacks -- assuming they are targeted at all.

 I'm aware of, for example, NSA's Fishbowl running IPsec at the network
 layer (the outer encryption) and then SRTP at the application
 level (the inner encryption). But I'd like to focus on hardening one
 cipherstream at one level, and not cross OSI boundaries.

If you have the hardware for it, that's fine.  I wouldn't bother
composing ciphers in any given layer.

 Has anyone studied the configuration and security properties of a
 inner stream cipher with an outer block cipher?

Well, yes, it's been studied.  Look for papers on 3DES, for example.
Make sure not to make mistakes that leave you susceptible to
meet-in-the-middle type attacks.  But, really, first make sure that
you've covered the other bases, the ones that are going to be your
Achilles' heel if you don't, such that your adversaries have no choice
but to attack the crypto.  THEN concern yourself with improving the
crypto.

IMO.  Also, IANAC.

Nico
--
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Compositing Ciphers?

2013-09-06 Thread Nico Williams
On Fri, Sep 6, 2013 at 8:05 PM, Jeffrey Walton noloa...@gmail.com wrote:
 I'm more worried about key exchange or agreement.

The list of things to get right is long.  The hardest is getting the
implementation right -- don't do all that work just to succumb to a
remotely exploitable buffer overflow.  Next up is physical security.
Then key management.  Then all the crypto stuff (ciphers, modes, MACs,
hash functions, ...).  Then the RNG.  That's assuming off-the-shelf
crypto algorithms.

And then there's your trusted insiders/counterparties.  They are your
biggest risk of all, or possibly second biggest, after plain old
buffer overflows and similar.

Nico
--
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Reply to Zooko (in Markdown)

2013-08-17 Thread Nico Williams
On Sat, Aug 17, 2013 at 12:50 PM, Jon Callas j...@callas.org wrote:
 On Aug 17, 2013, at 12:49 AM, Bryan Bishop kanz...@gmail.com wrote:
  Would providing (signed) build vm images solve the problem of distributing
  your toolchain?

A more interesting approach would be to use a variety of independently
sourced disassemblers to compare builds and check that object code
differences from one build to the next can be accounted for by
corresponding changes to the source code or build systems.  This is
not really tractable when you change compilers or their settings, but
at least you can get a pretty good idea as you develop of what object
code is being produced.  This is terribly time-consuming, but you can
automate the comparison process and archive results for post-mortems
as a deterrent.  You'd have to do this on multiple machines handled by
different people, and so on...

It's not too far-fetched; see http://illumos.org/man/1onbld/wsdiff
(Solaris release engineering used to use this tool, and I imagine that
they still do).

 I *cannot* provide an argument of security that can be verified on its own.
 This is Godel's second incompleteness theorem. A set of statements S cannot
 be proved consistent on its own. (Yes, that's a minor handwave.)

No one can.  We're in luck w.r.t. the Thompson attack: it needs care
and feeding, as it will rot if not kept up to date.  Any effort to
make it clever enough to keep up with a changing code base is likely
to lead to the attack being revealed.  Any effort to maintain it risks
detection too.  Any effort to use it risks detection.  And today a
Thompson attack would have to hide from a multiplicity of disassemblers
(possibly run on uncompromised systems), decompilers, and, of course,
tracing and debugging tools that may work at layers that the generated
exploit cannot do anything about (e.g., DTrace) without the bugged
compiler having been used to build pretty much all of those tools.
That is, I wouldn't worry too much about the Thompson attack.

 All is not lost, however. We can say, Meh, good enough and the problem is
 solved. Someone else can construct a *verifier* that is some set of policies
 (I'm using the word policy but it could be a program) that verifies the
 software. However, the verifier can only be verified by a set of policies
 that are constructed to verify it. The only escape is decide at some point,
 meh, good enough.

Yes, it's turtles all the way down.  You stop worrying about far
enough turtles because you have no choice (and hopefully they are too
far to really affect your world).

 I hope I don't sound like a broken record, but a smart attacker isn't going
 to attack there, anyway. A smart attacker doesn't break crypto, or suborn
 releases. They do traffic analysis and make custom malware. Really. Go look
 at what Snowden is telling us. That is precisely what all the bad guys are
 doing. Verification is important, but that's not where the attacks come from
 (ignoring the notable exceptions, of course).

Indeed, the vulnerabilities from the plethora of bugs we
unintentionally create overwhelm (or should, in any reasonable
analysis) any concerns about turtles below the one immediately holding
up the Earth.

Nico
--


Re: [cryptography] LeastAuthority.com announces PRISM-proof storage service

2013-08-16 Thread Nico Williams
On Fri, Aug 16, 2013 at 2:11 PM, zooko zo...@zooko.com wrote:
 On Tue, Aug 13, 2013 at 03:16:33PM -0500, Nico Williams wrote:

 Nothing really gets anyone past the enormous supply of zero-day vulns in 
 their complete stacks.  In the end I assume there's no technological PRISM 
 workarounds.

 I agree that compromise of the client is relevant. My current belief is that
 nobody is doing this on a mass scale, pwning entire populations at once, and
 that if they do, we will find out about it.

That's fair, and true enough, although you never know.  Pwning
everyone is a very costly operation: you can only do it once for each
pwn, and the political risks and costs are high enough to put the
entire concept at risk.  But we've seen actors take some breathtaking
risks in recent years (e.g., Flame)...

 My goal with the S4 product is not primarily to help people who are being
 targeted by their enemies, but to increase the cost of indiscriminately
 surveilling entire populations.

That's fair, and a point that I should learn to make in general.  We
saw China back down from banning github -- that's a big clue that
sufficiently popular services have leverage against foreign
governments, and possibly local ones too.

Nico
--


Re: [cryptography] urandom vs random

2013-08-16 Thread Nico Williams
On Fri, Aug 16, 2013 at 7:24 PM, D. J. Bernstein d...@cr.yp.to wrote:
 I'm not saying that /dev/urandom has a perfect API.  [...]

It might be useful to think of what a good API would be.  I've thought
before that the Unix everything-as-a-file philosophy makes for lame
entropy APIs, and yet it's what we have to work with...

I'd like something like /dev/urandom128 - min. 128 bits of real
entropy in the pool.

I'd also wish open(2) of AF_LOCAL socket names were the same as a
connect(2) on the same thing, and to block like named pipe opens do
(why on Earth is this not so?  what could possibly break if it were
so?  considering that named pipe opens block... one would think
nothing could break).  Then we could have each open of /dev/prngN
result in a PRNG octet stream seeded by N bits of real entropy.

(I saw a blog post recently about using AF_LOCAL sockets as PID files.
 Making open(2) of them == connect(2) to them would make that an
awesome idea.)

Nico
--


Re: [cryptography] LeastAuthority.com announces PRISM-proof storage service

2013-08-13 Thread Nico Williams
On Tue, Aug 13, 2013 at 12:02 PM, ianG i...@iang.org wrote:
 Super!  I think a commercial operator is an essential step forward.

A few points:

 - if only you access your own files then there's much less interest
for a government in your files: they might contain evidence of crimes
and conspiracies, but you can always be compelled to produce those

 - if you share files then traffic analysis will reveal much about
what you're up to, and there may be much interest in getting at your
files' contents.

 - commercial operators who give you software to run can compromise
(or allow governments to compromise) you even if they are not
technically an end-point[*] for your end-to-end protocols.

 - it's really not easy to defeat the PRISMs.  the problem is
*political* more than technological.

 - i'm not trying to detract from Tahoe-LAFS -- it's a spectacular
idea, I wish it well, and I generally endorse filesystems of this
sort.

[*]  In Tahoe-LAFS, ZFS, and any other similar filesystems, there is
only one end-point: the client(s); the server, in particular, is NOT
an end-point.

Nico
--


Re: [cryptography] LeastAuthority.com announces PRISM-proof storage service

2013-08-13 Thread Nico Williams
On Tue, Aug 13, 2013 at 2:09 PM, Peter Saint-Andre stpe...@stpeter.im wrote:
 Although presumably there would be value in shutting down a
 privacy-protecting service just so that people can't benefit from it any
 longer. When the assumption is that everything must be public, any
 service that keeps some information non-public might be perceived as a
 threat.

This is the only way in which crypto helps against the PRISMs: when
legitimate business interests come to depend enough on services that
can neither easily be compromised by the PRISM nor easily be shut off
because of the large dependence on those services.  That's really more
a political effect than a technological one, though facilitated by
technology.

Nothing really gets anyone past the enormous supply of zero-day vulns
in their complete stacks.  In the end I assume there's no
technological PRISM workarounds.

Nico
--


Re: [cryptography] Updated Certificate Transparency site

2013-08-01 Thread Nico Williams
On Thu, Aug 1, 2013 at 12:57 PM, wasa bee wasabe...@gmail.com wrote:
 in CT, how do you tell if a newly-generated cert is legitimate or not?
 Say, I am a state-sponsored attacker and can get a cert signed by my
 national CA for barclays. How do you tell this cert is not legitimate? It
 could have been barclays' IT admin who asked for a new cert.
 Do companies need to liaise with CT to tell them which certs are valid? Do
 they need to tell CT each time they change or get new certs?

CT allows the relying parties (e.g., TLS clients) only to verify that
the CA issued the cert in an auditable way.  Only the owners of
resources named by certs (or their agents) can meaningfully audit
certificate issuance.  If everyone does their part, CT makes the risk
of discovery of dishonest CA behavior too great for CAs to engage in
it.

If you're in a position to know what CAs are allowed to issue certs
for a given name, then you can check for (audit) a) issuance of certs
for that name by unauthorized CAs, b) issuance of new certs by
authorized CAs but for unauthorized public keys.

Nico
--


Re: [cryptography] HKDF salt

2013-08-01 Thread Nico Williams
Two words: rainbow tables.

Salting makes it impossible to pre-compute rainbow tables for common
inputs (e.g., passwords).

Now, this HKDF is not intended for use as a PBKDF, so the salt
effectively adds no real value when the input key material is truly
random/unpredictable by attackers, which it damned well ought to be.
OTOH, if the IKM is weak, or if you don't know if it could be, then
salting defeats rainbow tables.

In other words: salting doesn't hurt, and might really help.  Salting is good.

Nico
--


Re: [cryptography] [liberationtech] Random number generator failure in Rasperri Pis?

2013-07-19 Thread Nico Williams
On Fri, Jul 19, 2013 at 4:52 PM, Lodewijk andré de la porte
l...@odewijk.nl wrote:
 2013/7/19 Mahrud S dinovi...@gmail.com
 Isn't the thermal noise a good enough entropy source? I mean, it's a $25
 computer, you can't expect much of it.

 See, sir, you shouldn't wonder why all your data isn't actually encrypted.
 You shouldn't think it's weird that nothing is secure on your pc. And that
 everyone can fake your digital signature shouldn't surprise you either. Your
 computer was only $25. I mean, what'd you expect?

Reminder: the blog post in question was about how *much* better the HW
RNG on the rpi was than some crappy PRNG.  A bit of a strawman, yes,
but no way can that even remotely be confused with a complaint about
the rpi's HW RNG.

 If it cannot do what it claims, then it shouldn't claim to be able to do so.
 We're application layer here, so the OS should put a stop to people getting
 bad random numbers. If that means the OS takes 20 seconds to make a random
 on a $25 pc, that's okay. It never guaranteed us to be quick. It's not okay
 to give us bad random numbers. Ever.

 A hardware RNG is just another source of entropy I think. But it seems the
 Raspberry Pi's RNG should generate random numbers completely on its own.
 Without proofs that's a no-no. Not sure that FIPS test is enough proof.

The rpi's HW RNG is almost certainly better than many /dev/*random
implementations running as VM guests.  How much real business is
getting transacted on VMs nowadays?  Probably a lot.

Nico
--


Re: [cryptography] 100 Gbps line rate encryption

2013-07-17 Thread Nico Williams
On Wed, Jul 17, 2013 at 7:42 AM, ianG i...@iang.org wrote:
 On 17/07/13 10:50 AM, William Allen Simpson wrote:
 Thing is, you don't just need an encryption algorithm, you also need IV,
 MAC, Padding concepts.  (I agree that using a stream cipher obviates any
 messing Padding needs and the 'mode' choice.)

Choices for dealing with padding:

 - accept padding

 - use a stream cipher

 - use a counter cipher mode (not unlike a stream cipher)

 - use ciphertext stealing modes

Kerberos uses CTS for AES, but it's proven to be painful.

My advice: accept the padding.


Re: [cryptography] authentication protocol proposal

2013-07-17 Thread Nico Williams
 Subject [cryptography] authentication protocol proposal

For authentication of what/whom, with what credentials, to what
target(s)?  Ah, users with passwords to some node with a password
verifier.

On Wed, Jul 17, 2013 at 4:54 PM, Krisztián Pintér pinte...@gmail.com wrote:
 hello,
 some benefits:

 [...]
 * any amount of data can be derived, and it is not costly (unlike PBKDF2)
 [...]

Well, so in general we want PBKDFs to be slow and require lots of RAM
as a defense against off-line password attacks on stolen password
verifiers.  Once you have a session key you should want to use a KDF,
not a PBKDF, because you need the KDF to be fast.

Nico
--


Re: [cryptography] [liberationtech] Heml.is - The Beautiful Secure Messenger

2013-07-12 Thread Nico Williams
[BTW, when responding to a message forwarded, do please fix the quote
attribution.]

On Fri, Jul 12, 2013 at 2:29 PM, ianG i...@iang.org wrote:
 This thread has been seen before.  On-chip RNGs are auditable but not
 verifiable by the general public.  So the audit can be done then bypassed.
 Which in essence means the on-chip RNGs are mostly suitable for mixing into
 the entropy pool.

 Not to mention, Intel have been in bed with the NSA for the longest time.
 Secret areas on the chip, pop instructions, microcode and all that ...  A
 more interesting question is whether the non-USA competitors are also
 similarly friendly.

I'd like to understand what attacks NSA and friends could mount, with
Intel's witting or unwitting cooperation, particularly what attacks
that *wouldn't* put civilian (and military!) infrastructure at risk
should details of a backdoor leak to the public, or *worse*, be stolen
by an antagonist.  I would hope that talented folks at the NSA would
be averse to embedding backdoors in hardware (and firmware, and
software) that they could lose control of, especially in light of
recent developments.  I'm *not* saying that my wishing is an argument
for trusting Intel's RNG -- I'm sincerely trying to understand what
attacks could conceivably be mounted through a suitably modified
RDRAND with low systemic risk.

For example, there might be a way to close a backdoor in a hurry,
should it leak.

Understanding the attacks that sigint agencies might mount in this
fashion might help us understand the likelihood of their attempting
them.

I think it's important to highlight the systemic risk caused by
embedding backdoors everywhere.  See Security Implications of
Applying the Communications Assistance to Law Enforcement Act to Voice
over IP, by Bellovin, Blaze, et. al.  Systemic failures can be
extremely severe.  The 2008 financial crisis was a systemic failure,
and, sadly, I can imagine far worse systemic failures.  Minimizing
systemic risk should be a key policy goal in general, but management
of systemic risk is inherently not in the interests of any short-term
political actors, therefore it's important to ensure institutional
inertia for systemic risk minimization.  The NSA that once worked to
strengthen DES against differential cryptanalysis clearly thought so
(or, rather, the people who made that happen did) -- is today's NSA no
longer interested in the nation's civilian and military security?!

Nico
--


Re: [cryptography] SSL session resumption defective (Re: What project would you finance? [WAS: Potential funding for crypto-related projects])

2013-07-03 Thread Nico Williams
On Tue, Jul 2, 2013 at 10:07 AM, Adam Back a...@cypherspace.org wrote:
 On Tue, Jul 02, 2013 at 11:48:02AM +0100, Ben Laurie wrote:

 On 2 July 2013 11:25, Adam Back a...@cypherspace.org wrote:

 does it provide forward secrecy (via k' = H(k)?).


 Resumed [SSL] sessions do not give forward secrecy. Sessions should be
 expired regularly, therefore.


 That seems like an SSL protocol bug no?  With the existence of forward
 secret ciphersuites, the session resumption cache mechanism itself MUST
 exhibit forward secrecy.

The whole point of session resumption is to make the handshake fast.
It can't be fast if it implies public-key cryptography.  Now, with ECC
DH it's probably fast enough anyway, so, yes, we should do this.

 Do you think anyone would be interested in fixing that?

It's already possible to resume then renegotiate with an anon ECC DH
cipher suite.  Oh, wait, no, anon ECC DH with AES cipher suites were
left out (by accident).  So the fix might just be to register the
missing cipher suites and always renego with one of those immediately
after resuming a session.  We could then work on a round-trip
optimized session resumption with PFS feature.

But first we'd have to get users to use cipher suites with PFS.  We're
not really there.

Nico
--


Re: [cryptography] Is the NSA now a civilian intelligence agency? (Was: Re: Snowden: Fabricating Digital Keys?)

2013-07-01 Thread Nico Williams
On Mon, Jul 1, 2013 at 3:37 AM, ianG i...@iang.org wrote:
 Hmmm.  Thanks, Ethan!  Maybe I'm wrong?  Maybe the NSA was always allowed to
 pass criminal evidence across to the civilian police forces.  It's a very
 strange world.

No, the doctrine of the fruit of the poisonous tree makes it
non-trivial to avoid the requirements of the 4th Amendment regarding
search and seizure.  The non-triviality is this: LEA must somehow hide
the warrant-less wiretapping (search) and produce a plausible path
(and chronology) for how they came to the probable cause that they
eventually will bring to a judge.  This is non-trivial, but not *that*
hard either, and in some cases it may well be trivial.  And when LEA
get caught doing this nothing terribly bad happens to LEA (no officers
go to prison, for example).  But when the *NSA* does this the risk of
method information leaking to the public is very large, which is one
reason to prefer that PRISM-type projects, if they exist at all, be
and remain forever secret -- their own secrecy is the best and
strongest (though even then, not fail-safe) guaranty of non-use for
criminal investigations.

Ironic, no?  We should almost wish we'd never found out.

Nico
--


Re: [cryptography] post-PRISM boom in secure communications (WAS skype backdoor confirmation)

2013-07-01 Thread Nico Williams
On Mon, Jul 1, 2013 at 9:05 AM, Eugen Leitl eu...@leitl.org wrote:
 On Mon, Jul 01, 2013 at 01:31:51PM +0200, Guido Witmond wrote:

 The only answer is to take key management out of the users' hands. And
 do it automatically as part of the work flow.

 You need at least a Big Fat Warning when the new fingerprint
 differs from the cached one, and it's not just expired.

OTR's model should suffice.


Re: [cryptography] Is the NSA now a civilian intelligence agency? (Was: Re: Snowden: Fabricating Digital Keys?)

2013-07-01 Thread Nico Williams
On Mon, Jul 1, 2013 at 4:57 PM, grarpamp grarp...@gmail.com wrote:
 And when LEA
 get caught doing this nothing terribly bad happens to LEA (no officers
 go to prison, for example).

 It is often in the interest/whim of the executive to decline to
 prosecute its own,
 even if only to save embarassment, so many of these cases will never see a 
 jury.
 That's why you need citizen prosecutors who can bring cases before both grand
 and final jury. For example, how many times have you seen a LE vehicle failing
 to signal, speeding/reckless, with broken running lights, etc... now
 try to criminally
 (not administratively) prosecute that just as you might be prosecuted for 
 same.

I'd love to see proposals for how criminal prosecutions by the
public would work.

 their own secrecy is the best and
 strongest (though even then, not fail-safe) guaranty of non-use for
 criminal investigations.

 Didn't the requisite construction of plausible paths from tainted seed just
 get covered. So, No! The only guaranty against secret taint is transparency.
 Try removing the 'non-' next time.

Sometimes it's easy to cover up, sometimes it's not.  If you look at
how the Allies used their cryptanalytic breaks in WWII you'll see that
they made sparing use of their sigint obtained that way -- they had to
be very careful when to act and when not to act on it, and when they
did, they had to take extra steps to make the enemy believe other
avenues were plausible.

Transparency is nice, but the thing is: I don't think you can keep a
PRISM-like system secure from being abused by analysts and sysadmins,
much less by political appointees, and I think it's harder still to
pull that off if its existence is public knowledge.  Whereas the
incentive to keep the secret from spilling is so strong that it should
act as a moderator on its operators.  That incentive is lost once the
program is public, and then transparency isn't enough: there's always
going to be ways to game the controls, and those controls will never
be as strong as the need to keep the program secret had been.

I could be wrong though.  It might well be that in practice there's no
difference between abuse potential when the program was secret vs. now
that it's public, in which case it's clearly better that it be known
to the public.  But my instinct tells me otherwise, and that's not a
defense of the program, just... paradoxical, ironic.

Nico
--


Re: [cryptography] Snowden: Fabricating Digital Keys?

2013-06-28 Thread Nico Williams
On Tue, Jun 25, 2013 at 6:01 PM, Peter Gutmann
pgut...@cs.auckland.ac.nz wrote:
How would one fabricate a digital key?

They probably meant something that sounds close.  E.g., minted a
certificate, or a ticket, or token, or whatever the thing is, by
subverting an issuing authority or its processes (possibly via social
engineering).

It's not like there are many people outside [a very small part of] the
tech industry who'd understand what was said or meant (or meant to be
said), or even what actually happened.  What does it matter if a
journalist writes digital key when perhaps what they heard was
digital certificate followed by a brief, overly simplified
explanation of PKI concepts?  We're not the audience, and the public
won't know the difference -- it's all gibberish unless analogized to
off-line concepts.

I don't think there's any chance that Snowden broke a public key
algorithm in use at the NSA -- there's always an easier path,
particularly for a well-placed insider.

Insiders are usually the biggest threat to any organization.  There
isn't much you can do about them except limit the scope of damage they
may cause (e.g., by limiting the size of the data collection they may
access, by, e.g., not being such a large organization).

 He used his root access to get into other people's accounts.

Depending on how careless the others are, one might not even need
root.  It can be very easy to escalate privilege when people are careless.


Re: [cryptography] skype backdoor confirmation

2013-05-23 Thread Nico Williams
On Mon, May 20, 2013 at 1:50 PM, Mark Seiden m...@seiden.com wrote:
 On May 20, 2013, at 1:18 PM, Nico Williams n...@cryptonector.com wrote:
 Corporations are privacy freaks.  I've worked or consulted for a
 number of corporations that were/are extremely concerned about data
 exfiltration.

 this is completely dependent on context -- the kind of company, the 
 communicants involved,
 the regulatory environment, the material being conveyed.   the variability is 
 about as high as
 for natural persons, i reckon.

Yes, but there's always a need for privacy protection, and it's always
well-justified and reasonable.  And it's common to default to privacy
protection.

 particularly in financial services, firms try to record and retain all of the 
 communication with
 their customers in any channel.  if they can't record it, they don't want to 
 hear it (e.g. trading
 instructions sent via IM…)

Recording is one thing, but those recordings still need privacy
protection.  Customer data is treasured.

 I'd not advise such corporations to use Skype without an agreement
 with Skype as to what can/does happen to their data, or else to be
 very careful about what is exchanged over Skype.  And it does happen
 that sometimes a corporation's employees need to communicate with
 people over Skype or similar *external* systems.


 you can advise whatever you fancy, but skype, google, microsoft are unlikely
 to agree to any such thing unless your client is a Really Big company who
 pays them a lot of money.  and why should they even bother their lawyers?
 pretty much, their service Is What it Is, take it or leave it.

Contracts are contracts.  Especially if you pay for a service and
privacy protection is stipulated, then the service provider has civil
liability.  And if you have the pocket depth for a lawsuit you have a
good chance of getting said privacy protection, though not likely in
relation to LEA (that depends on applicable laws and how much LEA
respects them).

 of course, your clients are free to use some other service that provides what 
 they're looking for
 or… do it themselves, which gives them total control and the high costs that 
 go with that.

Correct.  But it's not always easy.  People can write their own mobile
apps, but that's expensive, and you still get to concern yourself with
whether the device vendor can MITM you through the app store.
Fortunately HTML5 is making as-good-as-native apps possible for
mobiles.

 Beyond corporations, individuals absolutely have a right to private
 communications with their lawyers, etc...  And there need not be any
 criminal or civil liability for an individual to hide.  For example,
 if I were trying to patent something, I'd want my communications with
 my lawyer kept secret.


 oh, have you looked into how your lawyer receives your email?  probably they 
 host
 with the likes of google or some other outsourcer, because they're in the 
 business of law, not IT.

I'm aware.  I send sensitive documents to them via other methods, or
encrypted over e-mail and then give them the passphrase out of band.

 do you use how they receive their email as a criterion for how you choose 
 your patent lawyer?

No.  I assume e-mail is public and refrain from sending sensitive
information that way.

 last time i looked, the ABA does not require anything unusual, such as 
 encryption, for privileged
 communcation.

That's because there's no real, workable e-mail encryption solution,
not one that lawyers and their typical clients can use easily.

Nico
--


Re: [cryptography] skype backdoor confirmation

2013-05-20 Thread Nico Williams
On Fri, May 17, 2013 at 6:06 AM, Ben Laurie b...@links.org wrote:
 On 17 May 2013 11:39,  d...@geer.org wrote:
 Trust but verify is dead.

 Maybe for s/w, but not everything:
 http://www.links.org/files/CertificateTransparencyVersion2.1a.pdf

Which requires s/w.  Infinite loop detected.

:)

More seriously, we can't detect all backdoors before using the
software, but at least we can fix the ones we find if we have
suitably-licensed source.

Nico
--


Re: [cryptography] skype backdoor confirmation

2013-05-20 Thread Nico Williams
On Mon, May 20, 2013 at 12:08 PM, Mark Seiden m...@seiden.com wrote:
 any mechanism to do this (that i could think of, anyway) presents a possible 
 risk to
 those communicants who want no attributable state saved about their 
 communication.
 either these are privacy freaks (not intended pejoratively:  for whatever 
 reason, they're
 entitled to be…) …  or criminals.

Corporations are privacy freaks.  I've worked or consulted for a
number of corporations that were/are extremely concerned about data
exfiltration.

I'd not advise such corporations to use Skype without an agreement
with Skype as to what can/does happen to their data, or else to be
very careful about what is exchanged over Skype.  And it does happen
that sometimes a corporation's employees need to communicate with
people over Skype or similar *external* systems.

Beyond corporations, individuals absolutely have a right to private
communications with their lawyers, etc...  And there need not be any
criminal or civil liability for an individual to hide.  For example,
if I were trying to patent something, I'd want my communications with
my lawyer kept secret.

Nico
--


Re: [cryptography] skype backdoor confirmation

2013-05-20 Thread Nico Williams
On Mon, May 20, 2013 at 12:22 PM, Jeffrey Walton noloa...@gmail.com wrote:
 The original Skype homepage (circa 2003/2004) claims the service is
 secure: Skype calls have excellent sound quality and are highly
 secure with end-to-end encryption.
 (http://web.archive.org/web/20040701004241/http://skype.com/).

Secure in what way though?  Probably: relative to passive
eavesdroppers.  As for LEA, forget it.  (Nothing is secure w.r.t. LEA
that have jurisdiction, as ultimately there's the rubber hose.)

 The new web page does not even use the word
 (web.archive.org/web/20130426221613/http://www.skype.com/).

So their advertising/terms changed.
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Validating cryptographic protocols

2013-05-01 Thread Nico Williams
On Wed, May 1, 2013 at 9:50 AM, Florian Weimer f...@deneb.enyo.de wrote:
 I've recently been asked to comment on a key exchange protocol which
 uses symmetric cryptography and a mutually trusted third party.  The
 obvious recommendation is to copy the Kerberos protocol (perhaps with
 updated cryptographic primitives), but let's assume that's not
 feasible for some reason.

Kerberos has a few flaws, mostly with trivial effects or which have
been fixed subsequently.  Most, if not all, of these flaws are about
unauthenticated plaintext: the Ticket in the KDC-REP, for example, but
also PA-DATA in KDC-REP, and KRB-ERROR in cases where the error can be
authenticated because a session key could be established.  FAST
(RFC6113) fixes these issues, except for KRB-ERROR in AP exchanges,
but it's not as elegant as it could have been if Kerberos had not had
these problems from the word go.

Another problem is that all of the cross-realm work should preferably
be done by the client principal's KDC, as an option, to keep clients
simple.  (This comes at some cost in what policy can be expressed, and
in how to express and deploy it.)

Nico
--
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Validating cryptographic protocols

2013-05-01 Thread Nico Williams
To complete the thought I meant to... don't just copy Kerberos.  Copy
the fixes, and fold them in better.

Regarding crypto primitives, as Jeff Altman points out, the Kerberos
ones have been separated out from Kerberos.  See RFC 3961 and 3962.
Note that for AES in particular Kerberos relies on ciphertext stealing
mode, which is actually quite a pain to work with if you have hardware
with high operation overhead.  Counter-based modes could work equally
well, but much care is needed to keep the likelihood of key+counter
reuse at or near zero.

If you're building a GSS-API mechanism at all just steal the Kerberos
mechanism's per-token message protocol (as several mechanisms have
done).

Nico
--
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] ICIJ's project - comment on cryptography tools

2013-04-05 Thread Nico Williams
On Fri, Apr 5, 2013 at 9:17 PM, NgPS n...@rulemaker.net wrote:
 In the movies and presumably in real life, bad guys have smart crooked
 lawyers advising them. Surely the bad guys have the resources to set up
 bunch of servers a la iMessage/Whatsapp, and write/deploy their own apps on
 their mobile devices, running stripped-down custom ROMs, to communicate via
 these servers, to avoid 3rd party MITM. Don't even need crooked developers,
 just advertise on Hacker News and whole bunch of hackers will jump on it.

It'd be nice (for good guys certainly) to be able to open-code
everything that one needs, or otherwise review all of the source code
to the object code that one needs.  In practice you cannot do this.
It's ETOOMUCH.

In the worst case scenario for the LEA there's still traffic analysis
and warrants/court orders/rubber hoses that they can resort to.

Crypto only helps the good guys w.r.t. bad guys and other governments
(and then only sometimes); crypto is just a polite way of saying "try
harder, get a warrant" to the LEA with jurisdiction over you (or your
devices).  For LEA my guess is that the biggest problem isn't how to
get at evidence, but how to know who the bad guys are: in a sea of
traffic it's hard to tell when you don't even know what's needles and
what's hay, which must be why LEA tend to have such a dislike for good
guy crypto.  We hope the NSA types haven't forgotten that good guys
need crypto, whether LEA like it or not.

Nico
--
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] ICIJ's project - comment on cryptography tools

2013-04-04 Thread Nico Williams
On Thu, Apr 4, 2013 at 3:51 PM, ianG i...@iang.org wrote:
 On 4/04/13 21:43 PM, Jon Callas wrote:
 This is great. It just drives home that usability is all.

 Just to underline Jon's message for y'all, they should have waited for
 iMessage:

   Encryption used in Apple's iMessage chat service has stymied attempts
 by federal drug enforcement agents to eavesdrop on suspects' conversations,
 an internal government document reveals.

[...]

But note that this doesn't mean that iMessage can't be MITMed or
otherwise be made susceptible (if it isn't already) to MITM attacks or
plain traffic analysis.

iMessage relies on Apple as a trusted third party.  Therefore Apple
can MITM its users.  The best-case scenario is that the iMessage
clients could add key pinning to force the TTP to either never MITM or
always MITM any pair of peers.  But since the TTP also distributes the
client software...

Online we have lots of security problems that are difficult to
resolve, from physical security of devices (there's not enough) to the
lack and general difficulty/impossibility of reliably open-coding or
reviewing everything that one has to trust (mostly software, and some
firmware too).

Basically, this complaint by the DEA is disinformation or
misinformation (or both!).  In the former case we might even be
staring at the start of a new crypto wars period.

Nico
--
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Here's What Law Enforcement Can Recover From A Seized iPhone

2013-03-28 Thread Nico Williams
On Thu, Mar 28, 2013 at 7:24 PM, Kevin W. Wall kevin.w.w...@gmail.com wrote:
 On Thu, Mar 28, 2013 at 7:27 PM, Jon Callas j...@callas.org wrote:
 [Rational response elided.]

 All excellent, well articulated points. I guess that means that
 RSA Security is an insane company then since that's
 pretty much what they did with the SecurID seeds. Inevitably,
 it cost them a boatload too. We can only hope that Apple
 and others learn from these mistakes.

RSA did it for plausible, reasonable (if wrong) ostensible reasons not
related to LEA.

 OTOH, if Apple thought they could make a hefty profit by

There is zero chance Apple would be backdooring anything for profit
considering the enormity of the risk they would be taking.  If they do
it at all it's because they've been given no choice (ditto their
competitors).

 selling to LEAs or friendly governments, that might change
 the equation enough to tempt them. Of course that's doubtful
 though, but stranger things have happened.

This is the tin-foil response.  But note that the more examples of
bad-idea backdoors there are, the less confidence we can have in the
rational argument, and the more the tin-foil argument becomes the
rational one.  In the worst-case scenario we can't trust much of
anything, and we can't open-code everything either.  But in the
worst-case scenario we're also mightily vulnerable to attack from bad
guys.  Let us hope that there are enough rational people at or
alongside LEAs to temper the would-be arm-twisters that surely must
exist within those LEAs.

Nico
--
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] why did OTR succeed in IM?

2013-03-23 Thread Nico Williams
On Saturday, March 23, 2013, ianG wrote:

 Someone on another list asked an interesting question:

  Why did OTR succeed in IM systems, where OpenPGP and x.509 did not?


Because it turns out that starting with anonymous key exchange is good
enough in many cases.  Leap-of-faith would have been a good addition, but
would have created device-sync issues, and OTR's question/answer
authentication is good enough.  Imagine if we'd insisted on a PKI for IM...
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Q: CBC in SSH

2013-02-11 Thread Nico Williams
On Mon, Feb 11, 2013 at 4:45 PM, Peter Gutmann
pgut...@cs.auckland.ac.nz wrote:
 There have been attacks on SSH based on the fact that portions of the packets
 aren't authenticated, and as soon as the TLS folks stop bikeshedding and adopt
 encrypt-then-MAC I'm going to propose the same thing for SSH, it's such a
 no-brainer it should have been adopted years ago when the first attacks popped
 up.

No need, just deprecate the CBC ciphers from SSHv2 and be done.  We do
have counter-mode replacements.

Nico
--
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Q: CBC in SSH

2013-02-11 Thread Nico Williams
On Mon, Feb 11, 2013 at 4:57 PM, Peter Gutmann
pgut...@cs.auckland.ac.nz wrote:
 Nico Williams n...@cryptonector.com writes:
On Mon, Feb 11, 2013 at 4:45 PM, Peter Gutmann pgut...@cs.auckland.ac.nz 
wrote:
 There have been attacks on SSH based on the fact that portions of the 
 packets
 aren't authenticated, and as soon as the TLS folks stop bikeshedding and 
 adopt
 encrypt-then-MAC I'm going to propose the same thing for SSH, it's such a
 no-brainer it should have been adopted years ago when the first attacks 
 popped
 up.

No need, just deprecate the CBC ciphers from SSHv2 and be done.  We do have
counter-mode replacements.

 How does counter-mode stop manipulation of the encrypted metadata at the start
 of the SSH packet, which is what previous attacks have targeted?

Oh, well, I was thinking of padding -- there's no padding in the
counter mode cases, but you're right that we should just always
encrypt-then-MAC.
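The encrypt-then-MAC ordering can be sketched as follows.  The keystream "cipher" here is a stand-in (a real stack would use AES-CTR or similar); the point is only that the tag covers nonce plus ciphertext, so tampering is rejected before any decryption happens:

```python
import hashlib
import hmac
import os

def toy_keystream_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Stand-in cipher (SHA-256 keystream) -- illustrative only.
    ks, i = b"", 0
    while len(ks) < len(data):
        ks += hashlib.sha256(key + nonce + i.to_bytes(4, "big")).digest()
        i += 1
    return bytes(a ^ b for a, b in zip(data, ks))

def seal(enc_key: bytes, mac_key: bytes, plaintext: bytes):
    nonce = os.urandom(16)
    ct = toy_keystream_cipher(enc_key, nonce, plaintext)
    # Encrypt-then-MAC: the tag is computed over nonce + ciphertext.
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce, ct, tag

def unseal(enc_key: bytes, mac_key: bytes, nonce, ct, tag) -> bytes:
    expect = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(expect, tag):
        raise ValueError("MAC failure")   # reject before decrypting
    return toy_keystream_cipher(enc_key, nonce, ct)

nonce, ct, tag = seal(b"e" * 32, b"m" * 32, b"ssh packet payload")
assert unseal(b"e" * 32, b"m" * 32, nonce, ct, tag) == b"ssh packet payload"
```

Because verification precedes decryption, a forged or modified packet never reaches the cipher or the parser, which closes off padding- and metadata-manipulation oracles.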

Nico
--
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Q: CBC in SSH

2013-02-11 Thread Nico Williams
On Mon, Feb 11, 2013 at 6:04 PM, Peter Gutmann
pgut...@cs.auckland.ac.nz wrote:
 Nico Williams n...@cryptonector.com writes:

I'd go further: this could be the start of the end of the cipher suite
cartesian product nonsense in TLS.  Just negotiate {cipher, mode} and key
exchange separately, or possibly cipher, mode, and key exchange, in just the
same way as you propose negotiation of encrypt-then-MAC.

 Nonononono, we learned from the IKE mess that the Chinese-menu approach is
 vastly worse than the cipher-suite one.  TLS has already tried the
 Chinese-menu approach to algorithms in TLS 1.2's ECC stuff, and it's at least
 as big a mess as IKE was (well, OK, I don't think anything can quite reach the
 IKE level, but it's getting there), which is why I had to write this:

SSHv2 has this approach and it has not been a disaster there.
What's the issue exactly?  ECC curve parameters?  Something else?

Nico
--
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Q: CBC in SSH

2013-02-11 Thread Nico Williams
On Mon, Feb 11, 2013 at 6:23 PM, Stephen Farrell
stephen.farr...@cs.tcd.ie wrote:
 On 02/12/2013 12:04 AM, Peter Gutmann wrote:
 The problem with the cipher-suite explosion is that people want to throw in
 vast numbers of pointless vanity suites and algorithms that no-one will ever
 use

 On balance I think the ciphersuite approach is slightly better
 at being a slight counter to inevitable feature/cipher creep.

The ability to give an answer like "we can't add your vanity cipher
suite because of cartesian explosion" seems like a weak justification
for the cartesian-explosion approach.

If we want a policy of limiting what cipher suites we allocate
codepoints to then we should have an *explicit* policy, and we should
not wimp out when it comes time to enforcing it.

But I don't think we have such a policy at the IETF.  The IETF policy
regarding vanity cipher suites can be described as: following some
sturm und drang you'll get to have it, but only as OPTIONAL, described
in an Informational RFC.

Given the de facto policy at the IETF, cartesian explosion is just
silly -- so much shooting ourselves in the foot.  Let's stop.

 It does at least cause people to pause when they are about to
 ask for another 96 ciphersuites as happened with certicom.

 I also agree that only a very very few of the 320 or so TLS
 ciphersuites are useful. The rest are just a PITA as far as I
 can see. (Yes, 320. Sigh. [1])

So how well did cartesian explosion work as an implicit anti-vanity
cipher suite policy, then?  Not very well, evidently!  :)

But I suspect that that was not the rationale way, way back when, back
when cartesian explosion was selected.  The vanity cipher suite
disincentive rationalization strikes me as a post-hoc one, and it
doesn't work anyways.

Please, let's go for an a-la-carte system.

Nico
--
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Q: CBC in SSH

2013-02-11 Thread Nico Williams
On Mon, Feb 11, 2013 at 7:00 PM, Stephen Farrell
stephen.farr...@cs.tcd.ie wrote:
 On 02/12/2013 12:42 AM, Nico Williams wrote:
 On Mon, Feb 11, 2013 at 6:23 PM, Stephen Farrell
 stephen.farr...@cs.tcd.ie wrote:
 But I suspect that that was not the rationale way, way back when, back
 when cartesian explosion was selected.  The vanity cipher suite
 disincentive rationalization strikes me as a post-hoc one, and it
 doesn't work anyways.

 Its not a rationalization. I said its slightly less bad not
 that's why it was done. I wasn't involved in SSL when that
 decision was made and have had little involvement in TLS at
 all really.

But it sounds like you (and Peter) are arguing for keeping cartesian
explosion on these grounds.  We know that didn't work, though.  And we
should absolutely not keep a design because it might help (though it
doesn't) enforce a policy that we don't have (but some want).

Really, if we have no good reason to keep cartesian products for
cipher suites then we ought to switch to a la carte (with some
reasonable constraints, of course).

Nico
--
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Isn't it odd that...

2013-01-29 Thread Nico Williams
On Tue, Jan 29, 2013 at 9:40 PM, Thor Lancelot Simon t...@panix.com wrote:
 ...despite all the attacks we've seen on compression-before-encryption, and 
 all the timing
 attacks we've seen on encryption, [...]

 ..we haven't really seen any known-plaintext key recovery attacks facilitated 
 by timing
 analysis of compressors applied prior to encryption?

Yup!  It is.  But as you reason, compression must leak some data
through timing (and power) side channels.

BTW, it's not compression before encryption that's the problem -as if
we could compress after encryption instead :)- but compression without
discrimination, often because compression occurs at layers that don't
know what to compress.  Compression in SSH, TLS, IPsec -- all bad.
Compression at the app layer can be OK.  Sending compressed image
files is fine, say, but compressing everything is not.

FYI, in the HTTPbis WG they are considering using forms of stateful
compression (hop-by-hop) for HTTP/2.0 so that things that repeat
frequently in HTTP traffic can be compressed safely, like cookies and
URL prefixes.
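The CRIME-style leak from indiscriminate compression is easy to demonstrate with a length oracle.  A sketch, assuming only that the attacker observes the length of compressed-then-encrypted records (the cookie value is made up):

```python
import zlib

SECRET = "sessionid=7f3a9c"   # hypothetical cookie the attacker wants

def observed_length(attacker_data: str) -> int:
    # The attacker sees only the *length* of the compressed (and then
    # encrypted) record, which mixes the secret with attacker input.
    record = "Cookie: " + SECRET + "\r\n" + attacker_data
    return len(zlib.compress(record.encode()))

# A guess matching the secret compresses into a short back-reference;
# a same-length non-matching guess costs ~16 extra literal bytes:
matching = observed_length("sessionid=7f3a9c")
wrong    = observed_length("qzvwkjxy=9m8n7b2")
assert matching < wrong
```

By refining guesses byte by byte and watching lengths shrink, the attacker recovers the secret -- which is why compression belongs at the application layer, where it can be applied only to non-secret data.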

Nico
--
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] openssl on git

2013-01-08 Thread Nico Williams
On Tue, Jan 8, 2013 at 12:06 PM, Jeffrey Walton noloa...@gmail.com wrote:
 Would you consider adding a hook to git (assuming it includes the ability).

 Have the hook replace tabs with white space. This is necessary because
 different editors render tabs in different widths. So white space
 makes things consistent for everyone.

Hooks shouldn't modify the commit, just accept or reject.

Nico
--
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] openssl on git

2013-01-08 Thread Nico Williams
On Tue, Jan 8, 2013 at 11:08 PM, Jeffrey Walton noloa...@gmail.com wrote:
 On Tue, Jan 8, 2013 at 9:30 PM, Nico Williams n...@cryptonector.com wrote:
 On Tue, Jan 8, 2013 at 12:06 PM, Jeffrey Walton noloa...@gmail.com wrote:
 Would you consider adding a hook to git (assuming it includes the ability).

 Have the hook replace tabs with white space. This is necessary because
 different editors render tabs in different widths. So white space
 makes things consistent for everyone.

 Hooks shouldn't modify the commit, just accept or reject.
 Thanks Nico.

 Out of curiosity: what does one typically do when there's a standard
 policy to enforce? I [personally] would not reject a check-in for
 whitespace (I would reject for many other reasons, though - such as
 CompSci 101 omissions).

A number of projects I've worked on -particularly Solaris, but not
only- absolutely reject pushes of code (and docs, and tests, and build
goop) that fail style and other checks.  Some even go so far as to
trigger an incremental build to check that all is OK (but rarely is
this done synchronously, so a build failure leads to email, possible
backout, ...).

 Perhaps allow the check-in to proceed unmolested, and then have a
 second process run after the commit to perform policy enforcement (for
 example, whitespace or coding style). In this scenario, would the
 second process perform a second commit?

Fast checks should be done synchronously; failure -> push rejection.

Slow checks should be done asynchronously; failure -> nastygram.

Slow check failures can be corrected in either of two ways: backout
(mostly to be avoided, except when nearing releases or build
milestones) or subsequent push to fix the issues.

The more you can check quickly, the better:

 - *style (C style, Java style, JS style, ...)
 - referential integrity / software engineering process (commit
references bug report, bug report is in correct state, if the bug
report indicates that the fix should have docs impact then check that
docs are updated, check that codereview has happened and the code has
been signed off, or perhaps that code reviewers are listed, ...)

Slower checks:

 - build
 - static bug analysis (including, for languages that need it, *lint)
 - tests

Do all this, and your life will be easier.  Every hour you put into
writing checkers of this sort will pay for itself many times over for
any sufficiently large project.
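A fast, synchronous gate check of the kind described above might look like this.  The file list and the no-tabs-in-C-sources policy are illustrative; a real update hook would derive paths from `git diff --name-only` and contents from `git show`:

```python
def style_violations(pushed_files):
    """Fast gate check: flag C sources containing tab characters.
    Returns the offending paths; a non-empty result means the push is
    rejected -- never rewritten (hooks accept or reject, only)."""
    bad = []
    for path, content in pushed_files:
        if path.endswith((".c", ".h")) and b"\t" in content:
            bad.append(path)
    return bad

push = [
    ("util.c", b"int x = 1;\n"),
    ("io.c",   b"\tint y;\n"),      # violates the style policy
    ("README", b"docs\there\n"),    # not a C source; ignored
]
assert style_violations(push) == ["io.c"]
```

The same shape generalizes to the referential-integrity checks (bug-report state, codereview sign-off): each is a cheap predicate over the pushed changeset, run before the push lands.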

Some such checkers can easily be found by searching around.  The
Solaris gate checks are, IIRC, in OpenSolaris and derivatives, for
example, but I've seen others.

Nico
--
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] openssl on git

2013-01-08 Thread Nico Williams
And, of course, *all* the gate checkers need to be available to the
developer, so *they* can run them first.  No trial and error please.

(One quickly learns to code in the target upstream's style and other
requirements.)
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Just how bad is OpenSSL ?

2012-11-04 Thread Nico Williams
On Sun, Nov 4, 2012 at 8:42 AM, Ben Laurie b...@links.org wrote:
 On Sat, Nov 3, 2012 at 12:26 AM, James A. Donald jam...@echeque.com wrote:
 On Oct 30, 2012 7:50 AM, Ben Laurie b...@links.org wrote:
 The team has ruled out having the master at github.

 What is wrong with github?

 TBH, I wouldn't mind much, but I think the concern is that its not
 under our control.

It's just git, so keep multiple clone repos.  You could use an
internal one as the master and push updates to the github one if you
don't trust github -- use github to serve outsiders.  Really, what
matters is that you have one master repo and all other official repos
be read-only clones of it.  As with any master/slave failover/takeover
scheme you can always recover from the death of the master by
promoting a clone to master status.  So why not trust github?  Because
they've been hacked?  But if you keep multiple clones and people keep
private clones then you depend on git's use of SHA-1 Merkle hash trees
for security.  Or, if you want *private* repos, then you must either
run your own git servers or pay a github or gitorious.

Nico
--
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Just how bad is OpenSSL ?

2012-10-30 Thread Nico Williams
I strongly suggest you move to git ASAP.  It's not hard, though some
history can be lost in the move using off-the-shelf conversion tools.
(MIT Kerberos recently moved from SVN to git, and before that, from
CVS to SVN, and they seem to have done a lot of manual cleanup to
avoid some losses of history.  You might want to talk to them if this
is a problem for you, though, frankly, I think it shouldn't be, after
all you can still keep CVS around for archeology...)

That would be a great first step towards making contributions easier,
since then patches can be posted in the form of git branches, pull
requests, and formatted patches e-mailed or attached to RT.  And
refreshing older patches would be much easier too.

Nico
--
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Secure Remote Password (SRP) and Plaintext Email Address

2012-10-18 Thread Nico Williams
On Thu, Oct 18, 2012 at 7:52 PM, Jeffrey Walton noloa...@gmail.com wrote:
 I have a Secure Remote Password (SRP) implementation that went through
 a pen test. The testers provided a critical finding - the email
 address was sent in the plaintext. Note that plaintext email addresses
 are part of the protocol.

That's just how SRP and any ZKPP protocol must work.  The shared
secret or verifier for it must be identified somehow, and this has to
be done before the client and server have exchanged session keys as
part of the ZKPP, and that identification can at best be only
pseudonymous.

Now, you *could* run SRP in a TLS (or equivalent) tunnel.  Then you'd
get privacy protection for the *client* ID from passive attackers, and
if you authenticate the server as well (in TLS) then you get privacy
protection relative to active attackers.

But if you'd run SRP over TLS with TLS authenticating the server...
then why bother with SRP or any ZKPP?

Privacy protection for the client ID can be worked into a ZKPP in such
a way as to save round trips relative to running a plain ZKPP over
some other channel, but it will add a round trip relative to a plain
ZKPP, and there are UI considerations: how do we authenticate the
server??  with PKI?  PKI will generally imply a fallback on those
pesky give-it-away dialogs.

 I'm not really convinced that using an email address in the plaintext
 for the SRP protocol is finding-worthy, considering email addresses
 are public information. And I'm very skeptical that its a critical
 finding.

It... depends.  If you need privacy protection for the client ID then
you need it, no?  I can't tell you if you do.  You must decide this.
For most applications I think privacy protection for the client ID is
not really necessary.

 With that said, what are the options here? I was thinking a simple
 mask function, which would remove the plaintext-ness (but not add
 any security to the system). Heuristically, masking the email address
 is *not* less secure than sending the email in the plaintext.

A mask?  Sounds like security through obscurity.  You could hash the
address, but that just gets you pseudonymity (at best: the attackers
could mount a dictionary attack to recover the address, and they can
trivially check if a hash corresponds to a given address).
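That pseudonymity-at-best property is easy to see in a sketch (the addresses and candidate list are made up):

```python
import hashlib

def pseudonym(email: str) -> str:
    # Hash the client ID instead of sending it in the clear.
    return hashlib.sha256(email.lower().encode()).hexdigest()

on_the_wire = pseudonym("alice@example.com")

# Pseudonymity only: an attacker with a list of candidate addresses
# confirms which one was used by hashing each candidate and comparing.
candidates = ["bob@example.com", "alice@example.com", "carol@example.com"]
recovered = [e for e in candidates if pseudonym(e) == on_the_wire]
assert recovered == ["alice@example.com"]
```

Since the space of plausible email addresses is small and enumerable, the hash hides the ID only from attackers who have no candidate list at all.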

Nico
--
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Secure Remote Password (SRP) and Plaintext Email Address

2012-10-18 Thread Nico Williams
On Thu, Oct 18, 2012 at 8:36 PM, Nico Williams n...@cryptonector.com wrote:
 On Thu, Oct 18, 2012 at 7:52 PM, Jeffrey Walton noloa...@gmail.com wrote:
 I'm not really convinced that using an email address in the plaintext
 for the SRP protocol is finding-worthy, considering email addresses
 are public information. And I'm very skeptical that its a critical
 finding.

 It... depends.  If you need privacy protection for the client ID then
 you need it, no?  I can't tell you if you do.  You must decide this.
 For most applications I think privacy protection for the client ID is
 not really necessary.

I should have added that this sort of finding from a pen tester (or
any type of audit) is just that: a finding.  You generally get to
decide that you don't need the missing feature (privacy protection for
the client ID) in this or that case.

That said, my advice would be to hash IDs if you can: it gets you a
modicum of privacy protection, and if it's cheap enough then
additional protection is worth having.

Lack of client ID privacy protection can lead to some attacks such as
password guesses based on the ID or knowledge of the person that ID is
for.  If you were working for a spy agency (say), you'd definitely
want priv. prot. for the client ID!

So you get to decide what level of protection you want for the client ID:

 - none
 - pseudonymous (hash the IDs)
 - privacy protection relative to passive attackers (run over a TLS
channel with anon DH cipher suites)
 - privacy protection relative to passive and active attackers (run
over a TLS channel with server cert)

Nico
--
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Secure Remote Password (SRP) and Plaintext Email Address

2012-10-18 Thread Nico Williams
On Thu, Oct 18, 2012 at 9:40 PM, Jeffrey Walton noloa...@gmail.com wrote:
 I think Hash(email) or a UID (rather than email address) is the best
 course of action.

UID doesn't work: the user must then remember it, and you don't want
to burden them with that :(

Nico
--
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] ZFS dedup? hashes (Re: [zfs] SHA-3 winner announced)

2012-10-03 Thread Nico Williams
On Wed, Oct 3, 2012 at 9:19 AM, Dr Adam Back a...@cypherspace.org wrote:
 Incidentally a somewhat related problem with dedup (probably more in cloud
 storage than local dedup of storage) is that the dedup function itself can
 lead to the confirmation or even decryption of documents with
 sufficiently low entropy as the attacker can induce you to store or
 directly query the dedup service looking for all possible documents.  eg say
 a form letter where the only blanks to fill in are the name (known
 suspected) and a figure (1,000,000 possible values).

 Also if there is encryption there are privacy and security leaks arising
 from doing dedup based on plaintext.

Compression at lower layers tends to leak.  We've seen this in VOIP,
and now CRIME.  Dedup is a compression function running at a lower
layer (i.e., lower than the application writing the file contents).
Of course, dedup is not a compression function that is easily applied
at the application layer, so if you really need dedup, then you need
it at lower layers.  The question is: do you need dedup and
confidentiality protection for the same data?  I think most would
answer no.
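The confirmation attack Adam describes can be sketched against a content-addressed dedup store the attacker is able to probe (the form letter and dollar figures are made up):

```python
import hashlib

def dedup_key(block: bytes) -> bytes:
    # Content-addressed dedup: identical blocks collapse to one copy,
    # keyed by a hash of their contents.
    return hashlib.sha256(block).digest()

# The victim has stored one instance of a low-entropy form letter:
store = {dedup_key(b"Dear Alice, your bonus is $4,000,000.")}

# The attacker enumerates the small space of possible letters and
# watches which candidate dedups (i.e., is already in the store):
hits = [n for n in range(1, 10)
        if dedup_key(f"Dear Alice, your bonus is ${n * 1_000_000:,}.".encode())
        in store]
assert hits == [4]   # the blank is confirmed: $4,000,000
```

The leak needs no decryption at all when dedup keys are derived from plaintext; it only needs the document's entropy to be low enough to enumerate.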

 And if you are doing dedup on ciphertext (or the data is not encrypted), you
 could follow David's suggestion of HMAC-SHA1 or the various AES-MACs.  In
 fact I would suggest for encrypted data, you really NEED to base dedup on
 MACs and NOT hashes or you leak and risk bruteforce decryption of
 plaintext by hash brute-forcing the non-encrypted dedup tokens.

Encrypted ZFS hashes and authenticates ciphertext.  The attacker is
presumed to observe all on-disk data, including ciphertext, block
pointers (which contain authentication tags and hashes), ...  The
attacker can observe dups as well as ZFS, and can attempt passive and
active attacks.  Dedup certainly adds to the attacker's traffic
analysis capabilities, but also to the attacker's active attack
capabilities (e.g., if the attacker can mount a chosen plaintext
attack).  Note that encrypted ZFS can only dedup within sets of
datasets that share the same keys.

What difference does it make if dedup uses an authentication tag or a
hash of ciphertext?  Assuming no collisions anyways, and if dups are
verified then collisions make little difference as far as dedup is
concerned.  I think the harm is done first by compressing and
encrypting at a layer lower than the application; encryption can be
done at lower layers, but compression is best left to the application
layer.

Nico
--
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] [zfs] SHA-3 winner announced

2012-10-03 Thread Nico Williams
On Wed, Oct 3, 2012 at 7:41 AM, David McGrew (mcgrew) mcg...@cisco.com wrote:
 Are the requirements for the security of ZFS and the use of cryptography
 in that filesystem documented anywhere?
 https://blogs.oracle.com/bonwick/entry/zfs_end_to_end_data mentions a
 Merkle tree of checksums, where the checksum function can be either
 Fletcher or SHA-256.  A collision-resistant hash of an entire system is
 indispensable if asymmetric authentication is needed, but are there common
 scenarios where that is needed?   If encryption is used in ZFS, then there
 is necessarily a symmetric encryption key that is being managed; why not
 use symmetric message authentication as well, and take advantage of the
 performance gain?

Encrypted ZFS has a requirement that it must be possible to check pool
integrity without having access to the keys.  This means that even
though encrypted ZFS uses MACs (it does), it still needs to hash
ciphertext in a Merkle-hash-tree fashion for the purpose of un-keyed
integrity checking.  Since a MAC is also used, I think one could argue
that the hash function needn't be all that strong: it's primarily
needed for error detection.
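An un-keyed integrity check of that kind can be sketched as a SHA-256 Merkle root over (possibly encrypted) block contents.  The block layout here is illustrative, not ZFS's actual on-disk format:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks) -> bytes:
    # Leaves hash the (possibly encrypted) block contents, so a scrub
    # can verify the whole tree without any decryption keys.
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])   # pad odd-sized levels
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"ciphertext-block-%d" % i for i in range(5)]
root = merkle_root(blocks)
tampered = list(blocks)
tampered[2] = b"Ciphertext-block-2"   # flip one bit of one block
assert merkle_root(tampered) != root  # corruption detected, no keys needed
```

Any single-bit error anywhere in the pool propagates up to a different root, which is exactly the error-detection role the hash plays alongside the keyed MACs.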

Nico
--
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] abstract: Air to Ground Quantum Key Distribution

2012-09-18 Thread Nico Williams
On Tue, Sep 18, 2012 at 10:30 AM, Natanael natanae...@gmail.com wrote:
 Does anybody here take quantum crypto seriously? Just wondering. I do not
 see any benefit over classical methods. If one trusts the entire link and
 knows it's not MitM'd in advance, what advantage if any does quantum key
 distribution have over ordinary methods? And isn't it just as useless
 otherwise as the ordinary methods?

It's that time of the year again :)  Maybe we can save ourselves the
trouble (assuming there's really nothing new to add here, and I do
think there isn't) and just say "read the archives".

Nico

PS: If you do read the archives you'll see I'm in the QKD is a
curiosity/novelty camp.
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Key extraction from tokens (RSA SecurID, etc) via padding attacks on PKCS#1v1.5

2012-07-05 Thread Nico Williams
On Thu, Jul 5, 2012 at 9:17 AM, Martin Paljak mar...@martinpaljak.net wrote:
 On Tue, Jul 3, 2012 at 1:56 AM, Michael Nelson nelson_mi...@yahoo.com wrote:
 It also does not matter whether you are using pkcs11 APIs, and whether you 
 are doing key wrap/unwrap, and whether the data is a key.  Any secret piece 
 of data encrypted under an RSA cert can be potentially extracted, via any 
 kind of crypto module, as long as the module will use the deprecated padding 
 mechanism.

 That's a very broad claim. I guess nobody has questioned the fact that
 the authors of the paper optimized a long-known weakness to become
 useful, *if the conditions are right*.
 Like uncontrolled access to C_UnwrapKey or C_Decrypt (in terms of
 PKCS#11, as this is what the authors are using).

 It all works, if the module functions as an oracle that can be
 exploited by the adversary. I don't know the SecureID token, but I do
 know some other tokens described in the paper. Any reasonable token
 would do owner PIN verification before trying to decrypt.

Access controls are a mitigation.  There is no guarantee that the
attacker doesn't have access.  Note that if the attacker does have
access they still have incentive to extract the actual keys: so they
can continue to use them even if they lose access, and so they can
avoid auditing facilities on the HSM/token.  Mitigations do not
detract from the cryptanalytic result.  It's time to stop using weak
padding signature algs.

Nico


Re: [cryptography] Master Password

2012-06-07 Thread Nico Williams
On Thu, Jun 7, 2012 at 4:14 PM, Steven Bellovin s...@cs.columbia.edu wrote:
 There's another, completely different issue: does the attacker want a 
 particular password, or will any passwords from a large set suffice?

 Given the availability of cheap cloud computing, botnets, GPUs, and botnets 
 with GPUs, Aa * Ah * Ap can be very, very high, i.e., the attacker has a 
 strong advantage when attacking a particular password.  Some say that it's so 
 high that increasing Ad is essentially meaningless.  On the other hand, if 
 there are many passwords in the set being attacked, a large Ad translates 
 into a reduction in the fraction that can be attacked in any given time frame.

If the attacker can't easily identify the user IDs...  If usernames
are put through a PBKDF as well to generate the lookup key with which
to find the password verifier, how much does the defender gain?  For
any one password, not much, because there's less entropy in usernames
than passwords, so the Ad barely improves -- but if the attacker can't
identify that one password then the slight increase in Ad helps slow
the attacker's progress through all of the verifiers they have.
Moreover, the verifier DB could be peppered with chaff with which to
further slow down the attacker.  Does this make sense?
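
A rough sketch of the scheme described above (the table layout, parameters, and helper names are all hypothetical, not an existing implementation):

```python
import hashlib, hmac, os

ITERS = 100_000   # illustrative PBKDF2 work factor

def lookup_key(username: str, db_salt: bytes) -> bytes:
    # Derive the verifier-table index from the username itself, so an
    # attacker holding a stolen table can't trivially tell which row
    # belongs to which user.
    return hashlib.pbkdf2_hmac("sha256", username.encode(), db_salt, ITERS)

def store_verifier(db: dict, username: str, password: str, db_salt: bytes):
    salt = os.urandom(16)
    verifier = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERS)
    db[lookup_key(username, db_salt)] = (salt, verifier)

def check_password(db: dict, username: str, password: str, db_salt: bytes) -> bool:
    entry = db.get(lookup_key(username, db_salt))
    if entry is None:
        return False
    salt, verifier = entry
    cand = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERS)
    return hmac.compare_digest(cand, verifier)

db, db_salt = {}, os.urandom(16)
store_verifier(db, "alice", "correct horse", db_salt)
assert check_password(db, "alice", "correct horse", db_salt)
assert not check_password(db, "alice", "wrong", db_salt)
```

Chaff would then just be extra rows with random lookup keys and random (salt, verifier) pairs, indistinguishable from real entries.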

Nico


Re: [cryptography] Master Password

2012-05-31 Thread Nico Williams
On Thu, May 31, 2012 at 2:03 AM, Jon Callas j...@callas.org wrote:
 On May 30, 2012, at 4:28 AM, Maarten Billemont wrote:

 If I understand your point correctly, you're telling me that while scrypt 
 might delay brute-force attacks on a user's master password, it's not 
 terribly useful a defense against someone building a rainbow table.  
 Furthermore, you're of the opinion that the delay that scrypt introduces 
 isn't very valuable and I should just simplify the solution with a hash 
 function that's better trusted and more reliable.

 Tests on my local machine (a MacBook Pro) indicate that scrypt can generate 
 10 hashes per second with its current configuration while SHA-1 can generate 
 about 1570733.  This doesn't quite seem like a trivial delay, assuming 
 rainbow tables are off the... table.  Though I certainly wish to see and 
 understand your point of view.

 My real advice, as in what I would do (and have done) is to run PBKDF2 with 
 something like SHA1 or SHA-256 HMAC and an absurd number of iterations, 
 enough to take one to two seconds on your MBP, which would be longer on ARM. 
 There is a good reason to pick SHA1 here over SHA256 and that is that the 
 time differential will be more predictable.

If you'll advise the use of compute-hard PBKDFs why not also memory
hard PBKDFs?  Forget scrypt if you don't like it.  But the underlying
idea in scrypt is simple: run one PBKDF instance to generate a key for
a PRF, then generate so many megabytes of pseudo-random data from that
PRF, then use another PBKDF+PRF instance to generate indices into the
output of the first, then finally apply a KDF (possibly a simple hash
function) to the output of the second pass to generate the final
output.  The use of a PRF to index the output of another PRF is
simple, and a simple and wonderful way to construct a memory-hard
PBKDF.
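
A toy illustration of that construction (this is emphatically not scrypt itself; the structure is simplified and the parameters are arbitrary, just to show the PRF-indexes-PRF idea):

```python
import hashlib, hmac

def prf(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

def toy_memory_hard(password: bytes, salt: bytes, mem_blocks: int = 1 << 12) -> bytes:
    # 1. One PBKDF instance generates a key for a PRF.
    k1 = hashlib.pbkdf2_hmac("sha256", password, salt, 1000)
    # 2. Generate "so many megabytes" of pseudo-random data from that
    #    PRF (scaled down here to mem_blocks 32-byte blocks).
    buf = [prf(k1, i.to_bytes(4, "big")) for i in range(mem_blocks)]
    # 3. A second PBKDF+PRF instance generates indices into the first
    #    pass's output, forcing the whole buffer to stay accessible.
    k2 = hashlib.pbkdf2_hmac("sha256", password, salt + b"\x01", 1000)
    acc = b""
    for i in range(mem_blocks):
        j = int.from_bytes(prf(k2, i.to_bytes(4, "big")), "big") % mem_blocks
        acc = hashlib.sha256(acc + buf[j]).digest()
    # 4. Finally, a KDF (here a plain hash) over the second pass's output.
    return hashlib.sha256(acc).digest()

tag = toy_memory_hard(b"correct horse", b"NaCl", mem_blocks=1 << 10)
assert len(tag) == 32
```

The data-dependent indexing in step 3 is what defeats a small-memory hardware pipeline: each lookup can land anywhere in the buffer.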

Memory and compute hardness in a PBKDF are surely to be desired.  They
make it harder to optimize hardware for fast computation of rainbow
tables.

Nico


Re: [cryptography] Master Password

2012-05-31 Thread Nico Williams
On Thu, May 31, 2012 at 10:43 AM, Adam Back a...@cypherspace.org wrote:
 One quite generic argument I could suggest for being wary of scrypt would be
 if someone said, hey here's my new hash function, use it instead of SHA1,
 its better - you would and should very wary.  A lot of public review goes
 into finding a good hash algorithm.  (Yeah I know SHA1 has a chink in its
 armor now, but you get the point).

Yes, but note that one could address that with some assumptions, and
with some techniques that one would reject when making a better hash
-- the point is to be slow, so things that make a PBKDF slower are OK:

PBKDF2' = PBKDF2(P' = to_password(memory_hard(P, S, p)) +
                      to_password(PBKDF2(P, S, p)), S, p)

where P, S, and p are the password, salt and PBKDF parameters,
to_password() generates a password from a key, and + concatenates
strings.
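
In code, with a placeholder memory_hard() (the real choice of memory-hard function is exactly what's at issue here) and hex-encoding standing in for to_password(), the composition might look like:

```python
import hashlib

def pbkdf2(P: bytes, S: bytes, p: int) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", P, S, p)

def memory_hard(P: bytes, S: bytes, p: int) -> bytes:
    # Placeholder: a real deployment would put an actual memory-hard
    # function here (e.g. scrypt); a second PBKDF2 instance merely
    # keeps this sketch self-contained.
    return hashlib.pbkdf2_hmac("sha256", P, S + b"mh", p)

def to_password(key: bytes) -> bytes:
    # Stand-in for to_password(): encode a key into password form.
    return key.hex().encode()

def pbkdf2_prime(P: bytes, S: bytes, p: int) -> bytes:
    # P' = to_password(memory_hard(P, S, p)) + to_password(PBKDF2(P, S, p))
    P_prime = to_password(memory_hard(P, S, p)) + to_password(pbkdf2(P, S, p))
    return pbkdf2(P_prime, S, p)

out = pbkdf2_prime(b"hunter2", b"salt", 1000)
assert len(out) == 32
```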

No one would build an H' from H that way.  But for a PBKDF it seems
sensible (but see below).

Can PBKDF2' be weaker than PBKDF2?  As long as PBKDF2 does not throw
away any entropy, and as long as knowing one portion of the password
(say, if the memory_hard function turns out to be weak) is not enough
to guess the remainder from PBKDF2's output, then I think the answer
has to be no.  Now, I'm making assumptions here.  Clearly PBKDF2 can
toss some entropy out, for example, so at least one of my two
assumptions is incorrect, but is it enough to wreck the security of my
PBKDF2' construction?

Nico


Re: [cryptography] Master Password

2012-05-31 Thread Nico Williams
On Thu, May 31, 2012 at 2:03 PM, Marsh Ray ma...@extendedsubset.com wrote:
 On 05/31/2012 11:28 AM, Nico Williams wrote:
 Yes, but note that one could address that with some assumptions, and
 with some techniques that one would reject when making a better hash
 -- the point is to be slow,

 [...]
 Starting with:

 Ep = password entropy in bits (chosen by the user)

 N =  iteration count (chosen by the defender)

For memory-hard PBKDFs you also need a memory size parameter, though
you might derive that from N.

 [...]
 The defender's advantage to having a work factor looks something like:

              N * 2**Ep
 Ad =  ----------------------
        Dd(N) * Aa * Ah * Ap

Nicely summarized.

 * If individual users can be shown to present a different Ep to the
 attacker, it could be beneficial to adjust N independently per user. For
 example a website might say to a user: Want to log in 2 seconds faster next
 time? Pick a longer password!

Nice.  But again, in practice this won't work.  You might think that
for a single master password users could get into the 48 bits of
entropy range, but probably not.

 Can PBKDF2' be weaker than PBKDF2?

 Yes.

Mine or any PBKDF2' in principle?

 It could turn out to result in an Aa or Ah significantly greater than 1.0.
 It could end up being usefully parallelizable so it can be evaluated more
 efficiently when testing many passwords at one time rather than one at a
 time. Even if it just reduced the performance of other things, say webserver
 software, sharing the defender's hardware it could be a step backwards.

I don't see how my PBKDF2' construction ends up being more
parallelizable than PBKDF2 such that PBKDF2' can run faster than
PBKDF2 -- you need to compute PBKDF2 before you can compute PBKDF2'
and I see no way around that.  If memory_hard() is not nearly as
memory-hard (and/or slow) as initially thought then PBKDF2' will not
help much, but it should still not hinder either.  Unless... unless N
is tuned down on the theory that memory_hard() adds strength -- if it
turns out not to then having turned N down will greatly help the
attacker.

 It could also be that memory_hard(P, S, p) introduces
 impractical-to-mitigate timing or other side channel attacks that leaked p.

Sure.  But let's assume that memory_hard() isn't awful.  It just isn't
yet accepted, and it might have some weaknesses.  I'd hope that by now
everyone pays attention to timing side channels (but power side
channels? not so much).  We can test for timing leaks.  We can't test
for as-yet undiscovered optimizations.

 Last I checked, PBKDF2 re-introduced all the starting entropy from P and S
 at every iteration. So it shouldn't lose any significant entropy.

Ah, yes, I wasn't sure, but now I see that it does.  It's possible
that the chosen PRF will toss a particular bit every time (e.g., the
high bit in every byte of the password), but let's assume it doesn't.

 If memory_hard did not take P and S and instead took
       memory_hard(PBKDF2(usage 2 + P, S, N))
 it might be easier to think about. We would have less to worry about if P
 were passed through a salted one way function before it were handed to
 memory_hard.

Sure, but I think you're making my point: there are things we could do
to make a PBKDF2' that is at least as strong as PBKDF2.  As long as we
don't respond to PBKDF2' by tuning down N I think that's possible,
trivially possible.  But the risk is probably quite high that N will
get turned down (it used to take 1s to log in, now it takes 2s! -- so
tune down N until it takes 1s with the new PBKDF2').

Nico


Re: [cryptography] Master Password

2012-05-30 Thread Nico Williams
On Wed, May 30, 2012 at 2:32 AM, Jon Callas j...@callas.org wrote:
 (1) You take the master password and run it through a 512-bit hash function, 
 producing master binary secret.

 You pick scrypt for your hash function, because you think burning time and 
 space adds to security. I do not. This is a place where gentlepersons can 
 disagree, and I really don't expect to convince you that SHA-512 or Skein 
 would be better options. I'm convinced that I know why you're doing it, and 
 it would be a waste of both our times to go further. We just disagree.

 At the end of it, it hardly matters because if an attacker wishes to 
 construct a rainbow table, the correct way to do it is to assemble a list of 
 likely passwords and just go from there. It will take longer if they use 
 scrypt than with a real hash function, but once it's done it is done. They 
 have the rainbow table.

This is why salting is important.  They should not be able to build a
single rainbow table that works for all cases.  They should have to
build a rainbow table per-user, but since that's wasteful (of storage)
they won't unless they are prepping to attack a single account at some
point when material suitable for attack becomes available.
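
A minimal sketch of per-user salting (illustrative parameters):

```python
import hashlib, hmac, os

def new_record(password: str):
    salt = os.urandom(16)                # unique per user
    h = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, h

def verify(password: str, salt: bytes, h: bytes) -> bool:
    cand = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(cand, h)

# The same password stored for two users yields different records, so
# one precomputed table cannot cover both:
s1, h1 = new_record("hunter2")
s2, h2 = new_record("hunter2")
assert h1 != h2 and verify("hunter2", s1, h1)
```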

Once you're salting the next step is to slow down the password search.
 We can't slow it down too much, else the user will suffer and
complain, so in the end the user had better pick a good password.  And if the
attacker is attacking a small number of users then they can build
rainbow tables, which means that... in general you're right.

Are you saying that PBKDFs are just so much cargo cult now?

Nico


Re: [cryptography] Master Password

2012-05-30 Thread Nico Williams
On Wed, May 30, 2012 at 3:25 PM, Maarten Billemont lhun...@lyndir.com wrote:
 I'm currently considering asking the user for their full name and using that 
 as a salt in the scrypt operation.  Full names are often lengthy and there's 
 a good deal of them.  Do you recon this might introduce enough entropy or 
 should I also be asking for the user's birth date?  I'm just thinking that 
 this is good information that will make for a wide enough range of different 
 salts that it will hopefully make rainbow tables too expensive while still 
 avoiding the problem that a user cannot remember any random salt of such 
 entropy.

My problem with your design is that the statelessness of it forces you
to depend on a really, really good master password, because otherwise
any site [to which the user gives a password generated this way] can
then mount an off-line dictionary attack on the user's master
password.  This means that the user needs to have such a strong
password that it's likely not practical.

PBKDFs with large work/memory factors are useful when the attacker has
to compromise some other part of the system in order to be able to
mount an off-line dictionary attack on the password.  A scheme that
exposes material suitable for attacking without any other protections
is weak.

Nico


Re: [cryptography] DIAC: Directions in Authenticated Ciphers

2012-05-02 Thread Nico Williams
On Wed, May 2, 2012 at 8:00 PM, D. J. Bernstein d...@cr.yp.to wrote:
 I should emphasize that an authenticated-cipher competition would be
 much more than an AE mode competition. There are certainly people
 working on new ways to use AES, but there are many more people working
 on new authenticators, new block ciphers, new stream ciphers, new
 ciphers with built-in authentication mechanisms, etc.

A few years ago Schneier and colleagues proposed a cipher called Helix
that, while broken, has some very interesting properties making it
unlike any other cipher or cipher mode I'm aware of.

 Zooko Wilcox-O'Hearn writes:
 authenticated encryption can't satisfy any of my use cases!

 Of course it can! Evidently you to want to combine it with public-key
 signatures, which will render the secret-key authenticator useless, so
 for efficiency you'd like to suppress that authenticator. This doesn't
 work well with something like AES-OCB3, but it _does_ work well with
 something like AES-GCM, giving you AES-CTR.

Well, Zooko has an application that uses Merkle hash trees and really
wants to authenticate only the roots of the trees, with all the leaves
being encrypted without authentication.  I think that's a perfectly
fine design, assuming a strong enough hash primitive.  It *is* AE, in
a way, but it's not AE like GCM and it's intimately tied to
Tahoe-LAFS' on-disk format.  Git is very similar (though there's no
built-in head signature scheme, IIRC, but it's perfectly possible to
sign git hashes); git does use SHA-1, which is too weak for my taste,
but aside from that the design is fine.

Nico


Re: [cryptography] Symantec/Verisign DV certs issued with excessive validity period of 6 years

2012-05-01 Thread Nico Williams
The idea of using fresh certs (not necessarily short-lived) came up in
the TLS WG list in the context of the OCSP multi-stapling proposal.

So far the most important objection to fresh certs was that they
exacerbate clock synchronization issues, but I'm willing to live with
that.  Short-lived certs would be much easier to implement than OCSP
multi-stapling.

Nico


Re: [cryptography] data integrity: secret key vs. non-secret verifier; and: are we winning?

2012-04-27 Thread Nico Williams
On Fri, Apr 27, 2012 at 9:15 AM, ianG i...@iang.org wrote:
 Easy.  Take the hash, then publish it.  The data can be secret, the hash
 need not be.

That works for git.  In particular what's nice about it is that you
get copies of the hash stored all over.

A similar approach can work for Tahoe-LAFS.  If the clients remember
the Merkle hash tree roots, you don't need to do any AE nor anything
else to authenticate the data.

Nico


Re: [cryptography] data integrity: secret key vs. non-secret verifier; and: are we winning?

2012-04-26 Thread Nico Williams
On Thu, Apr 26, 2012 at 4:04 AM, Darren J Moffat
darren.mof...@oracle.com wrote:
 On 04/26/12 04:52, Nico Williams wrote:
 You'd have to ask Darren, but IIRC the design he settled on allows for
 unkeyed integrity verification and repair.

 Yes it is.  That was a fundamental requirement of adding encryption to ZFS.
  We could not assume that the keys for all blocks in all datasets were
 available at all times.

 Yet we have to be able to do resilvering due to individual block repair
 (which is actually a copy on write operation) or whole disk
 addition/replacement at any time.

Right, and since blkptr_t's are stored in indirect blocks and dnodes
and so on, and since you want to be able to resilver without having
keys, that means that the blkptr_t's have to be in the clear, which
does leak some information, namely file and dataset size in blocks
(and block size).  Right?

Nico


Re: [cryptography] “On the limits of the use cases for authenticated encryption”

2012-04-25 Thread Nico Williams
I think Tahoe-LAFS is the exception to any rule that one should use
AE, and really, the very rare exception.  Not the only exception,
though this type of application might be the only exception we want.

A ZFS-like COW filesystem with Merkle hash trees should have
requirements similar to Tahoe's, specifically the ability to verify
and repair on-disk structures, including encrypted file data (and some
meta-data!) without being able to decrypt said data.  One way to do
this is to use AE and also hash the encrypted data, storing both, the
hashes and the AE tags in block pointers, but this is rather wasteful.
 Storing only hashes has other problems, but these could be addressed
by MACing the root hash of every encrypted stream and storing that
hash in an appropriate place (e.g., the pointer to the root node for
that stream, or else separately in the node pointing to that root
node).  You could argue that such a filesystem is the same sort of
application that Tahoe-LAFS is, even if it isn't networked (but
nowadays the storage can be networked, so effectively a ZFS belongs in
the same exact bucket as Tahoe).

But in traditional network protocols (TLS, SSHv2, ESP, ...) I have to
strain to think of reasons to not use AE when you want confidentiality
protection (encryption).

Nico


Re: [cryptography] data integrity: secret key vs. non-secret verifier; and: are we winning? (was: “On the limits of the use cases for authenticated encryption”)

2012-04-25 Thread Nico Williams
You'd have to ask Darren, but IIRC the design he settled on allows for
unkeyed integrity verification and repair.  I too think that's a
critical feature to have even if having it were to mean leaking some
information, such as file length in blocks, and number of files, as I
look at this from an operations perspective.

Nico


Re: [cryptography] data integrity: secret key vs. non-secret verifier; and: are we winning? (was: “On the limits of the use cases for authenticated encryption”)

2012-04-25 Thread Nico Williams
On Wed, Apr 25, 2012 at 10:27 PM, Marsh Ray ma...@extendedsubset.com wrote:
 On 04/25/2012 10:11 PM, Zooko Wilcox-O'Hearn wrote:
 2. the verifier-oriented way: you make a secure hash of the chunk, and
 make the resulting hash value known to the good guy(s) in an
 authenticated way.


 Is option 2 sort of just pushing the problem around?

 What's going on under the hood in the term in an authenticated way?

 How do you do authentication in an automated system without someone
 somewhere keeping something secret?

 Is authenticating the hash value fundamentally different from ensuring the
 integrity of a chunk of data?

You have two choices for providing AE and (2): a) MAC the root of each
file's (or directory's, or dataset's) Merkle hash tree, or b) store a
hash and a MAC, thereby forming a Merkle hash tree and a parallel
Merkle MAC tree.

In terms of additional storage and compute power (a) is clearly
superior.  I believe the security of (a) is adequate.
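
The storage difference between (a) and (b) can be made concrete with a toy four-block tree (illustrative key and block contents):

```python
import hashlib, hmac

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

key = b"\x07" * 32                      # illustrative MAC key
blocks = [b"ciphertext-%d" % i for i in range(4)]

# Hash tree over the ciphertext blocks (two levels for four blocks).
leaves = [h(b) for b in blocks]
root = h(h(leaves[0] + leaves[1]) + h(leaves[2] + leaves[3]))

# Option (a): one MAC over the root covers every block beneath it.
root_mac = hmac.new(key, root, hashlib.sha256).digest()

# Option (b) instead stores a parallel MAC per block pointer:
per_block_macs = [hmac.new(key, b, hashlib.sha256).digest() for b in blocks]

extra_a = len(root_mac)                  # 32 bytes total
extra_b = sum(map(len, per_block_macs))  # 32 bytes *per block*
assert extra_a == 32 and extra_b == 128
```

Option (a)'s extra cost stays constant no matter how many blocks the tree covers, while (b)'s grows linearly.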

Nico


Re: [cryptography] data integrity: secret key vs. non-secret verifier; and: are we winning? (was: “On the limits of the use cases for authenticated encryption”)

2012-04-25 Thread Nico Williams
Also,

On Wed, Apr 25, 2012 at 10:11 PM, Zooko Wilcox-O'Hearn zo...@zooko.com wrote:
 Hello Nico Williams. Nice to hear from you.

 Yes, when David-Sarah Hopwood and I (both Tahoe-LAFS hackers)
 participated on the zfs-crypto mailing list with you and others, I
 learned about a lot of similarities between Tahoe-LAFS and ZFS.

Yes, I remember that too.  It was fun and enlightening.

 But in traditional network protocols (TLS, SSHv2, ESP, ...) I have to strain 
 to think of reasons to not use AE when you want confidentiality protection 
 (encryption).

 Yes, I agree with you on that. And OTR ¹, CurveCP ², mosh ³, tcpcrypt
 ⁴, and ZRTP ⁵. All of these eight protocols we've just named have in
 common that there are only two parties, that only current data
 in-flight is protected, and that the protocol has already ensured
 (more or less -- haha) a shared secret key known to both of the users
 and not to any attackers.

Remember that in ZFS we also speak of end-to-end integrity protection,
except in ZFS there's a single end: the system that implements it, and
the attackers are presumed to be between that system and its storage
devices.  It's end-to-end because even though there's only one end,
that end is effectively communicating with itself [over untrusted
storage media].  The on-disk format is the equivalent of secure
transport protocol (with SAS, IB, etc. being the equivalent of TCP/IP).
Of course, if you access said storage from multiple heads then there
will be multiple ends, but since only one can be writing at any given
time (and really, even reading)...

In ZFS w/ encryption there are no additional ends, and protection
against a local privileged agent is not in scope, but protection of
data at rest (on the storage devices making up the ZFS volumes) is in
scope.  Additional protection is available when and for as long as the
keys are not loaded on the system running ZFS.

I think the distinction between filesystems on the one hand and
communication protocols on the other is that in the first case we
always have snapshots of data that we can apply Merkle hash trees to,
and we always have *all* the data available and subject to use and
re-use in random access patterns at any given time, including years
later.  Whereas in the second case the data is ephemeral, consumed and
thrown away or otherwise transformed (outside the scope of the
transport protocol) as soon as possible -- there's no need to consider
an attack where a block earlier in the octet/message stream gets
modified after it's been received and consumed.  We could store files
as TLS streams using PSK and have those shared keys be the files'
keys, but that would be inefficient, particularly if you were to need
to write in any fashion other than strictly append-only.  This
distinction is what I believe drives us to design/apply completely
different cryptographic protocols to the two types of protocols.

 I don't question the usefulness of the Authenticated Encryption
 abstraction for protocols that fall into that category.

Right, me either.  I can't even imagine not using AE in that context,
whether by generic composition or -much better- via integrated AE
ciphers/cipher modes.

Nico


Re: [cryptography] Predictive SSH alternative for vt sessions 'Mosh: An Interactive Remote Shell for Mobile Clients'

2012-04-16 Thread Nico Williams
On Wed, Apr 11, 2012 at 11:06 AM, Marsh Ray ma...@extendedsubset.com wrote:
 http://mosh.mit.edu/
 http://mosh.mit.edu/mosh-paper-draft.pdf

Very interesting.  It's basically a VNC/RDP-like protocol but for
terminal applications.

 Hat's off to anyone brave enough to consider a correct and supportable MitM
 on something as complex as the ANSI/vt UTF-8 terminal protocol.

The MITM would first have to break the crypto (or otherwise find an
MITM vuln in the authentication protocol).

 It occurred to me that if Mosh could allow the client to hide the
 inter-keystroke timing (and perhaps that of the response too) with minimal
 disruption, it could represent a great mitigation for the timing attack
 vulnerability presented by SSH's (effectively) packet-per-keystroke model.

I think mosh would need a setting for an amount of time to buffer
keystrokes for, because if the RTT is too small and mosh does not
impose a buffer time then the inter-keystroke timings will be exposed.
 Add in the heartbeat messages being timed on a small multiple of the
buffer time and I think we'd be doing a good job of hiding timing
information (or at least we'd be getting close to doing a good job of
it).
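
A rough sketch of such keystroke coalescing (illustrative only; mosh's actual batching logic differs):

```python
import time, queue

def drain_ticks(keystrokes, send, interval=0.05, ticks=3):
    """Flush buffered keystrokes once per fixed-length tick instead of
    per keypress, so an observer sees only the tick cadence plus a
    constant-size heartbeat when idle."""
    for _ in range(ticks):
        deadline = time.monotonic() + interval
        buf = []
        while True:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            try:
                buf.append(keystrokes.get(timeout=remaining))
            except queue.Empty:
                break
        send(b"".join(buf) if buf else b"\x00")  # heartbeat when idle

q = queue.Queue()
for ch in b"ls\n":
    q.put(bytes([ch]))
sent = []
drain_ticks(q, sent.append, interval=0.01, ticks=2)
assert len(sent) == 2            # exactly one packet per tick
assert b"".join(sent).replace(b"\x00", b"") == b"ls\n"
```

The interval is the buffer time discussed above: larger values hide more timing information at the cost of perceived latency.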

Nico


Re: [cryptography] Key escrow 2012

2012-03-30 Thread Nico Williams
On Fri, Mar 30, 2012 at 7:10 AM, StealthMonger
stealthmon...@nym.mixmin.net wrote:
 Adam Back a...@cypherspace.org writes:

 Not sure that we lost the crypto wars.  US companies export full strength
 crypto these days, and neither the US nor most other western counties have
 mandatory GAK.  Seems like a win to me :)

 Nope.  If we had won, crypto would be in widespread use today for
 email.  As it is, enough FUD and confusion was sown to avert that
 outcome.  Even on geek mailing lists such as this, signatures are
 rare.

We don't encrypt e-mail for other reasons, namely because key
management for e-mail is hard.  It's taken a long time for us to reach
consensus (have we?) on that and then work on things like DKIM (though
that still doesn't support encryption).

OTOH many people use OTR all the time, and many more might if it was
always implemented and enabled by default in all IM clients.

Also, we all use TLS, and this has very widespread application.  And
we regularly read about people stopped at the border and asked to
produce their passphrases for disk/filesystem encryption.

Nico


Re: [cryptography] Key escrow 2012

2012-03-25 Thread Nico Williams
On Sun, Mar 25, 2012 at 10:55 PM, Marsh Ray ma...@extendedsubset.com wrote:
 On 03/25/2012 11:45 AM, Benjamin Kreuter wrote:
 The US government still wants a

No, probably parts of it: the ones that don't have to think of the big
picture.  The U.S. government is not monolithic.  The NSA has shown a
number of times that they are interested in strong civilian
cryptography for reasons of... national security.  In a battle between
law enforcement and national security the latter has to win.

 system where encrypted communications can be arbitrarily decrypted,
 they just dress up the argument and avoid using dirty words like key
 escrow.

 Aside from the deep moral and constitutional problems it poses, does anyone
 think the US Govt could have that even from a practical perspective?

 * Some of the largest supercomputers in the world are botnets or are held by
 strategic competitor countries. This precludes the old key shortening trick.

 * The Sony PS3 and HDMI cases show just how hard it can be to keep a master
 key secure sometimes. Master keys could be quite well protected, but from a
 policy perspective it's still a gamble that something won't go wrong which
 compromises everyone's real security (cause a public scandal, expose
 industrial secrets, etc.).

Key escrow == gigantic SPOF.  Even if you split the escrow across
several agencies and don't use a single master key, it's still
concentrating systemic failure potential into too few points.  To
build a single point of catastrophic failure into one's economic
infrastructure is one of the biggest strategic blunders I can imagine
(obviously there's worse, such as simply surrendering when one clearly
has the upper hand, say).  Back in the early 90s this probably wasn't
as clear as it is today.

 * Am I correct in thinking that computing additional trapdoor functions to
 enable USG/TLA/LEA decryption is not free? Mobile devices are becoming the
 primary computing devices for many. People may be willing to pay XX% in
 taxes, but nobody wants to pay a decrease in performance and battery life to
 enable such a misfeature.

Most users already pay heavy battery/performance taxes in the form of
uninstallable adware built into their devices.  The vendors might be
the ones to object then since they might have to stop shipping such
software.  But ultimately this argument depends on how heavy a burden
the users end up feeling.

For my money the winning argument is the strategic idiocy/insanity of
unnecessary SPOFs.  Who wants to ever even think of saying to the
POTUS Mr. President, we have a mole, they've stolen the codes for our
civilian networks and they've shut them down from the people's sheer
fear of financial and other losses. It will take months to re-key
everything and in the meantime we'll lose X% of GDP. The stock and
bond markets have crashed.  As time passes X will tend to increase in
the event of such a catastrophe.  The higher that percentage the more
crippling the attack, with derivatives losses becoming overwhelming at
small values of X.  It could get worse: Mr. President, we can't even
re-key without changing all these hardware dongles that are
manufactured by the enemy, who's now not selling them to us.

If the point of key escrow is to make law enforcement easier then
there are much simpler non-cryptographic solutions -- not ones to your
taste or mine perhaps, but certainly ones that don't involve strategic
SPOFs.

I'm with you: key escrow is necessarily dead letter, at least for the
time being and the foreseeable future.

Nico


Re: [cryptography] Constitutional Showdown Voided as Feds Decrypt Laptop

2012-03-01 Thread Nico Williams
IOW, I doubt mailman is how they got Fricosu's password.


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-20 Thread Nico Williams
On Mon, Feb 20, 2012 at 7:07 AM, Ben Laurie b...@links.org wrote:
 In FreeBSD random (and hence urandom) blocks at startup, but never again.

So, not exactly a terribly wrong thing to do, eh?  ;)

What OSes have parallelized rc scripts (or the equivalent) nowadays?
Quite a few, it seems: several Linux distros, Mac OS X, Solaris, maybe
some BSDs?

It seems to me that it should be quite safe to arrange for either a)
services that depend on /dev/urandom to not start until after [that
is, to depend on a service that does] proper seeding of it, or b)
/dev/urandom to block, but only early in boot, until properly seeded.
This is precisely why looking after the whole system is important; a
holistic view of the system will lead the developers to ensure that
there is enough entropy before any services (or user programs) run
that might need it.  And since user programs are outside the control
of the init process, it seems that (b) is the safer approach.
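The option-(b) semantics can be sketched as a toy in a few lines: a
pool whose readers block on an event until the first seeding, and never
again afterwards.  This is only an illustration of the blocking
behavior; the class name and the SHA-256 output construction are my
own, not any kernel's implementation.

```python
import hashlib
import threading

class BootEntropyPool:
    """Toy model of option (b): reads block only until the pool is
    first seeded early in boot, and never block after that."""

    def __init__(self):
        self._seeded = threading.Event()
        self._state = b""

    def seed(self, entropy: bytes) -> None:
        # Called once early in boot (saved seed file, hardware events,
        # etc.); unblocks every reader waiting in urandom().
        self._state = entropy
        self._seeded.set()

    def urandom(self, n: int, timeout=None) -> bytes:
        # Block only until the first seed() call.
        if not self._seeded.wait(timeout):
            raise TimeoutError("pool not yet seeded")
        out, counter = b"", 0
        while len(out) < n:
            out += hashlib.sha256(
                self._state + counter.to_bytes(8, "big")).digest()
            counter += 1
        # Ratchet the state forward so past outputs can't be recomputed.
        self._state = hashlib.sha256(b"ratchet" + self._state).digest()
        return out[:n]
```

A service manager implementing option (a) would simply make
entropy-consuming services depend on whatever unit calls seed().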

 One thing I'd really like to know is whether it would have ever
 unblocked on these devices - and if it does, whether it ends up with
 good entropy...

But devices like that really should have a) a factory seed (different
on each device, and obtained from a CSRNG), b) a clock and/or stable
storage for a counter so that it is possible to ensure distinct PRNG
state after each boot.  There are other cases where we may not be able
to rely on a factory seed, such as VMs and laptops.  (Well, at least
for pre-built VM images one could treat them like embedded devices and
embed a per-image seed...)
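A sketch of the factory-seed-plus-counter idea.  The derivation and
the domain-separation label are my own illustration; the point is only
that a monotonic counter in stable storage guarantees distinct per-boot
PRNG state even when runtime conditions repeat exactly.

```python
import hashlib

def first_boot_prng_seed(factory_seed: bytes, boot_counter: int,
                         runtime_entropy: bytes = b"") -> bytes:
    """Derive a per-boot PRNG seed for a device.  Assumes a per-device
    factory seed drawn from a CSPRNG at manufacture time and a counter
    incremented in stable storage on every boot."""
    h = hashlib.sha256()
    h.update(b"device-boot-seed-v1")           # domain separation
    h.update(factory_seed)
    h.update(boot_counter.to_bytes(8, "big"))  # distinct every boot
    h.update(runtime_entropy)                  # whatever else exists
    return h.digest()
```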

Nico
--


Re: [cryptography] Homomorphic split-key encryption OR snake oil crypto

2012-02-19 Thread Nico Williams
On Sun, Feb 19, 2012 at 10:08 AM, Florian Weimer f...@deneb.enyo.de wrote:
 * Saqib Ali:

 Can somebody explain me how this so-called Homomorphic split-key
 encryption works?

 Isn't this just a protocol which performs a cryptographic primitive
 using split key material, without actually recombining the keys?
 (Traditional Shamir secret sharing needs a trusted party for key
 recombination.)

The key part is the homomorphism.  ISTR this from a few years ago, and
I see wikipedia has an OK article on the subject:

http://en.wikipedia.org/wiki/Homomorphic_encryption#Fully_homomorphic_encryption

The idea is that you could even write an entire program this way,
which allows you to run it on untrusted systems without leaking the
program or data to those systems.  It seems unlikely to get deployed
anytime soon.
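Fully homomorphic encryption (Gentry-style) is far heavier machinery,
but the underlying notion of a homomorphism is easy to see in
miniature: unpadded textbook RSA is multiplicatively homomorphic.  A
toy with deliberately tiny, insecure primes (a fully homomorphic
scheme would support arbitrary circuits, not just one operation):

```python
# Enc(a) * Enc(b) mod n == Enc(a * b mod n) for textbook RSA.
p, q, e = 61, 53, 17
n = p * q                          # 3233 -- toy modulus, no security
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

def enc(m): return pow(m, e, n)
def dec(c): return pow(c, d, n)

a, b = 7, 9
product_of_ciphertexts = (enc(a) * enc(b)) % n
# Multiplication was performed "inside" the encryption:
assert dec(product_of_ciphertexts) == (a * b) % n
```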

Nico
--


Re: [cryptography] Homomorphic split-key encryption OR snake oil crypto

2012-02-19 Thread Nico Williams
My guess is that, since fully homomorphic systems will be very slow,
one could use them to guard just a tiny secret.  But what's the
point?  Who cares if you can protect the customer's keys, if you can't
protect the customer's plaintext data?

Nico
--


Re: [cryptography] Applications should be the ones [GishPuppy]

2012-02-17 Thread Nico Williams
Note that there may be times when the application definitely should
initialize a PRNG (seeded from the OS' entropy system -- I still
maintain that the whole system needs to work well).  For example, when
using cipher modes where IVs/confounders need to be random but also
not re-used.  In that case then you want to be able to use a PRNG (one
instance per-session key) to guarantee non-reuse.
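A sketch of such a per-session generator (my own construction, for
illustration only: HMAC over a counter, keyed by a seed taken once
from the OS entropy provider).  Outputs look random, yet the counter
makes reuse within the session impossible by construction.

```python
import hashlib
import hmac
import itertools

class SessionIVGenerator:
    """One instance per session key: pseudorandom IVs with guaranteed
    non-reuse, because a strictly increasing counter is the HMAC input."""

    def __init__(self, session_seed: bytes):
        self._key = session_seed       # seed once, e.g. from os.urandom()
        self._counter = itertools.count()

    def next_iv(self, length: int = 16) -> bytes:
        c = next(self._counter)        # never repeats within the session
        mac = hmac.new(self._key, c.to_bytes(8, "big"), hashlib.sha256)
        return mac.digest()[:length]
```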

Nico
--


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-17 Thread Nico Williams
On Fri, Feb 17, 2012 at 2:39 PM, Thierry Moreau
thierry.mor...@connotech.com wrote:
 If your /dev/urandom never blocks the requesting task irrespective of the
 random bytes usage, then maybe your /dev/random is not as secure as it might
 be (unless you have an high speed entropy source, but what is high speed
 in this context?)

I'd like for /dev/urandom to block, but only early in boot.  Once
enough entropy has been gathered for it to start it should never
block.  One way to achieve this is to block boot progress early enough
in booting by reading from /dev/random, thus there'd be no need for
/dev/urandom to ever block.

Nico
--


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-17 Thread Nico Williams
On Fri, Feb 17, 2012 at 2:51 PM, Jon Callas j...@callas.org wrote:
 On Feb 17, 2012, at 12:41 PM, Nico Williams wrote:
 On Fri, Feb 17, 2012 at 2:39 PM, Thierry Moreau
 thierry.mor...@connotech.com wrote:
 If your /dev/urandom never blocks the requesting task irrespective of the
 random bytes usage, then maybe your /dev/random is not as secure as it might
 be (unless you have an high speed entropy source, but what is high speed
 in this context?)

 I'd like for /dev/urandom to block, but only early in boot.  Once
 enough entropy has been gathered for it to start it should never
 block.  One way to achieve this is to block boot progress early enough
 in booting by reading from /dev/random, thus there'd be no need for
 /dev/urandom to ever block.

 I can understand why you might want that, but that would be wrong with a 
 capital W. The whole *point* of /dev/urandom is that it doesn't block. If you 
 want blocking behavior, you should be calling /dev/random. The correct 
 solution is to have early-stage boot code call /dev/random if it wants 
 blocking behavior.

I was hoping you'd read the second sentence, where I basically say
that /dev/urandom shouldn't block, that the system should not progress
past where /dev/urandom is needed until /dev/urandom has enough
entropy.

Nico
--


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-16 Thread Nico Williams
On Thu, Feb 16, 2012 at 12:28 PM, Jeffrey Schiller j...@qyv.net wrote:
 Are you thinking this is because it causes the entropy estimate in the RNG 
 to be higher than it really is? Last time I checked OpenSSL it didn't block 
 requests for numbers in cases of low entropy estimates anyway, so line 3 
 wouldn't reduce security for that reason.

 I  am thinking this because in low entropy cases where multiple boxes 
 generate the same first prime adding that additional entropy before the 
 second prime is generated means they are likely to generate a different 
 second prime leading to the GCD attack.

I'd thought that you were going to say that so many devices sharing
the same key instead of one prime would be better on account of the
problem being more noticeable.  Otherwise I don't see the difference
between one low-entropy case and another -- both are catastrophic
failures.
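For reference, the GCD attack this thread is about is a one-liner once
two moduli share a prime (toy primes here; the real scans ran pairwise
GCDs over millions of harvested public keys):

```python
import math

# Two RSA moduli that, through bad entropy, share one prime factor.
p_shared, q1, q2 = 101, 103, 107
n1, n2 = p_shared * q1, p_shared * q2   # 10403, 10807

g = math.gcd(n1, n2)                    # recovers the shared prime
assert g == p_shared
# Both moduli are now fully factored, hence both keys are broken:
assert n1 // g == q1 and n2 // g == q2
```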

Nico
--


Re: [cryptography] Applications should be the ones [GishPuppy]

2012-02-16 Thread Nico Williams
On Thu, Feb 16, 2012 at 8:45 PM,  2...@gishpuppy.com wrote:
 Nico Williams wrote:

 Applications (in the Unix sense) should not be the ones seeding the
 system's PRNG.  The system should ensure that there is enough entropy
 and seed its own PRNG (and mix in new entropy).

 Exactly the opposite.

 Application creator/maintainer is always in the trust chain; this can not
 be avoided. As the well-known Debian debacle demonstrated, there is
 every good reason to remove the operating system creator/maintainer
 from the trust chain. There is a reasonable chance that a security-critical
 application is constructed and maintained by someone who is skilled
 in security programming; there is very low chance this is the case with
 the operating system.

Debian was a distribution, and its maintainers had the responsibility
to put the system together correctly.  Their failure is no more an
indictment of all operating systems than any one application bug is an
indictment of all applications.  Just what is an operating system
anyways?  Just a kernel?  What about the boot loader?  Or the C
library, or the utilities needed to get to a running system?  Is
OpenSSL part of the system if it's shipped with the OS?

And if you can't trust the OS for entropy, why trust it to run the
application at all?  Who knows what side channels a poor OS -- or a VM
environment -- might introduce.

My rationale for the above statement is that an application by itself
has no entropy.  Entropy has to be gathered from something else.  The
application might be able to gather some, or not.  As a user it's hard
to say, but the operating system can definitely see to it that it has
some entropy, and the OS maintainers had better see to it that the OS
can and does gather entropy (/dev/*random is clearly evidence that OS
developers agree).  I can understand *portable* applications (and
libraries) having entropy gathering code on the argument that they may
need to run on operating systems that don't have a decent entropy
provider.

Nico
--


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-15 Thread Nico Williams
On Wed, Feb 15, 2012 at 5:57 PM, Peter Gutmann
pgut...@cs.auckland.ac.nz wrote:
 Alexander Klimov alser...@inbox.ru writes:

While the RSA may be easier to break if the entropy during the key
*generation* is low, the DSA is easier to break if the entropy during the key
*use* is low. Obviously, if you have access only to the public keys, the first
issue is more spectacular, but usually a key is used more often than 
generated.

 My thoughts exactly, I've always stayed away from DLP-based PKCs (except DH)
 because they're extraordinarily brittle, with RSA you have to get entropy use
 right just once, with DLP PKCs you have to get it right every single time you
 use them.  For embedded systems in particular that's just too risky.

Of course, if you're doing RSA key transport and the client selects
the key and has little or no entropy then the client still has a
problem (and the server may not know).

Most cryptographic protocols call for random keys, nonces,
confounders, IVs, and so on somewhere.  Typically the security of the
system depends to a large degree, if not entirely, on those random
items.

What can you do with RSA keys if you can't generate good entropy?  You
can sign.  What else?  You can encrypt messages small enough that
there's no need to generate a symmetric key for encrypting the message
(or you can chunk the message and encrypt each chunk).  Oh, there is
one thing one can do with RSA keys but without good enough entropy:
one can *ask* a remote system for entropy (the remote system encrypts
some entropy to the client's RSA public key, then signs it with the
server's private key) -- much better than having no good entropy at
all.
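A sketch of that last idea with textbook RSA and deliberately tiny
primes (no padding, no real security; the function names are mine):
the server encrypts fresh entropy to the client's public key and signs
the ciphertext with its private key, so the client receives entropy
that is both confidential and authenticated.

```python
import hashlib

def keygen(p, q, e=17):
    """Textbook RSA keypair from two (toy) primes."""
    n = p * q
    return (n, e), (n, pow(e, -1, (p - 1) * (q - 1)))

client_pub, client_priv = keygen(61, 53)   # n = 3233
server_pub, server_priv = keygen(89, 97)   # n = 8633

def server_send_entropy(entropy_int):
    """Encrypt entropy to the client, sign the ciphertext."""
    n_c, e_c = client_pub
    c = pow(entropy_int, e_c, n_c)
    h = int.from_bytes(hashlib.sha256(str(c).encode()).digest(), "big")
    n_s, d_s = server_priv
    return c, pow(h % n_s, d_s, n_s)

def client_receive(c, sig):
    """Verify the server's signature, then decrypt the entropy."""
    n_s, e_s = server_pub
    h = int.from_bytes(hashlib.sha256(str(c).encode()).digest(), "big")
    assert pow(sig, e_s, n_s) == h % n_s, "bad signature"
    n_c, d_c = client_priv
    return pow(c, d_c, n_c)

c, sig = server_send_entropy(1234)   # "entropy" must be < client n
assert client_receive(c, sig) == 1234
```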

Nico
--


Re: [cryptography] trustwave admits issuing corporate mitm certs

2012-02-12 Thread Nico Williams
On Sun, Feb 12, 2012 at 9:13 PM, Krassimir Tzvetanov
mailli...@krassi.biz wrote:
 I agree, I'm just reflecting on the reality... :(

Reality is actually as I described, at least for some shops that I'm
familiar with.


Re: [cryptography] trustwave admits issuing corporate mitm certs

2012-02-12 Thread Nico Williams
I'm sure the trend is currently the other way, yes, but with low-cost
high-bandwidth wireless becoming more common it doesn't really matter,
does it?

And it all depends on the organization and its risk-taking profile.

But to bring this back on topic: I'd rather see draconian corporate
network access rules than MITMing CAs.

Nico
--


Re: [cryptography] Proving knowledge of a message with a given SHA-1 without disclosing it?

2012-02-01 Thread Nico Williams
On Wed, Feb 1, 2012 at 3:49 AM, Francois Grieu fgr...@gmail.com wrote:
 The talk does not give much details, and I failed to locate any article
 with a similar claim.
 I would find that result truly remarkable, and it is against my intuition.

The video you posted does help me with the intuition problem.  The
idea seems to be to replace the normal arithmetic in SHA-1 with
operations from a zero-knowledge scheme such that in the end you get a
zero-knowledge proof of the operations that were applied to the input.
 That makes complete sense to me, even without seeing the details.
But maybe I'm just gullible :^)

Nico
--


Re: [cryptography] Well, that's depressing. Now what?

2012-01-28 Thread Nico Williams
On Sat, Jan 28, 2012 at 5:45 PM, Noon Silk noonsli...@gmail.com wrote:
 On Sun, Jan 29, 2012 at 4:22 AM, Nico Williams n...@cryptonector.com wrote:
 I don't see how I could have been much more specific given the two
 things you quoted from me.

 As I said, you could point to specific products that you have issues
 with, not QKD at large (a collection of potential protocols and
 implementations).

Any key exchange solution based on quantum mechanics is pointless
unless: a) it's somehow better than ECDH, b) it does not weaken the
security of the whole system, and c) it doesn't cost much more than
ECDH.

(a) is critical.  And it's not enough to say that QKD is inherently
unbreakable in a way that hasn't been proven about some classical key
exchange protocol, because if all QKD does is exchange keys then you
still have to authenticate the exchanged keys and then use them, all
in classical crypto, so any inherent strength of QKD does not accrue
to the system as a whole.

Even supposing there was a complete all-quantum authentication +
integrity- and confidentiality-protected data transfers solution,
you'd still be limited to hop-by-hop security, and this is quite
limiting.  End-to-end security is preferable whenever one can have it.
 Even in multi-party protocols we generally do better than
link-by-link security.

Now suppose that P=NP (and that fast algorithms can be found for every
heretofore-hard NP problem) and we suddenly really badly want
quantum crypto, and suppose we did have quantum authenticated link
encryption...  but we'd still need the thing to be practical, which
among other things means small and cheap enough to put on all the
devices where we need security (and that's quite a few devices).
Quantum tech will not be a perfect solution if P=NP, and it will be
impractical and/or uneconomic for a long time.  This makes "just in
case [P=NP]" arguments for QKD rather weak, IMO.

(b) started out as the subject of this thread.

 Let's turn it around: what QKD products do
 you think are not snake oil today?  Please be specific (list products
 currently on sale) and back up the assertion with a rationale,
 remembering that this is in comparison to classical cryptography
 technology.  Feel free to also point to literature about QKD
 technologies perhaps not yet on the market but which might change
 everything, and again, back up your assertions.

 Nice try, but I'm not the one making general claims about it. My
 original comment to you was, it's not sensible to say QKD is snake
 oil, without direct reference to something. I didn't say I want to
 argue about which products are or aren't (frankly, I don't know
 anywhere near enough about them or their implementations to comment on
 that).

I leave things here.   I believe reasonable people can educate
themselves about this and decide for themselves.  I do believe there's
not yet any economic point to any QKD technology currently on the
market, and I've explained why.  I've referred you to the archives as
well; I encourage you to go look.

Nico
--


Re: [cryptography] Well, that's depressing. Now what?

2012-01-27 Thread Nico Williams
On Fri, Jan 27, 2012 at 3:49 PM, Sven Moritz Hallberg pe...@khjk.org wrote:
 On Fri, 27 Jan 2012 13:39:44 -0500, Warren Kumari war...@kumari.net wrote:
 Surely I am missing something here? Or is that really the news?

 I thought the same thing and skimmed (very incompletely) through the
 paper. They do talk about how to hide the saved bits in later sessions
 of particular QKD protocols, so maybe there is something inherent there
 that would make such an attack, say, especially hard to detect in the
 QKD setting?

Well, if there were covert, deniable, quantum side-channels in QKD
that the vendor could exploit practically undetectably, then yes, QKD
would suddenly become not just snake oil but poisonous snake oil.
OTOH, if this is just a worry that QKD devices might be compromised
(whether purposefully by the vendor or unwittingly), then this is
nothing new, and QKD remains snake oil.  Quantum authentication that
scales (as opposed to requiring pair-wise physical exchange of
entangled particle pairs) would be a neat trick -perhaps applying
Needham-Schroeder?- but it'd still be a novelty/curiosity IMO.

The idea that QKD is in use by the military gives me pause, unless
it's either completely redundant and classical crypto is still used
(wasteful, yes, but that's a lesser concern), or the military using
QKD is an enemy of the cause of liberty (in which case never mind and
keep at it boys!).

Nico
--


Re: [cryptography] CAPTCHA as a Security System?

2012-01-02 Thread Nico Williams
On Mon, Jan 2, 2012 at 4:25 PM, Randall  Webmail rv...@insightbb.com wrote:
 My neighborhood Wal*Mart has pretty much eliminated cashiers in favor of
 self-checkouts.

[...]
 Wal*Mart is not stupid.   They know full well that a certain percent of
 shoppers will indeed walk out with a certain amount of goods, every day.

Yes, but this is not the same situation as with Ticketmaster.  The
equivalent for Ticketmaster would be scalpers who go through the
captcha many times, by hand, *slowly*, and who adhere to per-person
purchase limits or who make minimal efforts to get a bit past such
limits -- something Ticketmaster may be willing to tolerate.

To do much better than slow down the scalpers Ticketmaster would have
to either do a lot of work (with payments system providers' help) to
ensure that payments are not anonymous and that the there is one
person per ticket purchase for any one event, or else they'd have to
auction off the tickets so as to find the market price for them.  I'm
not sure as to the feasibility of the former, particularly when
Ticketmaster can probably get the law to help, but I'd prefer the
latter.  (Perhaps because I'm not going to bother camping out for
bracelets and I can probably afford free market rates for the events I
want to attend!)

Nico
--


Re: [cryptography] CAPTCHA as a Security System?

2012-01-02 Thread Nico Williams
On Mon, Jan 2, 2012 at 9:08 PM, John Levine jo...@iecc.com wrote:
   [...].  One of the advantages of having a working legal system is so
 that we can live reasonable lives with $20 locks in our doors, rather
 than all having to spend thousands to armor all the doors and windows,
 like they do in some other parts of the world.

Indeed!  I'm not sure that this translates so well to online security
though, where one must defend against attackers that the law can't
reach.  You make a good case that it does translate well to the
Ticketmaster case though.

Nico
--


Re: [cryptography] Password non-similarity?

2011-12-27 Thread Nico Williams
I'm assuming that at password-change time (when the new-password
policy is evaluated) you have both the old and the new password, in
which case you can use Optimal String Alignment Distance for at least that pair of
passwords.  If you have only one password you can try a cookbook of
transformations that users might apply to their passwords, and then
there's professor Bellovin's Bloom filter suggestion.  If you have
only a history of password hashes and no actual passwords and you want
to determine similarity, well, you're fortunately out of luck.
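For reference, Optimal String Alignment distance is a small dynamic
program (Levenshtein plus adjacent transpositions, with each substring
edited at most once); the threshold in the policy helper below is an
illustrative choice, not a recommendation.

```python
def osa_distance(a: str, b: str) -> int:
    """Optimal String Alignment distance between two strings."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if (i > 1 and j > 1
                    and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]):
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]

def too_similar(old_pw: str, new_pw: str, min_distance: int = 3) -> bool:
    """Illustrative policy: reject new passwords within a few edits."""
    return osa_distance(old_pw, new_pw) < min_distance
```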

Nico
--


  1   2   >