Re: [cryptography] Why anon-DH is less damaging than current browser PKI (a rant in five paragraphs)

2013-01-08 Thread Ben Laurie
On Tue, Jan 8, 2013 at 1:28 AM, Peter Gutmann pgut...@cs.auckland.ac.nz wrote:
 Ben Laurie b...@links.org writes:

 I've snipped most of this because, although it'd be fun to keep going back and
 forth, I'm not sure if everyone else wants to keep reading the exchange (Ben,
 we'll continue it over lunch or dinner some time :-).

Absolutely.

  There is one point
 though that really sticks out:

   Phishing is not something that PKI is intended to address.

 I don't think I've ever heard anyone admit that before.  In particular if you
 look at sites that talk about SSL's PKI, you see statements like:

   In addition to encryption, a proper SSL certificate also provides
   authentication. This means you can be sure that you are sending information
   to the right server and not to a criminal's server.
 -- 
 http://www.sslshopper.com/why-ssl-the-purpose-of-using-ssl-certificates.html

Modulo CAs not working correctly, this is what SSL does. So long as
you define the right server as being the one with the domain name
you navigated to.


   Why SSL protects from phishing
   --
   [...]
 -- 
 http://www.sslshopper.com/why-ssl-the-purpose-of-using-ssl-certificates.html

Well, this cuts to some of the core of the problem: "This means that
your users will be far less likely to fall for a phishing attack
because they will be looking for the trust indicators in their
browser, such as a green address bar, and they won’t see it."

As we know, users don't act on trust indicators in general. And if
they did, I'm not so sure phishers wouldn't find a way to get the
green address bar.

 (that was just the first thing that popped up from a quick Google).  So that
 leads to two possibilities:

 1. If browser PKI is meant to deal with phishing, and quite obviously doesn't,
 then it's defective and needs to be replaced with alternative mechanisms.

 2. If browser PKI isn't meant to deal with phishing then WTF are browser
 vendors persisting with it and not applying other measures that do actually
 work?

I would claim that Google is doing exactly that (i.e. applying other measures).

I don't doubt the effectiveness of the kind of thing you are talking about,
but what I would find helpful is something actionable - i.e. if you did X,
then users would actually be better protected, and it wouldn't break the 'net.

 That's pretty much what the longer reference I mentioned contains, there's
 something like two to three solid pages of references to research papers and
 (admittedly less rigorous) discussions with technical guys from vendors who do
 internet malware scanning to protect users from harm.

And this is an example of something Google is doing.
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Why anon-DH is less damaging than current browser PKI (a rant in five paragraphs)

2013-01-08 Thread Ben Laurie
On Tue, Jan 8, 2013 at 8:40 AM, ianG i...@iang.org wrote:

 IMO, the answer to phishing is to solve the password problem, and the
 solution to the password problem is really good password managers. But
 I haven't had much luck selling that solution. Probably because,
 rather like Peter's solution, it has a largish element of fluff.



 Nod.  Actually, using client certs gets you most of the way there [0]. But
 like passwords, we need to replace the bad password manager (aka the human)
 with a better password manager, in software.  So the solution is the same.

Quite so. What I didn't bother to expand on, but it's clearly the end
game, is that once you have a really good password manager it can
manage other secrets, such as private keys, and since we've cut the
human out of the interaction part of signing in, those will be just as
usable as passwords, but with clearly superior security properties.

 [0] Point being that if one does the analysis, client certs dominate
 passwords at many levels.  Especially when we've got away from insisting
 that a password be memorable, something I'm sure everyone here understands.

 So why aren't client certs the focus of more attention?  Well, I will leave
 a conjecture on the table:  because the CAs have a lot of trouble selling
 them, and the vendor teams work closely with CAs and other infrastructure
 sellers of PKI software.  So, the vendor teams see no demand.

I will readily agree that this is why CAs aren't doing research on
client certs, but they're hardly the only actors in this world. My
experience is that client certs do not get focus because they have a
horrible UI, because they shift the user experience from the website
to the browser and because there's no good story for portability (i.e.
moving them between devices). There are also secondary issues, like
privacy concerns.

I guess I should mention another thing Google is doing at this point:
http://www.browserauth.net/.
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Why anon-DH is less damaging than current browser PKI (a rant in five paragraphs)

2013-01-08 Thread James A. Donald

On 2013-01-08 7:26 PM, Ben Laurie wrote:

Modulo CAs not working correctly, this is what SSL does. So long as
you define the right server as being the one with the domain name
you navigated to.


Domain names are lengthy and not all that human memorable. I log on to 
citicard; the correct domain name is accountsonline.com. Am I likely to 
notice if the domain name is accountsonlin.jim.com?


Indeed, given that the correct domain name is not citicard, am I likely to 
notice if the domain name is Istealyourmoney.ru?


___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] DNSSEC/DANE in CT. Was: Why anon-DH is less damaging than current browser PKI

2013-01-08 Thread Guido Witmond

On 01/07/2013 08:08 PM, Ben Laurie wrote:

On Mon, Jan 7, 2013 at 5:32 PM, Guido Witmond gu...@wtmnd.nl wrote:

What I read from the certificate-transparency.org website is that it intends
to limit to Global CA certificates. I would urge mr Laurie and Google to
include all certificates, including self-signed. It would increase the value
of CT for me, especially in combination with DNSSEC/DANE

The problem with self-signed for CT is twofold:

1. spam.

2. revocation.
Given a solution to these I would happily include them in CT.
CT + DNSSEC/DANE + self-signed is a different matter, but one that
should probably address DNSSEC directly - that is, transparency for
DNSSEC keys, not for TLS certs mentioned in DANE records.


I don't know enough about how self-signed server certificates would add to the 
spam or revocation problem.


Please let me first set out what I think I understand of CT and why I 
want to include self-signed certificates.


If I understand correctly:
1. CT is a way to keep/make global CAs honest by listing all 
certificates signed by them, indexed by domain name.
2. CT allows me to look up hashes without leaking to third parties what 
sites I browse to.


Both goals are direly needed. Thank you for pursuing them.


A global server certificate is nothing more than a binding from domain 
name to a public key. It is designed to prevent a DNS attack against my 
resolver that lures me to an attacker's site. Secondly, it provides a key 
to secure the communication against sniffing and tampering.


With DNSSEC and DANE, I don't have that problem as my resolver can 
validate both the correct ip-address and the server-certificate. Even if 
it is a self-signed certificate. I don't need the global CAs anymore for 
that.


Now I don't want to _trust_ DNSSEC completely either. A registrar might 
get pressured to change the IP address and certificate for a site. In 
fact, DNSSEC and DANE would make that attack easier as there is only one 
party to pressure. For that you would need to log the self-signed 
certificates, not (just) the DNSSEC keys.


CT would allow me to view the history of a certificate for the domain 
name, even if it was a self-signed certificate. It would let my browser 
make a more informed decision about whether to trust a site, as Peter 
Gutmann promotes.


Perhaps you might want to leave the unpublished self-signed certificates 
out of the log, to pressure people to use either global CAs or DANE.



With regard, Guido Witmond.

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Why anon-DH is less damaging than current browser PKI (a rant in five paragraphs)

2013-01-08 Thread Ben Laurie
On Tue, Jan 8, 2013 at 11:42 AM, James A. Donald jam...@echeque.com wrote:
 On 2013-01-08 7:26 PM, Ben Laurie wrote:

 Modulo CAs not working correctly, this is what SSL does. So long as
 you define the right server as being the one with the domain name
 you navigated to.


 Domain names are lengthy and not all that human memorable. I log on to
 citicard; the correct domain name is accountsonline.com. Am I likely to
 notice if the domain name is accountsonlin.jim.com?

 Indeed, given that the correct domain name is not citicard, am I likely to
 notice if the domain name is Istealyourmoney.ru?

Quite so. This is why PKI does not solve phishing.
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Why anon-DH is less damaging than current browser PKI (a rant in five paragraphs)

2013-01-08 Thread Thierry Moreau

ianG wrote:

On 8/01/13 15:16 PM, Adam Back wrote:


[...] a story about how their bank is just totally 
hopeless.

[...]
So.  Totally hopeless.  A recipe for disaster.

Obviously we cannot fix this.  But what we can do is decide who is 
responsible, and decide how to make them carry that responsibility.


Hence the question.  Who is responsible for phishing?

Vendor?  CA?  User?  Bank?  SSL techies?



If it's about liability allocation, I'll leave others to comment.

If it's about what might be envisioned by each actor group, I have an 
observation about SSL techies. I guess I qualify as a member of that group, 
but my difficulty is in training other techies about the consequences of 
crypto scientific/academic results.


Two cases where SSL techies seem hopeless (to me) in applying academic 
results:


The MD5 brokenness got serious attention from the PKI community only 
when an actual collision was shown on a real certificate, no sooner 
(this particular work has little value as a scientific contribution 
beyond its industrial impact). Even worse, the short-term patch of 
randomising certificate serial numbers has become "best practice", and 
PKI techies now come up with fantasies about its rationale (see the 
discussion starting at 
http://www.ietf.org/mail-archive/web/pkix/current/msg32098.html ).


Dan Bernstein demonstrated (with the help of colleagues) that the DNSSEC 
NSEC3 mechanism, introduced as a DNS zone-walking countermeasure, comes 
with an off-line dictionary-attack vulnerability. DNSSEC techies just 
flamed the messenger (on other grounds), ignored the warning, and quietly 
left the vulnerability in oblivion. Professor Bernstein moved on to other 
issues.


For the record, DNS zone walking is a DNS privacy threat introduced by 
plain DNSSEC (e.g. the attacker quickly discovers 
s12e920be.atm-network.example.com because atm-network.example.com is 
DNSSEC-signed without NSEC3). The development of the NSEC3 patch delayed 
DNSSEC protocol completion by a few years. Prof. Bernstein's presentation 
came after the DNSSEC RFCs were done.
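
To make the off-line nature of the attack concrete, here is a minimal 
sketch (it assumes the attacker has already walked the zone's NSEC3 chain 
and decoded the hashed owner names from base32hex to raw bytes; the salt, 
iteration count and candidate labels below are made up):

#!/usr/bin/env python3
# Hypothetical sketch of the off-line dictionary attack described above.
# Once the NSEC3 parameters (salt, iterations) and the hashed owner names
# have been collected, no further queries to the name server are needed.
import hashlib

def nsec3_hash(name: str, salt: bytes, iterations: int) -> bytes:
    """RFC 5155 hash: SHA-1 over the wire-format owner name, iterated."""
    # Canonical wire format: lowercase labels, length-prefixed, root label last.
    wire = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.lower().rstrip(".").split(".")
    ) + b"\x00"
    digest = hashlib.sha1(wire + salt).digest()
    for _ in range(iterations):
        digest = hashlib.sha1(digest + salt).digest()
    return digest

def dictionary_attack(collected: set, zone: str, candidates, salt: bytes,
                      iterations: int):
    """Yield guessed names whose NSEC3 hash appears in the walked zone."""
    for label in candidates:
        name = f"{label}.{zone}"
        if nsec3_hash(name, salt, iterations) in collected:
            yield name

# Example with made-up parameters:
# hits = list(dictionary_attack(collected_hashes, "example.com",
#                               ["www", "mail", "atm-network"],
#                               bytes.fromhex("ab"), 10))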


So, when trying to promote an IT security innovation (e.g. if phishing 
could be reduced by some scheme that would protect the banks against 
their own incompetence), the typical expert in the audience is subject 
to this kind of short-sightedness about established practice.


So, I would welcome any strategy to make academic results and IT 
security innovation more palatable to IT experts. That is how I feel 
responsible for the hopeless phishing minefield!


Regards,

--
- Thierry Moreau

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] Fwd: Last Call: draft-laurie-pki-sunlight-05.txt (Certificate Transparency) to Experimental RFC

2013-01-08 Thread Stephen Farrell

There's been a bit of discussion about CT on this list
in the last few days.

If you've comments on CT then they'd be timely now, since
we're running an IETF last call on the draft (ends on Jan
24) with a view to pushing it out as an experimental track
RFC.

Cheers,
S.

PS: If you want to comment but aren't sure how, mail me
offlist.


-------- Original Message --------
Subject: Last Call: draft-laurie-pki-sunlight-05.txt (Certificate
Transparency) to Experimental RFC
Date: Thu, 20 Dec 2012 11:33:58 -0800
From: The IESG iesg-secret...@ietf.org
Reply-To: i...@ietf.org
To: IETF-Announce ietf-annou...@ietf.org


The IESG has received a request from an individual submitter to consider
the following document:
- 'Certificate Transparency'
  draft-laurie-pki-sunlight-05.txt as Experimental RFC

The IESG plans to make a decision in the next few weeks, and solicits
final comments on this action. Please send substantive comments to the
i...@ietf.org mailing lists by 2013-01-24. Exceptionally, comments may be
sent to i...@ietf.org instead. In either case, please retain the
beginning of the Subject line to allow automated sorting.

Abstract


   The aim of Certificate Transparency is to have every public end-
   entity (for example, web servers) and intermediate TLS certificate
   issued by a known Certificate Authority recorded in one or more
   certificate logs.  In order to detect misissuance of certificates,
   all logs are publicly auditable.  In particular, domain owners or
   their agents will be able to monitor logs for certificates issued on
   their own domain.

   To protect clients from unlogged misissued certificates, each log
   signs all certificates it records, and clients can choose not to
   trust certificates that are not accompanied by an appropriate log
   signature.  For privacy and performance reasons log signatures are
   embedded in the TLS handshake via the TLS authorization extension, in
   a stapled OCSP extension, or in the certificate itself via an X.509v3
   certificate extension.

   To ensure a globally consistent view of any particular log, each log
   also provides a global signature over the entire log.  Any
   inconsistency of logs can be detected through cross-checks on the
   global signature.  Consistency between any pair of global signatures,
   corresponding to snapshots of a particular log at different times,
   can be efficiently shown.

   Logs are only expected to certify that they have seen a certificate,
   and thus we do not specify any revocation mechanism for log
   signatures in this document.  Logs are append-only, and log
   signatures do not expire.





The file can be obtained via
http://datatracker.ietf.org/doc/draft-laurie-pki-sunlight/

IESG discussion can be tracked via
http://datatracker.ietf.org/doc/draft-laurie-pki-sunlight/ballot/




___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Fwd: Last Call: draft-laurie-pki-sunlight-05.txt (Certificate Transparency) to Experimental RFC

2013-01-08 Thread CodesInChaos
You're using a classical Merkle tree which has only two tags, one for
leaves and one for inner nodes.
This doesn't have any practical weaknesses, but it lowers second pre-image
resistance from 2^256 to (2^256)/n, where n is the number of nodes. For
realistic input sizes that's still above 2^200, so it doesn't really matter.
I believe a construction that uses unique tags for each node (for example,
using a (depth, node-index) pair) is a bit stronger, since it avoids this
issue.
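
For concreteness, a minimal sketch of the two-tag construction in 
question, following my reading of draft-laurie-pki-sunlight (0x00 
prefixes leaf hashes, 0x01 prefixes interior nodes, SHA-256 throughout; 
the function names are mine, not the draft's):

import hashlib

def leaf_hash(entry: bytes) -> bytes:
    return hashlib.sha256(b"\x00" + entry).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(b"\x01" + left + right).digest()

def merkle_tree_hash(entries: list) -> bytes:
    """Root hash over a list of log entries (empty list -> hash of "")."""
    n = len(entries)
    if n == 0:
        return hashlib.sha256(b"").digest()
    if n == 1:
        return leaf_hash(entries[0])
    # Split at the largest power of two strictly less than n, as in the draft.
    k = 1
    while k * 2 < n:
        k *= 2
    return node_hash(merkle_tree_hash(entries[:k]), merkle_tree_hash(entries[k:]))

# Because only two tags exist, every interior node is hashed the same way;
# a second preimage for any one of the ~n interior values collides the
# root, which is where the 2^256/n factor above comes from.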
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] openssl on git

2013-01-08 Thread Jeffrey Walton
On Tue, Jan 1, 2013 at 1:02 PM, Ben Laurie b...@links.org wrote:
 We're experimenting with moving openssl to git. Again.

 We've tried an import using cvs2git - does anyone have any views on
 better tools?

 You can see the results here (not all branches pushed to github yet,
 let me know if there's a particular branch you'd like me to add):
 https://github.com/benlaurie/openssl.

 Any comments?
Would you consider adding a hook to git (assuming it includes the ability)?

Have the hook replace tabs with white space. This is necessary because
different editors render tabs in different widths, so white space
makes things consistent for everyone.

Then have the hook format the source code against some standard. I
don't care which, as long as it's consistent.
Jeff
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] openssl on git

2013-01-08 Thread Ben Laurie
On 8 January 2013 18:06, Jeffrey Walton noloa...@gmail.com wrote:
 On Tue, Jan 1, 2013 at 1:02 PM, Ben Laurie b...@links.org wrote:
 We're experimenting with moving openssl to git. Again.

 We've tried an import using cvs2git - does anyone have any views on
 better tools?

 You can see the results here (not all branches pushed to github yet,
 let me know if there's a particular branch you'd like me to add):
 https://github.com/benlaurie/openssl.

 Any comments?
 Would you consider adding a hook to git (assuming it include the ability).

 Have the hook replace tabs with white space. This is necessary because
 different editors render tabs in different widths. So white space
 makes thing consistent for everyone.

 Then have the hook format the source code against some standard. I
 don't care which, as long as its consistent.

Funnily enough we've been discussing this.

The problem is that the vast majority of the code is formatted to look
OK with pure tab indentation, which means that with tabs set to a
standard 8 columns it sucks (though that does appear to be what was
used in the original code base).

People are reluctant to change _all_ the code for the sake of
indentation and so the current favourite option is to adopt a code
style that works with different tab widths (specifically 4 and 8
column) and use no spaces. I realise that almost no-one works this
way, but it does seem like a sensible option in this case.

I would certainly like to have an agreed style that is consistent.
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] openssl on git

2013-01-08 Thread Jeffrey Walton
On Tue, Jan 8, 2013 at 1:21 PM, Ben Laurie b...@links.org wrote:
 On 8 January 2013 18:06, Jeffrey Walton noloa...@gmail.com wrote:
 On Tue, Jan 1, 2013 at 1:02 PM, Ben Laurie b...@links.org wrote:
 We're experimenting with moving openssl to git. Again.

 We've tried an import using cvs2git - does anyone have any views on
 better tools?

 You can see the results here (not all branches pushed to github yet,
 let me know if there's a particular branch you'd like me to add):
 https://github.com/benlaurie/openssl.

 Any comments?
 Would you consider adding a hook to git (assuming it include the ability).

 Have the hook replace tabs with white space. This is necessary because
 different editors render tabs in different widths. So white space
 makes thing consistent for everyone.

 Then have the hook format the source code against some standard. I
 don't care which, as long as its consistent.

 Funnily enough we've been discussing this.

 The problem is that the vast majority of the code is formatted to look
 OK with pure tab indentation. Which means that with tabs set to a
 standard 8 columns it sucks (though that does appear to be what was
 used in the original code base).
I [respectfully] think the fallacy is "standard 8 columns". Two
problems: (1) I use either 4 columns or 2 columns on occasion. (2) I'm
not aware of editors adhering to such standards (gedit vs emacs vs
Visual Studio vs Xcode vs Notepad vs ...). They all seem to get white
space correct, though.

 People are reluctant to change _all_ the code for the sake of
 indentation and so the current favourite option is to adopt a code
 style that works with different tab widths (specifically 4 and 8
 column) and use no spaces. I realise that almost no-one works this
 way, but it does seem like a sensible option in this case.
Again, spaces are known and every editor gets them right.

 I would certainly like to have an agreed style that is consistent.
Yes, I'm not making any recommendations. Let the Dev Team choose what
they want amongst themselves, and then do it consistently. The git
plug-in will ensure it's applied consistently without prejudice.

Jeff
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] openssl on git

2013-01-08 Thread Jeffrey Altman
On 1/8/2013 1:21 PM, Ben Laurie wrote:
 On 8 January 2013 18:06, Jeffrey Walton noloa...@gmail.com wrote:
 Would you consider adding a hook to git (assuming it include the ability).

 Have the hook replace tabs with white space. This is necessary because
 different editors render tabs in different widths. So white space
 makes thing consistent for everyone.

 Then have the hook format the source code against some standard. I
 don't care which, as long as its consistent.

Git does support hooks but any hook that modifies the patchset must be
executed on the system that performs the commit to the local repository.
Otherwise the sha1 of the patch in the local repository does not match
the commit in the remote repository.

What you can do is apply a hook in the upstream repository that rejects
patches that fail various tests.
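
As a minimal sketch of such a reject-only hook (a server-side pre-receive 
script in Python; the tab check merely stands in for whatever style tests 
the project settles on):

#!/usr/bin/env python3
# Hypothetical pre-receive hook: it only ever accepts or rejects a push,
# never rewrites it, so the commit SHA-1s are left alone.
import subprocess
import sys

ZERO = "0" * 40                      # git's "no previous value" marker
EMPTY_TREE = "4b825dc642cb6eb9a060e54bf8d69288fbee4904"  # well-known empty tree

def added_lines(old: str, new: str):
    """Yield the lines added between two commits (or from scratch for new refs)."""
    base = EMPTY_TREE if old == ZERO else old
    diff = subprocess.run(["git", "diff", "-U0", base, new],
                          capture_output=True, text=True, check=True).stdout
    for line in diff.splitlines():
        if line.startswith("+") and not line.startswith("+++"):
            yield line[1:]

def main() -> int:
    for update in sys.stdin:             # one "old new ref" line per pushed ref
        if not update.strip():
            continue
        old, new, ref = update.split()
        if new == ZERO:                  # ref deletion: nothing to check
            continue
        for line in added_lines(old, new):
            if "\t" in line:
                print(f"rejected: {ref} adds a tab character", file=sys.stderr)
                return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())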

OpenAFS uses Gerrit as a patch review system.  One of the benefits of
Gerrit is that it highlights whitespace errors.  Our policy is that
patches submitted to Gerrit will fail review if the formatting is
incorrect.  We block bad patchsets from ending up in the repository in
that manner.

 Funnily enough we've been discussing this.
 
 The problem is that the vast majority of the code is formatted to look
 OK with pure tab indentation. Which means that with tabs set to a
 standard 8 columns it sucks (though that does appear to be what was
 used in the original code base).
 
 People are reluctant to change _all_ the code for the sake of
 indentation and so the current favourite option is to adopt a code
 style that works with different tab widths (specifically 4 and 8
 column) and use no spaces. I realise that almost no-one works this
 way, but it does seem like a sensible option in this case.
 
 I would certainly like to have an agreed style that is consistent.

OpenAFS has performed various code formatting cleanups over the years
across the entire code base.  The major downside of such efforts is that
git pickaxe or git blame show less meaningful output because the
entire tree has been altered.   However, openssl is in the process of
converting the repository from cvs to git.  One of the things that could
be done in the process is to reformat each patchset from cvs before
committing it to git.  This would ensure the history is maintained while
also ensuring format consistency across the tree.

Jeffrey Altman



___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Key Archive Formats and Pinsets (Certificate Pinning)?

2013-01-08 Thread Trevor Perrin
On Tue, Jan 8, 2013 at 6:39 AM, Tom Ritter t...@ritter.vg wrote:

 On 6 January 2013 17:55, Jeffrey Walton noloa...@gmail.com wrote:
   Hi All,
 
  Does anyone know if there is a standard extension to store pin sets
  (re: certificate pinning) in, for example, PKCS #12?
 
  Perhaps in another format?
 
  OIDs?
 
  Placing a pinset in a PKCS #12 certificate  (or other format) kills
  two key distribution problems with one stone.


 I believe at one point TACK (tack.io) had a configuration where the
 pins could be specified in an X509 extension, but this seems to be
 missing



Yep - we were considering that, but decided against it:

TACK (for those not familiar) is a way for a server to assert that it
should be pinned, within the TLS handshake.

The current draft [1] specifies presenting TACK data within a TLS
extension, which means changing SSL libraries on the client and server side.

To avoid that deployment hurdle, we earlier had an option to allow TACK
data within an X.509 extension.  Two issues with that:

1) It would allow a CA to create certificates that assert a pin that only
the CA has control of; customers might deploy such a certificate without
noticing, and end up inadvertently locked-in to the CA.

2) For web servers to make use of TACK without a cooperating CA, they would
need to insert a superfluous certificate into the chain presented by the
TLS server, i.e. a certificate that is not part of the validated chain the
client will construct.

This violates the TLS RFCs, but almost all browsers ignore superfluous
certs.  However, Google's Certificate Transparency was considering the
superfluous cert idea as well, and did more experiments, and found some
incompatibilities on (as I recall) a couple older mobile devices.


So, to avoid these problems we just bit the bullet and went with a TLS
Extension.


Trevor

[1] http://tools.ietf.org/html/draft-perrin-tls-tack-02
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Key Archive Formats and Pinsets (Certificate Pinning)?

2013-01-08 Thread Jeffrey Walton
On Tue, Jan 8, 2013 at 1:44 PM, Trevor Perrin tr...@trevp.net wrote:

 On Tue, Jan 8, 2013 at 6:39 AM, Tom Ritter t...@ritter.vg wrote:

 On 6 January 2013 17:55, Jeffrey Walton noloa...@gmail.com wrote:
   Hi All,
 
  Does anyone know if there is a standard extension to store pin sets
  (re: certificate pinning) in, for example, PKCS #12?
 
  Perhaps in another format?
 
  OIDs?
 
  Placing a pinset in a PKCS #12 certificate  (or other format) kills
  two key distribution problems with one stone.


 I believe at one point TACK (tack.io) had a configuration where the
 pins could be specified in an X509 extension, but this seems to be
 missing

 Yep - we were considering that, but decided against it:

 TACK (for those not familiar) is a way for a server to assert that it
 should be pinned, within the TLS handshake.

 The current draft [1] specifies presenting TACK data within a TLS extension,
 which means changing SSL libraries on the client and server side.

 To avoid that deployment hurdle, we earlier had an option to allow TACK data
 within an X.509 extension.  Two issues with that:

 1) It would allow a CA to create certificates that assert a pin that only
 the CA has control of; customers might deploy such a certificate without
 noticing, and end up inadvertently locked-in to the CA.
Notwithstanding lock-in, the CA industry has proven itself to be
untrustworthy, so I think it's a good choice. My apologies to the
innocent parties lumped in there (for completeness, Trustwave is not
one of them, despite Mozilla's back room dealings).

 2) For web servers to make use of TACK without a cooperating CA, they would
 need to insert a superfluous certificate into the chain presented by the
 TLS server, i.e. a certificate that is not part of the validated chain the
 client will construct.

 This violates the TLS RFCs, but almost all browsers ignore superfluous
 certs.  However, Google's Certificate Transparency was considering the
 superfluous cert idea as well, and did more experiments, and found some
 incompatibilities on (as I recall) a couple older mobile devices.
Rather than TACK, why not add the extension directly, without the need
for a separate call to request the TACK (and the need for the
placeholder), covering everything under the one signature?

I can get a two year certificate for under $20 US. Those things are
throwaway. Plus, I would think ISPs would offer it as a value added
service (a new certificate, which includes the pin set).

*

Traditionally, a TLS client verifies a TLS server's public key using
a certificate chain issued by some public CA.

The world created smarter mice: http://bugs.python.org/issue1589 and
http://blog.spiderlabs.com/2011/07/twsl2011-007-ios-ssl-implementation-does-not-validate-certificate-chain.html.



Also, I noticed the draft lacked the word "critical". The solution
would seem to beg for a critical extension that becomes critical after
a grace period, or a "CriticalAfter" extension, which is woefully
missing (IIRC).

I believe it solves the server-software-never-updated syndrome.
Clients with updated software will do the right thing when they
encounter the OID after a certain date, even if the server does not
mark the extension as critical.

Downlevel clients that never update (Windows Mobile, many Android, et
al) don't know what to do with the OID, so they can shit or go blind.
It does not really matter to me since the platform is defective (full
of exploitable security bugs). Users don't use it; and I don't allow
it in the Enterprise without a knuckle pounding fight.

Plus, waiting for the industry to do the right thing and provide
security-related updates is hopeless in most cases, and I don't
believe it will be fixed until it's mandated by law. Mandating updates
won't work unless the penalties are severe enough to upset the
risk/benefit equations. So we need untainted legislation that was not
influenced by special interest groups. That's an even more difficult
problem.

Jeff
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] So, PKI lets know who we're doing business with?

2013-01-08 Thread Thor Lancelot Simon
What do you do if even they don't know?  Today I tried to help someone
who was mid-transaction on Amex's cardholder web site, associating a
new card with their account, when the next step of their process hopped
us over to https://www203.americanexpress.com.

Which has an EV certificate from VeriSign that's been expired since
October last year.  Of course this is more likely due to error than
malfeasance, but nonetheless.  It's what it would look like, eventually,
if an attacker stole a private key just once, right?  So this isn't
something you want to go typing your financial secrets into.
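
For what it's worth, this kind of check is easy to reproduce from a 
script; a minimal sketch using only the Python (3.7+) standard library, 
with the hostname taken from above:

import socket
import ssl

def check_tls(host: str, port: int = 443) -> None:
    """Report whether the default trust store accepts the server's chain."""
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
                print(f"{host}: chain verified, notAfter = {cert.get('notAfter')}")
    except ssl.SSLCertVerificationError as e:
        # e.g. "certificate has expired" for the case described above
        print(f"{host}: verification failed: {e.verify_message}")

check_tls("www203.americanexpress.com")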

Approximately an hour on the phone with American Express produced
approximately as much head-scratching among Amex employees as on my
end.  An expired certificate for a back-end server isn't among the
problems their online services help desk knows how to test for or
report.  Their fraud protection department refers all complaints
of web site misbehavior, even security-related, to their online services
help desk.  Their high-limit corporate card support team can create
tickets in their web development queue but evidently does not have
contact information for any relevant security department at American
Express.  The technical contacts for their domain don't answer the
phone.

In other words, even *they* don't know if the certificate in question
really vouches for them or not, and don't have any way to find out.

Can we really expect that end users will ever get that decision right?
Sure.  Sure we can.

Thor
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] So, PKI lets know who we're doing business with?

2013-01-08 Thread Bernie Cosell
On 8 Jan 2013 at 17:06, Jeffrey Walton wrote:

  https://www203.americanexpress.com
 That's not too egregious (though its bad). What frustrates me is when
 they send you to a different domain for the authentication or a
 transaction. I won't add sites to the trusted base just because a web
 master thought it was a good idea.

Similar thing for me: One of my accounts is with StellarOne bank.  The main 
site is not HTTPS, nor does it redirect.  But when you put in your account ID 
you get sent to netteller.com with a certificate owned by Jack Henry and 
Associates, Inc.  I called the bank to ask about that and they said that 
that's OK -- Jack Henry, whoever that is, handles their online banking 
machinery, so it is a legit redirect.  But it sure was disconcerting...

  /Bernie\

-- 
Bernie Cosell Fantasy Farm Fibers
mailto:ber...@fantasyfarm.com Pearisburg, VA
--  Too many people, too few sheep  --   



___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] ECIES (Elliptic Curve Integrated Encryption System)

2013-01-08 Thread Jeffrey Walton
On Fri, Nov 30, 2012 at 1:14 AM, Jeffrey Walton noloa...@gmail.com wrote:

 I'm getting ready to move some code to elliptic curves, and I'd like
 to look at ECIES for an off the shelf solution. Is anyone aware of
 defects in Shoup's ECIES (Elliptic Curve Integrated Encryption
 System)?

 I know I'm going to need to change some parameters to meet security
 goals. For example, I will need AES-256 and HMAC-SHA256. But I don't
 believe it will introduce any negative interactions.
I found DHAES (DHIES) schemes were a bit better for both security and
efficiency. DHAES had weaker assumptions, and enjoyed a little more
efficiency due to a cofactor of 2 or 4.

Jeff
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] openssl on git

2013-01-08 Thread Nico Williams
On Tue, Jan 8, 2013 at 12:06 PM, Jeffrey Walton noloa...@gmail.com wrote:
 Would you consider adding a hook to git (assuming it include the ability).

 Have the hook replace tabs with white space. This is necessary because
 different editors render tabs in different widths. So white space
 makes thing consistent for everyone.

Hooks shouldn't modify the commit, just accept or reject.

Nico
--
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] openssl on git

2013-01-08 Thread Jeffrey Walton
On Tue, Jan 8, 2013 at 9:30 PM, Nico Williams n...@cryptonector.com wrote:
 On Tue, Jan 8, 2013 at 12:06 PM, Jeffrey Walton noloa...@gmail.com wrote:
 Would you consider adding a hook to git (assuming it include the ability).

 Have the hook replace tabs with white space. This is necessary because
 different editors render tabs in different widths. So white space
 makes thing consistent for everyone.

 Hooks shouldn't modify the commit, just accept or reject.
Thanks Nico.

Out of curiosity: what does one typically do when there's a standard
policy to enforce? I [personally] would not reject a check-in for
whitespace (I would reject for many other reasons, though - such as
CompSci 101 omissions).

Perhaps allow the check-in to proceed unmolested, and then have a
second process run after the commit to perform policy enforcement (for
example, whitespace or coding style). In this scenario, would the
second process perform a second commit?

Jeff
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] openssl on git

2013-01-08 Thread Eitan Adler
On 9 January 2013 00:08, Jeffrey Walton noloa...@gmail.com wrote:
 On Tue, Jan 8, 2013 at 9:30 PM, Nico Williams n...@cryptonector.com wrote:
 On Tue, Jan 8, 2013 at 12:06 PM, Jeffrey Walton noloa...@gmail.com wrote:
 Would you consider adding a hook to git (assuming it include the ability).

 Have the hook replace tabs with white space. This is necessary because
 different editors render tabs in different widths. So white space
 makes thing consistent for everyone.

 Hooks shouldn't modify the commit, just accept or reject.
 Thanks Nico.

 Out of curiosity: what does one typically do when there's a standard
 policy to enforce? I [personally] would not reject a check-in for
 whitespace (I would reject for many other reasons, though - such as
 CompSci 101 omissions).

The standard policy is usually enforced via post-commit code review and
lots of yelling, or via commit hooks which look for specific indicators
and reject the commit if they fail.

 Perhaps allow the check-in to proceed unmolested, and then have a
 second process run after the commit to perform policy enforcement (for
 example, whitespace or coding style). In this scenario, would the
 second process perform a second commit?

This can break git blame.
I prefer a manual tool, "make fixlint" or the like, which does the
whitespace fixes.
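
Roughly what I have in mind, as a sketch only (the fixlint name, the 
8-column tab stop and the file filter are assumptions, not anything 
OpenSSL has agreed on):

#!/usr/bin/env python3
# Hypothetical "make fixlint" helper: run it by hand, review the diff,
# then commit, so git blame only takes the hit when you choose to.
import subprocess
from pathlib import Path

TAB_WIDTH = 8
EXTENSIONS = {".c", ".h", ".pl"}     # assumed set of files worth touching

def tracked_files():
    """Only rewrite files git already tracks, so nothing new sneaks in."""
    out = subprocess.run(["git", "ls-files"], capture_output=True,
                         text=True, check=True).stdout
    return [Path(p) for p in out.splitlines() if Path(p).suffix in EXTENSIONS]

def fix(path: Path) -> bool:
    original = path.read_text()
    fixed = "\n".join(
        line.expandtabs(TAB_WIDTH).rstrip()  # tabs -> spaces, drop trailing blanks
        for line in original.splitlines()
    ) + "\n"
    if fixed != original:
        path.write_text(fixed)
        return True
    return False

if __name__ == "__main__":
    changed = [p for p in tracked_files() if fix(p)]
    print(f"rewrote {len(changed)} file(s)")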

-- 
Eitan Adler
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] openssl on git

2013-01-08 Thread Nico Williams
On Tue, Jan 8, 2013 at 11:08 PM, Jeffrey Walton noloa...@gmail.com wrote:
 On Tue, Jan 8, 2013 at 9:30 PM, Nico Williams n...@cryptonector.com wrote:
 On Tue, Jan 8, 2013 at 12:06 PM, Jeffrey Walton noloa...@gmail.com wrote:
 Would you consider adding a hook to git (assuming it include the ability).

 Have the hook replace tabs with white space. This is necessary because
 different editors render tabs in different widths. So white space
 makes thing consistent for everyone.

 Hooks shouldn't modify the commit, just accept or reject.
 Thanks Nico.

 Out of curiosity: what does one typically do when there's a standard
 policy to enforce? I [personally] would not reject a check-in for
 whitespace (I would reject for many other reasons, though - such as
 CompSci 101 omissions).

A number of projects I've worked on (particularly Solaris, but not
only) absolutely reject pushes of code (and docs, and tests, and build
goop) that fail style and other tests.  Some even go so far as to
trigger an incremental build to check that all is ok (but this is rarely
done synchronously, and so build failures -> email, possible
backout, ...).

 Perhaps allow the check-in to proceed unmolested, and then have a
 second process run after the commit to perform policy enforcement (for
 example, whitespace or coding style). In this scenario, would the
 second process perform a second commit?

Fast checks should be done synchronously, and failure -> push rejection.

Slow checks should be done asynchronously, and failure -> nastygram.

Slow check failures can be corrected in either of two ways: backout
(mostly to be avoided, except when nearing releases or build
milestones) or subsequent push to fix the issues.

The more you can check quickly, the better:

 - *style (C style, Java style, JS style, ...)
 - referential integrity / software engineering process (commit
references bug report, bug report is in correct state, if the bug
report indicates that the fix should have docs impact then check that
docs are updated, check that codereview has happened and the code has
been signed off, or perhaps that code reviewers are listed, ...)

Slower checks:

 - build
 - static bug analysis (including, for languages that need it, *lint)
 - tests

Do all this, and your life will be easier.  Every hour you put into
writing checkers of this sort will pay for itself many times over for
any sufficiently large project.
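
As a sketch of making the fast checks available to developers before they 
push (the checker script names below are placeholders, not real gate 
checkers):

#!/usr/bin/env python3
# Hypothetical client-side pre-push hook: run the fast checks locally so
# a push is rejected before it ever reaches the gate.
import subprocess
import sys

FAST_CHECKS = [
    ["./tools/check-style.sh"],          # assumed style checker
    ["./tools/check-bug-reference.sh"],  # assumed bug/commit cross-reference check
]

def main() -> int:
    for cmd in FAST_CHECKS:
        if subprocess.run(cmd).returncode != 0:
            print(f"pre-push: {' '.join(cmd)} failed; push aborted",
                  file=sys.stderr)
            return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())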

Some such checkers can easily be found by searching around.  The
Solaris gate checks are, IIRC, in OpenSolaris and derivatives, for
example, but I've seen others.

Nico
--
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] openssl on git

2013-01-08 Thread Nico Williams
And, of course, *all* the gate checkers need to be available to the
developer, so *they* can run them first.  No trial and error please.

(One quickly learns to code in the target upstream's style and other
requirements.)
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography