Re: [cryptography] Let's go back to the beginning on this

2011-09-14 Thread Ralph Holz
Hi,

>> Yes, with the second operation offline and validating against the NSS
>> root store. I don't have a MS one at the moment, it would be interesting
>> (how do you extract that from Win? The EFF guys should know)
>
> You might look at https://www.eff.org/files/ssl-observatory-code-r1.tar_.bz2
> in the microsoft_CAs directory.

Yes, I found that, but it seems to contain a snapshot of the PEMs from
the time of the EFF crawl in 2010, so it may be outdated. I would like
to obtain a fresh copy. How did you go about it: did you compile it
manually, or is there a programmatic way to extract it directly from
Windows, perhaps by polling Microsoft?
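One programmatic route, as an untested sketch: on Windows, the built-in certutil tool can enumerate the Root store, and the DER-to-PEM conversion is just base64 wrapping. The `-split` flag and the resulting blob filenames are assumptions about your certutil version; only the PEM helper is platform-neutral.

```python
# Untested sketch: dump the Windows root store to PEM files.
# "certutil -store Root" prints the machine's Root store; the "-split"
# flag (assumption: supported by your certutil version) additionally
# writes each certificate out as a binary DER blob in the current
# directory. The DER -> PEM helper below is plain base64 wrapping.
import base64
import subprocess
import sys
import textwrap

def der_to_pem(der):
    """Wrap raw DER certificate bytes in a PEM envelope."""
    b64 = base64.b64encode(der).decode("ascii")
    body = "\n".join(textwrap.wrap(b64, 64))
    return "-----BEGIN CERTIFICATE-----\n%s\n-----END CERTIFICATE-----\n" % body

def dump_windows_root_store():
    # Windows-only; certutil writes files named like "Blob0_0.crt" here.
    subprocess.run(["certutil", "-split", "-store", "Root"], check=True)

if sys.platform == "win32":
    dump_windows_root_store()
```

This only captures the store as currently installed on that machine; Windows also fetches additional roots on demand via its update mechanism, so a local dump may undercount.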

Ralph

-- 
Dipl.-Inform. Ralph Holz
I8: Network Architectures and Services
Technische Universität München
http://www.net.in.tum.de/de/mitarbeiter/holz/



___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] announcing Tahoe-LAFS v1.8.3, fixing a security issue

2011-09-14 Thread Zooko O'Whielacronx

Dear People of the cryptography@randombit.net mailing list:

We found a vulnerability in Tahoe-LAFS (all versions from v1.3.0 to v1.8.2
inclusive) that might allow an attacker to delete files. This vulnerability
does not enable anyone to read file contents without authorization
(confidentiality), nor to change the contents of a file (integrity). How
exploitable this vulnerability is depends upon some details of how you use
Tahoe-LAFS. If you upgrade your Tahoe-LAFS storage server to v1.8.3, this
fixes the vulnerability.

We've written detailed docs about the issue and how to manage it in
the Known Issues document:

http://tahoe-lafs.org/trac/tahoe-lafs/browser/trunk/docs/known_issues.rst

I am sorry that we introduced this bug into Tahoe-LAFS and allowed it
to go undetected until now. We aim for a high standard of security and
reliability in Tahoe-LAFS, and we're not satisfied until our users are
safe from threats to their data.

We've been working with the packagers who maintain packages of
Tahoe-LAFS in various operating systems, so if you get your Tahoe-LAFS
through your operating system there may already be a fixed version
available:

http://tahoe-lafs.org/trac/tahoe-lafs/wiki/OSPackages

Please contact us through the tahoe-dev mailing list if you have
further questions.

Regards,

Zooko Wilcox-O'Hearn

ANNOUNCING Tahoe, the Least-Authority File System, v1.8.3

The Tahoe-LAFS team announces the immediate availability of version 1.8.3 of
Tahoe-LAFS, an extremely reliable distributed storage system. Get it here:

http://tahoe-lafs.org/trac/tahoe-lafs/browser/trunk/docs/quickstart.rst

Tahoe-LAFS is the first distributed storage system to offer
provider-independent security — meaning that not even the
operators of your storage servers can read or alter your data
without your consent. Here is the one-page explanation of its
unique security and fault-tolerance properties:

http://tahoe-lafs.org/source/tahoe/trunk/docs/about.html

The previous stable release of Tahoe-LAFS was v1.8.2, which was
released January 30, 2011 [1].

v1.8.3 is a stable bugfix release which fixes a security issue. See
[2] and the known_issues.rst file [3] for details.


WHAT IS IT GOOD FOR?

With Tahoe-LAFS, you distribute your filesystem across
multiple servers, and even if some of the servers fail or are
taken over by an attacker, the entire filesystem continues to
work correctly, and continues to preserve your privacy and
security. You can easily share specific files and directories
with other people.

In addition to the core storage system itself, volunteers
have built other projects on top of Tahoe-LAFS and have
integrated Tahoe-LAFS with existing systems, including
Windows, JavaScript, iPhone, Android, Hadoop, Flume, Django,
Puppet, bzr, mercurial, perforce, duplicity, TiddlyWiki, and
more. See the Related Projects page on the wiki [4].

We believe that strong cryptography, Free and Open Source
Software, erasure coding, and principled engineering practices
make Tahoe-LAFS safer than RAID, removable drive, tape,
on-line backup or cloud storage.
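The fault-tolerance claim rests on erasure coding. As a toy illustration of the principle only (Tahoe-LAFS itself uses Reed-Solomon coding via the zfec library, with configurable k-of-n shares, 3-of-10 by default), here is a 2-of-3 scheme built from XOR parity: any two of the three shares reconstruct the file.

```python
# Toy 2-of-3 erasure code using XOR parity. Illustration only -- not
# Tahoe's actual encoder. Two data shares plus one parity share; the
# data survives the loss of any single share.

def encode(data):
    if len(data) % 2:            # pad to even length
        data += b"\x00"
    half = len(data) // 2
    a, b = data[:half], data[half:]
    parity = bytes(x ^ y for x, y in zip(a, b))
    return {"a": a, "b": b, "p": parity}

def decode(shares, length):
    # Any two of the three shares suffice: a missing data share is
    # recovered by XORing the surviving data share with the parity.
    if "a" in shares and "b" in shares:
        a, b = shares["a"], shares["b"]
    elif "a" in shares:
        a = shares["a"]
        b = bytes(x ^ y for x, y in zip(a, shares["p"]))
    else:
        b = shares["b"]
        a = bytes(x ^ y for x, y in zip(b, shares["p"]))
    return (a + b)[:length]

msg = b"provider-independent"
shares = encode(msg)
del shares["b"]                  # one storage server fails
assert decode(shares, len(msg)) == msg
```

Reed-Solomon generalizes this to arbitrary k-of-n at only n/k storage overhead, which is why it beats plain replication.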

This software is developed under test-driven development, and
there are no known bugs or security flaws which would
compromise confidentiality or data integrity under recommended
use. (For all important issues that we are currently aware of
please see the known_issues.rst file [3].)


COMPATIBILITY

This release is compatible with the version 1 series of
Tahoe-LAFS. Clients from this release can write files and
directories in the format used by clients of all versions back
to v1.0 (which was released March 25, 2008). Clients from this
release can read files and directories produced by clients of
all versions since v1.0. Servers from this release can serve
clients of all versions back to v1.0 and clients from this
release can use servers of all versions back to v1.0.

This is the fourteenth release in the version 1 series. This
series of Tahoe-LAFS will be actively supported and maintained
for the foreseeable future, and future versions of Tahoe-LAFS
will retain the ability to read and write files compatible
with this series.


LICENCE

You may use this package under the GNU General Public License,
version 2 or, at your option, any later version. See the file
COPYING.GPL [5] for the terms of the GNU General Public
License, version 2.

You may use this package under the Transitive Grace Period
Public Licence, version 1 or, at your option, any later
version. (The Transitive Grace Period Public Licence has
requirements similar to the GPL except that it allows you to
delay for up to twelve months after you redistribute a derived
work before releasing the source code of your derived work.)
See the file COPYING.TGPPL.html [6] for the terms of the
Transitive Grace Period Public Licence, version 1.

(You may choose to use this package under the terms of either
licence, at your option.)


INSTALLATION

Tahoe-LAFS works on Linux, Mac OS X, 

Re: [cryptography] Let's go back to the beginning on this

2011-09-14 Thread Ralph Holz
Hi,

>> Well, yes, but it is the Alexa Top 1 million list that is scanned. I can
>> give you a few numbers for the Top 1K or so, too, but it does remain a
>> relative popularity.
>
> How many of those sites ever advertise an HTTPS end-point though?
> Maybe users are extremely unlikely to ever see a link, etc. that
> points to their HTTPS endpoint.

Maybe, but I don't have any numbers on that. However, if someone wants
to do it, a simple way would be to download a site's start page and
check for HTTPS links in the HTML. Then go to each such site, download
the cert, and do the validity checks. Obviously, the sites you reach
that way are likely no longer within the top 1 million.
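For anyone who wants to try it, a rough sketch of that crawl using only the Python standard library (the regex link extraction is deliberately crude, and error handling and redirects are omitted):

```python
# Sketch: fetch a site's start page, collect the hosts behind its
# https:// links, then grab each endpoint's certificate for offline
# chain/hostname checks. Illustrative only -- no error handling.
import re
import ssl
from urllib.parse import urlparse
from urllib.request import urlopen

def extract_https_hosts(html):
    """Return the distinct hostnames of all https:// links in the page."""
    links = re.findall(r'href=["\'](https://[^"\']+)', html)
    return sorted({urlparse(u).hostname for u in links})

def crawl(start_url):
    html = urlopen(start_url).read().decode("utf-8", "replace")
    for host in extract_https_hosts(html):
        # PEM-encoded leaf certificate, ready for validity checks.
        pem = ssl.get_server_certificate((host, 443))
        yield host, pem
```

Note that `ssl.get_server_certificate` returns only the leaf, so full chain validation would need the raw handshake instead.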

Actually, I think Ivan Ristic has done something similar for login forms:

http://blog.ivanristic.com/2011/05/a-study-of-what-really-breaks-ssl.html

His presentation doesn't, however, give any numbers on how often the
encountered certificates were valid (chain, host name) for the login
sites they protect.

Ralph

-- 
Dipl.-Inform. Ralph Holz
I8: Network Architectures and Services
Technische Universität München
http://www.net.in.tum.de/de/mitarbeiter/holz/





[cryptography] Fwd: The Magic Inside Bunnie’s New NeTV « root labs rdist

2011-09-14 Thread David Koontz


http://rdist.root.org/2011/09/13/the-magic-inside-bunnies-new-netv/

A year ago, what was probably the most important Pastebin posting ever was 
released by an anonymous hacker. The HDCP master key gave the ability for 
anyone to derive the keys protecting the link between DVD players and TVs. 
There was no possibility of revocation. The only remaining question was, “who 
would be the first to deploy this key in an HDCP stripper?”

Last week, the HDCP master key was silently deployed, but surprisingly, not in 
a stripper or other circumvention device. Instead, it’s enabling a useful new 
system called the Chumby NeTV. It was created by Bunnie Huang, who is known for 
inventing the Chumby and hacking the Xbox. He’s driving down the cost of 
TV-connected hardware with a very innovative approach.

 ...




[cryptography] Convergence as multiple concurrent, alternate PKIs; also, Convergence business models, privacy, and DNSSEC (not that long)

2011-09-14 Thread Nico Williams
I recently caught up with the rest of you and saw Moxie's Convergence
presentation [on youtube].  I truly hesitate to post here; there have
been so many long posts that any additional ones are likely to result
in tl;dr.

I believe Convergence is... just another PKI, or set of PKIs, with
some twists, granted.

Imagine that Convergence notaries actually behave like CAs and mint
[short-lived] x.509 certs for servers, and that that's what clients
see when they talk to a notary(ies).  Then the notaries would behave
as trust anchors for very shallow (one-level) PKIs.  But the main
differences vis-a-vis the PKI (laugh) we have now are a) the certs
issued would be short-lived, b) the TLS servers and the notaries need
not have a business relationship, c) the notaries can each implement a
variety of notarization policies. All three of these differences add
much value: (a) makes revocation simpler, (b) prevents a conflict of
interest, and (c) provides clients with potentially useful choices.
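If notaries act as trust anchors, the client still needs a policy for combining their answers. A minimal sketch of one such policy, a threshold vote on the observed key fingerprint (the notary names, responses, and threshold here are all illustrative; real Convergence queries notaries over the network):

```python
# Hypothetical client-side quorum check across notaries: accept a
# server key only if at least `threshold` notaries independently
# vouch for the same fingerprint. Responses are simulated.
from collections import Counter

def accept_key(responses, threshold):
    """responses: notary name -> fingerprint it observed (None = no answer)."""
    seen = [fp for fp in responses.values() if fp is not None]
    if not seen:
        return None
    fp, votes = Counter(seen).most_common(1)[0]
    return fp if votes >= threshold else None

responses = {
    "notary-a.example": "ab:cd:ef",
    "notary-b.example": "ab:cd:ef",
    "notary-c.example": "00:11:22",   # disagreeing (or attacked) notary
}
assert accept_key(responses, threshold=2) == "ab:cd:ef"
assert accept_key(responses, threshold=3) is None
```

The threshold is exactly where policy choice (c) lives: unanimity, majority, or "any one notary" give very different security/availability trade-offs.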

Imagine furthermore that the client could ask a TLS server to send the
cert chains for their public keys to multiple trust anchors, naming
notaries as the desired trust anchors.  This costs TLS servers
nothing, for they can fetch the alternative certs
proactively/asynchronously.  This approach solves the Convergence
privacy issue completely.  OTOH, this approach might have an outsize
effect on Convergence's potential business models: if the TLS *servers*
fetch the alternate certs from the Convergence notaries, then some
notaries could pressure web sites to establish a business relationship
with them, thus nullifying Convergence benefit (b) mentioned above.

And if TLS servers have no business relationship with the notaries?
Who pays for the notaries?  I doubt users would...  But many notaries
might be able to operate at ridiculously low costs (I'm not sure; a
notary willing to notarize any site's pubkeys/certs will have to be
prepared to do a lot of signing, and caching, which won't be free).
And if they can't get revenue from either client users or servers... well,
at least they won't have to advertise :)  But seriously, who pays?

Finally, note that since Convergence is really not too different from
a PKI anyway (it could be built on PKIX technologies), why not do the
same for DNSSEC?  Why, yes, I do believe we could, since, after all,
DNSSEC is not unlike a PKI either!  I see two ways to do this: 1)
make DNSSEC resolvers able to use many roots, not just one, and then
accept answers only when all (or most, or...) lookup paths agree on
the same result; but since this requires making changes to clients,
maybe a less fully-functional approach is 2) fake DNSSEC root
servers that simply parrot (but signed with a different pubkey, of
course) top-level results from other roots (notaries, the real ones,
whatever).

Now, back to the value of Convergence.  It all depends on what the
notaries' policies are.  But what possible policies might make a
notary that valuable?  A notary that merely certifies (heh) that a
server has the same pubkey today as yesterday doesn't add too much
value, for example.  A notary that certifies only a small set of
servers' keys, say, only U.S. banks', and thus can afford to do a bit
more checking, might add a lot more value, particularly if the sites
in question have a business relationship with that notary and the
notary has a very small world of customers.

...This actually gets to something I want to elaborate on in a future
post: trade-offs involved in trusted third-party vs. authentication
methods with pair-wise credentials.  The gist: the larger the universe
of peers one can authenticate through a third party (and perhaps
transitive trust, thus many third parties), the less security the
system provides, whereas using pair-wise methods exclusively results
in an explosion of pair-wise credentials to keep track of.  If I'm
right about this (I'm not sure), then pursuing a set of federations,
each with a small universe of servers whose purposes are aligned with
the federation they belong to, may be the best compromise.
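The credential-explosion half of that trade-off is easy to quantify: with a trusted third party, n peers need only n credentials in total, whereas purely pair-wise trust needs n*(n-1)/2. A quick back-of-the-envelope check:

```python
# Number of shared credentials needed for purely pair-wise trust
# among n peers: one per unordered pair, i.e. n choose 2.

def pairwise(n):
    return n * (n - 1) // 2

for n in (10, 1000, 1_000_000):
    print(n, pairwise(n))
# Already at n = 1000 that is 499,500 pair-wise credentials,
# versus 1000 with a third party -- the "explosion" above.
```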

Just some thoughts...  Please accept my apologies for adding to the
amount of traffic on this list at this time.  Comments welcomed.

Nico
--


Re: [cryptography] Let's go back to the beginning on this

2011-09-14 Thread Warren Kumari

On Sep 13, 2011, at 7:14 PM, Ralph Holz wrote:

> Hi,
>
>>> HTTPS Everywhere makes users encounter this situation more than they
>>> otherwise might.
>>
>> A week or three ago, I got cert warnings - from gmail's page.  (Yes, I'm
>> using HTTPS Everywhere).
>
> When _that_ happens, please tell Google and EFF.  I'm sure both
> organizations would be fascinated.
>
> I would also be very interested to hear from where that happened, and if
> you can give us a traceroute...
 

Yes please! And (if possible) also capture the certificate...

I see this fairly often at hotels that implement a captive portal for sign-on / 
billing. If it wasn't from something like this, *please* see if you can hunt it 
down again...

W


> Ralph
>
> --
> Dipl.-Inform. Ralph Holz
> I8: Network Architectures and Services
> Technische Universität München
> http://www.net.in.tum.de/de/mitarbeiter/holz/
 



Re: [cryptography] Let's go back to the beginning on this

2011-09-14 Thread Seth David Schoen
Arshad Noor writes:

> I'm not sure I understand why it would be helpful to know all (or any)
> intermediate CA ahead of time.  If you trust the self-signed Root CA,
> then, by definition, you've decided to trust everything that CA (and
> subordinate CA) issues, with the exception of revoked certificates.
>
> Can you please elaborate?  Thanks.

Of course, intermediate CAs are sometimes created for purely
operational reasons that may be quite prudent.  But delegating
root CA-like power to more distinct organizations creates risk.

Without external double-checks, the integrity of the CA system is as
strong as its weakest link, so every new CA is an additional
independent source of risk.  When CAs delegate to intermediates,
those intermediates can add new kinds of risk:

* they could be in different jurisdictions, so there's new risk that
  the legal systems in those jurisdictions could try to compel them
  to misissue*;

* they could be run by different people who could be persuaded to
  misissue in new ways;

* they could use different software or hardware or operating systems
  that could have different vulnerabilities;

* they could use different crypto primitives when issuing legitimate
  certificates that could have different vulnerabilities.

Whether or not the new CA does a worse job overall than the old CA, it
still creates new risk -- by CA proliferation!  (In fact, there are
already some cases showing that intermediate CAs _aren't_ always as
cautious or competent in practice as the roots that delegated to them.)
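The weakest-link point can be made quantitative under a simplifying independence assumption (real CA failures are not fully independent, so treat this as illustrative): if each of n equally trusted issuers is compromised with probability p over some period, the chance that at least one is compromised is 1 - (1-p)^n, which grows quickly with n.

```python
# Weakest-link risk under an (illustrative) independence assumption:
# probability that at least one of n issuers, each with per-period
# compromise probability p, is compromised.

def p_any_compromise(n, p):
    return 1 - (1 - p) ** n

assert round(p_any_compromise(1, 0.01), 4) == 0.01
assert round(p_any_compromise(100, 0.01), 2) == 0.63  # ~63% with 100 issuers
```

The same per-issuer risk that looks negligible in isolation dominates once hundreds of roots and intermediates are all equally trusted.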

More fundamentally, as Peter Biddle points out, trust isn't
transitive.  Suppose we think that a particular CA is super-awesome
at verifying that someone owns a domain and issuing hard-to-forge
certificates attesting to this fact, while resisting compromises
and coercion.  That doesn't necessarily mean that it's also a good
judge of whether another organization is also a good CA.

Even giving the PKIX status quo the benefit of the doubt, the root
CA decisions are supposed to be made by neutral parties following a
careful process that includes input from professional auditors.  When
CAs get in the habit of delegating their power, that process is at
risk of being bypassed and in any case starts to happen much less
transparently.  There are plenty of cases in the real world where
someone is trusted with the power to take an action, but not
automatically trusted with the power to delegate that power to others
without external oversight.  And that makes sense, because trust isn't
transitive.


* see https://www.eff.org/files/countries-with-CAs.txt

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
454 Shotwell Street, San Francisco, CA  94110   +1 415 436 9333 x107


Re: [cryptography] Let's go back to the beginning on this

2011-09-14 Thread dan

*not* nitpicking...

> ...as Peter Biddle points out, trust isn't transitive.

As an engineer, I feel compelled to add that security is not
composable, either: joining two secure systems does not necessarily
result in a secure composite.

*not* nitpicking.

--dan



Re: [cryptography] Let's go back to the beginning on this

2011-09-14 Thread Marsh Ray

On 09/14/2011 09:34 PM, Arshad Noor wrote:

> On 9/14/2011 2:52 PM, Seth David Schoen wrote:
>
>> Arshad Noor writes:
>>
>>> I'm not sure I understand why it would be helpful to know all (or any)
>>> intermediate CA ahead of time. If you trust the self-signed Root CA,
>>> then, by definition, you've decided to trust everything that CA (and
>>> subordinate CA) issues, with the exception of revoked certificates.


You keep using this word, I do not think it means what you think it means.

'Trust' does not mean everything the trusted party does is somehow put 
beyond all questioning by definition.



> Technically - and legally (if the Certificate Policy and contracts
> were written up properly) - when a self-signed Root CA issues a
> Subordinate CA cert, they are delegating the issuance of certificates
> to the Subordinate CA operator, to be issued ONLY in accordance
> with a CP that both parties have agreed to. The SubCA cannot,
> legally, exceed the bounds of the self-signed Root CA's CP in any
> manner that introduces more risk to the Relying Party. These are
> legal obligations placed on the operator of the SubCA.


Yes, and this system sucks. It is a complete joke.

It is no doubt of great consolation to the Dutch and Iranians to know
that there is a contract somewhere being breached among Comodo and their
resellers and DigiNotar and some software vendors.


Are the RPs even a party to that contract?


> Can a SubCA operator violate the legal terms from a technical point
> of view? Of course; people break the law all the time in business,
> it appears.


A loose web of computer law contracts among hundreds of international 
business and government entities is not a foundation on which to build a 
strong system for data security. Just the fact that they allow this 
unrestricted delegation of authority (in the form of sub-CAs) means that 
they're even crappy contracts to begin with.



> However, an RP must assess this risk before trusting a self-signed
> Root CA's certificate. If you believe there is uncertainty, then
> don't trust the Root CA.


Yes, that's what this conversation has been about. Finding ways to 
reduce this ridiculous hyperinflation of trust going around in general, 
and specific parts of it quickly in emergencies.


- Marsh