Bringing Tahoe ideas to HTTP

2009-08-31 Thread Brian Warner

[sent once to tahoe-dev, now copying to cryptography too, sorry for the
duplication]

At lunch yesterday, Nathan mentioned that he is interested in seeing how
Tahoe's ideas and techniques could trickle outwards and influence the
design of other security systems. And I was complaining about how the
Firefox upgrade process doesn't provide the integrity checks that I want
(it turns out they rely upon the CA infrastructure and SSL alone, no
end-to-end checking; the updates and releases are GPG-signed, but
firefox doesn't check that, only humans might). And PyPI has this nice
habit of appending #md5=XYZ.. to the URLs of the release tarballs that
they publish, which is (I think) automatically used by tools like
easy_install to guard against corrupted downloads (and which I always
use, as a human, to do the same). And Nathan mentioned a class of web
attacks in which a page, loaded over SSL, imports something (JS, CSS,
JPG) via a regular http: URL, and becomes vulnerable to third-parties
who can take over the page by controlling what arrives over
unauthenticated HTTP.

So, setting aside the reliability-via-distributedness properties for a
moment, what could we bring from Tahoe into regular HTTP and regular
webservers that could improve the state of security on the web?

== Integrity ==

To start with integrity-checking, we could imagine a firefox plugin that
validated a PyPI-style #md5= annotation on everything it loads. The rule
would be that no action would be taken on the downloaded content until
the hash was verified, and that a hash failure would be treated like a
404. Or maybe a slightly different error code, to indicate that the
correct resource is unavailable and that it's a server-side problem, but
it's because you got the wrong version of the document, rather than the
document being missing altogether.

This would work just fine for a flat hash: the original file remains
untouched, only the referencing URLs change to get the new hash
annotation. Non-enhanced browsers are unaffected: the #-prefixed
fragment identifier is never sent to the server, and the a name= tag
is fairly rare these days (and would still mostly work). Container files
(the HTML which references the hashed documents) could be updated to
benefit at leisure. Automation (see below) could be used to update the
URLs in the containers whenever the referenced objects were modified.
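A minimal sketch of the check such a plugin would perform (Python; the
URL and content are illustrative, and md5 is used only because that is
the digest PyPI appended at the time):

```python
import hashlib
from urllib.parse import urldefrag

def check_fragment_hash(url: str, content: bytes) -> bool:
    """Verify a PyPI-style #md5=... fragment against downloaded content.

    Returns True when the fragment matches (or carries no hash
    annotation); a mismatch should be surfaced like a 404, and the
    content never delivered to the page.
    """
    _, fragment = urldefrag(url)
    if not fragment.startswith("md5="):
        return True  # no annotation: nothing to check
    expected = fragment[len("md5="):].lower()
    actual = hashlib.md5(content).hexdigest()
    return actual == expected

# Only the referencing URL carries the annotation; the file is untouched.
url = "http://example.org/pkg-1.0.tar.gz#md5=" + \
      hashlib.md5(b"release bytes").hexdigest()
assert check_fragment_hash(url, b"release bytes")
assert not check_fragment_hash(url, b"tampered bytes")
```

Note that the fragment never reaches the server, so unmodified servers
and browsers keep working exactly as before.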

To improve alacrity on larger files, Tahoe uses a Merkle tree over
segments of the file. This tree has to be stored somewhere (Tahoe stores
it along with the shares, but it would be more convenient for a web site
to not modify the source files). We could use an annotation like
#hashtree=ROOTXYZ;http://otherplace; to reference an external hash tree
(with root hash XYZ). The plugin would start pulling from the source
file and the hash tree at the same time, and not deliver any source data
until it had been validated. The hashtree object would need to start
with the segment size and filesize, so the tree could be computed
properly. For very large files, you could read those parameters and then
pull down (via a Range: header) just the parts of the Merkle tree that
were necessary. In this case, the automation would need to create the
hash tree file and put it in a known place each time the source file
changes, and then update the references.
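A sketch of the segment-level Merkle tree such a plugin would compute
(Python with SHA-256; the tiny segment size and the promote-odd-nodes
rule are illustrative choices, not Tahoe's exact layout):

```python
import hashlib

SEGMENT_SIZE = 4  # tiny for illustration; Tahoe uses ~128 KiB segments

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root of a Merkle tree; an odd node is promoted unchanged."""
    level = list(leaves)
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            if i + 1 < len(level):
                nxt.append(h(level[i] + level[i + 1]))
            else:
                nxt.append(level[i])
        level = nxt
    return level[0]

def hashtree_for(content: bytes):
    segs = [content[i:i + SEGMENT_SIZE]
            for i in range(0, len(content), SEGMENT_SIZE)]
    leaves = [h(s) for s in segs]
    return merkle_root(leaves), segs

root, segs = hashtree_for(b"hello merkle world!")
# The plugin would compare this root against ROOTXYZ from the URL
# fragment before delivering any bytes to the page.
assert merkle_root([h(s) for s in segs]) == root
```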

(note that ROOTXYZ provides the identification properties of this
annotation, and http://otherplace; provides the location properties,
where identification means the ability to recognize the correct document
if someone gives it to you, and location means the ability to retrieve a
possibly-correct document. URIs provide identification, URLs are
supposed to provide both.)

We could compress this by establishing an (overridable) convention that
a file NAME always has its hash tree at NAME.hashtree, so the URL only
needs #hashtree=ROOTXYZ. If you needed to store it elsewhere, you could
use #hashtree=ROOTXYZ;WHERE, and define WHERE to be a relative URL (with
a default value of NAME.hashtree).
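A sketch of how a plugin might resolve that fragment convention (Python;
the #hashtree= syntax is the proposal above, not an established
standard):

```python
from urllib.parse import urldefrag, urljoin

def parse_hashtree_url(url: str):
    """Split a URL carrying #hashtree=ROOT;WHERE into
    (base, root, tree_url).

    WHERE is a relative URL; when omitted it defaults to NAME.hashtree
    alongside the source file, per the convention proposed above.
    """
    base, frag = urldefrag(url)
    assert frag.startswith("hashtree=")
    root, _, where = frag[len("hashtree="):].partition(";")
    if not where:
        where = base.rsplit("/", 1)[-1] + ".hashtree"
    return base, root, urljoin(base, where)

base, root, tree = parse_hashtree_url(
    "http://example.org/big.iso#hashtree=ROOTXYZ")
assert tree == "http://example.org/big.iso.hashtree"
```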

== Mutable Integrity ==

Zooko and I have both run HTML presentations out of a Tahoe grid (which
makes for a great demo), and the first thing you learn there is that
immutability, while a great property in some cases, is a hassle for
authoring. You need mutability somewhere, and the more places you have
it, the fewer URLs you have to update every time you change something.
In technical terms, you frequently want to cut down the diameter of the
immutable domains of the object DAG, by splitting those domains with
mutable boundary nodes. In practical terms, it means you might want to
publish *everything* via a mutable file. At the very least, if your web
site has any internal cycles in it, you'll need a mutable node to break
the cycle.

Again, this requires data beyond the contents of the source file. We
could use a #sigkey=XYZ annotation with a base62'ed ECDSA pubkey (this

Re: [tahoe-dev] a crypto puzzle about digital signatures and future compatibility

2009-08-31 Thread James A. Donald

Zooko Wilcox-O'Hearn wrote:

On Wednesday, 2009-08-26, at 19:49, Brian Warner wrote:

Attack B is where Alice uploads a file, Bob gets the filecap and  
downloads it, Carol gets the same filecap and downloads it, and  
Carol desires to see the same file that Bob saw. ... The attackers  
(who may be Alice and/or other parties) get to craft the filecap  
and the shares however they like. The attackers win if Bob and  
Carol accept different documents.

Right, and if we add algorithm agility then this attack is possible  
even if both SHA-2 and SHA-3 are perfectly secure!

Consider this variation of the scenario: Alice generates a filecap  
and gives it to Bob.  Bob uses it to fetch a file, reads the file and  
sends the filecap to Carol along with a note saying that he approves  
this file.  Carol uses the filecap to fetch the file.  The
Bob-and-Carol team loses if she gets a different file than the one he
got.

If Bob and Carol want to be sure they are seeing the same file, they
have to use a capability to an immutable file.

Obviously a capability to an immutable file has to commit the file to a 
particular hash algorithm.

(Using "capability" in the sense of capabilities as cryptographic data, 
capabilities as sparse addresses in a large address space identifying 
communication channels)

So the leading bits of the capability have to be an algorithm 
identifier.  If Bob's tool does not recognize the algorithm, it fails, 
and he has to upgrade to a tool that recognizes more algorithms.

If the protocol allows multiple hash types, then the hash has to start 
with a number that identifies the algorithm.  Yet we want that number to 
comprise very, very few bits.
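A sketch of such an algorithm-tagged capability (Python; the
one-character tags and the particular algorithms are hypothetical, the
point is only that the tag travels inside the immutable cap and costs
very few bits):

```python
import hashlib

# Hypothetical one-character algorithm tags. An unrecognized tag must
# hard-fail: the holder has to upgrade to a tool that knows it.
ALGORITHMS = {"1": hashlib.sha256, "2": hashlib.sha3_256}

def make_cap(tag: str, content: bytes) -> str:
    """Build an immutable-file capability: tag + digest of the content."""
    return tag + ALGORITHMS[tag](content).hexdigest()

def verify_cap(cap: str, content: bytes) -> bool:
    tag, digest = cap[0], cap[1:]
    algo = ALGORITHMS.get(tag)
    if algo is None:
        raise ValueError("unrecognized algorithm: upgrade your tool")
    return algo(content).hexdigest() == digest

cap = make_cap("1", b"immutable file bytes")
assert verify_cap(cap, b"immutable file bytes")
assert not verify_cap(cap, b"different bytes")
```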

This is almost precisely the example problem I discuss in

Now suppose an older algorithm is broken, and Alice wants to show Bob 
one file, and Carol another, and pretend they are the same file.

So she just uses the older algorithm.  Bob and Carol, however, have the 
newer tool, which, if the older algorithm is thoroughly broken, will 
probably pop up a "deprecated algorithm" warning, which Bob and Carol 
will cheerfully click through.

If however, the older algorithm has been broken a good long time, and we 
are well past the transition period, and no one should be using the 
older algorithm any more, except to wrap old format immutable files 
inside new format immutable files, then Bob's tool will fail.  Problem 
solved.

Yes, during the transition period, people can be hosed, especially if 
they see nag messages so often that they click them off, as they 
probably will, but that is true no matter what.  If an algorithm gets 
broken, people can be hurt during the transition.  The point, however, 
is to have a smooth transition, even if security sucks during the 
transition.

The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to

Re: [tahoe-dev] Bringing Tahoe ideas to HTTP

2009-08-31 Thread Michael Walsh
Hi Brian, all;

I'm all for including merkle trees with HTTP GETs; two items
spring to mind:
 - Appending the location of the hash as you suggest in
#hashtree=ROOTXYZ;http://otherplace, which requires no changes to the
server.
 - Adding an HTTP header with this data, which requires something like a
server module or output script. It also doesn't ugly up the URL (but
then again, we have URL shortener services for manual typing).

One Merkle hash tree over HTTP that interested me is the Tiger Tree
Hash Exchange (THEX) [1], already in use in some P2P systems; it would
be interesting reading for other hash-tree-over-HTTP designs.

Google Wave appears to use hash trees also, but it seems to be
under-specified [2]. I guess once that becomes fleshed out there will be
more content systems outputting data along with tree hashes.

I do like the straightforwardness of using the
file.ext#hashtree=root;location or file.ext.sig conventions, and an
added benefit is that the .sig request can be HTTP/1.1 pipelined rather
than having to parse the returned headers before sending the additional
request.
 I've no idea how hard it would be to write this sort of plugin. But I'm
 pretty sure it's feasible, as would be the site-building tools. If
 firefox had this built-in, and web authors used it, what sorts of
 vulnerabilities would go away? What sorts of new applications could we
 build that would take advantage of this kind of security?

My thoughts purely turn to verifying the integrity of files and all
webpage resources in a transparent and backward-compatible way. Who has
not encountered unstable connections where images get corrupted and CSS
files don't fully load? Solving that problem would make me very happy!



Re: [tahoe-dev] Bringing Tahoe ideas to HTTP

2009-08-31 Thread Brian Warner
Michael Walsh wrote:

  - Adding an HTTP header with this data, which requires something like a
 server module or output script. It also doesn't ugly up the URL (but
 then again, we have URL shortener services for manual typing).

Ah, but see, that loses the security. If the URL doesn't contain the
root hash, then you're depending upon somebody else for your
authentication, and then it's not end-to-end anymore. URL shortener
services are great for the location properties, but lousy for the
identification properties. Not only are you relying upon your DNS, and
your network, and every other network between you and the server, and
the server who's providing you with the data.. now you're also dependent
upon the operator of the shortener service, and their database, and
anyone who's managed to break into their database, etc.

I guess there are three new things to add:

 * secure identifier in the URL
 * an upstream request, to say what additional integrity information you
   want
 * a downstream response, to provide that additional integrity
   information

The stuff I proposed used extra HTTP requests for those last two.

But, if you use the HTTP request headers to ask for the extra integrity
metadata, you could use the HTTP response headers to convey it. The only
place where this would make sense would be to fetch the merkle tree, or
the signature. (if the file was small and you only check the flat hash,
then there's nothing else to fetch; if the file is encrypted, then you
can define its data layout to be whatever you like, and just include the
integrity information in it directly).

Oh, and that would make the partial-range request a lot simpler: the
client does a GET with a Range: bytes=123-456 header, and the server
looks at the associated merkle tree, figures out which hash chain is
needed to validate those bytes, and returns a header that includes all
of those hash tree nodes (and the segment size and filesize). It also
returns enough
file data to cover those segments (i.e. the Content-Range: response
would be for a larger range than the Range: request, basically rounded
up to a segment size). The client would hash the segments it receives,
build and verify the merkle chain, then compare the root of the chain
against the roothash in the URL and make sure they match.
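A sketch of that client-side check, assuming the server has
(hypothetically) returned the sibling hashes bottom-up in a response
header (Python, SHA-256; the proof layout is an illustrative choice):

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def verify_segment(segment: bytes, index: int, proof: list,
                   root: bytes) -> bool:
    """Check one segment against the root hash from the URL.

    `proof` holds the sibling hashes bottom-up, as a server could
    return them alongside the data; `index` selects left/right at each
    level of the merkle chain.
    """
    node = h(segment)
    for sibling in proof:
        if index % 2 == 0:
            node = h(node + sibling)
        else:
            node = h(sibling + node)
        index //= 2
    return node == root

# Four-segment file: the proof for segment 2 is [leaf3, h(leaf0+leaf1)].
segs = [b"seg0", b"seg1", b"seg2", b"seg3"]
leaves = [h(s) for s in segs]
root = h(h(leaves[0] + leaves[1]) + h(leaves[2] + leaves[3]))
assert verify_segment(b"seg2", 2, [leaves[3], h(leaves[0] + leaves[1])],
                      root)
assert not verify_segment(b"evil", 2,
                          [leaves[3], h(leaves[0] + leaves[1])], root)
```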

The response headers might get a bit large:
log2(filesize/segsize)*hashsize. And you have to fetch at least a full
segsize. But that's predictable and fairly well bounded, so maybe it
isn't that big of a deal.
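Concrete numbers for that bound, assuming SHA-256 hashes and a
Tahoe-like 128 KiB segment size for a 1 GiB file:

```python
import math

filesize = 1 << 30       # 1 GiB
segsize = 128 * 1024     # 128 KiB, a Tahoe-like choice
hashsize = 32            # SHA-256 digest length in bytes

# Depth of the proof chain = number of hash tree nodes per request.
nodes = math.ceil(math.log2(filesize / segsize))
overhead = nodes * hashsize
assert nodes == 13 and overhead == 416  # under half a KiB per request
```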

Doing this with HTTP headers instead of a separate GET would avoid a
roundtrip, since the server (which now takes an active role in the
process) can decide these things (like which hash tree nodes are needed)
on behalf of the client. Instead of the client pulling one file for the
data, then pulling part of another file to get the segment size (so it
can figure out which segments it wants), then pulling a different part
of that file to get the hash nodes... the client just does a GET Range:,
and the server figures out the rest.

As you said, it requires a server module. But compared to a client-side
plugin, that's positively easy :). It'd probably be a good idea to think
about a scheme that would take advantage of a server which had
additional capabilities like this. Maybe the client could try the header
thing first, and if the response didn't indicate that the server is able
to provide that data, go and fetch the .hashtree file.

 My thoughts purely turn to verifying the integrity of files and all
 webpage resources in a transparent and backward-compatible way. Who has
 not encountered unstable connections where images get corrupted and CSS
 files don't fully load? Solving that problem would make me very happy!

Yeah. We've been having an interesting thread on tahoe-dev recently
about the backwards-compatible question, what sorts of failure modes are
best to aim for when you're faced with an old server or whatever.

You can't make this stuff totally transparent and backwards compatible
(in the sense that existing webpages and users should start benefiting
from it without doing any work).. I think that's part of the brokenness
of the SSL+CA model, where they wanted the only change to be starting
from https instead of http. But you can certainly create a framework
that lets people get better control over what they're loading. Now, if
only distributed programs could be written in languages with those sorts
of properties...



Practical attack on WPA?

2009-08-31 Thread Jerry Leichter

A Practical Message Falsification Attack on WPA
Toshihiro Ohigashi and Masakatu Morii

Abstract. In 2008, Beck and Tews have proposed a practical attack on
WPA. Their attack (called the Beck-Tews attack) can recover plaintext
from an encrypted short packet, and can falsify it. The execution time
of the Beck-Tews attack is about 12-15 minutes. However, the attack
has a limitation: its targets are only WPA implementations that
support IEEE 802.11e QoS features. In this paper, we propose a
practical message falsification attack on any WPA implementation. To
relax this limitation on target wireless LAN products, we apply the
Beck-Tews attack to the man-in-the-middle attack. In the
man-in-the-middle attack, the user's communication is intercepted by
an attacker until the attack ends, which means the users may detect
our attack if the execution time of the attack is large. Therefore, we
give methods for reducing the execution time of the attack. As a
result, the execution time
of our attack becomes about one minute in the best case.

-- Jerry


Defending Against Sensor-Sniffing Attacks on Mobile Phones

2009-08-31 Thread Jerry Leichter

Modern mobile phones possess three types of capabilities:
computing, communication, and sensing. While these capabilities
enable a variety of novel applications, they also raise
serious privacy concerns. We explore the vulnerability where
attackers snoop on users by sniffing on their mobile phone
sensors, such as the microphone, camera, and GPS receiver.
We show that current mobile phone platforms inadequately
protect their users from this threat. To provide better privacy
for mobile phone users, we analyze desirable uses of
these sensors and discuss the properties of good privacy
protection solutions. Then, we propose a general framework for
such solutions and discuss various possible approaches to
implement the framework’s components.

They have some suggestions, but feel the problems are deep and  
probably not completely solvable.

-- Jerry


Fwd: Important Information for PGP® Desktop Users Running Mac OS X

2009-08-31 Thread R.A. Hettinga


So, we gotta pay for the upgrade in order to use PGP on Snow Leopard?



Begin forwarded message:

From: PGP Corporation
Date: August 28, 2009 6:18:09 PM GMT-04:00
Subject: Important Information for PGP® Desktop Users Running Mac OS X

Apple released its new version, Mac OS X 10.6 (Snow Leopard),
today, Friday, August 28.


PGP Corporation does not recommend using PGP® Desktop with Mac OS X  
10.6 at this time; neither the 32-bit nor 64-bit versions of Snow  
Leopard are currently supported by PGP Corporation. This includes  
PGP® Whole Disk Encryption, PGP® Desktop Professional, PGP® Desktop  
Home and PGP® Desktop Email.

This email is to advise you that if you are running PGP® Whole Disk  
Encryption, PGP® Desktop Professional, PGP® Desktop Home or PGP®  
Desktop Email, you should NOT upgrade to Mac OS X 10.6 (Snow Leopard).

If you intend to upgrade to Snow Leopard, you must decrypt all PGP®  
encrypted drives and uninstall PGP® Desktop before upgrading the  
system to Mac OS X 10.6.

After upgrading your system you should not attempt to re-encrypt any  
disks with PGP® Whole Disk Encryption, as doing so may lead to data  
loss or other system and data issues.

We expect support for Mac OS X 10.6 to be available in the next  
major release of PGP® Desktop (10.0).  PGP Corporation recommends  
waiting until PGP® Desktop 10.0 is available before upgrading to Mac  
OS X 10.6.  If you would like to be notified when the beta version  
becomes available, please register at 

If you have questions about PGP® Desktop and Mac OS X 10.6, please  
visit our support site

PGP Corporation announced PGP WDE for Mac OS X last year - a native  
Mac application that was designed from the ground up for the Mac.   
PGP Corporation is committed to providing Macintosh users the best  
possible encryption solutions, and we’ve been building them since  
restarting the company in 2003.

The overall experience of PGP WDE for Snow Leopard will be the  
same.  You’ll notice PGP WDE for Mac OS X is controlled using PGP  
Desktop, which can be expanded to secure email and files as well.

Users of PGP WDE for Mac OS X will have a new pre-boot  
authentication screen that protects access to the machine before the  
operating system loads.   To see some of the work so far we have  
posted screen shots to the PGP Perspectives blog.

PGP® Worldwide Support Team

© 2002-2009 PGP Corporation, 200 Jefferson Dr. Menlo Park, CA 94025
All Rights Reserved.



Fwd: [Macgpg-users] GPGMail & Snow Leopard

2009-08-31 Thread R.A. Hettinga

...and now GPG.

So, Snow Leopard is crypto-less?

What? I shoulda said sans-crypto?

Begin forwarded message:

From: Benjamin Donnachie
Date: August 28, 2009 7:44:09 PM GMT-04:00

Subject: Re: [Macgpg-users] GPGMail & Snow Leopard

2009/8/28 Levi Brown

I'll ask the inevitable... Can we expect a new version of the plugin
which is compatible with Snow Leopard's Mail?

Do you mean GPGMail?  If so, then I'm afraid the answer is no -

Posted By: davelopper
Date: 2009-08-26 08:31
Summary: GPGMail & Snow Leopard (10.6)

Dear Users,

The current version of GPGMail (1.2.0) is NOT compatible with the
upcoming Snow Leopard's Mail.
New Mail's internals changed a lot, as they did with all major
revisions of Mac OS X, and as Apple doesn't give any documentation or
support for such an unsupported plugin, developers have to work by
trial and error. Based on my experience of previous compatibility
work, I can estimate that the workload to make GPGMail compatible with
Snow Leopard is at least 40 hours, for me.
Unfortunately I no longer have spare time to do that work (I've always
been working on GPGMail during my spare time, not during my work
time). Some people have offered their help, but so far no one has been
able to find enough time to actually do the work. For inexperienced
people it will take much more time.
In consequence, there will not be an update of GPGMail for Snow
Leopard in the coming months, not even a beta version.
If serious people want to do the work, I will gladly try to help them
as much as I can; just contact me.

I'm sorry to leave you without GPGMail on Snow Leopard; until someone
does the port, you'll have to rely on Thunderbird and its Enigmail
plugin.
Any queries should probably be directed to the GPGMail list -


Macgpg-users mailing list


Source for Skype Trojan released

2009-08-31 Thread Jerry Leichter
It can “...intercept all audio data coming and going to the Skype  
process.”

Proof of concept, but polished versions will surely follow.

-- Jerry
