NSA domestic intelligence "vacuum"
WASHINGTON, D.C. -- Five years ago, Congress killed an experimental Pentagon antiterrorism program meant to vacuum up electronic data about people in the U.S. to search for suspicious patterns. Opponents called it too broad an intrusion on Americans' privacy, even after the Sept. 11 terrorist attacks.

But the data-sifting effort didn't disappear. The National Security Agency, once confined to foreign surveillance, has been building essentially the same system.

http://online.wsj.com/article/SB120511973377523845.html?mod=todays_us_page_one

Hat tip: Bruce Schneier's blog.

--
Perry E. Metzger [EMAIL PROTECTED]

---
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
announcing allmydata.org "Tahoe", the Least-Authority Filesystem, v1.0
ANNOUNCING Allmydata.org "Tahoe", the Least-Authority Filesystem, v1.0

We are pleased to announce the release of version 1.0 of the "Tahoe" Least Authority Filesystem.

The "Tahoe" Least Authority Filesystem is a secure, decentralized, fault-tolerant filesystem. All of the source code is available under a Free Software, Open Source licence (or two).

This filesystem is encrypted and distributed over multiple peers in such a way that it continues to function even when some of the peers are unavailable, malfunctioning, or malicious. A one-page explanation of the security and fault-tolerance properties that it offers is available at:

http://allmydata.org/source/tahoe/trunk/docs/about.html

We believe that this version of Tahoe is stable enough to rely on as a permanent store of valuable data. The version 1 branch of Tahoe will be actively supported and maintained for the foreseeable future, and future versions of Tahoe will retain the ability to read files and directories produced by Tahoe v1.0 for the foreseeable future.

This release of Tahoe will form the basis of the new consumer backup product from Allmydata, Inc. -- http://allmydata.com .

This is the successor to Allmydata.org "Tahoe" Least Authority Filesystem v0.9, which was released March 13, 2008 [1]. Since v0.9 we've made the following changes:

 * Use an added secret for convergent encryption to better protect the confidentiality of immutable files, and remove the publicly readable hash of the plaintext (ticket #365).

 * Add a "mkdir-p" feature to the WAPI (ticket #357).

 * Many updates to the Windows installer and Windows filesystem integration.

Tahoe v1.0 produces files which can't be read by older versions of Tahoe, although files produced by Tahoe >= 0.8 can be read by Tahoe 1.0.
The reason that older versions of Tahoe can't read files produced by Tahoe 1.0 is that those older versions require the file to come with a publicly readable hash of the plaintext, but exposing such a hash is a confidentiality leak, so Tahoe 1.0 does not do it.

WHAT IS IT GOOD FOR?

With Tahoe, you can distribute your filesystem across a set of computers, such that if some of the computers fail or turn out to be malicious, the filesystem continues to work from the remaining computers. You can also share your files with other users, using a strongly encrypted, capability-based access control scheme.

Because this software is the product of less than a year and a half of active development, we do not categorically recommend it for the storage of data which is extremely confidential or precious. However, we believe that the combination of erasure coding, strong encryption, and careful engineering makes the use of this software a much safer alternative than common alternatives, such as RAID, or traditional backup onto a remote server, removable drive, or tape.

This software comes with extensive unit tests [2], and there are no known security flaws which would compromise confidentiality or data integrity. (For all currently known security issues please see the Security web page: [3].)

This release of Tahoe is suitable for the "friendnet" use case [4] -- it is easy to create a filesystem spread over the computers of you and your friends so that you can share files and disk space with one another.

LICENCE

You may use this package under the GNU General Public License, version 2 or, at your option, any later version. See the file "COPYING.GPL" for the terms of the GNU General Public License, version 2.

You may use this package under the Transitive Grace Period Public Licence, version 1.0.
The Transitive Grace Period Public Licence says that you may distribute proprietary derived works of Tahoe without releasing the source code of that derived work for up to twelve months, after which time you are obligated to release the source code of the derived work under the Transitive Grace Period Public Licence. See the file "COPYING.TGPPL.html" for the terms of the Transitive Grace Period Public Licence, version 1.0.

(You may choose to use this package under the terms of either licence, at your option.)

INSTALLATION

Tahoe works on Linux, Mac OS X, Windows, Cygwin, and Solaris. For installation instructions please see "docs/install.html" [5].

HACKING AND COMMUNITY

Please join us on the mailing list [6] to discuss uses of Tahoe. Patches that extend and improve Tahoe are gratefully accepted -- the RoadMap page [7] shows the next improvements that we plan to make and CREDITS [8] lists the names of people who've contributed to the project. The wiki Dev page [9] contains resources for hackers.

SPONSORSHIP

Tahoe is sponsored by Allmydata, Inc. [10], a provider of consumer backup services. Allmydata, Inc. contributes hardware, software, ideas, bug reports, suggestions, demands, and money (employing several allmydata.org Tahoe hackers and instructing them to spend part of their work time on this free-software project). We are eternally grateful!

Zo
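The fault-tolerance property described under WHAT IS IT GOOD FOR rests on erasure coding: a file is expanded into N shares such that any K of them suffice to reconstruct it. The idea can be sketched with a toy 2-of-3 XOR scheme (Tahoe itself uses Reed-Solomon coding with configurable parameters; this is only an illustration):

```python
# Toy 2-of-3 erasure code: split the data into halves A and B, and store
# shares A, B, and A XOR B.  Any two of the three shares suffice to
# recover the data, so one storage server can fail or lie.

def encode(data: bytes):
    if len(data) % 2:                # pad to an even length
        data += b"\x00"
    half = len(data) // 2
    a, b = data[:half], data[half:]
    parity = bytes(x ^ y for x, y in zip(a, b))
    return [("A", a), ("B", b), ("P", parity)]

def decode(shares):
    d = dict(shares)                 # any two of the three shares
    if "A" in d and "B" in d:
        a, b = d["A"], d["B"]
    elif "A" in d and "P" in d:
        a = d["A"]
        b = bytes(x ^ y for x, y in zip(a, d["P"]))
    else:                            # B and P
        b = d["B"]
        a = bytes(x ^ y for x, y in zip(b, d["P"]))
    return (a + b).rstrip(b"\x00")

shares = encode(b"precious data!")
# Drop any one share; the remaining two still recover the plaintext.
assert decode(shares[1:]) == b"precious data!"
assert decode([shares[0], shares[2]]) == b"precious data!"
```

A real deployment would also encrypt before encoding and add integrity checks on each share, as the announcement describes.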
Re: Protection for quasi-offline memory nabbing
At 10:38 AM 3/21/2008 -0700, Jon Callas wrote:
> Despite that my hypotheses are only that, and I have no experimental
> data, I think that using a large block cipher mode like EME to induce
> a pseudo-random, maximally-fragile bit region is an excellent
> mitigation strategy.

Isn't EME patented?

- Alex

--
Alex Alten
[EMAIL PROTECTED]
Re: [p2p-hackers] convergent encryption reconsidered
Jim:

Thanks for your detailed response on the convergent encryption issue. In this post, I'll just focus on one very interesting question that you raise: "When do either of these attacks on convergent encryption apply?".

In my original note I was thinking about the allmydata.org "Tahoe" Least Authority Filesystem. In this post I will attempt to follow your lead in widening the scope. In particular, GNUnet and Freenet are currently active projects that use convergent encryption. The learn-partial-information attack would apply to either system if a user were using it with files that she intended not to divulge, but that were susceptible to being brute-forced in this way by an attacker.

On Mar 20, 2008, at 10:56 PM, Jim McCoy wrote:
> On Mar 20, 2008, at 12:42 PM, zooko wrote:
>> Security engineers have always appreciated that convergent encryption
>> allows an attacker to perform a confirmation-of-a-file attack -- if
>> the attacker already knows the full plaintext of a file, then they can
>> check whether a given user has a copy of that file.
>
> The truth of this depends on implementation details, and is an
> assertion that cannot be said to cover all or even most of the
> potential use-cases for this technique.

You're right. I was writing the above in the context of Tahoe, where, as Brian Warner explained, we do not attempt to hide the linkage between users and ciphertexts. What I wrote above doesn't apply in the general case.

However, there is a very general argument about the applicability of these attacks, which is: "Why encrypt?". If your system has strong anonymity properties, preventing people from learning which files are associated with which users, then you can just store the files in plaintext. Ah, but of course you don't want to do that, because even without being linked to users, files may contain sensitive information that the users didn't intend to disclose. But if the files contain such information, then it might be acquired by the learn-partial-information attack.
When designing such a system, you should ask yourself "Why encrypt?". You encrypt in order to conceal the plaintext from someone, but if you use convergent encryption, and they can use the learn-partial-information attack, then you fail to conceal the plaintext from them.

You should use traditional convergent encryption (without an added secret) if:

 1. You want to encrypt the plaintext, and
 2. You want convergence, and
 3. You don't mind exposing the existence of that file (ignoring the confirmation-of-a-file attack), and
 4. You are willing to bet that the file has entropy from the attacker's perspective which is greater than his computational capacity (defeating the learn-partial-information attack).

You should use convergent encryption with an added secret (as recently implemented for the Tahoe Least Authority Filesystem) if:

 1. You want to encrypt the plaintext, and
 2. You want convergence within the set of people who know the added secret, and
 3. You don't mind exposing the existence of that file to people in that set, and
 4. You are willing to disclose the file to everyone in that set, or else you think that people in that set to whom you do not wish to disclose the file will not try the learn-partial-information attack, or if they do, that the file has entropy from their perspective which is greater than their computational capacity.

I guess the property of unlinkability between user and file addresses issue 3 in the above list -- the existence of a file is a much less sensitive bit of information than the existence of a file in a particular user's collection. It could also affect issue 4 by increasing the entropy the file has from an attacker's perspective. If he knows that the ciphertext belongs to you, then he can try filling in the fields with information that he knows about you. Without that linkage, he has to try filling in the fields with information selected from what he knows about all users.
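To make the two variants above concrete, here is a minimal sketch of convergent encryption with an optional added secret. The key is derived by hashing the secret plus the plaintext; a SHA-256 counter-mode keystream stands in for a real cipher such as AES, and the construction is illustrative rather than Tahoe's actual one:

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Illustrative stream "cipher": SHA-256 in counter mode.  A real
    # system would use AES; this just keeps the sketch dependency-free.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))

def convergent_encrypt(plaintext: bytes, added_secret: bytes = b"") -> bytes:
    # The key is derived from the plaintext itself (plus an optional
    # added secret), so identical files converge to identical
    # ciphertexts -- but only within the set of holders of the secret.
    key = hashlib.sha256(added_secret + plaintext).digest()
    return keystream_xor(key, plaintext)

doc = b"tax return, SSN 078-05-1120"
# Without a secret, every holder of the file produces the same ciphertext:
assert convergent_encrypt(doc) == convergent_encrypt(doc)
# With different added secrets, the ciphertexts diverge:
assert convergent_encrypt(doc, b"alice") != convergent_encrypt(doc, b"bob")
```

The second assertion is exactly condition 2 of the added-secret list: convergence holds only among people who share the secret.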
But hiding this linkage doesn't actually help in the case that the attacker is already using everything he knows about all users to attack all files in parallel. Note that using an added secret does help in the parallel-attack case, because (just like salting passwords) it breaks the space of targets up into separate spaces which can't all be attacked with the same computation.

The first problem is isolating the original ciphertext in the pool of storage. If a file is encrypted using convergent encryption and then run through an error-correction mechanism to generate a number of shares that make up the file, an attacker first needs to be able to isolate these shares to generate the original ciphertext. FEC decoding speeds may be reasonably fast, but they are not without some cost. If the storage pool is sufficiently large and you are doing your job to limit the ability of an attacker to see which
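The learn-partial-information attack discussed in this thread is easy to sketch: because a convergent key (and hence the ciphertext or its storage index) is derivable from the plaintext alone, an attacker who knows everything about a file except one low-entropy field can brute-force that field offline. The fingerprint construction below is hypothetical, not any deployed system's:

```python
import hashlib

def convergent_fingerprint(plaintext: bytes) -> bytes:
    # In convergent encryption without an added secret, anyone who can
    # guess the plaintext can derive the key -- and from it the
    # ciphertext or storage index -- so a stored fingerprint confirms
    # a guess.  "index:" is an arbitrary illustrative domain separator.
    key = hashlib.sha256(plaintext).digest()
    return hashlib.sha256(b"index:" + key).digest()

# The victim stores a form letter whose only unknown field is a 4-digit PIN.
template = b"Name: A. User\nPIN: %04d\n"
observed = convergent_fingerprint(template % 2718)   # what the attacker sees

# The attacker brute-forces the low-entropy field offline:
recovered = next(pin for pin in range(10000)
                 if convergent_fingerprint(template % pin) == observed)
assert recovered == 2718
```

An added secret folded into the key derivation defeats this exactly as a salt defeats a precomputed password attack: the attacker can no longer compute candidate fingerprints without the secret.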
Re: [mm] How is DNSSEC
[EMAIL PROTECTED] wrote:
> On Sat, Mar 22, 2008 at 03:52:49PM +0000, Ben Laurie wrote:
>> [EMAIL PROTECTED] wrote:
>>> On Sat, Mar 22, 2008 at 02:46:40PM +0000, Ben Laurie wrote:
>>>> [EMAIL PROTECTED] wrote:
>>>>> Er... Allow me the option of disbelieving your assertion. PTR
>>>>> records can and do point to multiple names. Some narrow
>>>>> implementations have assumed that there will only be a single data
>>>>> element and this myth - that PTRs only point to a single name - is
>>>>> and has been spread widely.
>>>>
>>>> You can disbelieve my assertion if you wish, but I am only quoting
>>>> the RFC. RFC 1035, to be precise:
>>>>
>>>> "Address nodes are used to hold pointers to primary host names in
>>>> the normal domain space."
>>>>
>>>> (section 3.5. IN-ADDR.ARPA domain). So, the "myth" is in the
>>>> scripture.
>>>
>>> ah... open to interpretation. what is a "primary" host name?
>>
>> RFC 1035 does not say, in the case of hosts, but the intent is quite
>> clear from the text on gateways:
>>
>> "Gateways will often have two names in separate domains, only one of
>> which can be primary."
>
> the intent for gateways... hosts w/ multiple IP's (VMware etc) are not
> gateways. comparing oranges w/ dragonfruits.

If you insist on language lawyering, I can play. I'd say it is clear from:

a) The lack of a repeated PTR record for a host IP in the example,

b) The use of the word 'primary',

c) The fact that the authors felt it necessary to explain what they saw as an exceptional case, i.e. that a gateway could have two names

that in the case of hosts, the authors expected there to only be a single PTR record for reverse lookup.

Of course, we have the power to change RFCs. But there's a process for that.

Cheers,

Ben.

--
http://www.apache-ssl.org/ben.html       http://www.links.org/

"There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit." - Robert Woodruff
Re: how to read information from RFID equipped credit cards
Perry E. Metzger wrote:
> Nothing terribly new here -- short interview with someone who bought
> an RFID credit card reader on ebay for $8 and demonstrates getting
> people's credit card information at short distances using it. Still,
> it is interesting to see how trivial it is to do.
>
> http://www.boingboing.net/2008/03/19/bbtv-how-to-hack-an.html

Yeah, but...

He's talking bollocks when he says that the decryption should be done in some secure datacentre. That wouldn't save you unless there was some kind of handshake with the card - and the trouble is, those cards don't have the power to do any real crypto. In the absence of something to prevent MitM, you would just intercept the encrypted contents of the card, and then use that. So why bother to encrypt it?

So, the bottom line is you need more horsepower in the gadget that controls your money, so you can do real crypto.

Then we get to the next problem: we don't trust the device with the keypad and display. So, we need to add that to the GTCYM (Gadget That Controls Your Money).

And so we end up at the position that we have ended up at so many times before: the GTCYM has to have a decent processor, a keyboard and a screen, and must be portable and secure.

One day we'll stop concluding this and actually do something about it.

Cheers,

Ben.

--
http://www.apache-ssl.org/ben.html       http://www.links.org/

"There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit." - Robert Woodruff
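The interception point above can be made concrete: a static encrypted blob authenticates anyone who replays it, whereas a challenge-response over a fresh nonce requires the card to do real crypto on every transaction. A minimal sketch (the key handling and HMAC choice are illustrative, not what any card actually does):

```python
import hmac
import hashlib
import os

# A static encrypted blob, however strong the cipher, can simply be
# replayed by anyone who has intercepted it once:
static_blob = os.urandom(32)          # stands in for the card's encrypted data
intercepted = static_blob             # the eavesdropper's copy
assert intercepted == static_blob     # replaying it "authenticates"

# A challenge-response protocol defeats replay, but requires the card to
# compute over a fresh nonce each time (hypothetical shared card key):
card_key = os.urandom(16)

def card_respond(key: bytes, challenge: bytes) -> bytes:
    # What a card with "real crypto" would compute per transaction.
    return hmac.new(key, challenge, hashlib.sha256).digest()

challenge = os.urandom(16)            # terminal picks a fresh nonce
response = card_respond(card_key, challenge)
assert hmac.compare_digest(response, card_respond(card_key, challenge))

# A captured response is useless against the next, different challenge:
new_challenge = os.urandom(16)
assert not hmac.compare_digest(response, card_respond(card_key, new_challenge))
```

This is why the "decrypt it in a secure datacentre" suggestion misses the point: without a per-transaction computation on the card, the ciphertext itself becomes the credential.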
Re: [mm] How is DNSSEC
[EMAIL PROTECTED] wrote:
> On Sat, Mar 22, 2008 at 02:46:40PM +0000, Ben Laurie wrote:
>> [EMAIL PROTECTED] wrote:
>>> Er... Allow me the option of disbelieving your assertion. PTR records
>>> can and do point to multiple names. Some narrow implementations have
>>> assumed that there will only be a single data element and this myth -
>>> that PTRs only point to a single name - is and has been spread
>>> widely.
>>
>> You can disbelieve my assertion if you wish, but I am only quoting the
>> RFC. RFC 1035, to be precise:
>>
>> "Address nodes are used to hold pointers to primary host names in the
>> normal domain space."
>>
>> (section 3.5. IN-ADDR.ARPA domain). So, the "myth" is in the
>> scripture.
>
> ah... open to interpretation. what is a "primary" host name?

RFC 1035 does not say, in the case of hosts, but the intent is quite clear from the text on gateways:

"Gateways will often have two names in separate domains, only one of which can be primary."

--
http://www.apache-ssl.org/ben.html       http://www.links.org/

"There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit." - Robert Woodruff
Re: [mm] How is DNSSEC
On Sat, Mar 22, 2008 at 02:46:40PM +0000, Ben Laurie wrote:
> [EMAIL PROTECTED] wrote:
>> Er... Allow me the option of disbelieving your assertion. PTR records
>> can and do point to multiple names. Some narrow implementations have
>> assumed that there will only be a single data element and this myth -
>> that PTRs only point to a single name - is and has been spread widely.
>
> You can disbelieve my assertion if you wish, but I am only quoting the
> RFC. RFC 1035, to be precise:
>
> "Address nodes are used to hold pointers to primary host names in the
> normal domain space."
>
> (section 3.5. IN-ADDR.ARPA domain). So, the "myth" is in the scripture.

ah... open to interpretation. what is a "primary" host name?

--bill
Re: [mm] How is DNSSEC
[EMAIL PROTECTED] wrote:
> Er... Allow me the option of disbelieving your assertion. PTR records
> can and do point to multiple names. Some narrow implementations have
> assumed that there will only be a single data element and this myth -
> that PTRs only point to a single name - is and has been spread widely.

You can disbelieve my assertion if you wish, but I am only quoting the RFC. RFC 1035, to be precise:

"Address nodes are used to hold pointers to primary host names in the normal domain space."

(section 3.5. IN-ADDR.ARPA domain). So, the "myth" is in the scripture.

--
http://www.apache-ssl.org/ben.html       http://www.links.org/

"There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit." - Robert Woodruff
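For reference, the IN-ADDR.ARPA name that a reverse (PTR) lookup queries is built by reversing the address octets and appending the suffix, which can be sketched as:

```python
import ipaddress

def reverse_pointer(ipv4: str) -> str:
    # Reverse the octets and append the in-addr.arpa suffix
    # (RFC 1035, section 3.5).
    return ".".join(reversed(ipv4.split("."))) + ".in-addr.arpa"

assert reverse_pointer("192.0.2.53") == "53.2.0.192.in-addr.arpa"
# The Python standard library agrees:
assert reverse_pointer("192.0.2.53") == \
    ipaddress.ip_address("192.0.2.53").reverse_pointer
```

The PTR records found at that name are what this thread is arguing about: the RFC's example shows a single "primary" name per address, while deployed zones sometimes carry several.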
Re: How is DNSSEC
On Sat, Mar 22, 2008 at 10:59:18AM +0000, Ben Laurie wrote:
> [EMAIL PROTECTED] wrote:
>> On Fri, Mar 21, 2008 at 08:52:07AM +1000, James A. Donald wrote:
>>> From time to time I hear that DNSSEC is working fine, and on
>>> examining the matter I find it is "working fine" except that
>>>
>>> Seems to me that if DNSSEC is actually working fine, I should be able
>>> to provide an authoritative public key for any domain name I control,
>>> and should be able to obtain such keys for other domain names, and
>>> use such keys for any purpose, not just those purposes envisaged in
>>> the DNSSEC specification. Can I? It is not apparent to me that I can.
>>
>> actually, the DNSSEC specification -used- to support keys for "any
>> purpose", and in theory you could use DNSSEC keys in that manner.
>> However a bit of careful thought suggests that there is potential
>> disconnect btwn the zone owner/admin who creates/distributes the keys
>> as a token of the integrity and authenticity of the data in the DNS,
>> and the owner/admin of the node to which the DNS data points.
>
> So far, so good. This disconnect doesn't seem to have done the CA
> industry any harm, though.

The CA business -is- to serve as a "notary". They attest to the binding of the key to holder. To date, that's NOT what a zone admin does; he is attesting that it's HIS key, that it is HIS record in HIS database. Just because he has sold the right to use it to someone else, it is still his database and his data. Unless of course Nominet (for example) is now going to allow client-driven dynamic updates - where the clients are in complete control of their data. (that's closer to James' assertion that he owns/controls his domain name)

>> Remember that while you may control your forward name (and not many
>> people actually run their own DNS servers) it is less likely that you
>> run your address maps - and for the paranoid, you would want to ensure
>> the forward and reverse zones are signed and at the intersection,
>> there is a common data element which you can use.
>
> Non sequitur, plus I can't see why paranoia would prompt me to want to
> do this? What does it prove?

The argument is, again, to James' assertion that he owns his domain name. In point of fact, every node has at least two "names" in the DNS, the forward (which gets most of the attention) and the reverse - which is nearly always controlled by your ISP. DNSSEC validation along one path in the DNS graph is reassuring (or so it is claimed). I posit that validation over two, generally non-overlapping administrative spheres of influence in the DNS graph would give a higher level of assurance. Couple this with finding the identical x509 cert at the origin of the validation chain for both paths - and I think I have a much higher confidence that I am actually going to be sending packets to the "right" node.

> Also, PTR records are only supposed to point to "primary domain
> names". Since it is common for hosts to have many names resolving to
> the same IP address, by definition most of these will not correspond
> to the reverse lookup.

Er... Allow me the option of disbelieving your assertion. PTR records can and do point to multiple names. Some narrow implementations have assumed that there will only be a single data element and this myth - that PTRs only point to a single name - is and has been spread widely.

>> To do what you want, you might consider using the CERT-rr, using the
>> DNS to distribute host-specific keys/certs. And to ensure that the
>> data in the DNS was not tampered with, using DNSSEC signed zones with
>> CERT-rr's would not be a bad thing. In fact, that's what we are
>> testing.
>
> Who is "we" and what exactly are you testing?

We is USMIR, the registry for .UM - www.nic.um

--bill
Re: How is DNSSEC
[EMAIL PROTECTED] wrote:
> On Fri, Mar 21, 2008 at 08:52:07AM +1000, James A. Donald wrote:
>> From time to time I hear that DNSSEC is working fine, and on examining
>> the matter I find it is "working fine" except that
>>
>> Seems to me that if DNSSEC is actually working fine, I should be able
>> to provide an authoritative public key for any domain name I control,
>> and should be able to obtain such keys for other domain names, and use
>> such keys for any purpose, not just those purposes envisaged in the
>> DNSSEC specification. Can I? It is not apparent to me that I can.
>
> actually, the DNSSEC specification -used- to support keys for "any
> purpose", and in theory you could use DNSSEC keys in that manner.
> However a bit of careful thought suggests that there is potential
> disconnect btwn the zone owner/admin who creates/distributes the keys
> as a token of the integrity and authenticity of the data in the DNS,
> and the owner/admin of the node to which the DNS data points.

So far, so good. This disconnect doesn't seem to have done the CA industry any harm, though.

> Remember that while you may control your forward name (and not many
> people actually run their own DNS servers) it is less likely that you
> run your address maps - and for the paranoid, you would want to ensure
> the forward and reverse zones are signed and at the intersection,
> there is a common data element which you can use.

Non sequitur, plus I can't see why paranoia would prompt me to want to do this? What does it prove?

Also, PTR records are only supposed to point to "primary domain names". Since it is common for hosts to have many names resolving to the same IP address, by definition most of these will not correspond to the reverse lookup.

> To do what you want, you might consider using the CERT-rr, using the
> DNS to distribute host-specific keys/certs. And to ensure that the
> data in the DNS was not tampered with, using DNSSEC signed zones with
> CERT-rr's would not be a bad thing. In fact, that's what we are
> testing.

Who is "we" and what exactly are you testing?

Cheers,

Ben.

--
http://www.apache-ssl.org/ben.html       http://www.links.org/

"There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit." - Robert Woodruff
Re: How is DNSSEC
James A. Donald wrote:
> From time to time I hear that DNSSEC is working fine, and on examining
> the matter I find it is "working fine" except that
>
> Seems to me that if DNSSEC is actually working fine, I should be able
> to provide an authoritative public key for any domain name I control,
> and should be able to obtain such keys for other domain names, and use
> such keys for any purpose, not just those purposes envisaged in the
> DNSSEC specification. Can I? It is not apparent to me that I can.

There are two major issues with DNSSEC right now. Neither of them is that it isn't working.

Firstly, the root is not signed. This means there's no easy way for the relying party to establish the correctness of the key on your domain.

Secondly, although we have DNS servers and resolvers, software that uses DNS is largely unaware of DNSSEC and so has absolutely no idea what to do when one of the many possible cryptographic/proof failures occurs. Very little thought has gone into what should be done, even in software that is aware.

That said, if you want to distribute keys with DNSSEC, then RFC 4398 standardises ways to do a number of them, and can be extended to cover more. RFC 4255 gives you SSH host keys, too.

If you want to do something ad hoc, then there are always TXT records, though I guarantee this will make the DNS people hate you forever.

Cheers,

Ben.

--
http://www.apache-ssl.org/ben.html       http://www.links.org/

"There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit." - Robert Woodruff
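As a sketch of the RFC 4255 mechanism mentioned above: an SSHFP record publishes a fingerprint of an SSH host key in the DNS as "<host> IN SSHFP <algorithm> <fp-type> <hex digest>". Generating the record text for a raw public-key blob might look like this (the host name and key blob are made up; fp-type 1 is SHA-1):

```python
import hashlib

def sshfp_record(host: str, algorithm: int, key_blob: bytes) -> str:
    # RFC 4255 fingerprint type 1 is the SHA-1 digest of the raw
    # public-key blob; algorithm 1 is RSA, 2 is DSA.
    digest = hashlib.sha1(key_blob).hexdigest()
    return f"{host} IN SSHFP {algorithm} 1 {digest}"

# Hypothetical raw host-key blob, just for illustration:
record = sshfp_record("server.example.com", 1, b"ssh-rsa-key-blob")
assert record.startswith("server.example.com IN SSHFP 1 1 ")
```

An SSH client that trusts the DNSSEC chain can then match the fingerprint it computes from the server's offered key against this record, instead of relying on a first-use prompt.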
Re: How is DNSSEC
* James A. Donald:
> From time to time I hear that DNSSEC is working fine, and on examining
> the matter I find it is "working fine" except that
>
> Seems to me that if DNSSEC is actually working fine, I should be able
> to provide an authoritative public key for any domain name I control,
> and should be able to obtain such keys for other domain names, and use
> such keys for any purpose, not just those purposes envisaged in the
> DNSSEC specification. Can I? It is not apparent to me that I can.

DNS is hierarchical. Nobody wants the DoD (who are traditionally quite good at keeping secret data) or any other institution to keep keys at important positions in the hierarchy. And nobody wants to keep irreplaceable keys, either, which makes introduction at levels below the DNS root difficult. This is not a problem with the browser PKI because it's possible to replace root certificates with a software update (which can be automated in many cases).

And as Bill pointed out, it's not possible to use the DNS keys directly. However, you can bootstrap another key based on data from DNS. This even works without DNSSEC. DKIM does that, for instance.
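The DKIM bootstrap mentioned above works by publishing the signing key as a TXT record at a well-known name derived from the signature's selector ("s=") and domain ("d=") tags; constructing that query name is straightforward:

```python
# DKIM (RFC 6376) looks up the sender's public key as a TXT record at a
# name built from the DKIM-Signature header's selector and domain tags.
# The selector and domain below are illustrative values.

def dkim_key_name(selector: str, domain: str) -> str:
    return f"{selector}._domainkey.{domain}"

assert dkim_key_name("mail2008", "example.com") == \
    "mail2008._domainkey.example.com"
```

The key itself is authenticated only as well as the DNS answer is: with plain DNS you get an opportunistic binding, and with DNSSEC-signed zones the same lookup would carry a cryptographic chain of trust.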
Re: How is DNSSEC
James A. Donald wrote:
> From time to time I hear that DNSSEC is working fine, and on examining
> the matter I find it is "working fine" except that

DNSSEC is "working fine" as a technology. However, it is worth remembering that it works based on digitally signing an entire zone - and, the state of the world being what it is, most people prohibit zone transfers (xfer), so any other technology that would allow a zone walk is not going to be deployed. As far as I can tell, this is a basic design flaw, so it isn't going to be rectified anytime soon.