how to read information from RFID equipped credit cards
Nothing terribly new here -- a short interview with someone who bought an RFID credit card reader on eBay for $8 and demonstrates reading people's credit card information at short range with it. Still, it is interesting to see how trivial it is to do. http://www.boingboing.net/2008/03/19/bbtv-how-to-hack-an.html -- Perry E. Metzger [EMAIL PROTECTED] - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
RE: Firewire threat to FDE
Hagai Bar-El wrote on 18 March 2008 10:17: "All they need to do is make sure (through a user-controlled but default-on feature) that when the workstation is locked, new Firewire or PCMCIA devices cannot be introduced. That hard?" Yes, it is, without redesigning the PCI bus. A bus-mastering-capable device doesn't need any interaction with or acknowledgement from the host; it doesn't need any driver to be loaded and running; it just needs electrical connectivity in order to control the entire system. (I suppose you could disable the BAR mappings when you go to locked mode, but that's liable to mess up any integrated graphics chipset that uses system memory for the frame buffer, and you'd better not lock your terminal while your SCSI drives are in operation...) cheers, DaveK -- Can't think of a witty .sigline today
Re: NSA approves secure smart phone
Steven M. Bellovin wrote: http://www.gcn.com/online/vol1_no1/45946-1.html http://www.gdc4s.com/documents/D-SMEPED-6-1007_p21.pdf
Re: NSA approves secure smart phone
Steven M. Bellovin wrote: http://www.gcn.com/online/vol1_no1/45946-1.html http://www.afcea.org/signal/articles/templates/Signal_Article_Template.asp?articleid=1346zoneid=210
Re: Protection for quasi-offline memory nabbing
I've been thinking about similar issues. It seems to me that just destroying the key schedule is a big help -- enough bits will change in the key that data recovery using just the damaged key is hard, per comments in the paper itself.
Re: Firewire threat to FDE
On Wed, Mar 19, 2008 at 02:25:36PM -0400, Leichter, Jerry wrote: "[This has been thrashed out on other lists.] Just how would that help? As I understand it, Firewire and PCMCIA provide a way for a device to access memory directly. The OS doesn't have to do anything - in fact, it *can't* do anything." The OS can program the Firewire controller not to allow DMA. "The only possible protection here is at the hardware level: The external interface controller must be able to run in a mode which blocks externally-initiated memory transactions. Unfortunately, that may not be possible for some controllers. Sure, the rules for (say) SCSI might say that a target is only supposed to begin sending after a request from an initiator - but it would take a rather sophisticated state machine to make sure to match things up properly, especially on a multi-point bus." Isn't what you're describing here an IOMMU? David.
Re: Protection for quasi-offline memory nabbing
On Tue, Mar 18, 2008 at 09:46:45AM -0700, Jon Callas wrote: What operates like a block cipher on a large chunk? Tweakable modes like EME. Or as a non-patented alternative one could use the Bear/Lion constructions [1], which can encrypt arbitrary-size blocks at reasonably good speeds (depending on the performance characteristics of the stream cipher and hash function they are instantiated with). -Jack [1] http://www.cl.cam.ac.uk/~rja14/Papers/bear-lion.pdf
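For concreteness, here is a minimal sketch of the Bear construction from that paper: an unbalanced three-round Feistel network over a hash-sized left half and an arbitrary-size right half. The paper leaves the primitives as parameters; this sketch assumes HMAC-SHA256 for the keyed hash and a SHAKE-256 keystream as a stand-in for the stream cipher.

```python
import hashlib
import hmac

HASH_LEN = 32  # the left half is one hash output wide

def keyed_hash(key: bytes, data: bytes) -> bytes:
    # H in the construction; HMAC-SHA256 is an assumed instantiation.
    return hmac.new(key, data, hashlib.sha256).digest()

def keystream(seed: bytes, n: int) -> bytes:
    # S in the construction; a SHAKE-256 XOF stands in for a stream cipher.
    return hashlib.shake_256(seed).digest(n)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def bear_encrypt(k1: bytes, k2: bytes, block: bytes) -> bytes:
    # Three unbalanced Feistel rounds: keyed hash, stream, keyed hash.
    left, right = block[:HASH_LEN], block[HASH_LEN:]
    left = xor(left, keyed_hash(k1, right))
    right = xor(right, keystream(left, len(right)))
    left = xor(left, keyed_hash(k2, right))
    return left + right

def bear_decrypt(k1: bytes, k2: bytes, block: bytes) -> bytes:
    # The same rounds, undone in reverse order.
    left, right = block[:HASH_LEN], block[HASH_LEN:]
    left = xor(left, keyed_hash(k2, right))
    right = xor(right, keystream(left, len(right)))
    left = xor(left, keyed_hash(k1, right))
    return left + right

msg = b"a block of arbitrary size, as long as it exceeds one hash output"
ct = bear_encrypt(b"key one", b"key two", msg)
assert bear_decrypt(b"key one", b"key two", ct) == msg
```

Because each half is whitened by a function of the entire other half, flipping any single ciphertext bit garbles the whole decrypted block -- the property that makes large-block modes attractive in the setting this thread discusses.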
convergent encryption reconsidered
(This is an ASCII rendering of https://zooko.com/convergent_encryption_reconsidered.html .) Convergent Encryption Reconsidered Written by Zooko Wilcox-O'Hearn, documenting ideas due to Drew Perttula, Brian Warner, and Zooko Wilcox-O'Hearn, 2008-03-20. Abstract Convergent encryption is already known to suffer from a confirmation-of-a-file attack. We show that it also suffers from a learn-partial-information attack. The conditions under which this attack works cannot be predicted by a computer program nor by an unsophisticated user. We propose a solution which trades away part of the space-savings benefit of convergent encryption in order to prevent this new attack. Our defense also prevents the old attack. The issues are presented in the context of the Tahoe Least-AUthority Grid File System, a secure decentralized filesystem. Background -- The Confirmation-Of-A-File Attack Convergent encryption, also known as content hash keying, was first mentioned by John Pettitt on the cypherpunks list in 1996 [1], was used by Freenet [2] and Mojo Nation [3] in 2000, and was analyzed in a technical report by John Douceur et al. in 2002 [4]. Today it is used by at least Freenet, GNUnet [5], flud [6], and the Tahoe Least-AUthority Grid File System [7]. The remainder of this note will focus on the Tahoe LAUGFS filesystem. The use of convergent encryption in other systems may have different consequences than described here, because of the different use cases or added defenses that those systems may have. Convergent encryption is simply encrypting a file using a symmetric encryption key which is the secure hash of the plaintext of the file. Security engineers have always appreciated that convergent encryption allows an attacker to perform a confirmation-of-a-file attack -- if the attacker already knows the full plaintext of a file, then they can check whether a given user has a copy of that file.
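The scheme described above can be sketched in a few lines. This is an illustrative toy, not Tahoe's actual code: SHA-256 supplies the content-hash key, and a SHAKE-256 keystream stands in for the real symmetric cipher, since only the keying matters here.

```python
import hashlib

def convergent_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    # Content-hash keying: the key is the secure hash of the plaintext itself.
    key = hashlib.sha256(plaintext).digest()
    # Toy stand-in for a symmetric cipher: XOR with a SHAKE-256 keystream.
    stream = hashlib.shake_256(key).digest(len(plaintext))
    ciphertext = bytes(p ^ s for p, s in zip(plaintext, stream))
    return key, ciphertext

# Convergence: any two users encrypting the same file derive the same key and
# the same ciphertext, which enables deduplication -- and enables an attacker
# who knows a candidate plaintext to confirm that a user stored it.
k1, c1 = convergent_encrypt(b"a widely circulated document")
k2, c2 = convergent_encrypt(b"a widely circulated document")
assert (k1, c1) == (k2, c2)
```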
Whether this confirmation-of-a-file attack is a security or privacy problem depends on the situation. If you want to store banned books or political pamphlets without attracting the attention of an oppressive government, or store pirated copies of music or movies without attracting the attention of copyright holders, then the confirmation-of-a-file attack is potentially a critical problem. On the other hand, if the sensitive parts of your data are secret personal things like your bank account number, passwords, and so forth, then it isn't a problem. Or so I -- and as far as I know everyone else -- thought until March 16, 2008. I had planned to inform users of the current version of Tahoe -- version 0.9.0 -- about the confirmation-of-a-file attack by adding a FAQ entry: Q: Can anyone else see the contents of files that I have not shared? A: The files that you store are encrypted so that nobody can see a file's contents (unless of course you intentionally share the file with them). However, if the file that you store is something that someone has already seen, such as if it is a file that you downloaded from the Internet in the first place, then they can recognize it as being the same file when you store it, even though it is encrypted. So basically people can tell which files you are storing if they are publicly known files, but they can't learn anything about your own personal files. However, four days ago (on March 16, 2008) Drew Perttula and Brian Warner came up with an attack that shows that the above FAQ is wrong. The Learn-Partial-Information Attack They extended the confirmation-of-a-file attack into the learn-partial-information attack. In this new attack, the attacker learns some information from the file. This is done by trying possible values for unknown parts of a file and then checking whether the result matches the observed ciphertext.
For example, if you store a document such as a form letter from your bank, which contains a few pages of boilerplate legal text plus a few important parts, such as your bank account number and password, then an attacker who knows the boilerplate might be able to learn your account number and password. For another example, if you use Tahoe to back up your entire home directory, or your entire filesystem, then the attacker gains the opportunity to try to learn partial information about various files which are of predictable format but have sensitive fields in them, such as .my.cnf (MySQL configuration files), .htpasswd, .cvspass, .netrc, web browser cookie files, etc. In some cases, files such as these will contain too much entropy from the perspective of the attacker to allow this attack, but in other cases the attacker will know, or be able to guess, most of the fields, and brute-force the rest.
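A sketch of the attack, using a toy convergent cipher (SHA-256 content-hash key, SHAKE-256 keystream as a stand-in symmetric cipher); the form-letter template and the 4-digit PIN are invented for illustration:

```python
import hashlib

def convergent_encrypt(plaintext: bytes) -> bytes:
    # Toy convergent cipher: key = hash of plaintext, XOR keystream from key.
    key = hashlib.sha256(plaintext).digest()
    stream = hashlib.shake_256(key).digest(len(plaintext))
    return bytes(p ^ s for p, s in zip(plaintext, stream))

# The attacker knows the boilerplate; only the PIN is secret.
template = b"Dear customer, your telephone PIN is %s. Regards, Your Bank"
observed = convergent_encrypt(template % b"4831")  # what the victim stored

# Enumerate the small space of candidate fields and test each guess
# against the observed ciphertext.
recovered = next(pin for pin in (b"%04d" % n for n in range(10000))
                 if convergent_encrypt(template % pin) == observed)
assert recovered == b"4831"
```

Because the key is derived solely from the plaintext, every guess can be verified offline; this is exactly what the added-secret defense proposed later in this note is designed to stop.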
How is DNSSEC
From time to time I hear that DNSSEC is working fine, and on examining the matter I find it is working fine except that... It seems to me that if DNSSEC is actually working fine, I should be able to provide an authoritative public key for any domain name I control, should be able to obtain such keys for other domain names, and should be able to use such keys for any purpose, not just those purposes envisaged in the DNSSEC specification. Can I? It is not apparent to me that I can.
Fwd: [tahoe-dev] [p2p-hackers] convergent encryption reconsidered
Dear Perry Metzger: Jim McCoy asked me to forward this, as he is not subscribed to cryptography@metzdowd.com, so his posting bounced. Regards, Zooko Begin forwarded message: From: Jim McCoy [EMAIL PROTECTED] Date: March 20, 2008 10:56:58 PM MDT To: theory and practice of decentralized computer networks p2p- [EMAIL PROTECTED] Cc: [EMAIL PROTECTED], Cryptography cryptography@metzdowd.com Subject: Re: [tahoe-dev] [p2p-hackers] convergent encryption reconsidered Reply-To: [EMAIL PROTECTED] On Mar 20, 2008, at 12:42 PM, zooko wrote: "Security engineers have always appreciated that convergent encryption allows an attacker to perform a confirmation-of-a-file attack -- if the attacker already knows the full plaintext of a file, then they can check whether a given user has a copy of that file." The truth of this depends on implementation details, and it is an assertion that cannot be said to cover all or even most of the potential use-cases for this technique. This property only holds if it is possible for the attacker to link a selected ciphertext/file to a user. Systems which use convergent encryption to populate a shared storage pool _might_ have this property, but it is by no means a certainty; if a system is implemented correctly it is not necessary for users to expose their list of files in order to maintain this shared storage space. "basically people can tell which files you are storing if they are publicly known files, but they can't learn anything about your own personal files." It sounds like you have a design problem. If nodes that participate in the system can distinguish between publication and _re_-publication/replication (or whatever you want to call the random sharing of arbitrary data blocks for the purposes of increasing file availability) then you have a problem.
If these two activities are indistinguishable then an observer knows you have some blocks of a file but should not be able to distinguish between you publishing the blocks and the act of re-distribution to increase block availability. "The Learn-Partial-Information Attack [...]" A better title for this would be "Chosen-Plaintext Attack on Convergent Encryption", since what you are talking about is really a chosen-plaintext attack. To be a bit more specific, this is really just a version of a standard dictionary attack. The solution to this problem is to look at similar systems that suffered from dictionary attacks and see what solutions were created to solve the problem. The most widely known and studied version of this is the old crypt()/passwd problem. "For another example, if you use Tahoe to backup your entire home directory, or your entire filesystem, then the attacker gains the opportunity to try to learn partial information about various files which are of predictable format but have sensitive fields in them, such as .my.cnf (MySQL configuration files), .htpasswd, .cvspass, .netrc, web browser cookie files, etc." The problems with this imagined attack are twofold. I will use your Tahoe example for my explanations because I have a passing familiarity with the architecture. The first problem is isolating the original ciphertext in the pool of storage. If a file is encrypted using convergent encryption and then run through an error-correction mechanism to generate a number of shares that make up the file, an attacker first needs to be able to isolate these shares to regenerate the original ciphertext. FEC decoding speeds may be reasonably fast, but they are not without some cost. If the storage pool is sufficiently large and you are doing your job to limit the ability of an attacker to see which blocks are linked to the same FEC operation then the computational complexity of this attack is significantly higher than you suggest.
Assuming an all-seeing oracle who can watch every bit sent into the storage pool will get us around this first problem, but it does raise the bar for potential attackers. The second problem an attacker now faces is deciding what sort of format a file might have, what the low-entropy content might be, and then filling in values for these unknowns. If your block size is small (and I mean really small in the context of the sort of systems we are talking about) there might be only a few kilobits of entropy in the first couple of blocks of a file, so either a rainbow-table attack on known file formats or a dedicated effort to grab a specific file might be possible, but this is by no means certain. Increase your block size and this problem becomes much harder for the attacker. "Defense Against Both Attacks [...]" However, we can do better than that by creating a secret value and mixing that value into the per-file encryption key (so instead of symmetric_key = H(plaintext), you have symmetric_key = H(added_secret, plaintext), where the comma denotes an unambiguous encoding of both operands). This idea is due to Brian Warner and Drew Perttula.
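A minimal sketch of the proposed defense, assuming HMAC-SHA256 as the unambiguous encoding of H(added_secret, plaintext); the secret names and document contents are invented:

```python
import hashlib
import hmac

def convergent_key(plaintext: bytes, added_secret: bytes = b"") -> bytes:
    # symmetric_key = H(added_secret, plaintext); HMAC keys the hash with the
    # secret, which encodes the two operands unambiguously.
    return hmac.new(added_secret, plaintext, hashlib.sha256).digest()

doc = b"boilerplate text ... account number 12345678 ... more boilerplate"

# Plain convergent encryption: anyone who can guess the plaintext can
# recompute the key and confirm (or brute-force) it.
assert convergent_key(doc) == convergent_key(doc)

# With a secret mixed in, guesses can no longer be tested without the secret,
# at the cost of losing deduplication across different secrets.
alice_key = convergent_key(doc, added_secret=b"alice's secret")
bob_key = convergent_key(doc, added_secret=b"bob's secret")
assert alice_key != bob_key
```

Sharing the added secret per user preserves deduplication within that user's files; sharing it per pool preserves deduplication pool-wide while still shutting out outside attackers.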
Center for Cryptologic History at the National Security Agency: Schorreck Memorial Lecture
Professor Christopher Andrew to present Schorreck Memorial Lecture, April 7, 2008 at 2:00 PM, Laurel, MD The Center for Cryptologic History at the National Security Agency is pleased to announce a lecture by Professor Christopher Andrew of Cambridge University, author of numerous books on intelligence history. Professor Andrew will present the Second Henry F. Schorreck Memorial Lecture on April 7 at 2 PM. This annual series, named for the long-time NSA Historian, began in 2007 when Dr. David Kahn, author of The Codebreakers, presented a talk on The Future of the Past. Professor Andrew will speak on British Intelligence, the American Alliance, and the End of the British Empire. The lecture will be presented at the Kossiakoff Conference Center on the campus of the Johns Hopkins Applied Physics Laboratory (located just off U.S. Route 29 at Johns Hopkins Road -- information about the facility and directions can be found at: http://www.jhuapl.edu/). Admission is free, but advance registration is required. Those wishing to attend should send an e-mail to the Center for Cryptologic History at [EMAIL PROTECTED]. Please call the Center at 301-688-2336 if you have any questions or need additional information.
Re: Protection for quasi-offline memory nabbing
On Mar 19, 2008, at 6:56 PM, Steven M. Bellovin wrote: "I've been thinking about similar issues. It seems to me that just destroying the key schedule is a big help -- enough bits will change in the key that data recovery using just the damaged key is hard, per comments in the paper itself." It is. That's something everyone should consider doing. However, I was struck by the decay curves shown in the Cold Boot paper. The memory decays in an S-curve. Interestingly, both the smoothest S-curve and the sharpest were in the most recent equipment. This suggests that a relatively small object (like a 256-bit key) is apt to see little damage. If you followed the strategy of checking for single-bit errors, then double-bit, then triple-bit, I hypothesize that this simple strategy would be productive, because of that curve. (I also have a few hypotheses on which bits will go first. I hypothesize that a high-power bit surrounded by low-power ones will go first, and a low-power bit amongst high-power ones will go last. I also hypothesize that a large random area is reasonably likely to get an early single-bit error. My rationale is that the area as a whole is going to have relatively high power 'consumption' because it is random, but the random area is going to have local artifacts that will hasten a local failure. Assuming that 1 is high-power and 0 is low-power, you expect to see a bitstring of 00100 or 0001000 relatively often in a blob of 32 kbits (4KB) or 64 kbits (8KB), and those lonely ones will have a lot of stress on them.) Although my hypotheses are only that, and I have no experimental data, I think that using a large-block cipher mode like EME to induce a pseudo-random, maximally-fragile bit region is an excellent mitigation strategy. Now all we need is someone to do the work and write the paper. Jon
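The single-bit-then-double-bit search is easy to sketch. Everything here is hypothetical: a SHA-256 fingerprint of the true key stands in for whatever oracle confirms a candidate key (in practice, a known plaintext/ciphertext pair), and the 16-byte key and decay pattern are invented for the demo.

```python
import hashlib
from itertools import combinations

def recover_key(decayed: bytes, fingerprint: bytes, max_errors: int = 3):
    """Search the neighborhood of the decayed key image, trying all
    single-bit corrections first, then double-bit, and so on -- the
    strategy suggested above."""
    nbits = len(decayed) * 8
    for weight in range(max_errors + 1):
        for positions in combinations(range(nbits), weight):
            candidate = bytearray(decayed)
            for p in positions:
                candidate[p // 8] ^= 1 << (p % 8)  # flip one trial bit
            if hashlib.sha256(candidate).digest() == fingerprint:
                return bytes(candidate)
    return None

# Toy demo: a 16-byte key with two decayed bits is recovered quickly.
true_key = bytes(range(16))
fp = hashlib.sha256(true_key).digest()
damaged = bytearray(true_key)
damaged[0] ^= 0x01
damaged[9] ^= 0x80
assert recover_key(bytes(damaged), fp) == true_key
```

For a 256-bit key, the triple-bit stage is only C(256,3), about 2.8 million candidates, so if the decay really does follow the shallow early part of the S-curve this search is cheap.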
Re: convergent encryption reconsidered
|...Convergent encryption renders user files vulnerable to a |confirmation-of-a-file attack. We already knew that. It also |renders user files vulnerable to a learn-partial-information |attack in subtle ways. We didn't think of this until now. My |search of the literature suggests that nobody else did either. The phrase "obvious in retrospect" applies here: The vulnerability is closely related to the power of probable-plaintext attacks against systems that are thought to be vulnerable only to known-plaintext attacks. The general principle that needs to be applied is: In any cryptographic setting, if knowing the plaintext is sufficient to get some information out of the system, then it will also be possible to get information out of the system by guessing plaintext - and one must assume that there will be cases where such guessing is easy enough. -- Jerry
Re: How is DNSSEC
On Fri, Mar 21, 2008 at 08:52:07AM +1000, James A. Donald wrote: "From time to time I hear that DNSSEC is working fine, and on examining the matter I find it is working fine except that... Seems to me that if DNSSEC is actually working fine, I should be able to provide an authoritative public key for any domain name I control, and should be able to obtain such keys for other domain names, and use such keys for any purpose, not just those purposes envisaged in the DNSSEC specification. Can I? It is not apparent to me that I can." Actually, the DNSSEC specification -used- to support keys for any purpose, and in theory you could use DNSSEC keys in that manner. However, a bit of careful thought suggests that there is a potential disconnect between the zone owner/admin who creates/distributes the keys as a token of the integrity and authenticity of the data in the DNS, and the owner/admin of the node to which the DNS data points. Remember that while you may control your forward name (and not many people actually run their own DNS servers), it is less likely that you run your address maps - and for the paranoid, you would want to ensure the forward and reverse zones are signed and that at the intersection there is a common data element which you can use. To do what you want, you might consider using the CERT-rr, using the DNS to distribute host-specific keys/certs. And to ensure that the data in the DNS is not tampered with, using DNSSEC-signed zones with CERT-rr's would not be a bad thing. In fact, that's what we are testing.