Re: Warning! New cryptographic modes!
On May 11, 2009, at 7:06 PM, silky wrote:

> How about this. When you modify a file, the backup system attempts to see if it can summarise your modifications into a file that is, say, less than 50% of the file size. So if you modify a 10 KB text file and change only the first word, it will encrypt that component (the word you changed) on its own, and upload that separately from the file. On the other end, it will have a system for merging these changes when a file is decrypted. It will actually be prepared and decrypted (so all operations of this nature must be done *within* the system). Then, when it reaches a critical point in file changes, it can just upload the entire file anew, replacing its base copy and all the parts. Slightly more difficult with binary files where the changes are spread out over the file, but if these changes can still be summarised relatively trivially, it should work.

To do this, the backup system needs access to both the old and new versions of the file. rsync does, because it is inherently syncing two copies, usually on two different systems - and we're doing this exactly because we *want* that second copy. If you want the delta computation to be done locally, you need two local copies of the file - doubling your disk requirements. In principle, you could do this only at file-close time, so that you'd only need such a copy for files that are currently being written or backed up. What happens if the system crashes after a file is updated but before you can back it up? Do you need full data logging?

Victor Duchovni suggested using snapshots, which also give you the effect of a local copy - but sliced differently, as it were, into blocks written to the file system over some defined period of time. Very useful, but both it and any other mechanism must sometimes deal with worst cases - an erase of the whole disk, for example, or a single file that fills all or most of the disk.
-- Jerry - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com
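The local delta computation discussed above (with both the old and new versions of the file on hand) can be sketched as follows. This is the naive fixed-offset comparison, not real rsync; the block size and names are illustrative:

```python
import hashlib

BLOCK = 4096  # illustrative block size

def block_hashes(data: bytes, block: int = BLOCK):
    """SHA-256 of each fixed-size block."""
    return [hashlib.sha256(data[i:i + block]).digest()
            for i in range(0, len(data), block)]

def changed_blocks(old: bytes, new: bytes, block: int = BLOCK):
    """Return (index, bytes) pairs for blocks of `new` that differ from `old`.

    An insertion near the start shifts every later block, which is exactly
    why rsync adds a rolling checksum so matching blocks can be re-found
    at arbitrary offsets.
    """
    old_h = block_hashes(old, block)
    return [(i, new[i * block:(i + 1) * block])
            for i, h in enumerate(block_hashes(new, block))
            if i >= len(old_h) or h != old_h[i]]

old = b"A" * 8192
new = b"A" * 4096 + b"B" * 4096   # only the second block was edited
print([i for i, _ in changed_blocks(old, new)])  # → [1]
```

Only the changed blocks would then be encrypted and uploaded; the unchanged ones need not leave the machine at all.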
Re: Warning! New cryptographic modes!
On Tue, May 12, 2009 at 10:39 AM, Jerry Leichter <leich...@lrw.com> wrote:

> On May 11, 2009, at 8:27 PM, silky wrote:
>> The local version needs access to the last committed file (to compare the changes) and the server version only keeps the 'base' file and the 'changes' subsets.
>
> a) What's a "committed" file?

I'm thinking of an SVN-style backup system. When you're done with all your editing, you just commit the changes to go into the backup. As part of the commit operation, it decides on the amount of changes you've done and whether that warrants an entire re-encrypt and upload, or whether a segment can be done.

> b) As in my response to Victor's message, note that you can't keep a base plus changes forever - eventually you need to resend the base. And you'd like to do that efficiently.

As discussed in my original post, the base is reset when the changes are greater than 50% of the size of the original file.

>> So yes, it does increase the amount of space required locally (not a lot though, unless you are changing often and not committing),
>
> Some files change often. There are files that go back and forth between two states. (Consider a directory file that contains a lock file for a program that runs frequently.) The deltas may be huge, but they all collapse!

In that specific case - say, an MS Access lock file - it can obviously be ignored by the entire backup process.

>> and will also increase the amount of space required on the server by 50%, but you need to pay the cost somewhere, and I think disk space is surely the cheapest cost to pay.
>
> A large percentage increase - and why 50%? - scales up with the amount of storage. There are, and will for quite some time continue to be, applications that are limited by the amount of disk one can afford to throw at them. Such an approach drops the maximum size of file the application can deal with by 50%.

There's no reason for 50%; it can (and should) be configurable. The point was to set the time at which the base file would be reset.
> I'm not sure what cost you think needs to be paid here. Ignoring encryption, an rsync-style algorithm uses little local memory (I think the standard program uses more than it has to because it always works on whole files; it could subdivide them) and transfers close to the minimum you could possibly transfer.

The cost of not transferring an entirely new encrypted file just because of a minor change.

-- noon silky
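The commit-time decision silky describes (send a small encrypted delta, or reset the base once accumulated changes pass a configurable threshold) can be sketched as follows; the function and its names are illustrative, not taken from any real backup tool:

```python
def plan_upload(base_len: int, pending_delta_len: int, new_delta_len: int,
                threshold: float = 0.5):
    """Decide whether to send one more delta or re-upload the whole file.

    Implements the configurable rule from the thread: once the accumulated
    deltas exceed `threshold` of the base file's size, re-encrypt and
    re-upload the full file, discarding the stored parts.
    """
    if base_len == 0 or (pending_delta_len + new_delta_len) > threshold * base_len:
        return "reset_base"
    return "send_delta"

print(plan_upload(10_000, 2_000, 1_000))  # deltas total 30% of base
print(plan_upload(10_000, 4_500, 1_000))  # deltas total 55% of base
```

The first call stays under the 50% default and sends a delta; the second crosses it and triggers a base reset.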
Re: Warning! New cryptographic modes!
On May 11, 2009, at 7:08 PM, Matt Ball wrote:

> Practically, to make this work, you'd want to look at the solutions that support 'data deduplication' (see http://en.wikipedia.org/wiki/Data_deduplication). These techniques typically break the data into variable-length 'chunks', and de-duplicate by computing the hash of these chunks and comparing to the hashes of chunks already stored in the system. These chunks provide a useful encryption unit, but they're still somewhat susceptible to traffic analysis. The communication should additionally be protected by SSH, TLS, or IPsec to reduce the exposure to traffic analysis.

It's interesting that data-dedup-friendly modes inherently allow an attacker to recognize duplicated plaintext based only on the ciphertext - that's their whole point. But this is exactly the primary weakness of ECB mode. It's actually a bit funny: ECB mode lets you recognize repetitions of what are commonly small, probably semantically meaningless, pieces of plaintext. Data-dedup-friendly modes let you recognize repetitions of what are commonly large chunks of semantically meaningful plaintext. Yet we reject ECB as insecure but accept the insecurity of data-dedup-friendly modes because they are so useful!

-- Jerry
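The variable-length chunking Matt Ball describes can be sketched with a toy rolling hash. Real deduplicating systems typically use Rabin fingerprints; the boundary mask and minimum chunk size below are illustrative constants, not from any particular product:

```python
import hashlib

MASK = (1 << 11) - 1   # chunk boundary when the low 11 bits of the hash are zero
MIN_CHUNK = 256        # avoid degenerate tiny chunks

def chunk(data: bytes):
    """Split `data` into variable-length, content-defined chunks.

    A simple shifting hash stands in for a Rabin fingerprint: a chunk
    ends wherever the hash of the most recent bytes hits the boundary
    condition, so boundaries depend on content, not on absolute offsets.
    """
    chunks, start, h = [], 0, 0
    for i, byte in enumerate(data):
        h = ((h << 1) + byte) & 0xFFFFFFFF   # older bytes shift out of range
        if i - start + 1 >= MIN_CHUNK and (h & MASK) == 0:
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks

def dedup_store(chunks, store=None):
    """Index chunks by SHA-256; a repeated chunk is stored only once."""
    store = {} if store is None else store
    refs = [hashlib.sha256(c).hexdigest() for c in chunks]
    for key, c in zip(refs, chunks):
        store.setdefault(key, c)
    return refs, store
```

Because boundaries are content-defined, an insertion early in the data perturbs only nearby chunks; later chunks re-synchronize and still deduplicate - which is also exactly the ciphertext-level repetition leak discussed above.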
Re: Warning! New cryptographic modes!
How about this. When you modify a file, the backup system attempts to see if it can summarise your modifications into a file that is, say, less than 50% of the file size. So if you modify a 10 KB text file and change only the first word, it will encrypt that component (the word you changed) on its own, and upload that separately from the file. On the other end, it will have a system for merging these changes when a file is decrypted. It will actually be prepared and decrypted (so all operations of this nature must be done *within* the system). Then, when it reaches a critical point in file changes, it can just upload the entire file anew, replacing its base copy and all the parts. Slightly more difficult with binary files where the changes are spread out over the file, but if these changes can still be summarised relatively trivially, it should work.

-- silky
Fwd: cryptohippie: the electronic police state ranking 2008
Begin forwarded message:

From: Eugen Leitl eu...@leitl.org
Date: May 12, 2009 11:51:13 AM GMT-04:00
To: i...@postbiota.org, cypherpu...@al-qaeda.net, t...@postbiota.org
Subject: cryptohippie: the electronic police state ranking 2008

https://secure.cryptohippie.com/pubs/EPS-2008.pdf

The Electronic Police State: 2008 National Rankings

Most of us are aware that our governments monitor nearly every form of electronic communication. We are also aware of private companies doing the same. This strikes most of us as slightly troubling, but very few of us say or do much about it. There are two primary reasons for this:

1. We really don't see how it is going to hurt us. Mass surveillance is certainly a new, odd, and perhaps an ominous thing, but we just don't see a complete picture or a smoking gun.
2. We are constantly surrounded with messages that say, "Only crazy people complain about the government."

However, the biggest obstacle to our understanding is this: the usual image of a "police state" includes secret police dragging people out of their homes at night, with scenes out of Nazi Germany or Stalin's USSR. The problem with these images is that they are horribly outdated. That's how things worked during your grandfather's war - that is not how things work now. An electronic police state is quiet, even unseen. All of its legal actions are supported by abundant evidence. It looks pristine.

An electronic police state is characterized by this: state use of electronic technologies to record, organize, search and distribute forensic evidence against its citizens. The two crucial facts about the information gathered under an electronic police state are these:

1. It is criminal evidence, ready for use in a trial.
2. It is gathered universally and silently, and only later organized for use in prosecutions.
In an electronic police state, every surveillance camera recording, every email you send, every Internet site you surf, every post you make, every check you write, every credit card swipe, every cell phone ping... are all criminal evidence, and they are held in searchable databases for a long, long time. Whoever holds this evidence can make you look very, very bad whenever they care enough to do so. You can be prosecuted whenever they feel like it - the evidence is already in their database.

Perhaps you trust that your ruler will only use his evidence archives to hurt bad people. Will you also trust his successor? Do you also trust all of his subordinates, every government worker and every policeman? And, if some leader behaves badly, will you really stand up to oppose him or her? Would you still do it if he had all the emails you sent when you were depressed? Or if she has records of every porn site you've ever surfed? Or if he knows every phone call you've ever made? Or if she knows everyone you've ever sent money to? Such a person would have all of this and more - in the form of court-ready evidence - sitting in a database, waiting to be organized at the touch of a button.

This system hasn't yet reached its full shape, but all of the basics are in place and it is not far from complete in some places. It is too late to prevent this - it is here. Our purpose in producing this report is to let people know that their liberty is in jeopardy and to help them understand how it is being undermined.

OUR RANKINGS

Firstly, we are not measuring government censorship of Internet traffic or police abuses, as legitimate as these issues may be. And we are not including evidence gathering by traditional, honest police work in any of the categories below. (That is, searches pursuant to honestly obtained warrants - issued by an independent judge, and only after the careful examination of evidence.)
The seventeen factors we included in these rankings are:

- Daily Documents: Requirement of state-issued identity documents and registration.
- Border Issues: Inspections at borders; searching computers; demanding decryption of data.
- Financial Tracking: The state's ability to search and record all financial transactions: checks, credit card use, wires, etc.
- Gag Orders: Criminal penalties if you tell someone the state is searching their records.
- Anti-Crypto Laws: Outlawing or restricting cryptography.
- Constitutional Protection: A lack of constitutional protections for the individual, or the overriding of such protections.
- Data Storage Ability: The ability of the state to store the data they gather.
- Data Search Ability: The ability to search the data they gather.
- ISP Data Retention: States forcing Internet Service Providers to save detailed records of all their customers' Internet usage.
- Telephone Data Retention: States forcing telephone companies to record and save records of all their customers' telephone usage.
- Cell Phone Records: States forcing cellular telephone companies to record and save
Re: Warning! New cryptographic modes!
On Mon, May 11, 2009 at 2:54 PM, Jerry Leichter <leich...@lrw.com> wrote:

> On May 11, 2009, at 2:16 PM, Roland Dowdeswell wrote:
>> On 1241996128 seconds since the Beginning of the UNIX epoch, Jerry Leichter wrote:
>> I'm not convinced that a stream cipher is appropriate here because if you change the data then you'll reveal the plaintext.
>
> Well, XOR of old and new plaintext. But point taken.

Sounds like this might actually be an argument for a stream cipher with a more sophisticated combiner than XOR. (Every time I've suggested that, the response has been "That doesn't actually add any strength, so why bother?")
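The leak Roland and Jerry are discussing is easy to demonstrate: re-encrypting a modified file under the same keystream hands an eavesdropper the XOR of the old and new plaintexts. The keystream below is a toy HMAC-SHA256 counter construction, standing in for whichever stream cipher is actually used; the leak does not depend on that choice:

```python
import hashlib, hmac

def keystream(key: bytes, n: int) -> bytes:
    """Toy keystream: HMAC-SHA256 in counter mode (a stand-in for any
    stream cipher; the leak demonstrated below is cipher-independent)."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hmac.new(key, ctr.to_bytes(8, "big"), hashlib.sha256).digest()
        ctr += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"sixteen byte key"
old_pt = b"pay Alice $100.00"
new_pt = b"pay Alice $999.99"   # same file, minor edit

ks = keystream(key, len(old_pt))
old_ct, new_ct = xor(old_pt, ks), xor(new_pt, ks)

# Without knowing the key, anyone who saw both versions of the encrypted
# file learns the XOR of the two plaintexts:
leak = xor(old_ct, new_ct)
assert leak == xor(old_pt, new_pt)
# Zero bytes in `leak` mark exactly which plaintext bytes were unchanged.
print([i for i, v in enumerate(leak) if v != 0])
```

This is why each re-encryption must use fresh key or IV material, and why a combiner more complicated than XOR is periodically proposed.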
Re: Warning! New cryptographic modes!
Jerry Leichter wrote:

> Consider first just updates. Then you have exactly the same problem as for disk encryption: you want to limit the changes needed in the encrypted image to more or less the size of the change to the underlying data. Generally, we assume that the size of the encrypted change for a given contiguous range of changed underlying bytes is bounded roughly by rounding the size of the changed region up to a multiple of the blocksize. This does reveal a great deal of information, but there isn't any good alternative.

You specified a good alternative: encrypted synchronization of a file versioning system. Git runs under SSH. Suppose the files are represented as the original values of the files, plus deltas. If the originals are encrypted, and the deltas encrypted, no information is revealed other than the size of the change. Git is scriptable; write a script to do the job.
Re: Warning! New cryptographic modes!
Jerry Leichter wrote:

> To support insertions or deletions of full blocks, you can't make the block encryption depend on the block position in the file, since that's subject to change. For a disk encryptor that can't add data to the file, that's a killer; for an rsync pre-processor, it's no big deal - just store the necessary key-generation or tweak data with each block. This has no effect on security - the position data was public anyway.

That is basically what I'm doing in adding encryption to ZFS [1]. Each ZFS block in an encrypted dataset is encrypted with a separate IV and has its own AES-CCM MAC, both of which are stored in the block pointer (the whole encrypted block is then checksummed with an unkeyed SHA-256, which forms a Merkle tree).

> To handle smaller inserts or deletes, you need to ensure that the underlying blocks get back into sync. The gzip technique I mentioned earlier works. Keep a running cryptographically secure checksum over the last blocksize bytes.

ZFS already supports gzip compression, but it does so on ZFS blocks, not on files, so it doesn't need this trick. The downside is that we don't get as good a compression ratio as when you can look at the whole file.

ZFS has its own replication system in its send/recv commands (which take a ZFS dataset and produce either a full object change list or a delta between snapshots). My plan for this is to be able to send the per-block changes as ciphertext, so that we don't have to decrypt and re-encrypt the data. Note this doesn't help rsync, though, since the stream format is specific to ZFS.

[1] http://opensolaris.org/os/project/zfs-crypto/

-- Darren J Moffat
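The per-block scheme Darren describes (a fresh IV and a MAC for each block, stored alongside the ciphertext in the block pointer) can be sketched as follows. Standard-library HMAC-SHA256 stands in here for both the AES-CCM encryption and the MAC that ZFS actually uses, so this shows only the shape of the design, not the real construction:

```python
import hashlib, hmac, os

def _stream(key: bytes, iv: bytes, n: int) -> bytes:
    # Keystream via HMAC-SHA256 counter mode -- a stdlib stand-in for AES-CCM.
    out, ctr = b"", 0
    while len(out) < n:
        out += hmac.new(key, iv + ctr.to_bytes(4, "big"), hashlib.sha256).digest()
        ctr += 1
    return out[:n]

def encrypt_block(key: bytes, block: bytes):
    """Encrypt one block under its own fresh IV; return the (IV, MAC,
    ciphertext) triple that would live with the block pointer."""
    iv = os.urandom(12)
    ct = bytes(p ^ k for p, k in zip(block, _stream(key, iv, len(block))))
    mac = hmac.new(key, iv + ct, hashlib.sha256).digest()
    return iv, mac, ct

def decrypt_block(key: bytes, iv: bytes, mac: bytes, ct: bytes) -> bytes:
    """Verify the per-block MAC before decrypting."""
    if not hmac.compare_digest(mac, hmac.new(key, iv + ct, hashlib.sha256).digest()):
        raise ValueError("block MAC check failed")
    return bytes(c ^ k for c, k in zip(ct, _stream(key, iv, len(ct))))

key = os.urandom(32)
iv, mac, ct = encrypt_block(key, b"one ZFS-sized block of data")
assert decrypt_block(key, iv, mac, ct) == b"one ZFS-sized block of data"
```

Because each block carries its own IV and MAC, blocks can move, be inserted, or be replicated as ciphertext without re-keying the rest of the file, which is the property both posts above are after.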
Re: Warning! New cryptographic modes!
On Tue, May 12, 2009 at 10:22 AM, Jerry Leichter <leich...@lrw.com> wrote:

> On May 11, 2009, at 7:06 PM, silky wrote:
>> How about this. When you modify a file, the backup system attempts to see if it can summarise your modifications into a file that is, say, less than 50% of the file size. So if you modify a 10 KB text file and change only the first word, it will encrypt that component (the word you changed) on its own, and upload that separately from the file. On the other end, it will have a system for merging these changes when a file is decrypted. It will actually be prepared and decrypted (so all operations of this nature must be done *within* the system). Then, when it reaches a critical point in file changes, it can just upload the entire file anew, replacing its base copy and all the parts. Slightly more difficult with binary files where the changes are spread out over the file, but if these changes can still be summarised relatively trivially, it should work.
>
> To do this, the backup system needs access to both the old and new versions of the file. rsync does, because it is inherently syncing two copies, usually on two different systems - and we're doing this exactly because we *want* that second copy.

The local version needs access to the last committed file (to compare the changes) and the server version only keeps the 'base' file and the 'changes' subsets. So yes, it does increase the amount of space required locally (not a lot though, unless you are changing often and not committing), and will also increase the amount of space required on the server by 50%, but you need to pay the cost somewhere, and I think disk space is surely the cheapest cost to pay.

> If you want the delta computation to be done locally, you need two local copies of the file - doubling your disk requirements. In principle, you could do this only at file-close time, so that you'd only need such a copy for files that are currently being written or backed up.
> What happens if the system crashes after it's updated but before you can back it up? Do you need full data logging?

I think this is resolved by saving only the last committed version.

> Victor Duchovni suggested using snapshots, which also give you the effect of a local copy - but sliced differently, as it were, into blocks written to the file system over some defined period of time. Very useful, but both it and any other mechanism must sometimes deal with worst cases - an erase of the whole disk, for example, or a single file that fills all or most of the disk.
>
> -- Jerry

-- noon silky
Re: Solving password problems one at a time, Re: The password-reset paradox
On 05/09/09 07:33, Jerry Leichter wrote:

> I had a discussion with a guy at a company that was proposing to create secure credit cards by embedding a chip in the card and replacing some number of digits with an LCD display. The card would generate a unique card number for you when needed. They actually had the technology working - the card was pretty much indistinguishable from any other. (Of course, how rugged it would be in typical environments is another question - but they claimed they had a solution.)

Deloitte staff trial Visa card with built-in OTP generator for IT access control:
http://www.finextra.com/fullstory.asp?id=20019

-- 40+ yrs virtualization experience (since Jan 68), online at home since Mar 1970
Re: Significance of Schnorr's Factoring Integers in Polynomial Time?
I have three brief comments.

1) The main theorem assumes that we can find a vector of length ≤ sqrt(2eπ) · n^b · λ_1. In general, this is not possible in polynomial time, especially for small b.
2) NEW ENUM takes time exponential in n unless b is very small, such that n^b is eliminated by rd(L).
3) GSA does not hold in general.

So, even if everything else checks out, the argument might break down here. We need to see the full result to say more. But in the meantime, let's not be afraid ;-)

Cheers,
Markus

On Monday, 11 May 2009 16:47:36, Ralf-Philipp Weinmann wrote:

> Wanna reply? -RPW
>
> -- Forwarded message --
> From: Francois Grieu fgr...@gmail.com
> Date: Sun, May 10, 2009 at 3:29 PM
> Subject: Significance of Schnorr's Factoring Integers in Polynomial Time?
> To: cryptography@metzdowd.com
>
> At the rump session of Eurocrypt 2009 (http://eurocrypt2009rump.cr.yp.to/), Claus P. Schnorr reportedly presented slides titled "Average Time Fast SVP and CVP Algorithms: Factoring Integers in Polynomial Time":
> http://eurocrypt2009rump.cr.yp.to/e074d37e10ad1ad227200ea7ba36cf73.pdf
>
> I hardly understand 1/4 of the mathematical notation used, and can't even be sure that the thing is not a (very well done) prank. Anyone on the list dare make a comment / risk an opinion?
>
> Francois Grieu

-- Markus Rückert
TU Darmstadt, Fachbereich Informatik
Hochschulstrasse 10, 64289 Darmstadt
Looking for a challenge? - http://www.latticechallenge.org
A Service to Prove You are Really You
On the Internet, nobody knows you're a dog, as the New Yorker cartoon famously said. But what if, while you are surfing, you want to prove your pedigree? Equifax, the big credit agency that already knows more about your flea count than you do, wants to help:
http://bits.blogs.nytimes.com/2009/05/19/a-service-to-prove-you-are-really-you/

Saqib
http://www.capital-punishment.us
Re: Warning! New cryptographic modes!
On May 11, 2009, at 8:27 PM, silky wrote:

> The local version needs access to the last committed file (to compare the changes) and the server version only keeps the 'base' file and the 'changes' subsets.

a) What's a "committed" file?

b) As in my response to Victor's message, note that you can't keep a base plus changes forever - eventually you need to resend the base. And you'd like to do that efficiently.

> So yes, it does increase the amount of space required locally (not a lot though, unless you are changing often and not committing),

Some files change often. There are files that go back and forth between two states. (Consider a directory file that contains a lock file for a program that runs frequently.) The deltas may be huge, but they all collapse!

> and will also increase the amount of space required on the server by 50%, but you need to pay the cost somewhere, and I think disk space is surely the cheapest cost to pay.

A large percentage increase - and why 50%? - scales up with the amount of storage. There are, and will for quite some time continue to be, applications that are limited by the amount of disk one can afford to throw at them. Such an approach drops the maximum size of file the application can deal with by 50%.

I'm not sure what cost you think needs to be paid here. Ignoring encryption, an rsync-style algorithm uses little local memory (I think the standard program uses more than it has to because it always works on whole files; it could subdivide them) and transfers close to the minimum you could possibly transfer.

If you want the delta computation to be done locally, you need two local copies of the file - doubling your disk requirements. In principle, you could do this only at file-close time, so that you'd only need such a copy for files that are currently being written or backed up. What happens if the system crashes after it's updated but before you can back it up? Do you need full data logging?
> I think this is resolved by saving only the last committed.

If a file isn't committed when closed, then you're talking about any commonly-used system.

-- Jerry
Re: Warning! New cryptographic modes!
I'd use a tweakable mode like EME-star (also EME*) that is designed for something like this. It would also work with 512-byte blocks.

Jon
visualizing modes of operation
http://www.cryptosmith.com/archives/621

--Steve Bellovin, http://www.cs.columbia.edu/~smb
[fc-announce] CF Workshop Proposals for FC10. Deadline: June 15, 2009
Begin forwarded message:

Resent-From: r...@unipay.nl
From: Pino Caballero pcaba...@ull.es
Date: May 15, 2009 7:02:54 AM GMT-04:00
Resent-To: fc-annou...@ifca.ai
To: pcaba...@ull.es
Subject: [fc-announce] CF Workshop Proposals for FC10. Deadline: June 15, 2009

We apologize in advance if you receive multiple copies of this CFP.

*** Financial Cryptography and Data Security 2010 ***
Tenerife, Canary Islands, Spain
25-29 January 2010
http://fc10.ifca.ai

CALL FOR WORKSHOP PROPOSALS

Proposals for workshops to be held at FC 2010 are solicited. A workshop can be full day or half day in length. Workshop proposals should include: (i) a title, (ii) a call for papers, (iii) a brief summary and justification, including how it would fit into the greater FC scope, (iv) a (tentative) Program Committee and its Chair, (v) one-paragraph bios for key organizers, and (vi) the expected (or previous, if the workshop has been held in previous years) number of submissions, participants and acceptance rates. Workshop proposals should be sent to fc10worksh...@ifca.ai.

IMPORTANT DATES
Workshop Submission: June 15, 2009
Workshop Notification: June 30, 2009

ORGANIZERS
General Chair: Pino Caballero-Gil, University of La Laguna
Local Chair: Candelaria Hernandez-Goya, University of La Laguna
Proceedings Chair: Reza Curtmola, New Jersey Institute of Technology
Poster Chair: Peter Williams, Stony Brook University

Local Committee:
Luisa Arranz Chacon, Alcatel Espana, S.A.
Candido Caballero Gil, University of La Laguna
Felix Herrera Priano, University of La Laguna
Belen Melian Batista, University of La Laguna
Jezabel Molina Gil, University of La Laguna
Jose Moreno Perez, University of La Laguna
Marcos Moreno Vega, University of La Laguna
Alberto Peinado Dominguez, University of Malaga
Alexis Quesada Arencibia, University of Las Palmas de Gran Canaria
Jorge Ramio Aguirre, Polytechnic University of Madrid
Victoria Reyes Sanchez, University of La Laguna

PROGRAM COMMITTEE
Program Chair: Radu Sion, Stony Brook University
Ross Anderson, University of Cambridge
Lucas Ballard, Google Inc.
Adam Barth, UC Berkeley
Luc Bouganim, INRIA Rocquencourt
Bogdan Carbunar, Motorola Labs
Ivan Damgard, Aarhus University
Ernesto Damiani, University of Milano
George Danezis, Microsoft Research
Sabrina de Capitani di Vimercati, University of Milano
Rachna Dhamija, Harvard University
Sven Dietrich, Stevens Institute of Technology
Roger Dingledine, The TOR Project
Josep Domingo-Ferrer, University of Rovira i Virgili
Stefan Dziembowski, University of Rome La Sapienza
Bernhard Esslinger, Siegen University
Simone Fischer-Hübner, Karlstad University
Amparo Fuster-Sabater, Instituto de Física Aplicada Madrid
Philippe Golle, Palo Alto Research Center
Dieter Gollmann, Technische Universitaet Hamburg-Harburg
Rachel Greenstadt, Drexel University
Markus Jakobsson, Palo Alto Research Center and Indiana University
Rob Johnson, Stony Brook University
Ton Kalker, HP Labs
Stefan Katzenbeisser, Technische Universität Darmstadt
Angelos Keromytis, Columbia University
Lars R. Knudsen, Technical University of Denmark
Wenke Lee, Georgia Tech
Arjen Lenstra, Ecole Polytechnique Federale de Lausanne (EPFL) and Alcatel-Lucent Bell Laboratories
Helger Lipmaa, Cybernetica AS
Javier Lopez, University of Malaga
Luigi Vincenzo Mancini, University of Rome La Sapienza
Refik Molva, Eurecom Sophia Antipolis
Fabian Monrose, University of North Carolina at Chapel Hill
Steven Murdoch, University of Cambridge
David Naccache, Ecole Normale Superieure (ENS)
David Pointcheval, Ecole Normale Superieure (ENS) and CNRS
Bart Preneel, Katholieke Universiteit Leuven
Josep Rifa Coma, Autonomous University of Barcelona
Ahmad-Reza Sadeghi, Ruhr-University Bochum
Angela Sasse, University College London
Vitaly Shmatikov, University of Texas at Austin
Miguel Soriano, Polytechnic University of Catalonia
Miroslava Sotakova, Aarhus University
Angelos Stavrou, George Mason University
Patrick Traynor, Georgia Tech
Nicholas Weaver, International Computer Science Institute Berkeley

The Financial Cryptography and Data Security Conference is organized by The International Financial Cryptography Association (IFCA).

___
fc-announce mailing list
fc-annou...@ifca.ai
http://mail.ifca.ai/mailman/listinfo/fc-announce