Re: [Cryptography] Sha3
Because not being fast enough means you don't ship. If you don't ship, you didn't secure anything. Performance will in fact trump security; this is the empirical reality. There's some budget for performance loss, but we have lots and lots of slow functions. Fast is the game. (Now, whether my theory that we stuck with MD5 over SHA-1 because variable field lengths are harder to parse in C holds up -- that's an open question, to say the least.)

On Tuesday, October 1, 2013, Ray Dillinger wrote:

What I don't understand here is why the process of selecting a standard algorithm for cryptographic primitives is so highly focused on speed. We have machines that are fast enough now that while speed isn't a non-issue, it is no longer nearly as important as the selection process makes it. Our biggest problem now is security, not speed. I believe it's a bit silly to aim for the minimum acceptable security achievable within a speed budget, while experience shows that each new class of attacks is usually first seen against some limited form of the cipher, or found to be effective only if the cipher is not carried out to its full number of rounds.

Original message -- From: John Kelsey crypto@gmail.com -- Date: 09/30/2013 17:24 (GMT-08:00) -- To: cryptography@metzdowd.com -- Subject: [Cryptography] Sha3

If you want to understand what's going on wrt SHA-3, you might want to look at the NIST website, where we have all the slide presentations we have been giving over the last six months detailing our plans. There is a lively discussion going on at the hash forum on the topic. This doesn't make as good a story as the new SHA-3 being some hell spawn cooked up in a basement at Fort Meade, but it does have the advantage of some connection to reality.
You might also want to look at what the Keccak designers said about what the capacities should be, to us (they put their slides up) and later to various crypto conferences. Or not. --John
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography
1280-Bit RSA
All, I've got a perfect vs. good question. NIST is pushing RSA-2048. And I think we all agree that's probably a good thing. However, performance on RSA-2048 is too low for a number of real world uses. Assuming RSA-2048 is unavailable, is it worth taking the intermediate step of using RSA-1280? Or should we stick to RSA-1024? --Dan - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com
Re: [TIME_WARP] 1280-Bit RSA
Dan, I looked at the GNFS runtime and plugged a few numbers in. It seems RSA Security is using a more conservative constant of about 1.8 rather than the suggested 1.92299... See: http://mathworld.wolfram.com/NumberFieldSieve.html

Using 1.8, a 1024-bit RSA key is roughly equivalent to an 81-bit symmetric key. Plugging in 1280 yields 89 bits. I'm of the opinion that if you take action to improve security, you should get more than 8 additional bits for your efforts. For example, 1536 shouldn't be that much slower, but gives 96 bits of security.

Here's the actual data, in transactions per second, I'm getting for a sample app:

 512: 710.042382
1024: 187.187719
1280: 108.592265
1536:  73.314751
2048:  20.645645

2048 ain't happening. The relative difference between 1280 and 1536 is interesting, though. For posterity, here is a table using 1.8 for the GNFS constant:

RSA    Symmetric
 256     43.7
 512     59.8
 768     71.6
1024     81.2
1280     89.5
1536     96.8
2048    109.4
3072    129.9
4096    146.5
8192    195.1

Do other cracking mechanisms have similar curves to GNFS (with different constants)?
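The table above can be reproduced from the GNFS asymptotic run time L(n) = exp(c * (ln n)^(1/3) * (ln ln n)^(2/3)), converting the work factor to symmetric-key bits via log2. A minimal sketch (the function name is mine, not from the post):

```python
import math

def gnfs_symmetric_bits(modulus_bits: int, c: float = 1.8) -> float:
    """Approximate symmetric-key equivalent of an RSA modulus under GNFS.

    Uses log2 of L(n) = exp(c * (ln n)^(1/3) * (ln ln n)^(2/3)).
    c = 1.8 is the conservative constant discussed above; the textbook
    value is (64/9)^(1/3) ~ 1.923.
    """
    ln_n = modulus_bits * math.log(2)
    work_exponent = c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)
    return work_exponent / math.log(2)  # natural-log exponent -> bits

for k in (256, 512, 768, 1024, 1280, 1536, 2048, 3072, 4096, 8192):
    print(f"{k:5d}  {gnfs_symmetric_bits(k):6.1f}")
```

Running this reproduces the table to within rounding (1024 -> 81.2, 1536 -> 96.8, and so on).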
Re: OpenID/Debian PRNG/DNS Cache poisoning advisory
Eric Rescorla wrote: At Fri, 8 Aug 2008 17:31:15 +0100, Dave Korn wrote: Eric Rescorla wrote on 08 August 2008 16:06: At Fri, 8 Aug 2008 11:50:59 +0100, Ben Laurie wrote: However, since the CRLs will almost certainly not be checked, this means the site will still be vulnerable to attack for the lifetime of the certificate (and perhaps beyond, depending on user behaviour). Note that shutting down the site DOES NOT prevent the attack. Therefore mitigation falls to other parties. 1. Browsers must check CRLs by default.

Isn't this a good argument for blacklisting the keys on the client side? Isn't that exactly what "browsers must check CRLs" means in this context anyway? What alternative client-side blacklisting mechanism do you suggest?

It's easy to compute all the public keys that will be generated by the broken PRNG. The clients could embed that list and refuse to accept any certificate containing one of them. So this is distinct from CRLs in that it doesn't require knowing which servers have which cert...

Funnily enough, I was just working on this -- and found that we'd end up adding a couple megabytes to every browser. #DEFINE NONSTARTER. I am curious about the feasibility of a large Bloom filter that fails back to online checking, though. This has side effects, but perhaps they can be made statistically very unlikely without blowing out the size of a browser. Updating the filter could then be something we do on a 24-hour autoupdate basis. Doing either this, or doing revocation checking over DNS (seriously), is not necessarily a bad idea. We need to do better than we have been.
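The fail-back-to-online-checking idea sketches out roughly like this (a hypothetical, minimal Bloom filter; real deployments would tune the bit-array size and hash count, and the Debian keyspace size here is a stand-in):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: a 'no' answer is definitive, a 'yes' may be a
    false positive -- exactly the shape needed to fail back to an online
    check only rarely."""

    def __init__(self, size_bits: int = 1 << 20, hashes: int = 7):
        self.size = size_bits
        self.hashes = hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: bytes):
        # Derive k positions from SHA-256 with distinct prefixes.
        for i in range(self.hashes):
            h = hashlib.sha256(i.to_bytes(4, "big") + item).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def maybe_contains(self, item: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

# Load the weak-key fingerprints once; ship the filter with the browser.
weak = BloomFilter()
for n in range(32768):  # stand-in for the enumerable broken-PRNG keyspace
    weak.add(b"weak-key-%d" % n)

def online_check(fingerprint: bytes) -> bool:
    return False  # placeholder for a CRL/OCSP-style lookup

def key_is_acceptable(fingerprint: bytes) -> bool:
    if not weak.maybe_contains(fingerprint):
        return True  # definitely not blacklisted: no network traffic
    return online_check(fingerprint)  # rare false positives go online
```

With 2^20 bits and 7 hashes over ~32k keys, the false-positive rate is on the order of 10^-5, so almost all certificate checks stay local.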
Re: Toshiba shows 2Mbps hardware RNG
Peter Gutmann wrote: David G. Koontz [EMAIL PROTECTED] writes: Military silicon already has RNG on chip (e.g. AIM, Advanced INFOSEC Machine, Motorola). That's only a part of it. Military silicon has a hardware RNG on chip alongside a range of other things because they know full well that you can't trust only a hardware/noise-based RNG; there are too many variables and too many things that can go wrong with that single source. That's why I was sceptical of the "we've solved the RNG problem with our custom hardware" claim: they've created one possible source of input, but not a universal solution. Peter.

Peter, you've just hit on something that's genuinely confused me for quite some time. Combining hash functions has always seemed naive -- the problem with chaining two different functions is that it creates a midpoint; you can collide half the bitspace independently of the other half. Better to just thoroughly mix them both. But shouldn't it be an improvement to XOR a theoretically correct RNG with a well-seeded PRNG, based on the theory that:

1) Either generator could be safely XOR'd against a repeated series of 0x41's, and the output would still be just as random, and
2) The flaws of a subtly broken RNG would be difficult to exploit through the noise of a sufficiently validated cryptographic function, and vice versa.

For example, the following construction:

1. Start with an RNG. Retrieve 64K of random data.
2. Assume there might be a bias somewhere in there, but that at least 256 bits are good.
3. SHA-256 the data.
4. AES-256 encrypt the data with the result from the SHA-256.
5. XOR the random data against its encrypted self.
6. Return 64K of PRNG-hardened RNG data.

Aside from the obvious rejoinder to maybe XOR *another* batch of entropy against the previous batch's encrypted self (a change that halves performance), I can't see much wrong. I rather deeply doubt I'm the first to come up with a suggestion like this, either. So, uh, why do weak RNGs keep showing up?
Is there something fundamentally breakable in the above design? --Dan
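A runnable sketch of the construction above, with one caveat worth noting: the XOR-against-encrypted-self step only makes sense with a block-style cipher, because with a pure stream cipher, data XOR E(data) degenerates to the keystream alone. Since AES isn't in the Python standard library, this sketch substitutes a SHA-256-based CBC-style keyed mixer for AES-256-CBC; the structure, not the cipher choice, is what's being illustrated:

```python
import hashlib

BLOCK = 32  # SHA-256 output size, used as the block size here

def harden(raw: bytes) -> bytes:
    """Whiten possibly-biased RNG output per the construction above:
    hash the whole batch to derive a key, 'encrypt' the batch under that
    key with a chained (CBC-like) mixer, and XOR the result back in."""
    key = hashlib.sha256(raw).digest()  # step 3: SHA-256 the data
    out = bytearray()
    prev = bytes(BLOCK)                 # zero IV
    for i in range(0, len(raw), BLOCK):
        block = raw[i:i + BLOCK].ljust(BLOCK, b"\x00")
        # CBC-style chaining: each 'ciphertext' block depends on the key,
        # this plaintext block, and every preceding block.
        chained = bytes(a ^ b for a, b in zip(block, prev))
        prev = hashlib.sha256(key + chained).digest()
        # Step 5: XOR the raw data against its encrypted self.
        out += bytes(a ^ b for a, b in zip(block, prev))
    return bytes(out[:len(raw)])
```

Any single-bit change anywhere in the input changes the derived key and therefore every output block, which is the hardening property the post is after.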
Re: Fixing SSL (was Re: Dutch Transport Card Broken)
(as if anyone uses client certificates anyway)? Guess why so few people are using it ... If it were secure, more people would be able to use it.

People don't use it because the workload of getting signed up is vastly beyond their skillset, and the user experience of using the things is pretty bad too.

And there are hundreds of internal systems I heard of that are using client certificates in reality every day.

There's always a few people using a technology. It's certainly a nonplayer out there. Probably more servers out there authing with Digest, honestly.

Validated email addresses for spamming. Spear-phishing perhaps, ... There are CAs on this planet that put things like social security numbers into certificates.

Who? Seriously, that's pretty significant; I'd like to know who does this.

Where does the SSL specification say that certificates shouldn't contain sensitive information? I am missing that detail in the security considerations section of the RFC.

The word "public" in "Public Key" isn't exactly subtle.

Do we have any more ideas how we can get this flaw fixed before it starts hurting too much?

Make it really easy to use some future version of SSL client certs, and quietly add the property you seek. Ease of use drives technology adoption; making the tech actually work is astonishingly secondary. Heh, you asked :)

We have an issue here. And the issue isn't going to go away until we deprecate SSL/TLS, or it gets solved.

To be clear, we'd *have* an issue if any serious number of people used SSL client certs. I think you have a point that if SSL client certs become very popular going forward, then every website you go to will quietly grab your identity through their ad banners.

* We fix SSL. Does anyone have a solution for SSL/TLS available that we could propose on the TLS list? If not: can anyone with enough protocol design experience please develop it?

What solution could there be?
You're actually going to SSL to the banner ad network, and you're going to give them your client cert.

* We deprecate SSL for client certificate authentication. We write in the RFC that people MUST NOT use SSL for client authentication. (Perhaps we get away with pretending that client certificates accidentally slipped into the specification.)

People by and large do not use SSL client cert authentication. This is problematic, as there are some very nice cryptographic aspects to the system.

* We switch from TLS to, hmmm ... perhaps SSH, which has fixed the problem already. Hmm, there we would have to write all the glue RFCs like HTTP-over-SSH again ...

I used to code for SSH. SSL is an entire top-to-bottom stack, replete with a deep PKI infrastructure. SSH? Tunneling transport, barely even librarized.

Try to send a DVD ISO image (4GB) over an SSL or SSH encrypted link with bit errors every 1 bits, with client software like scp that cannot resume downloads. I gave up after 5 tries that all broke down, on average, after 1 GB. (In that case it was a hardware (bad cable) initiated denial of service attack ;-)

The problem here isn't checksums. SSH is notoriously buggy when packets are dropped. I think there are certain windows in which OpenSSH assumes it will get a response; if it doesn't, it just dies. So outages of more than a few hundred milliseconds have a small percentage chance of causing the session to permanently stall. "Corrupted MAC on input" -- this is a decent sign of corruption at the app layer. Did you really try this with OpenSSL? I've had much better luck there.

If the link layer gives you 1/256, and the TCP layer gives you 1/65536, and the SSL layer demands 0/16777216, then you end up with 1/16777216 too much.

Actually, 256 * 65536 = 16777216 :) In actuality, you have both IP and TCP checksums. So you get 8 bits from link, 16 bits from IP, and 16 bits from TCP. A random corrupt packet has about 1 in 2^40 odds of getting through.
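The layered-checksum arithmetic above is easy to verify, under the post's simplifying assumption that each checksum behaves as an independent uniform filter on a corrupted packet (real CRCs and ones-complement sums only approximate this):

```python
from fractions import Fraction

# Probability that a randomly corrupted packet slips past every layer.
link = Fraction(1, 2**8)   # 8-bit link-layer check
ip = Fraction(1, 2**16)    # 16-bit IP header checksum
tcp = Fraction(1, 2**16)   # 16-bit TCP checksum

combined = link * ip * tcp
assert combined == Fraction(1, 2**40)  # ~ 1 in 1.1 trillion

# And the in-thread multiplication: 256 * 65536 is 2^24 = 16,777,216.
assert 256 * 65536 == 16_777_216 == 2**24
```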
Of course, one real problem is that the checksum algorithms don't exactly distribute noise randomly, and noise is not random. Still, corruption doesn't start being a problem until you get some pretty serious amounts of transfer. (Interestingly, I've been looking at IPsec lately, not for encryption, but for better checksumming.)

Best regards, Philipp Gühring
Re: Death of antivirus software imminent
Crypto solves certain problems very well. Against others, it's worse than useless -- worse, because it blocks out friendly IDSs as well as hostile parties.

Yawn. IDS is dead, has been for a while now. The bottom-line discovery has been that:

1) Anomaly detection doesn't work, because anomalies are normal, and
2) Unless you're scrubbing up and down the application and network stacks, you just have no idea what the host endpoint is parsing.

At the point where crypto shows up, it's already too late. --Dan
Re: MD5 Collision, Visualised
Ben Laurie wrote: I wrote some code to show the internal state of MD5 during a collision... http://www.shmoo.com/md5-collision.html Cheers, Ben.

Ben-- http://www.doxpara.com/md5_anim.gif Thpt ;) (That being said -- I do like your output. Very nice.) --Dan
Re: mother's maiden names...
A quick question to anyone who might be in the banking industry. Why do banks not collect simple biometric information like photographs of their customers yet?

Bank Of America put my photo on my ATM card back in '97. They're shipping me a new one right now, so I assume they kept it in the DB. --Dan
Re: ID theft -- so what?
This is yet more reason why I propose that you authorize transactions with public keys and not with the use of identity information. The identity information is widely available and passes through too many hands to be considered secret in any way, but a key on a token never will pass through anyone's hands under ordinary circumstances.

It's 2005; PKI doesn't work; the horse is dead. The credit-card-sized number dispensers under development are likely to be what comes next. Amusingly, your face is an asymmetric authenticator -- easy to recognize, hard to spoof. --Dan
Re: Why Blockbuster looks at your ID.
Jerrold Leichter wrote:
| Credit card fraud has gone *down* since 1992, and is actually falling:
| 1992: $2.6B
| 2003: $882M
| 2004: $788M
| We're on the order of 4.7 cents on the $100.
| http://www.businessweek.com/technology/content/jun2005/tc20050621_3238_tc024.htm
| The article also mentions that the loss rate for 1992 was 15.7 cents per $100.

Something doesn't add up. Combining the dollar values above with the loss rate per $100, I calculate that the total charges handled in 1992 was about $165 billion -- which seems a bit low, but reasonable. However, the corresponding calculation for 2004 shows total charges of about $16 billion, which is clearly nonsense. I don't actually see the $2.6B figure anywhere in the article. Where did it come from?

I did the math. 15.7 / 4.7 ~= 3.34. 3.34 * $778M = $2.6B. There's a problem here, but I'll get to it in a sec. Hmm... let's verify the rest of this: 4.7 cents per $100 is 0.047 dollars per 100 dollars, is 0.00047 dollars per dollar.

x * 0.00047 = $778M
x = $778M / 0.00047
x = 1655319M = $1.65T

Looking at Federal Reserve data ( http://www.federalreserve.gov/releases/g19/Current/g19.htm ), there was about $2T in overall consumer credit. I can envision the vast majority, but not all, of this being on plastic. So $1.65T works.

If you try to repeat this for 1992, though, you'll find an interesting bug... total transactions in 1992 were also about $1.65T. Gee, it's almost like I assumed credit card usage rates were constant over the 12-year period... oops :) But then there's inflation, which alters dollar figures substantially. So, oops in the other direction.

The fundamental point stands, though... credit fraud has been managed surprisingly well (though some people have said fraud is understated by ~200%). --Dan
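The arithmetic in the post checks out, using the $778M figure the post computes with (the quoted table says $788M; either way the result rounds to roughly $2.6B):

```python
loss_2004 = 778e6             # dollars, as used in the post's arithmetic
rate_2004 = 4.7 / 100 / 100   # 4.7 cents per $100 -> 0.00047 per dollar
rate_1992 = 15.7 / 100 / 100  # 15.7 cents per $100

# Implied total charge volume for 2004: loss / loss rate.
volume_2004 = loss_2004 / rate_2004
print(f"${volume_2004 / 1e12:.2f}T")  # roughly $1.65T, matching the post

# Back out the 1992 loss figure the way the post does (assuming, as the
# post admits is a bug, constant charge volume across the 12 years):
loss_1992 = (rate_1992 / rate_2004) * loss_2004
print(f"${loss_1992 / 1e9:.1f}B")     # roughly $2.6B
```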
Re: Why Blockbuster looks at your ID.
I think you're wrong on that one. Financial cost and benefit are easily assessed on this, and I think the numbers add up. Credit card fraud costs in the hundreds of billions of dollars a year, much of which could be eliminated by a change to the sort of system I mention. That's not a small amount of money. Indeed, it is more than enough incentive for a major change.

Credit card fraud has gone *down* since 1992, and is actually falling:

1992: $2.6B
2003: $882M
2004: $788M

We're on the order of 4.7 cents on the $100. http://www.businessweek.com/technology/content/jun2005/tc20050621_3238_tc024.htm

If it's any consolation, I was rather surprised myself. --Dan
Re: /dev/random is probably not
So the funny thing about, say, SHA-1 is that if you give it less than 160 bits of data, you end up expanding into 160 bits of data, but if you give it more than 160 bits of data, you end up contracting into 160 bits of data. This works, of course, for any input data, entropic or not. Hash saturation? Is not every modern hash saturated with as much entropy as it can assume came from the input data (i.e. all input bits have a 50% likelihood of changing all output bits)?

Incidentally, it's a more than mild assumption that it's pure noise coming off the sound card. It's not, necessarily -- not even at the high frequencies. Consider for a moment the Sound Blaster Live's E10K chip, internally hard-clocked to 48kHz. This chip uses a fairly simple algorithm to upsample or downsample all audio streams to 48,000 samples per second. It's well known that scaling algorithms exhibit noticeable properties -- this fact has been used to detect photoshopped works, for instance. Take a look at how noise centered around 15kHz gets represented in a 48kHz averaged domain. Would your system detect this fault? Of course not. No extant system can yet detect the difference between a quantum entropy generator and an AES or 3DES stream. (RC4's another story.) You can't externally calculate entropy levels; you can only assume. --Dan

John Denker wrote: On 07/01/05 13:08, Charles M. Hannum wrote: Most implementations of /dev/random (or so-called entropy gathering daemons) rely on disk I/O timings as a primary source of randomness. ... I believe it is readily apparent that such exploits could be written.

So don't do it that way. Vastly better methods are available: http://www.av8n.com/turbid/ ABSTRACT: We discuss the principles of a High-Entropy Symbol Generator (also called a Random Number Generator) that is suitable for a wide range of applications, including cryptography and other high-stakes adversarial applications. It harvests entropy from physical processes, and uses that entropy efficiently.
The hash saturation principle is used to distill the data, resulting in virtually 100% entropy density. This is calculated, *not* statistically estimated, and is provably correct under mild assumptions. In contrast to a Pseudo-Random Number Generator, it has no internal state to worry about, and does not depend on unprovable assumptions about ``one-way functions''. We also describe a low-cost high-performance implementation, using the computer's audio I/O system.
Re: WYTM - but what if it was true?
If you are insisting that there is always a way, and that, therefore, the situation is permanently hopeless such that the smart ones are getting the hell out of the Internet, I can go with that -- but then we (you and I) would both be guilty of letting the best be the enemy of the good.

A reasonable critique. It is not necessary, though, that there exist an acceptable solution that keeps PCs with persistent stores secure. A bootable CD from a bank is an unexpectedly compelling option, as are the sort of services we're going to see coming out of all those new net-connected gaming systems coming out soon. --Dan
Re: WYTM - but what if it was true?
Dan-- I had something much more complicated, but it comes down to this: You trust Internet Explorer. Spyware considers Internet Explorer crunchy, and good with ketchup. Any questions?

A little less snarkily: spyware can trivially use what MS refers to as a Browser Helper Object (BHO) to alter all traffic on any web page. Inserting a 1x1 iframe in the corner of whatever, that does nothing but transmit upstream data via HTTP image GETs, is trivial. And if HTTP is a bit too protected -- there's *always* DNS ;). gethostbyname indeed. --Dan

P.S. Imagine for a moment it was profitable to give people cancer. No, not just a pesky side effect, but kind of the idea. Angiostatin wouldn't stand a chance.

[EMAIL PROTECTED] wrote: What do you tell people to do?

commercial_message Defense in depth, as always. As an officer at Verdasys, data-offload is something we block by simply installing rules like "only these two trusted applications can initiate outbound HTTP", where the word "trusted" means checksummed, and the choice of HTTP represents the most common mechanism for spyware, say, to do the offload of purloined information. Put differently, if there are 5,000 diseases but only two symptoms, then symptomatic relief is the more cost-effective approach rather than cure. In this case, why do I care if I have spyware if it can't talk to its distant master? (Why do I care if I have a tumor if angiostatin keeps it forever smaller than 1mm in diameter?) Of course, there are details, and, of course, I am willing to discuss them at far greater length. /commercial_message --dan
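The gethostbyname channel mentioned above is just data-in-a-hostname. A minimal, lookup-free sketch of the encoding step (the domain is hypothetical; the point is that any resolvable name carries its label bytes upstream through recursive DNS):

```python
import base64

def to_dns_label(data: bytes, domain: str = "example.com") -> str:
    """Pack arbitrary bytes into a single DNS-safe hostname label.
    Base32 keeps the label case-insensitive and alphanumeric; no lookup
    is performed here, only the encoding is shown."""
    label = base64.b32encode(data).decode().rstrip("=").lower()
    assert len(label) <= 63  # DNS limits a single label to 63 octets
    return f"{label}.{domain}"

print(to_dns_label(b"secret"))  # onswg4tfoq.example.com
```

A resolver query for that name reaches the authoritative server for example.com no matter what egress filtering blocks HTTP, which is why the post calls DNS "*always*" available.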
Re: Optimisation Considered Harmful
Suppose you have something that is inadvertently an oracle -- it encrypts stuff from many different users preparatory to sending it out over the internet, and makes no effort to strongly authenticate a user. Have it encrypt stuff into a buffer, and on a timer event, send out the buffer. Your code is now of course multithreaded -- very easy to get multithreading bugs that never show up during testing, but nondeterministically show up in actual use.

The problem is with edges. Suppose the timer goes off every 10ms, and you have an operation that takes either 5ms or 15ms, depending on whether a chosen bit of the key is 1 or 0. Whether or not a given time slot is occupied with results will emit whether the bit was 1 or 0. Now, suppose your timer goes off every 200ms. No problem, right? At time=190ms, you force an encryption. If it's done by the time=200ms deadline, you know.

Things get trickier when there's random noise in the timer, and it matters whether the distribution of 1's and 0's is equal or not. But this is fundamentally a difficult problem to handle. --Dan
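The edge effect described above can be simulated without real timers, using the post's numbers (a 10ms flush timer; an operation taking 5ms when the chosen key bit is 1, 15ms when it is 0):

```python
TIMER_MS = 10

def op_duration_ms(key_bit: int) -> int:
    # The vulnerable operation: 5ms if the chosen key bit is 1, else 15ms.
    return 5 if key_bit else 15

def first_occupied_slot(key_bit: int) -> int:
    """Index of the timer slot whose buffer flush carries the result.
    The operation starts at t=0; a result finished by t=(n+1)*10ms ships
    in slot n."""
    finish = op_duration_ms(key_bit)
    return finish // TIMER_MS  # 5ms -> slot 0, 15ms -> slot 1

def attacker_guess(observed_slot: int) -> int:
    # Watching which flush contains the ciphertext recovers the bit.
    return 1 if observed_slot == 0 else 0

for bit in (0, 1):
    assert attacker_guess(first_occupied_slot(bit)) == bit
```

No content of the ciphertext is inspected; the slot index alone leaks the bit, which is exactly the batching edge the post warns about.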
Re: encrypted tapes (was Re: Papers about Algorithm hiding ?)
2) The cost in question is so small as to be unmeasurable.

Yes, because key management is easy or free. Also, reliability of encrypted backups is problematic: CBC modes render a single fault destructive to the entire dataset. Counter mode is sufficiently new that it's not supported by existing code. --Dan
Re: [Clips] Citigroup Says Data Lost On 3.9 Million Customers
"The likelihood of having the information compromised is very remote given the type of equipment that is required to read it," Debby Hopkins, Citigroup's chief operations and technology officer, said in an interview. "Additionally, the information is not in a format that an untrained eye would even know what to look for."

The inability to procure hardware or understanding in the age of eBay and Google is simply not a credible defense. Encrypt in transit or face the consequences. Free advertising for your credit monitoring service does not qualify. --Dan
Re: How secure is the ATA encrypted disk?
From what I've heard, datapath to the disk. I've read enough of the specs to see they're well aware a worm could brick a couple hundred thousand hard drives. --Dan

James A. Donald wrote: Every ATA disk contains encryption firmware, though not all BIOSes allow you to use it. There is a master and a user password, 32 bytes each. If you set them both to the same value, and that value is a strong 32-byte password, then the disk can only be booted or accessed by entering that password. This disk firmware is what password-protected laptops use. It exists on most PCs, though most of them have no BIOS firmware to use it. How strong is this standard -- could someone bypass it by taking a soldering iron to the disk? Is the disk encrypted, or just the datapath to the disk?
Re: Secure Science issues preview of their upcoming block cipher
Have you looked at their scheme? http://www.securescience.net/ciphers/csc2/ The way to come up with a cipher provably as secure as AES-128 is to use AES-128 as part of your cipher -- but their scheme does not do anything like that. I am very skeptical about claims that they have a mathematical proof that CS2-128 is as secure as AES-128. I want to see the proof.

Backstory: Secure Science is basically publishing a cipher suite implemented by Tom St. Denis, author of LibTomCrypt. Though not the most ... diplomatic of characters haunting sci.crypt, the guy's quite bright, is an absurdly prolific author (he has quite literally written several-hundred-page books documenting the use of LibTomCrypt and mechanisms for multiprecision math), and can be expected to generate cool things in the years to come.

As for the manner of this cipher's publication... Tom actually did release the paper some time ago. See eprint @ http://eprint.iacr.org/2004/085 . Lance has Tom on staff, and... well, sort of blew the announcement. He understands rather well the error of his ways, and is in all sorts of damage control.

So, quick summary -- yes, that's a very cranky way to announce a cipher; no, it's not a crank cipher. --Dan
Re: how to phase in new hash algorithms?
Steven M. Bellovin wrote: We all understand the need to move to better hash algorithms than SHA1. At a minimum, people should be switching to SHA256/384/512; arguably, Whirlpool is the right way to go. The problem is how to get there from here.

I've been rather continually pinging people, asking them for an explanation of the design decisions behind Whirlpool (namely: it's similar to, but noticeably not identical to, AES/Rijndael, and isn't just a straightforward expansion of the block size up to 512 bits). I'm not saying anything bad about Whirlpool, but I get a lot of people approaching me about the hash and I don't really know what to tell them. --Dan
Re: What is to be said about pre-image resistance?
Ian, The Wang attack does nothing (yet) for second preimages. The best attack I know of against them is Kelsey and Schneier's *Second Preimages on n-bit Hash Functions for Much Less than 2^n Work*. It's at: http://eprint.iacr.org/2004/304

Once you cut through the verbiage, it's really pretty simple: the bigger a file, the more intermediate hashes there are inside of it. The more intermediate hashes, the more points there are to collide against. So a 700MB CD image, against a hash with a 512-bit block size, will have 734,003,200 bytes / 64 bytes = 11,468,800 intermediate hashes that may be collided against. That's a little more than 2^23. Against MD5, you're looking at 2^105 work for a CD; against SHA-1, you're looking at 2^137. This is of course work that's far outside the range of feasibility (and, in a small ahem to the paper, 2^60-byte messages are equally ridiculous).

You may say this isn't a true second preimage attack, because you only acquire an intermediate collision. But all intermediate collisions can be appended to with legitimate data until they return to the correct final hash state. So you generate your malicious data, search for random garbage that, when appended, equals one of the 11M potential states, and then append the rest of the legitimate file from that point.

MD hardening, i.e. the conclusion of a data stream with its own length, *does* create a problem, though. We cannot alter the final length of the file, or the hash will fail. So if we find an intermediate collision at an earlier point in the file than our malicious payload requires, we must discard it. For large malicious payloads (say, replacing one CD entirely with another), this eliminates our attack window entirely. Of course, again, 2^105 work is ridiculous. --Dan

Ian G wrote: Collision resistance of message digests is affected by the birthday paradox, but that does not affect pre-image resistance. (correct?)
So can we suggest that for pre-image resistance, the strength of the SHA-1 algorithm may have been reduced from 160 to 149? Or can we make some statement like "reduced by some number of bits that may be related to 11"? Or is there no statement we can make? iang

PS: There is a nice description (with a bad title) here for the amateurs like myself: http://www.k2crypt.com/sha1.html
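The intermediate-hash arithmetic in this thread can be reproduced directly (numbers from the post: a 700MB CD image against a hash with a 64-byte block; the "n minus log2 of the target count" work factor is the post's simplified framing of the Kelsey-Schneier bound):

```python
import math

cd_bytes = 734_003_200  # 700 MB CD image
block_bytes = 64        # 512-bit hash block

# Number of chaining values ("intermediate hashes") available as targets.
intermediates = cd_bytes // block_bytes
print(intermediates)             # 11,468,800
print(math.log2(intermediates))  # a little more than 23, i.e. ~2^23 targets

# Work to hit any one of them, per the post's framing: 2^(n - log2(targets)).
for name, n in (("MD5", 128), ("SHA-1", 160)):
    print(name, round(n - math.log2(intermediates)))
```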
Re: comments wanted on gbde
Re: GBDE -- some initial thoughts:

I wouldn't be surprised if platters couldn't be analyzed for usage levels / magnetic degradation (Peter?). Even without a clean room, ATA is pretty rich -- anyone remember the guy who graphically plotted the spiral damage caused by a failed drive head with nothing but a massively hacked ATA driver? There's also likely to be useful information from drive sectors duplicated by the drive firmware (there's extra space in every drive; when particular sectors are judged buggy, content from them is migrated onto the spare space).

I saw nothing establishing the integrity of sectors during *decryption* in 7.5. Random / polluted sectors will decrypt, though into unpredictable noise (which tends to do bad things to file system code). Previous versions of sectors will also decrypt successfully -- the cleaning lady can take lessons from Mallory, as it were. It's useful to immediately grant, though, that their threat model is much more aligned towards drives that will never be hot again.

One wonders if there is a delivery service for key-keys. --Dan
Re: Digital Water Marks Thieves
My complaint is against the parroting of patently absurd claims by manufacturers (or governments, for that matter) under the guise of journalism. If you need the reason to be concrete, here's one: I might buy this magic water and apply it to some of my stuff, figuring I don't have to shell out for a second pint because Robert Andrews has assured me the thieves can't determine that it's on my Thing-1 but not my Thing-2.

There are tens of thousands of places inside a vehicle that a VIN# can be stashed. Sometimes you don't want the attacker to know where the marks are. The point is that the thief should think anything expensive is protected, by which I mean it's too traceable to fence. At least right now, this is working. Hard to argue with success. --Dan
Re: MD5 collision in X509 certificates
Ben, Semantic gap, and I do apologize if I didn't make this clear. Wang's attack adapts to any initial state, so you can create arbitrary content to prepend to your collision set, adapt to its output, and then append whatever you like. The temporal ordering is indeed important, though; you can't create the doppelganger set before you know what's prepended to it. The fact that the attack adapts to arbitrary prepended content allows for a critical expansion of the applied risks (i.e. we wouldn't be seeing colliding certs without it). I don't think it's fair to say my attacks -- in some vague, general sense -- are wrong, given what was at best a small difference in interpretation.

The x.509 cert collision is a necessary consequence of the earlier discussed prime/not-prime collision. Take the previous concept, make both prime, and surround with the frame of an x.509 cert, and you get the new paper. Still nice to see... Rescorla specifically thought it wasn't possible. I look forward to actually having the code to work on this myself. --Dan

Ben Laurie wrote: Cute. I expect we'll see more of this kind of thing. http://eprint.iacr.org/2005/067 Executive summary: calculate chaining values (called IV in the paper) of the first part of the cert, find a colliding block for those chaining values, generate an RSA key that has the collision as the first part of its public key, profit. BTW, reading this made me notice that Dan Kaminsky's attacks are wrong in detail, if not in essence. Because the output of the MD5 block function depends on the chaining values from previous blocks, it is not the case that you can prepend arbitrary material to your colliding block, as he claims. However, you can (according to the paper above) generate collisions with any IV, so if you know what the prepended material is, then Kaminsky's attack will still work. Cheers, Ben.
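The prepend/append property under discussion follows from Merkle-Damgard chaining, and can be demonstrated with a deliberately tiny hash -- a 16-bit chaining state, so a colliding block pair is brute-forceable in milliseconds. None of this is MD5 itself; it only mirrors its structure. Once two blocks collide from the chaining value after a known prefix, any common suffix preserves the collision:

```python
import hashlib
from itertools import count

BLOCK = 4  # toy block size in bytes

def compress(state: bytes, block: bytes) -> bytes:
    """Toy 16-bit Merkle-Damgard compression function (illustrative only)."""
    return hashlib.sha256(state + block).digest()[:2]

def toy_md(msg: bytes, iv: bytes = b"\x00\x00") -> bytes:
    """Chain the compression function over zero-padded blocks."""
    msg += b"\x00" * (-len(msg) % BLOCK)
    state = iv
    for i in range(0, len(msg), BLOCK):
        state = compress(state, msg[i:i + BLOCK])
    return state

def find_colliding_blocks(state: bytes) -> tuple[bytes, bytes]:
    """Birthday-search two distinct blocks that collide from a given state."""
    seen = {}
    for i in count():
        blk = i.to_bytes(BLOCK, "big")
        out = compress(state, blk)
        if out in seen:
            return seen[out], blk
        seen[out] = blk

# Attacker-chosen prefix, known before the collision search (temporal order!)
prefix = b"any prefix, padded!!"          # 20 bytes, a whole number of blocks
mid_state = toy_md(prefix)                # chaining value after the prefix
a, b = find_colliding_blocks(mid_state)   # collision adapted to that state
suffix = b"common appended data"          # appended to both messages alike

assert a != b
assert toy_md(prefix + a + suffix) == toy_md(prefix + b + suffix)
```

This is also why the ordering constraint holds: `find_colliding_blocks` takes `mid_state` as input, so the prefix must exist before the doppelganger blocks can be computed.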
Re: MD5 collision in X509 certificates
Ben Laurie wrote: Dan Kaminsky wrote: The x.509 cert collision is a necessary consequence of the earlier discussed prime/not-prime collision. Take the previous concept, make both prime, and surround with the frame of an x.509 cert, and you get the new paper. Actually, not -- an RSA public key is not prime. Generating colliding public keys takes quite a bit more work.

*laughs* Yes, I suppose it would be difficult for pq to be prime, now wouldn't it :) So they've basically solved md5(pq) == md5(p'q') for integer values of p, q, p' and q'. You are right, this is much more work. --Dan
Re: Digital Water Marks Thieves
Matt Crawford wrote: On Feb 15, 2005, at 12:40, R.A. Hettinga wrote: Instant is a property-marking fluid that, when brushed on items like office equipment or motorcycles, tags them with millions of tiny fragments, each etched with a unique SIN (SmartWater identification number) that is registered with the owner's details on a national police database and is invisible until illuminated by police officers using ultraviolet light. That's amazing! How do the tiny particles know that it's not a civilian illuminating them with ultraviolet light? And how does Wired reporter Robert Andrews fail to ask that question?

Why would it matter? We leave fingerprints on everything we touch, but generally only LEOs have access to the fingerprint databases that can route back to identity. I don't really understand the complaints here. Is there something wrong with luggage tags? How about writing your name in the corner of a textbook? Degenericizing property is sort of an inherent part of owning it; note for instance that few homes are purchased fully furnished.

Really the only concern I have is the effect on the right of first sale, i.e. the ability as a purchaser to sell what you bought to someone else. Since "I bought it second hand" and "I stole it and sprayed my own taggants on it" are similar across so many dimensions, I can imagine eBay UK eventually having to deal with this head on. --Dan
Re: SHA-1 cracked
It is worth emphasizing that, as a 2^69 attack, we're not going to be getting test vectors out of Wang. After all, if she had 2^69 computation available, she wouldn't have needed a shortcut attack against MD5; she could have just brute-forced a collision in 2^64. This means the various attacks in the "MD5 To Be Considered Harmful Someday" paper aren't going to cross over to SHA-1, i.e. don't expect these anytime soon for SHA-1. http://www.doxpara.com/t1.html http://www.doxpara.com/t2.html --Dan

Steven M. Bellovin wrote: According to Bruce Schneier's blog (http://www.schneier.com/blog/archives/2005/02/sha1_broken.html), a team has found collisions in full SHA-1. It's probably not a practical threat today, since it takes 2^69 operations to do it and we haven't heard claims that NSA et al. have built massively parallel hash function collision finders, but it's an impressive achievement nevertheless -- especially since it comes just a week after NIST stated that there were no successful attacks on SHA-1. --Prof. Steven M. Bellovin, http://www.cs.columbia.edu/~smb
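The arithmetic behind these numbers is the generic birthday bound: a collision on an n-bit hash costs roughly 2^(n/2) work with no cryptanalysis at all. A quick sanity check of the figures in the thread (a sketch; the helper name is ours, not from any paper):

```python
def birthday_bound_log2(output_bits: int) -> float:
    """log2 of the work for a generic birthday collision on an n-bit hash."""
    return output_bits / 2

# MD5 has a 128-bit output: generic collisions in ~2^64 operations.
md5_generic = birthday_bound_log2(128)    # 64.0
# SHA-1 has a 160-bit output: generic collisions in ~2^80 operations.
sha1_generic = birthday_bound_log2(160)   # 80.0

# Wang's SHA-1 result at 2^69 beats the generic 2^80 by a factor of 2^11,
# yet still costs 2^5 = 32x more than simply brute-forcing an MD5 collision.
sha1_speedup = 2 ** int(sha1_generic - 69)   # 2048
vs_md5_bruteforce = 2 ** int(69 - md5_generic)  # 32
```

Which is the point above: an attack can be a genuine cryptanalytic break and still be far too expensive to hand out test vectors.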
Re: Desire safety on Net? (n) code has the solution
Digital certificates can be explained as digital passports, which help authenticate the bearer on the Internet. They also help maintain the privacy and integrity of Net-based transactions. Digital signatures are accorded the same value as paper-based signatures of the physical world by the Indian IT Act 2000. Each of these functions helps bring trust to Net-based transactions.

This passed by without too many people noticing: http://www.cfo.com/article.cfm/3597911/c_3597966?f=home_todayinfinance === The SEC also asserts that the company's 10-Q bore an unauthorized electronic signature of Guccione -- who was Penthouse's principal executive officer and principal financial officer at the time. The signature indicated that Guccione had reviewed and signed the filing and the accompanying Sarbanes-Oxley certification. This representation was false, the SEC stated in its complaint. ===

You got your SOX in my digital signature repudiation! You got your digital signature repudiation in my SOX! Someone order a failed porn empire? --Dan
Re: Simson Garfinkel analyses Skype - Open Society Institute
Actually it's not that bad: using SIP, the RTP packets can be protected by SRTP (RFC 3711, with an open-source implementation from Cisco at http://srtp.sourceforge.net/ ) SRTP... heh. Take a look at RFC 3711 for a second: Specification of a key management protocol for SRTP is out of scope here. Section 8.2, however, provides guidance on the parameters that need to be defined for the default and mandatory transforms.

VoIP KEX. *shudders* Voice is... unique. Session redirection is a first-class function, as is active proxying, up to and including proxies that are payload-destructive (conference stream mixing). Key exchange in such an environment is a really painful problem, compared to the relatively solvable one of specifying a loss-tolerant encryption protocol. So they only solved the latter, and figured something would come along for the former. Didn't really happen.

(Full disclosure: I work for Avaya, which has had a proprietary KEX implementation that handles all of this for the last few years. So it's not an unsolvable problem or anything like that. It's just really annoyingly hard.) --Dan
Re: Dell to Add Security Chip to PCs
Uh, you *really* have no idea how much the black hat community is looking forward to TCPA. For example, Office is going to have core components running inside a protected environment totally immune to antivirus. Since these components are going to be managing cryptographic operations, the well-defined API exposed from within the sandbox will have arbitrary content going in and opaque content coming out. Malware goes in (there's not an executable environment yet created that can't be exploited), sets up shop, has no need to be stealthy due to the complete blockage of AV monitors and cleaners, and does what it wants to the plaintext and ciphertext (alters content, changes keys) before emitting it back out the opaque outbound interface. So, no FUD, you lose :) --Dan

Erwann ABALEA wrote: On Wed, 2 Feb 2005, Trei, Peter wrote: Seeing as it comes out of the TCG, this is almost certainly the enabling hardware for Palladium/NGSCB. It's a part of your computer which you may not have full control over. Please stop relaying FUD. You have full control over your PC, even if this one is equipped with a TCPA chip. See the TCPA chip as a hardware security module integrated into your PC. An API exists to use it, and one of the functions of this API is 'take ownership', which has the effect of erasing it and regenerating new internal keys.