Re: [Cryptography] Key stretching
On 10/11/13 7:34 PM, Peter Gutmann wrote:
> Phillip Hallam-Baker hal...@gmail.com writes:
>> Quick question, anyone got a good scheme for key stretching?
> http://lmgtfy.com/?q=hkdfl=1

Yeah, that's a weaker simplification of the method I've always advocated: stopping the hash function before the final MD-strengthening and repeating the input, only doing the MD-strengthening in the last step for each key. I used this in many of my specifications. In essence, the MD-strengthening counter serves the same purpose as the 0xnn counter they used, although longer and stronger. This ensures there are no related-key attacks, as the internal chaining variables aren't exposed.

___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
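[Archive note: the HKDF being pointed at above is the RFC 5869 extract-then-expand construction; its one-byte 0x01, 0x02, ... expansion counter is the counter being compared to MD-strengthening. A minimal stdlib-only sketch for readers unfamiliar with it (function name and parameter order are mine, not from any spec):]

```python
import hashlib
import hmac

def hkdf(salt: bytes, ikm: bytes, info: bytes, length: int) -> bytes:
    """RFC 5869 HKDF-SHA256: extract a PRK, then expand with a 1-byte counter."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()   # extract step
    okm, t, counter = b"", b"", 1
    while len(okm) < length:                             # expand step
        t = hmac.new(prk, t + info + bytes([counter]), hashlib.sha256).digest()
        okm += t
        counter += 1
    return okm[:length]
```

[The sketch reproduces RFC 5869's own test vectors, so it can serve as a reference point when comparing against the repeated-input scheme described above.]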
Re: [Cryptography] Evaluating draft-agl-tls-chacha20poly1305
On 9/11/13 6:00 AM, Alexandre Anzala-Yamajako wrote:
> Chacha20 being a stream cipher, the only requirement we have on the ICV is that it doesn't repeat, isn't it?

You mean IV, the Initialization Vector. ICV is the Integrity Check Value, usually 32-64 bits appended to the packet. Each is separately keyed.

> This means that if there's a problem with setting 'mostly zeroed out' ICV for Chacha20 we shouldn't use it at all, period.

I strongly disagree. In my network protocol security designs, I always try to think about weaknesses in the implementation and potential future attacks on the algorithm -- and try to strengthen the security margin.

For example, IP-MAC fills every available zero space with randomness, while H-MAC (defined more than a year later) uses constants instead. IP-MAC was proven stronger than H-MAC. Sadly, in the usual standards committee-itis, newer is often assumed to be improved and better. So H-MAC was adopted instead. Of course, we know that H-MAC was chosen by an NSA mole in the IETF, so I don't trust it.

Also, there's a certain silliness in formal cryptology that assumes we shouldn't have longer randomness keying than the formal strength of the algorithm. That might have been true in the days of silk and cyanide, where keying was a hard problem, but modern computing can generate lots of longer nonces without much effort. In reality, adding longer nonces may not improve the strength of the algorithm itself, but it improves the margin against attack. A nearly practical attack of order 2**80 could be converted to an impractical attack of order 2**96.

> As far as your proposition is concerned, the performance penalty seems to largely depend on the target platform. Wouldn't using the same set of operations as Chacha prevent an unexpected performance drop in case of lots of short messages?

I don't understand this part of your message.
My ancient CBCS formulation that I'll probably use for PPP (xor'ing a per-session key with a per-packet unique value) is demonstrably much faster than using ChaCha itself to do that same thing. We've been using stream ciphers and pseudo-stream ciphers (made by chaining MACs or chaining block ciphers) to create per-packet nonces for as long as I can remember (over 20 years). You'll see that in CHAP and Photuris and CBCS. So I'm not arguing with Adam's use of ChaCha for it. It just bugs me that we aren't filling in as much randomness as we could!
Re: [Cryptography] Evaluating draft-agl-tls-chacha20poly1305
On 9/11/13 10:27 AM, Adam Langley wrote:
> [attempt two, because I bounced off the mailing list the first time.]
> On Tue, Sep 10, 2013 at 9:35 PM, William Allen Simpson william.allen.simp...@gmail.com wrote:
>> Why generate the ICV key this way, instead of using a longer key blob from TLS and dividing it? Is there a related-key attack?
> The keying material from the TLS handshake is per-session information. However, for a polynomial MAC, a unique key is needed per-record and must be secret.

Thanks, this part I knew, although it would be good explanatory text to add to the draft. I meant: is there a related-key attack against the MAC key generated by TLS, thereby causing you to discard it and not key the ICV with it?

> Using stream cipher output as MAC key material is a trick taken from [1], although it is likely to have more history than that. (As another example, UMAC/VMAC runs AES-CTR with a separate key to generate the per-record keys, as did Poly1305 in its original paper.)

Oh sure. We used hashes long ago. Using AES is insane, but then UMAC is -- to be kind -- not very efficient. My old formulation from CBCS was developed during the old IPsec discussions. It's just simpler and faster to xor the per-packet counter with the MAC key than to use the ChaCha cipher itself to generate per-packet key expansion. I was simply wondering about the rationale for doing it yourself. And worrying a little about the extra overhead on back-to-back packets.

>> If AEAD, aren't the ICV and cipher text generated in parallel? So how do you check the ICV first, then decipher?
> The Poly1305 key (ICV in your terms?) is taken from a prefix of the ChaCha20 stream output. Thus the decryption proceeds as:
> 1) Generate one block of ChaCha20 keystream and use the first 32 bytes as a Poly1305 key.
> 2) Feed Poly1305 the additional data and ciphertext, with the length prefixing as described in the draft.
> 3) Verify that the Poly1305 authenticator matches the value in the received record.
> If not, the record can be rejected immediately.
> 4) Run ChaCha20, starting with a counter value of one, to decrypt the ciphertext.

ICV = Integrity Check Value at the end of the packet. So ICV-key. Sometimes MAC-key. Anyway, good explanation! Please add it to the draft.

> An alternative implementation is possible where ChaCha20 is run in one go on a buffer that consists of 64 zeros followed by the ciphertext. The advantage of this is that it may be faster because the ChaCha20 blocks can be pipelined. The disadvantage is that it may need memory copies to set up the input buffer correctly. A moot advantage, in the case of TLS, of the steps that I outlined is that forgeries are rejected faster.

Depends on how swamped the processor is. I'm a big fan of rejecting forgeries (and replay attacks) before decrypting. Not everybody is Google with unlimited processing power. ;)

>> Needs a bit more implementation details. I assume there's an implementation in the works. (Always helps define things with something concrete.)
> I currently have Chrome talking to OpenSSL, although the code needs cleanup of course.

Excellent!
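[Archive note: step (1) of the decryption procedure discussed above -- deriving the Poly1305 key from the first keystream block -- can be made concrete with a pure-Python ChaCha20 block function. This sketch uses the later RFC 8439 layout (32-bit block counter plus 96-bit nonce) rather than the draft's two 64-bit counter/nonce words, so it illustrates the structure, not the draft's exact wire format:]

```python
import struct

def rotl32(v: int, c: int) -> int:
    """Rotate a 32-bit word left by c bits."""
    return ((v << c) & 0xffffffff) | (v >> (32 - c))

def quarter_round(s: list, a: int, b: int, c: int, d: int) -> None:
    s[a] = (s[a] + s[b]) & 0xffffffff; s[d] = rotl32(s[d] ^ s[a], 16)
    s[c] = (s[c] + s[d]) & 0xffffffff; s[b] = rotl32(s[b] ^ s[c], 12)
    s[a] = (s[a] + s[b]) & 0xffffffff; s[d] = rotl32(s[d] ^ s[a], 8)
    s[c] = (s[c] + s[d]) & 0xffffffff; s[b] = rotl32(s[b] ^ s[c], 7)

def chacha20_block(key: bytes, counter: int, nonce: bytes) -> bytes:
    """One 64-byte ChaCha20 keystream block (RFC 8439 layout)."""
    state = list(struct.unpack("<16I",
        b"expand 32-byte k" + key + struct.pack("<I", counter) + nonce))
    work = state[:]
    for _ in range(10):  # 20 rounds = 10 column/diagonal double-rounds
        quarter_round(work, 0, 4, 8, 12); quarter_round(work, 1, 5, 9, 13)
        quarter_round(work, 2, 6, 10, 14); quarter_round(work, 3, 7, 11, 15)
        quarter_round(work, 0, 5, 10, 15); quarter_round(work, 1, 6, 11, 12)
        quarter_round(work, 2, 7, 8, 13); quarter_round(work, 3, 4, 9, 14)
    return struct.pack("<16I", *((w + s) & 0xffffffff for w, s in zip(work, state)))

def poly1305_key(key: bytes, nonce: bytes) -> bytes:
    """Step 1: block zero keys the MAC; blocks from counter=1 encrypt the record."""
    return chacha20_block(key, 0, nonce)[:32]
```

[The sketch matches the RFC 8439 keystream test vector, which is why the 32/96 layout was chosen here; the block counter never appears on the wire in either layout.]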
Re: [Cryptography] Evaluating draft-agl-tls-chacha20poly1305
On 9/11/13 10:37 AM, Adam Langley wrote:
> On Tue, Sep 10, 2013 at 10:59 PM, William Allen Simpson william.allen.simp...@gmail.com wrote:
>> Or you could use 16 bytes, and cover all the input fields. There's no reason the counter part has to start at 1.
> It is the case that most of the bottom row bits will be zero. However, ChaCha20 is assumed to be secure at a 256-bit security level when used as designed, with the bottom row being counters. If ChaCha/Salsa were not secure in this formulation then I think they would have to be abandoned completely.

I kinda covered this in a previous message. No, we should design with the expectation that there's something wrong with every cipher (and every implementation), and strengthen it as best we know how. It's the same principle we learned (often the hard way) in school:

* Software designers, assume the hardware has intermittent failures.
* Hardware designers, assume the software has intermittent failures.

> Taking 8 bytes from the initial block and using it as the nonce for the plaintext encryption would mean that there would be a ~50% chance of a collision after 2^32 blocks. This issue affects AES-GCM, which is why the sequence number is used here.

Sorry, you're correct there -- my mind is often still thinking of DES with its unicity distance of 2**32, so you had to re-key anyway.

> Using 16 bytes from the initial block as the full bottom row would work, but it still assumes that we're working around a broken cipher, and it prohibits implementations which pipeline all the ChaCha blocks, including the initial one. That may be usefully faster, although it's not the implementation path that I've taken so far.

OK. I see the pipeline stall. But does Poly1305 pipeline anyway?

> There is an alternative formulation of Salsa/ChaCha that is designed for random nonces, rather than counters: XSalsa/XChaCha. However, since we have a sequence number already in TLS I've not used it.
Aha, I hadn't found this (XSalsa; there doesn't seem to be an XChaCha). Good reading, and some of the same points I was trying to make here.
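[Archive note: the ~50% collision figure quoted above is the usual birthday bound: n random m-bit nonces collide with probability roughly 1 - exp(-n^2 / 2^(m+1)). A quick numeric check (my own arithmetic, not from the thread) shows that at n = 2^32 draws of a 64-bit nonce the probability is already about 39%, i.e. of the order Adam describes:]

```python
import math

def collision_probability(n: int, bits: int) -> float:
    """Approximate birthday-collision probability for n random `bits`-bit values."""
    return 1.0 - math.exp(-(n * n) / 2.0 ** (bits + 1))

# 2**32 records, each with an 8-byte (64-bit) random nonce
p = collision_probability(2 ** 32, 64)  # about 0.39
```

[This is also why a counter-based nonce, as in the draft, never collides at all within its space; the bound only bites when nonces are drawn at random.]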
Re: [Cryptography] Evaluating draft-agl-tls-chacha20poly1305
It bugs me that so many of the input words are mostly zero. Using the TLS Sequence Number for the nonce is certainly going to be mostly zero bits. And the block counter is almost all zero bits, as you note:

> (In the case of the TLS, limits on the plaintext size mean that the first counter word will never overflow in practice.)

Heck, since the average IP packet length is 43, the average TLS record is likely shorter than that! In at least half the connection directions, it's going to be rare that the counter itself exceeds 1!

In my PPP ChaCha variant of this that I started several months ago, the nonce input words were replaced with my usual CBCS formulation. That is: invert the lower 32 bits of the sequence number, xor with the upper 32 bits, add (mod 2**64) both with a 64-bit secret IV, count the bits, and variably rotate. This gives more diffusion: at least 2 bits change for every packet, it ensures a bit changes in the first 32 bits (otherwise highly predictable and vulnerable), and it varies the bits affected among 64 positions. Note that I use a secret IV, a cipher key, and an ICV key for CBCS.

However, to adapt your current formulation for making your ICV key:

> ChaCha20 is run with the given key and nonce and with the two counter words set to zero. The first 32 bytes of the 64 byte output are saved to become the one-time key for Poly1305. The remainder of the output is discarded.

I suggest:

  ChaCha20 is run with the given key and sequence number nonce and with the two counter words set to zero. The first 32 bytes of the 64 byte output are saved to become the one-time key for Poly1305. The next 8 bytes of the output are saved to become the per-record input nonce for this ChaCha20 TLS record.

Or you could use 16 bytes, and cover all the input fields. There's no reason the counter part has to start at 1.

Of course, this depends on not having a related-key attack, as mentioned in my previous message.
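[Archive note: the CBCS recipe in this message is terse, so the following is only one plausible reading of it -- the recombination and shift details are guesses, labeled as such, not the actual CBCS specification: invert the low 32 bits, mix with the high 32 bits, add a 64-bit secret IV, then rotate by the population count.]

```python
MASK64 = (1 << 64) - 1

def cbcs_nonce(seq: int, secret_iv: int) -> int:
    """One plausible reading of the CBCS per-packet transform (illustrative only)."""
    lo = ~seq & 0xffffffff                 # invert the lower 32 bits
    hi = (seq >> 32) & 0xffffffff
    mixed = ((hi ^ lo) << 32) | lo         # guess: xor into the top, recombine to 64 bits
    mixed = (mixed + secret_iv) & MASK64   # add the 64-bit secret IV
    rot = bin(mixed).count("1") % 64       # count the bits...
    return ((mixed << rot) | (mixed >> (64 - rot))) & MASK64  # ...and variably rotate
```

[Since rotation preserves the population count, distinct inputs always map to distinct outputs under this reading, so the transform cannot introduce nonce collisions.]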
Re: Possibly questionable security decisions in DNS root management
Nicolas Williams wrote:
> Getting DNSSEC deployed with sufficiently large KSKs should be priority #1.

I agree. Let's get something deployed, as that will lead to testing.

> If 90 days for the 1024-bit ZSKs is too long, that can always be reduced, or the ZSK keylength be increased -- we too can squeeze factors of 10 from various places. In the early days of DNSSEC deployment the opportunities for causing damage by breaking a ZSK will be relatively meager. We have time to get this right; this issue does not strike me as urgent.

One of the things that bothers me with the latest presentation is that only dummy keys will be used. That makes no sense to me! We'll have folks that get used to hitting the Ignore key on their browsers.

http://nanog.org/meetings/nanog47/presentations/Lightning/Abley_light_N47.pdf

Thus, I'm not sure we have time to get this right. We need good keys, so that user processes can be tested.

OTOH, will we be able to detect breaks? A clever attacker will use breaks in very subtle ways. A ZSK break would be bad, but something that could be dealt with, *if* we knew it'd happened.

> The potential difficulty of detecting attacks is probably the best reason for seeking stronger keys well ahead of time.

Agreed.

- The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com
Re: CPRNGs are still an issue.
Perry E. Metzger wrote:
> [Snip admirably straightforward threat and requirements analysis]
> Yes, you can attempt to gather randomness at run time, but there are endless ways to screw that up -- can you *really* tell if your random numbers are random enough? -- and in a cheap device with low manufacturing tolerances, can you really rely on how consistent things like clock skew will be?

Given the previous discussion on combining entropy, it shouldn't hurt, as long as it's testable during manufacture.

> Let's contrast that with AES in counter mode with a really good factory-installed key. It is trivial to validate that your code works correctly (and do please do that!) It is straightforward to build a device to generate a stream of good AES keys at the factory, and you need only make sure that one piece of hardware is working correctly, rather than all the cheap pieces of hardware you're churning out.

Ah, here's the rub. I like this testing requirement. The recent FreeBSD Security Advisory was merely a simple failure of initialization -- yet it wasn't caught for the longest time, because it wasn't readily testable.

> One big issue might be that if you can't store the counter across device resets, you will need a new layer of indirection -- the obvious one is to generate a new AES key at boot, perhaps by CBCing the real-time clock with the permanent AES key, and use the new key in counter mode for that session.

As long as the testing procedure validates the key and key+RTC separately.

> This does necessitate an extra manufacturing step in which the device gets individualized, but you're setting the default password to a per-device string and having that taped to the top of the box anyway, right? If you're not, most of the boxes will be vulnerable anyway and there's no point...

Recently, I was pleasantly surprised that the AT&T U-verse box had this! Unlike the AT&T 2wire boxes we were installing just this summer.
If we could only get Linksys, et alia, on board...
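[Archive note: a structural sketch of the scheme discussed above -- a per-boot key derived from the factory key plus the real-time clock, then run in counter mode. The Python stdlib has no AES, so SHA-256 stands in for the block cipher here; a real device would use AES-CTR exactly as described, and the factory key and RTC values below are made-up test inputs:]

```python
import hashlib
import struct

class CounterModePrng:
    """Counter-mode generator seeded from a factory key + boot-time RTC.

    SHA-256 stands in for AES so the sketch stays stdlib-only.
    """

    def __init__(self, factory_key: bytes, rtc_seconds: int):
        # Per-boot session key: mix the clock into the permanent key so the
        # counter can safely restart at zero after every reset.
        self.session_key = hashlib.sha256(
            factory_key + struct.pack("<Q", rtc_seconds)).digest()
        self.counter = 0

    def read(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            out += hashlib.sha256(
                self.session_key + struct.pack("<Q", self.counter)).digest()
            self.counter += 1
        return out[:n]
```

[Because the construction is deterministic given the key and RTC value, a factory test rig can validate the permanent key and the key+RTC derivation separately, as suggested above, by comparing device output against a known-good implementation.]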
Re: AES HDD encryption was XOR
Jerry Leichter wrote:
> ... accurately states that AES-128 is thought to be secure within the state of current and expected cryptographic knowledge, it propagates the meme of the short key length of only 128 bits. A key length of 128 bits is beyond any conceivable brute force attack - in and of itself the only kind of attack for which key length, as such, has any meaning. But, as always, bigger *must* be better - which just raises costs when it leads people to use AES-256, but all too often opens the door for the many snake-oil super-secure cipher systems using thousands of key bits.

Oh, say it ain't so! ;-)

In the NBC TV episode of /Chuck/ a couple of weeks ago, the NSA cracked a 512-bit AES cipher on a flash drive by trying every possible key. Could be hours, could be days. (Only minutes in TV land.)

http://www.nbc.com/Chuck/video/episodes/#vid=838461 (Chuck Versus The Fat Lady, 4th segment, at 26:19)

It's no wonder that folks are deluded; pop culture reinforces this.
[Fwd: [announce] THC releases video and tool to backup/modify ePassports]
We knew it was coming, right?

Original Message
Subject: [announce] THC releases video and tool to backup/modify ePassports
Date: Mon, 29 Sep 2008 10:00:26 +
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]

http://freeworld.thc.org/thc-epassport/

29th September 2008 - THC/vonJeek proudly presents an ePassport emulator. This emulator applet allows you to create a backup of your own passport chip(s). A video demonstrating the weakness is available at http://freeworld.thc.org/thc-epassport/

The government plans to use ePassports at Immigration and Border Control. The information is electronically read from the Passport and displayed to a Border Control Officer or used by an automated setup. THC has discovered weaknesses in the system to (by)pass the security checks. The detection of fake passport chips is no longer working. Test setups do not raise alerts when a modified chip is used. This enables an attacker to create a Passport with an altered Picture, Name, DoB, Nationality and other credentials. This manipulated information is displayed without any alarms going off. The exploitation of this loophole is trivial and can be verified using thc-epassport.

Regardless how good the intention of the government might have been, the facts are that tested implementations of the ePassport Inspection System are not secure. ePassports give us a false sense of security: we are made to believe that they make us more secure. I'm afraid that's not true: current ePassport implementations don't add security at all.

Yours sincerely,
vonjeek [at] thc dot org
The Hackers Choice
http://www.thc.org
Re: street prices for digital goods?
Peter Gutmann wrote:
> David Molnar [EMAIL PROTECTED] writes:
>> Dan Geer's comment about the street price of heroin as a metric for success has me thinking - are people tracking the street prices of digital underground goods over time?
> I've been (very informally) tracking it for a while, and for generic data (non-Platinum credit cards, PPal accounts, and so on) it's essentially too cheap to meter; you often have to buy the stuff in blocks (10, 20, 50 at a time) to make it worth the seller's while. I haven't tracked the big-ticket items like PPal accounts with guaranteed minimum balances (rather than just any generic PPal account) because the offerings are too ephemeral: you might get PPal with minimum $5K balance advertised for a few weeks, then Platinum Visa for a few weeks, and then something else again.
>> I'm curious because it would be interesting to look at the street price for a specific online bank's logins before and after the bank makes a change to its security practices. (One not particularly great example of a change: adopting EV certs.) Alternatively, look at the price of some good before and after a prosecution. If this has already been done, my apologies; I'd appreciate the pointer.
> I'm not aware of anyone having done this, mostly because the data doesn't seem to be available. The phishers don't sell (e.g.) BofA accounts specifically; they sell whatever's available - you get a block of X accounts or cards from various banks, whatever's at hand when you buy. The only way to see whether a measure was effective would be to keep buying blocks over time and see what the mix of banks was, and even then it'd be pretty unscientific because you'd be getting lots from random phishing sources or data thefts which might (coincidentally) be targeting one particular bank and not another. Given the diverse sources for this stuff, it's likely that even the vendors only have a vague idea of what the statistics are.

Hi gang, I have a question about all this.
There seems to be a disconnect between the approximate prices mentioned here - too cheap to only do small transactions, etc. - and what I have seen when looking at various of the sites. Maybe I'm missing something and you could correct my thinking.

At http://www.voy.com/211320/ I see figures that appear to be for a single card, and I would not call them cheap. This one from the first of the month seems typical:

  best dumps for sale -- dumpsale, 09:44:39 09/01/08 Mon [1]
  USA / Canada / Australia:
    visa classic $10
    visa gold/platinum/bussines/signature $20
    master card $10
    infinite $50
    amex $10
  Europe / Asia:
    visa classic $50
    visa gold/platinum/bussines/signature $80
    master card $50
    infinite $120
  ICQ: 430439968 E-mail: [EMAIL PROTECTED]

The cheapest price here is $10; I assume this is per card, correct? If that is correct, what I see typically is that the order has to be a minimum of $500 if the money is sent Western Union. This means 50 cards at most. Most of the stuff I've seen is that they validate but do not guarantee the cards and don't give refunds.

It would seem to me that one would have to have a fair-size infrastructure and capital to make this work, as it is almost certain that some of the cards will fail. Plus it takes people time to call the issuer and go through the process of changing the mailing address, as well as attempting to increase the limit of the credit line available. This would mean that from the time of purchase of the card it might be a week or more before they know that the new limit has been approved. This ties up capital, so one wouldn't think the crooks would do one dump, scam all they can, then start the process over again, but rather have a continuous stream working so they have cash flow.

So are we really talking mostly about bigger operations than the local operator one sees mentioned in the paper from time to time?

Thanks,

Allen
Re: once more, with feeling.
James A. Donald wrote:
> Peter Gutmann wrote:
>> Unfortunately I think the only way it (and a pile of other things as well) may get stamped out is through a multi-pronged approach that includes legislation, and specifically properly thought-out requirements

I agree. I'm sure this is a world-wide problem, and head-in-the-sand cyber-libertarianism has long prevented better solutions. The market doesn't work for this, as there is a competitive *disadvantage* to providing improved security, and it's hard to quantify safety.

Remember automotive seat-belts? Air-bags? Engineers developed them, but the industry wouldn't deploy them, because the market failed to demand safety. That is, long-term safety would cut into short-term profits. The corporate world actually led the public to believe (through advertising) that they were sufficiently safe without them. Only legislation and regulation resulted in measurably greater safety.

M$ has long advertised (falsely) that safety was their concern, and their systems were already safe. We all know how that worked out.

> The average cryptographic expert finds it tricky to set up something that is actually secure. The average bureaucrat could not run a pie stand. Legislation and so forth requires wise and good legislators and administrators, which is unlikely.

So, what campaigns are you working on currently to improve this? I've educated dozens of U.S. legislators over the years. Indeed, the original funding for my NSFnet work 20 years ago came from the Michigan House Fiscal Agency, and my early IETF work was funded by the Levin (Senate) and Carr (House) campaigns.

> Visualize Obama, McCain, or Sarah Palin setting up your network security. Then realize that whoever they appoint as Czar in charge of network security is likely to be less competent than they are.

The problem, as always, is enough folks that are competent in both computer security *and* political action.
Cannot say much about McCain/Palin, but the Obama folks have been fairly computer literate from the beginning. Not always as security conscious as I'd like, but some seem to be receptive. Unlike McCain (who needs help to get his email), Obama himself seems from reports to be tech-savvy.

We either have to educate more political folks about computer security, or more security folks have to become active in politics. The former is the never-ending long-term problem, while the latter is an effective short-term solution.

At the IETF, we used to have a t-shirt with 9 layers instead of 7. The top was Political, with "you are here" next to it.
Quiet in the list...
So I'll ask a question. I saw the following on another list:

> I stopped using WinPT after it crashed too many times. I am now using Thunderbird with the Enigmail plugin for GPG interface. It works rather flawlessly and I've never looked back. http://pgp.mit.edu:11371/pks/lookup?search=0xBB678C30op=index
> Yes, I regard the combination of Thunderbird + Enigmail + GPG as the best existing solution for secure email.

What does anyone think of the combo?

Best,

Allen
OpenSSH compromise at Red Hat
I'm a bit surprised no one has mentioned the Red Hat server being hacked and the certificates being compromised on Fedora. http://www.eweek.com/c/a/Security/Red-Hat-Digital-Keys-Violated-By-Intruder/

Best,

Allen
Re: Extended certificate error
Peter Gutmann wrote:
> Allen [EMAIL PROTECTED] writes:
>> I just got a warning that a certificate had expired and yet the data in it says: [From: Tue Aug 05 17:00:00 PDT 2003, To: Mon Aug 05 16:59:59 PDT 2013] The error message says: The digital signature was generated with a trusted certificate but has expired.
> What's the expiry date for the CA certificate that signed it, and its CA certificate? What's the clock on your PC set to? And why aren't you just clicking Continue anyway like everyone else does? :-)

Hi Peter,

I checked the chain - it goes directly from http://online.ccsf.edu's certificate to Thawte. All of Thawte's on my list expire 12/31/2020 15:59:59, except for the primary root CA, which is 7/16/2036 16:59:59, and the Thawte Extended Validation SSL CA, which is 11/16/2016 15:59:59.

As to my system clock, I was asked off list about this and here is what I said: (I) connect to time.nist.gov or one of a long list every 24 hours. My clock says 3:00 PDT August 18th, and I just double-checked by re-syncing:

  SYNC-ATTEMPT  Host: mizbeaver.udel.edu  Aug-18-2008 15:00:22  SUCCESS
  39678.9169097222  39678.9169243634  1.46411985042505E-5

39678.9169243634 - 39678.9169097222 = 1.46412E-5 (rounded), which is I think quite good enough. :)

As to just clicking through: either stupid for not trusting that everything is okay, cautious, or just plain curious why. Take your pick. ;-)

Allen
Extended certificate error
Hi Gang,

More from the land of CAs. I just got a warning that a certificate had expired, and yet the data in it says: [From: Tue Aug 05 17:00:00 PDT 2003, To: Mon Aug 05 16:59:59 PDT 2013]

The error message says: The digital signature was generated with a trusted certificate but has expired.

I'm running Firefox 3.01 and Java 6 Update 7. The error appears to be with Java, as that is the window that pops up.

Best,

Allen
Is snake oil cryptography trans-fat free?
Yet more that is implausible: http://www.securstar.com/products_drivecrypt.php

Best,

Allen
Re: On the unpredictability of DNS
It seems like enough time has passed to post publicly, as some of these are now common knowledge.

Ben Laurie wrote:
> William Allen Simpson wrote:
>> Keep in mind that the likely unpredictability is about 2**24. In many or most cases, that will be implementation limited to 2**18 or less.
> Why?

Remember, this is the combination of the 16-bit DNS header identifier and the 16-bit UDP port number. The theoretical maximum is less than 2**(16+16), as the ports less than 4096 are reserved. Many or most implementations use only a pool of ports: 2**[9, 10, 12] have been reported. Some implementations (incorrectly) use positive signed integers for the DNS identifier: 2**15.

And in this week's Hall of Shame, the MacOS X Leopard patch for servers seems to randomize the BIND request port in a small pool. The next UDP packet ports are sequential, so the range to guess is very small, simply by looking at the following UDP packets. Very strange reports coming out! MacOS X total: about 2**18.

Worse, Apple didn't fix Leopard clients, didn't patch the stub resolver library (neither did BIND), and didn't patch earlier versions such as Panther. Many, many MacOS systems are still vulnerable. And in case you don't think this matters: once upon a time I helped build an ISP entirely with Macs, resistant to most compromises. There are far more Macs used as resolvers than any other flavor of *nix.

> I don't see why. A perfectly reasonable threat is that the attacker reverse engineers the PRNG (or just checks out the source). It doesn't need to be common to be predictable.

I don't understand this comment. When MD5 is used as a PRNG, in this case the upper 32 bits of its 128-bit output cycle, what amount of samples will reveal the seed, or the current internal state of the sequence? When ARC4 is used as a PRNG, what amount of samples will reveal the seed or the current state? Are you only referring to reverse engineering a trivially poor PRNG?
Re: On the unpredictability of DNS
I've changed the subject. Some of my own rants are about mathematical cryptographers that are looking for the perfect solution, instead of a practical security solution. Always think about the threat first!

In this threat environment, the attacker is unlikely to have perfect knowledge of the sequence. Shared resolvers are the most critical vulnerability, but the attacker isn't necessarily in the packet path, and cannot discern more than a few scattered numbers in the sequence. The more sharing (and greater impact), the more sparse the information. In any case, the only perfect solution is DNS-security.

Over many years, I've given *many* lectures to local university, network, and commercial institutions about the need to upgrade and secure our zones. But the standards kept changing, and the roots and TLDs were not secured. Now, the lack of collective attention to known security problems has bitten us collectively.

Never-the-less, with rephrasing, Ben has some good points.

Ben Laurie wrote:
> But just how GREAT is that, really? Well, we don't know. Why? Because there isn't actually a way to test for randomness. ...

While randomness is sufficient for perfect unpredictability, it isn't necessary in this threat environment. Keep in mind that the likely unpredictability is about 2**24. In many or most cases, that will be implementation limited to 2**18 or less.

> Your DNS resolver could be using some easily predicted random number generator like, say, a linear congruential one, as is common in the rand() library function, but DNS-OARC would still say it was GREAT.

In this threat environment, a better test would be for determination of a possible seed for any of several common PRNGs. Or the lack of a PRNG. How many samples would be needed? That's the mathematical limitation. Is it less than 2**9 (birthday attack on 2**18)?

> It is an issue because of NAT.
> If your resolver lives behind NAT (which is probably way more common since this alert, as many people's reaction [mine included] was to stop using their ISP's nameservers and stand up their own to resolve directly for them) and the NAT is doing source port translation (quite likely), then you are relying on the NAT gateway to provide your randomness. But random ports are not the best strategy for NAT. They want to avoid re-using ports too soon, so they tend to use an LRU queue instead. Pretty clearly an LRU queue can be probed and manipulated into predictability.

Agreed! All my tests of locally accessible NATs (D-Link and Linksys) show that the sequence is fairly predictable. And no code updates available.

> Incidentally, I'm curious how much this has impacted the DNS infrastructure in terms of traffic - anyone out there got some statistics?

Some are coming in on another private security list where I and some others here are vetted, but everything is very preliminary. In addition to the publicized attacks on major ISP infrastructure, there are verified scans and attacks against end-user home NATs.

> Oh, and I should say that number of ports and standard deviation are not a GREAT way to test for randomness. For example, the sequence 1000, 2000, ..., 27000 has 27 ports and a standard deviation of over 7500, which looks pretty GREAT to me. But not very random.

Again, the question is not randomness, but unpredictability.
[Fwd: [ekmi] Public Review of SKSML v1.0]
Inasmuch as good key management is fundamental to cryptography functioning appropriately, I think this is a good time for a peek at the proposed standard to find the holes, before we wind up b$%^ing about how badly it was done and trying to figure out how to climb out of the hole that was dug because of a lack of oversight by the general cryptography community. Best Regards, Allen

Original Message Subject: [ekmi] Public Review of SKSML v1.0 Date: Thu, 24 Jul 2008 22:04:49 -0400 From: Mary McRae [EMAIL PROTECTED] Reply-To: [EMAIL PROTECTED] Organization: OASIS To: [EMAIL PROTECTED], [EMAIL PROTECTED] CC: 'ekmi' [EMAIL PROTECTED]

To OASIS members, Public Announce Lists: The OASIS Enterprise Key Management Infrastructure (EKMI) TC has recently approved the following specification as a Committee Draft and approved the package for public review: Symmetric Key Services Markup Language (SKSML) Version 1.0 The public review starts today, 24 July 2008, and ends 23 September 2008. This is an open invitation to comment. We strongly encourage feedback from potential users, developers and others, whether OASIS members or not, for the sake of improving the interoperability and quality of OASIS work. Please feel free to distribute this announcement within your organization and to other appropriate mail lists. More non-normative information about the specification and the technical committee may be found at the public home page of the TC at: http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=ekmi. Comments may be submitted to the TC by any person through the use of the OASIS TC Comment Facility which can be located via the button marked Send A Comment at the top of that page, or directly at: http://www.oasis-open.org/committees/comments/index.php?wg_abbrev=ekmi. Submitted comments (for this work as well as other works of that TC) are publicly archived and can be viewed at: http://lists.oasis-open.org/archives/ekmi-comment/. 
All comments submitted to OASIS are subject to the OASIS Feedback License, which ensures that the feedback you provide carries the same obligations at least as the obligations of the TC members. The specification document and related files are available here: Editable Source (Authoritative): http://docs.oasis-open.org/ekmi/sksml/v1.0/pr01/SKSML-1.0-Specification.odt PDF: http://docs.oasis-open.org/ekmi/sksml/v1.0/pr01/SKSML-1.0-Specification.pdf HTML: http://docs.oasis-open.org/ekmi/sksml/v1.0/pr01/SKSML-1.0-Specification.html Schema: http://docs.oasis-open.org/ekmi/sksml/v1.0/pr01/schema/ Abstract: This normative specification defines the first (1.0) version of the Symmetric Key Services Markup Language (SKSML), an XML-based messaging protocol, by which applications executing on computing devices may request and receive symmetric key-management services from centralized key-management servers, securely, over networks. Applications using SKSML are expected to either implement the SKSML protocol, or use a software library - called the Symmetric Key Client Library (SKCL) - that implements this protocol. SKSML messages are transported within a SOAP layer, protected by a Web Services Security (WSS) header and can be used over standard HTTP securely. OASIS and the EKMI TC welcome your comments. --- Mary P McRae Manager of TC Administration, OASIS email: [EMAIL PROTECTED] web: www.oasis-open.org - To unsubscribe from this mail list, you must leave the OASIS TC that generates this mail. Follow this link to all your TCs in OASIS at: https://www.oasis-open.org/apps/org/workgroup/portal/my_workgroups.php
Re: Dutch chipmaker sues to silence security researchers
Ali, Saqib wrote: Dutch chipmaker NXP Semiconductors has sued a university in The Netherlands to block publication of research that details security flaws in NXP's Mifare Classic wireless smart cards, which are used in transit and building entry systems around the world. Ah, more 3 monkeys syndrome? If a flaw exists but nobody knows about the details, it no longer exists? If we don't publish the evidence about the Earth being round, then it will stay flat, right? Perhaps NXP merely wants to secure the job continuity of sys admins, compliance, and security people, do you think? Given that those in charge rarely listen in any case, perhaps they are trying to promote stress-related health problems in a secret conspiracy with doctors. ;-) Best, Allen
Secure voice?
Interesting tidbit: http://www.epaynews.com/index.cgi?survey=ref=browsef=viewid=121516308313743148197block= Nick Ogden, a Briton who launched one of the world's first e-commerce processors in 1994, has developed a system for voice-signed financial transactions. The Voice Transact platform was developed by Ogden's Voice Commerce Group in partnership with U.S. speech software firm Nuance Communications. Best, Allen
Upper limit?
Is there an upper limit on the number of RSA Public/Private 1024 bit key pairs possible? If so what is the relationship of the number of 1024 bit to the number of 2048 and 4096 bit key pairs? Thanks, Allen
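A back-of-the-envelope answer to the question above comes from the prime number theorem (pi(x) ~ x/ln(x)): a k-bit modulus is the product of two roughly k/2-bit primes, so the count of possible moduli is about the square of the count of such primes. The sketch below is order-of-magnitude only and works in log2 to avoid float overflow:

```python
import math

def log2_prime_count(bits):
    # log2 of pi(2**bits), estimated via the prime number theorem
    # as 2**bits / (bits * ln 2)
    return bits - math.log2(bits * math.log(2))

def log2_keypair_count(modulus_bits):
    """log2 of the rough number of distinct RSA moduli: unordered
    pairs of (modulus_bits/2)-bit primes, i.e. ~primes**2 / 2.
    Using pi(2**half) rather than pi(2**half) - pi(2**(half-1))
    only shifts the answer by a couple of bits at this scale."""
    half = modulus_bits // 2
    return 2 * log2_prime_count(half) - 1

for bits in (1024, 2048, 4096):
    print(f"{bits}-bit RSA: ~2**{log2_keypair_count(bits):.0f} moduli")
```

So there is an upper limit, but at roughly 2**1006 possible 1024-bit moduli it is of no practical consequence, and each doubling of the modulus size roughly squares the count.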
Re: The wisdom of the ill informed
Arshad Noor wrote: While programmers or business-people could be ill-informed, Allen, I think the greater danger is that IT auditors do not know enough about cryptography, and consequently pass unsafe business processes and/or software as being secure. This is the reason why we in the OASIS Enterprise Key Management Infrastructure Technical Committee have made educating IT Auditors, and providing them guidelines on how to audit symmetric key-management infrastructures, one of the four (4) primary goals of the TC. While the technology is well understood by most people on this forum, until we educate the gate-keepers, we have failed in our jobs to secure IT infrastructure.

Yep. It seems like we've had a bit of this conversation recently, haven't we? ;-) And it is not just the gatekeepers, but also the users who need education. We know that we will not have enough gatekeepers to watch all users and uses. Given this, the real question is, /Quis custodiet ipsos custodes?/ (Given as either Who will watch the watchers themselves? or Who will guard the guardians? from Juvenal.) Here we have the perfect examples of the conundrum in No Such Agency or the Company, who evade oversight, or whose oversight is so obfuscated that the watchers at the political level either don't know what is really going on or are complicit. Funny how something as far off the main track of society as cryptography still reflects the identical problems of the greater whole, isn't it? I also argue that badly structured protocol requirements that potentially obfuscate what is going on are a serious issue as well. Then too, there is documentation that does not get down to the bare metal, so to speak, so that those who are not skilled at reading code, and its implications, can understand what is going on. 
The Romans knew that and made it law: /Quod non est in actis, non est in mundo./ (What is not in the documents does not exist.) All of this requires team thinking, so that everyone who is looking at the issues involved, no matter from what direction, creator, auditor or end user, gets it. Allen

Arshad Noor StrongAuth, Inc.

Allen wrote: Hi gang, All quiet on the cryptography front lately, I see. However, that does not prevent practices that *appear* like protection but are not even as strong as wet toilet paper. I had to order a medical device today and they need a signed authorization for payment by my insurance carrier. No biggie. So they ask how I want it sent to me and I said via e-mail. Okay. /Then/ they said it was an encrypted file and I thought, cool. How wrong could I be? Very. The (I hate to use this term for something so pathetic) password for the file is 6 (yes, six) numeric characters! My 6-year-old K6-II can crack this in less than one minute, as there are only 1.11*10^6 possible.
Re: The wisdom of the ill informed
Nicolas Williams wrote: On Mon, Jun 30, 2008 at 07:16:17AM -0700, Allen wrote: Given this, the real question is, /Quis custodiet ipsos custodes?/ Putting aside the fact that cryptographers aren't custodians of anything, it's all about social institutions. Well, I wouldn't say they aren't custodians. Perhaps not in the sense that the word is commonly used, but most certainly in the sense of custodians of the wisdom used to make the choices. This is exemplified by Bruce Schneier, an acknowledged expert, changing his mind about the way to do security from encrypt everything to monitor everything. Yes, I have simplified his stance, but just to make the point that even experts learn and change over time. There are well-attended conferences, papers published online and in many journals, etcetera. So it's not so difficult for people who don't know anything about security and crypto to eventually figure out who does, in the process also learning who else knows who the experts are. Actually, I think it is just about as difficult to tell who is a trustworthy expert in the field of cryptography as it is in any field of science or medicine. Just look at the junk science and medical studies. One retrospective study of 90+ clinical trials found that over 600 potentially important reactions to the drugs occurred, but only 39 were reported in the papers. I suspect if we did the same sort of retrospective study for cryptography we would find some similar issues, just, perhaps, not as large, because there is not as much money to be made with junk cryptography as junk pharmaceuticals. For example, in the IETF there's an institutional structure that makes finding out who to ask relatively simple. Large corporations tend to have some experts in house, even if they are only expert in finding the real experts. 
We (society) have new experts joining the field, with very low barriers to entry (financial and political barriers to entry are minimal -- it's all about brain power), and diversity amongst the existing experts. There's no major personal gain to be had, besides fame, and too much diversity and openness for anyone to have a prayer of manipulating the field undetected for too long. I'm curious: how does software that is clearly weak or broken get sold for so long? Detected, yes, but still sold, like Windows LANMAN backward compatibility. When it comes to expertise in crypto, Quis custodiet ipsos custodes seems like a relatively simple problem. I'm sure it's much, much more difficult a problem for, say, police departments, financial organizations, intelligence organizations, etc... Well, Nico, this is where I diverge from your view. It is the police departments, financial organizations, intelligence organizations, etc... who deploy the cryptography. Why should they be able to do that any better than they do anything else? I suspect that a weakness in oversight in one area is likely to reflect a weakness in others as well. Not total failure, just not done the best possible. Best, Allen
The wisdom of the ill informed
Hi gang, All quiet on the cryptography front lately, I see. However, that does not prevent practices that *appear* like protection but are not even as strong as wet toilet paper. I had to order a medical device today and they need a signed authorization for payment by my insurance carrier. No biggie. So they ask how I want it sent to me and I said via e-mail. Okay. /Then/ they said it was an encrypted file and I thought, cool. How wrong could I be? Very. The (I hate to use this term for something so pathetic) password for the file is 6 (yes, six) numeric characters! My 6-year-old K6-II can crack this in less than one minute, as there are only 1.11*10^6 possible. You can lead a horse to water... Best, Allen
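The 1.11*10^6 figure above is the count of all-numeric passwords of every length from 1 through 6 characters; a one-liner confirms it:

```python
# Numeric-only passwords of length 1..6: 10 + 100 + ... + 10**6.
total = sum(10 ** n for n in range(1, 7))
print(total)  # 1111110, i.e. ~1.11 * 10**6
```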
Re: RIM to give in to GAK in India
Victor Duchovni wrote: On Tue, May 27, 2008 at 08:08:11PM +0100, Dave Korn wrote: Well spotted. Yes, I guess that's what Jim Youll was asking. And I should have said seemingly-contradictory. This is, of course, what I meant by marketeering: when someone asks if your service is insecure and interceptable, you don't say Yes, our ordinary service will give you up to the filth at the drop of a hat, you spin it as No, our enterprise service is completely secure [...other details elided...]. But this is not news. It is well known (at least among the Enterprise Remote Computing wonks) that only the Enterprise RIM service provides end-to-end security, while the consumer service does not. There is nothing new here. It is not even marketing spin: without your IT shop hosting your content, it is hosted by providers subject to CALEA, ... The good news about RIM is that it has been one of the few devices that actually provides end-to-end security for Enterprises. This has been a selling point that helped get them a large share of the Enterprise market. There is now a software product that does about the same for VoIP that may be of interest when used with Magicjack (http://www.magicjack.com/1/index.asp). The software is: http://zfoneproject.com/getstarted.html Best, Allen
Re: not crypto, but fraud detection + additional
Anne Lynn Wheeler wrote: *Irish Bank Debit Card Skimmers Net €1m* http://www.epaynews.com/index.cgi?survey=ref=browsef=viewid=121179135013743148197block= from above: Most of the withdrawals took place at the end of April and early May 2008. Many of the victims contacted their banks to notify them of the withdrawals, as the banks’ fraud detection systems had failed to spot the suspicious activity.

I don't know what the policy is in Ireland, but here in the USA there is no stop loss on debit cards, so the banks are not obligated to make good on fraudulent withdrawals. I believe that most have made good out of fear of bad PR, but you have to fight for it if it only happens to a few people. If this happens too much, then people might stop using debit cards. I have advised my mother, 87, not to use them, as she is getting a little slow on the uptake and might miss something like this if it happened to her.

Now to show how screwy the system is: I was shopping the other day and the power went off in the grocery store I was at. They had backup power, so they were able to check out people; however, they couldn't use debit cards. Except, well, the screwy thing was that if you entered the charge at the terminal as a credit card, even when it was only a debit card, it would accept it. I checked my bank, and sure enough the charge showed as a POS charge! I think the logic is a little screwy and might be exploitable, though I'm not sure how at the moment. Best, Allen
Question re Turing test and image recognition
Hi gang, In looking at captchas that have been broken via software it dawned on me that the amount of mental processing involved is actually very little. I'm interested in the current state of software image recognition for things like knowing the difference between a monkey and a cat, or between a child laughing and one that is just happy, and in the reliability of the differentiation. I've done a bit of looking around and don't find much. Does anyone have knowledge of or a pointer to someone who might know where to look about this? Thanks, Allen
From FDE list...
[Moderator's note: lightly edited. Please stick to ASCII and lines under 80 columns if possible. --Perry] A gentleman on the FDE list posted the link: http://biz.yahoo.com/nytimes/080509/1194773259639.html?.v=1 and in it is the following quote: Last month, the Pentagon's Defense Advanced Research Projects Agency began distributing chips with hidden Trojan horse circuitry to military contractors who are participating in the agency's Trusted Integrated Circuits program. The goal is to test forensic techniques for finding hidden electronic trap doors, which can be maddeningly elusive. The agency is not yet ready to announce the results of the test, according to Jan Walker, a spokeswoman for the agency. H Best, Allen
Re: more on malicious hardware
Perry E. Metzger wrote: It turns out that the counterfeit chips business is booming: http://www.eetimes.com/rss/showArticle.jhtml?articleID=207401126 In combination with the news about what as few as 1500 extra gates can do, this is especially worrisome. So when do the contests start by adventuresome minds to see how *few* gates are needed to compromise a chip's security much like the self replicating code referenced by Ken Thompson in his paper? Best, Allen
Re: Cruising the stacks and finding stuff
Hi, I find it odd that the responses all seem to focus on pure brute force when I did mention three other factors that might be in play: a defect in the algorithm, much like the attack on MD5 which reduces it to an effective length of about 80 bits, if I recall correctly; and/or a different analytical tool/approach, much like differential analysis has had an effect on cryptanalysis as a whole; and a purpose-built machine.

As to using DES as a measuring stick, it was first cracked in 1997 using a software approach, which was stated as not being as fast as a hardware solution. In the process of straight brute force, Rocke Verser came up with a faster method: http://www.cs.cmu.edu/~dkindred/des/rocke-alg.html which uses some of the weaknesses of DES to speed the crack. At the end of the original challenge, they were trying about 7 billion keys per second when the solution was found in June 1997. Granted, this was a whole bunch of low-end machines working in parallel. Then there is a table at: http://www.interhack.net/projects/deschall/what.html Then they say, So, while it's infeasible for DESCHALL to crack a 72 bit key, it seems that 64 might be within reach, by adding more machines. (We probably used between 15,000 and 20,000 machines.) Consider that the RC5-32/12/5 (40 bits) key crack took three and a half hours. The distributed computer we put together could do it in about 78 seconds. The RC5-32/12/6 (48 bits) key crack took 13 days. A DESCHALL-sized effort could do it in 5 hours. They have a table that estimates that a $300 million ASIC machine could crack DES in 38 seconds back in 1995! In 1998 DES III did it in less than 24 hours. http://www.distributed.net/des/ The key point is: Despite the immense power of the EFF Deep Crack, distributed.net's thousands of deployed clients still surpassed the EFF hardware by more than a factor of 2 in speed. 
So apply Moore's law since then, 9 years 3 months: 111 months, say about 64 times as powerful (actually it is more, but let's stick with a strict Moore's Law), and now factor in the drop in prices over the same time. If we assume the same factor as Moore's law and divide the price by 64, let's say 60 for simplicity, then not even counting 1995 to 1999, the machine would take about a half second on a $5 million machine today. Probably both less time and money. Today, running the RSA-72 DNETC on a single 2.8G dual-core machine that is almost three years old, it is getting 13^6/second with a software program, not hardware. Also, the largest known group of cryptanalysts is at NSA, with a big budget to find weaknesses, so I would not assume none will be found, just not made public. Sure, it took 400 years to figure out an answer to Fermat's Last Theorem. But we know more today and have more tools, so progress (if we can call it that) is faster now.

Given all of this, I'm not sure of the value of arguing that 128 bit is good enough when 256 is not all that much harder to implement, will within a couple of years be just as fast in processing, and even now, for the size of files being protected, such as credit card data and the like, adds a wait time that probably wouldn't be noticed in network latency.

I see the argument as much like the way the Titanic was built. The double hull stopped short of the waterline and the breach was above it. Total fluke, but if the double hull had been about 8 feet higher up the side we wouldn't have had so many stories to tell and adventures to watch in awe on the tube. The reality is it was not the technology that failed, but rather human error in not going further to meet the risk than was seen at the time. The bizarre thing is the same basic error was the cause of the Exxon Valdez disaster: not protecting against a well-known risk, drunk captains. Funny how almost all tankers have double hulls now. 
But that still didn't prevent the Busan from spilling 58,000 gallons of bunker oil in the San Francisco Bay. If they hadn't had a double hull, how much would they have spilled? Oh, well, given how risk-averse we tend to be, it is odd the choices we make. Best, Allen

Leichter, Jerry wrote:
| ...How bad is brute force here for AES? Say you have a chip that can do
| ten billion test keys a second -- far beyond what we can do now. Say
| you have a machine with 10,000 of them in it. That's 10^17 years worth
| of machine time, or about 7 million times the lifetime of the universe
| so far (about 13x10^9 years).
|
| Don't believe me? Just get out calc or bc and try
| ((2^128/10^14)/(60*60*24*365))
|
| I don't think anyone will be brute force cracking AES with 128 bit
| keys any time soon, and I doubt they will ever be brute forcing AES
| with 256 bit keys unless very new and unanticipated technologies
| arise.
|
| Now, it is entirely possible that someone will come up with a much
| smarter attack against AES than brute force. I'm just speaking of how
| bad
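The figures in this thread are easy to sanity-check. A short script (rates and dates taken from the messages above; note that a strict Moore's-law doubling every 18 months gives ~72x over 111 months, which the text rounds down to 64):

```python
# Sanity-check the brute-force arithmetic quoted in this thread.

SECONDS_PER_DAY = 86400
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

# DESCHALL (1997): ~7 billion keys/second near the end of the search.
des_days = 2 ** 56 / 7e9 / SECONDS_PER_DAY
print(f"full 56-bit keyspace at 7e9 keys/s: ~{des_days:.0f} days")

# Moore's-law scaling over 111 months, one doubling per 18 months.
factor = 2 ** (111 / 18)
print(f"speedup over 111 months: ~{factor:.0f}x")   # ~72x

# Leichter's AES figure: 10**14 keys/second against 2**128 keys.
aes_years = (2 ** 128 / 1e14) / SECONDS_PER_YEAR
print(f"128-bit keyspace at 1e14 keys/s: ~{aes_years:.1e} years")
```

The DES figure (~119 days for the full keyspace, so a few months in expectation) matches the DESCHALL timeline, and the AES figure lands on the quoted ~10^17 years.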
Re: how to read information from RFID equipped credit cards
Ben Laurie wrote: [snip] And so we end up at the position that we have ended up at so many times before: the GTCYM has to have a decent processor, a keyboard and a screen, and must be portable and secure. One day we'll stop concluding this and actually do something about it. And it can almost certainly be done with the current technology. Arnnei Speiser at http://www.megaas.co.nz/ has a two-factor one-time password application that runs on a Java-enabled cellphone. If he can do this, I suspect it is but a short hop to what you suggest. He has a bank demo that is worth looking at as a potential model. Best, Allen
Re: RNG for Padding
We had many discussions about this 15 years ago. You usually have predictable plaintext. A cipher that isn't strong enough against a chosen/known plaintext attack has too many other protocol problems to worry about mere padding! For IPsec, we originally specified random padding with 1 trailing byte of predictable trailing plaintext (the amount of padding). Together with the (encapsulated) protocol number, that actually made 2 bytes of predictable trailing plaintext. Due to my work in other groups, everything that I've specified afterward uses self-describing padding. That is, the last byte indicates how much padding (just as before), but each byte of the padding indicates its position in the padding sequence:

  0 ::= never used
  1 ::= 1 byte of padding (itself)
  2 ::= 2 bytes of padding
  ...

The original impetus was hardware manufacturers of in-line cipher devices, which don't usually have a good source of randomness. Also, this provides a modest amount of integrity protection: after decryption, the trailing padding must be the correct sequence. Of course, this should be in addition to integrity protection over the whole packet! Additionally, this avoids a possible covert channel for compromised data, whether by accident (revealing a poor RNG or the current state of the RNG) or by trojan process communication. Note that I've said avoids, as varying the amount of padding would still give a lower-bandwidth channel for the latter. When designing, it's always best to defend in depth.
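A minimal sketch of the self-describing padding described above (the block size and the exception choices are assumptions for illustration, not taken from any particular specification):

```python
def pad(data: bytes, block: int = 16) -> bytes:
    """Append self-describing padding: padding byte i carries the
    value i, and the final byte doubles as the padding length.
    Always pads at least one byte, so 0 is never used."""
    n = block - (len(data) % block)
    return data + bytes(range(1, n + 1))

def unpad(padded: bytes) -> bytes:
    """Strip the padding, verifying the full sequence; the check
    gives the modest post-decryption integrity test noted above."""
    n = padded[-1]
    if not 1 <= n <= len(padded):
        raise ValueError("bad padding length")
    if padded[-n:] != bytes(range(1, n + 1)):
        raise ValueError("padding sequence mismatch")
    return padded[:-n]
```

Because the expected padding sequence is fully determined by its final byte, no randomness source is needed, and any corruption of the trailing bytes is detected by unpad.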
Another NXP Mifare Classic attack
http://www.dailyprogress.com/servlet/Satellite?pagename=CDP/MGArticle/CDP_BasicArticlec=MGArticlecid=1173354778618path= The article is not really clear about the level of physical dissection actually used, but it does appear that progress is being made on that front as well. Allen [Moderator's note: the article discusses the cryptanalysis of the algorithm used in the ill-fated Dutch transit card, which is also in use in the London Underground's Oystercard -- actual cryptanalysis, not the mere brute force attack which was possible before. --Perry]
Re: Death of antivirus software imminent
Alex Alten wrote: [snip] These are trite responses. Of course not. My point is that if the criminals are lazy enough to use a standard security protocol then they can't expect us not to put something in place to decrypt that traffic at will if necessary. [snip] Look, the criminals have to design their security system with severe disadvantages; they don't own the machines they attack/take over so they can't control the software/hardware contents easily, they can't screw around too much with the IP protocol headers or they lose communications with them, and they don't have physical access to the slave/owned machines. And, last I heard, they must obey Kerckhoffs' law, despite using prayers to Allah for key exchanges. Given all this, I'm not saying it's easy to do, but it should be quite possible to crack open some or all of their encrypted comms and/or trace back to the original source attack machines.

However, we do know that criminals are not always lazy. The trite comment often made is that if they used the same level of effort in a legal enterprise they would have done quite well. The other proof that they are not lazy is looking at the evolution of the sophistication of malware like Storm and Nugache. It takes some serious effort to overcome the real handicaps that you point out, as well as the ratio of the power and numbers of those hunting to put them out of business to their own numbers. In many ways it is similar to a guerrilla war, where many of the advantages are actually held by the tiny band of insurgents who, greatly outnumbered and out-gunned, can in fact change history. The Swiss know this and train their military based on it. Do not be surprised if dissidents of all stripes use improvisation based on malware and other tools like onion routing to further their causes and evade suppression. BTW, while I do not think all dissidents are righteous or fighting for righteous causes, this does not negate the general idea. A hammer is a hammer. 
Good or evil is independent of the tools; it depends on what one is pounding, nails or heads. Best, Allen
Re: PlayStation 3 predicts next US president
William Allen Simpson wrote: [snip] The whole point of a notary is to bind a document to a person. That the person submitted two or more different documents at different times is readily observable. After all, the notary has the document(s)! No, the notary does not have the documents *after* they are notarized, nor do they keep copies. Having been a notary, I know this personally. When I stopped being a notary, all I had to submit to the state was my seal and my record books. If I had to testify about a document, I would only be attesting that the person who presented themselves adequately proved, under the prudent businessman's standard, that they were the person that they said they were and that I saw them sign the document in question. That's it. No copies at all. What would anyone have to testify about if a legal battle arose after the notary either died or stopped being a notary? Think for a minute about the burden on a notary if they had to keep a copy of every document they notarized. What a juicy target they would make for thieves and industrial spies. No patent paperwork would be safe, no sales contract, no will, or other document. Just think how the safe and burglar alarm companies would thrive. Now ask yourself how much it costs to notarize a document. Would that pay for the copying and storage? I don't know what the current fees are in California, but 20 years ago they were limited to $6.00 per person per document and an extra buck for each additional copy done at the same time. My average was about $14.00 per session. My insurance was $50/year. Nowhere near enough to cover my liability if I were to retain a copy of the document. Best, Allen
Re: PlayStation 3 predicts next US president
Francois Grieu wrote: That's because if Tn is known (including chosen) to some person, then (due to the weakness in MD5 we are talking about), she can generate Dp and Dp' such that S( MD5(Tn || Dp || Cp || Cn) ) = S( MD5(Tn || Dp' || Cp || Cn) ) whatever Cp, Cn and S() are.

First of all, the weakness in MD5 (computational feasibility over time) that we are talking about is not (yet) a preimage or second preimage attack. Please don't extrapolate your argument. Second of all, you need to read my messages more carefully. No good canonical format allows random hidden fields or images. Third of all, that's not a weakness of a notary protocol -- it's a trap! The whole point of a notary is to bind a document to a person. That the person submitted two or more different documents at different times is readily observable. After all, the notary has the document(s)! Remember, the notary is not vouching for the validity of the content of the document. A notary only certifies that something was submitted by some person at some time. And that cannot be broken by making multiple submissions, or submissions that themselves have the same hash. That's one reason I'm much more interested in the attack on X.509. If Tn were hashed after Dp rather than before, poof goes security. But since it's not, that's a ridiculous strawman. I was remembering PGP off the top of my head. Fairly certain that Kerberos does, too. Not everybody is naive! And since the timestamp is predictable (within some range, although picoseconds really aren't very predictable), the protocols that I've designed include message identifiers, nonces, and sequence numbers, too. As you may recall, I mentioned that there were other fields. He asked for an explanation about how a document is identified, and he got one. Don't expect me to redesign an entire notary (or even a timestamp) protocol on a Sunday evening for a mailing list. Really, there are fairly secure standards already available. 
However, the actual topic of this thread is code distribution. In that case, there is no other party certifying the documents. The code packager is also the certifier. There is (as yet) no weakness in the MD4 family (including MD5 and SHA1) that allows this attack by another party.
Re: PlayStation 3 predicts next US president
Personally, I thought this horse was well drubbed, but the moderator let this message through, so he must think it important to continue.

James A. Donald wrote: William Allen Simpson wrote: The notary would never sign a hash generated by somebody else. Instead, the notary generates its own document (from its own tuples), and signs its own document, documenting that some other document was submitted by some person before some particular time. And how does it identify this other document?

Sorry, obviously I incorrectly assumed that we're talking to somebody skilled in the art. Reminding you that several of us have told you that a notary has the document in her possession; and binds the document to a person; and that we have rather a lot of experience in identifying documents (even for simple things like email), such as the PGP digital timestamping service. Assuming:

  Dp := any electronic document submitted by some person, converted to its canonical form
  Cp := an electronic certificate irrefutably identifying the other person submitting the document
  Cn := certificate of the notary
  Tn := timestamp of the notary
  S() := signature of the notary

  S( MD5(Tn || Dp || Cp || Cn) ).

Of course, I'm sure the formula could be improved, and there are traditionally fields identifying the algorithms used, etc. -- or something else I've forgotten off the top of my head -- but please argue about the actual topic of this thread, instead of incessant strawmen.

The notary is only safe from this flaw in MD5 if you ... Another statement with no proof. As the original poster admitted, there is not a practical preimage or second preimage attack on MD5 (yet). ... assume he is not using MD5 for its intended purpose. As to its intended purpose, rather than making one up, I've always relied upon the statement of the designer: ...
The MD5 algorithm is intended for digital signature applications, where a large file must be compressed in a secure manner before being encrypted with a private (secret) key under a public-key cryptosystem such as RSA.
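The S( MD5(Tn || Dp || Cp || Cn) ) binding from the message above can be sketched in a few lines. MD5 appears only to mirror the discussion (it is broken for collision resistance), and HMAC-SHA256 stands in for the notary's public-key signature S() purely so the sketch stays self-contained; a real notary would sign with a public-key algorithm such as RSA, per the RFC 1321 statement quoted above.

```python
import hashlib
import hmac

def notary_digest(tn: bytes, dp: bytes, cp: bytes, cn: bytes) -> bytes:
    # MD5(Tn || Dp || Cp || Cn) -- the document/person/time binding
    # described in the thread. MD5 is used only to mirror the discussion.
    return hashlib.md5(tn + dp + cp + cn).digest()

def notary_sign(notary_key: bytes, tn: bytes, dp: bytes,
                cp: bytes, cn: bytes) -> bytes:
    # S( MD5(...) ): HMAC-SHA256 is a stand-in for the notary's
    # public-key signature, used here only to keep the sketch runnable.
    return hmac.new(notary_key, notary_digest(tn, dp, cp, cn),
                    hashlib.sha256).digest()

# Note that Tn is hashed *before* Dp: a collision over Dp would have to
# be found without control of the notary's timestamp prefix.
```

Because Tn prefixes Dp, an attacker preparing colliding documents in advance does not control the full hashed message, which is the point made above about the hashing order.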
Re: PlayStation 3 predicts next US president
James A. Donald wrote: Not true. Because they are notarizing a signature, not a document, they check my supporting identification, but never read the document being signed.

This will be my last posting. You have refused several requests to stick to the original topic at hand. Apparently, you have no actual experience with the legal system, or are from such a different legal jurisdiction that your scenario is somehow related to MD5 hashes of software and code distribution.

Because human beings often try to skirt the rules, there's a long history of detailed notarization requirements. How it works here:

(1) You prepare the document(s). They are in the form prescribed by law -- for example, Michigan Court Rule (MCR 2.114) SIGNATURES OF ATTORNEYS AND PARTIES; VERIFICATION; EFFECT; SANCTIONS.

(2) The clerk checks for the prescribed form and content.

(3) You sign and date the document(s) before the notary (using a pen supplied by the notary, no disappearing ink allowed).

(4) The notary signs and dates their record of your signature, optionally impressing the document(s) with an embossing stamp (making it physically difficult to erase).

You have now attested to the content of the documents, and the notary has attested to your signature (not the veracity of the documents). Note that we get both integrity and non-repudiation.

The only acceptable computer parallel would require you to bring the documents to the notary, using a digital format supplied by the notary, generate the digital signature on the notary's equipment, and then have the notary independently certify your signature (on the same equipment). In the real world, the emphasis is on binding a document to a person, and vice versa. Any digital system that does not tie the physical person to the virtual document is not equivalent.

This is simply not equivalent to a site producing its own software and generating a hash of its own content. There should be no third party involved as a certifier.
If they were to generate an MD5 hash of documents prepared by someone else, then the attack described (eight different human-readable documents with the same MD5 hash) works. If a notary were to do that, they'd be looking at a fairly severe penalty. By definition, such a notary was compromised. But nothing like the prison sentence that you'd be facing for presenting the false documents to the court. And I'd be pushing the prosecutor for consecutive sentences for all 8 fraudulent documents, with enhancements.

Nobody has given any examples of human-readable documents that will produce the same hash when re-typed into the system. All those proposed require an invisible component. They are machine readable only. That's why we, as security analysts, don't design or approve such systems. We're not (supposed to be) fooled by parlor tricks.
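The point about invisible components can be demonstrated directly: the known MD5 collision pairs differ only in bytes that never survive re-typing. A toy canonicalization (hypothetical, and far looser than a real canonical format) that keeps only visible text shows why hashing the canonical form, rather than the raw bytes, defeats that class of trick.

```python
import hashlib

def canonicalize(raw: bytes) -> bytes:
    # Toy canonical form: keep printable characters, collapse whitespace.
    # A real canonical format would be much stricter; this only shows
    # that hidden bytes do not survive canonicalization.
    text = raw.decode("latin-1")
    visible = "".join(ch for ch in text if ch.isprintable() or ch.isspace())
    return b" ".join(visible.encode("latin-1").split())

# Two "documents" with identical visible text but different hidden bytes,
# standing in for collision blocks (which are invisible when rendered).
doc_a = b"Candidate X will win the election." + b"\x00\x01\x02\x03"
doc_b = b"Candidate X will win the election." + b"\x00\x04\x05\x06"

raw_a, raw_b = hashlib.md5(doc_a).digest(), hashlib.md5(doc_b).digest()
canon_a = hashlib.md5(canonicalize(doc_a)).digest()
canon_b = hashlib.md5(canonicalize(doc_b)).digest()
# raw_a != raw_b here (these are not a real collision pair), but even for
# a real collision the canonical hashes are computed over identical bytes,
# so the hidden-block trick buys the attacker nothing.
```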
Re: PlayStation 3 predicts next US president
Weger, B.M.M. de wrote: See http://www.win.tue.nl/hashclash/TargetCollidingCertificates/ ... Our first chosen-prefix collision attack has complexity of about 2^50, as described in our EuroCrypt 2007 paper. This has been considerably improved since then. In the full paper that is in preparation we'll give details of those improvements.

Much more interesting. Looks like the death knell of X.509. Why didn't you say so earlier? (It's a long-known design flaw in X.509 that it doesn't provide integrity for all its internal fields.)

Where are MD2, MD4, SHA1, and others on this continuum? And based on the comments in the page above, the prefix is quite large! Optimally, shouldn't it be >= the internal chaining variables? 512 bits for MDx. So, the attacks need two values for comparison: the complexity versus the length of the chosen prefix.

Let me know when you get the chosen prefix down to 64 bits, so I can say I told you so to Bellovin again. (I was strongly against adding the random IV field to IPsec.)
Re: PlayStation 3 predicts next US president
William Allen Simpson wrote: [snip] Actually, I deal with notaries regularly. I've always had to physically sign while watched by the notary. They always read the stuff notarized, and my supporting identification, because they are notarizing a signature (not a document). And yes, they always generate the stamp or imprint they sign. To do otherwise would be irresponsible (and illegal).

Having been a notary in the State of California (shocked myself, got 100% on the test!) I can attest that the contents of the document are looked at, but only so that I could record what *type* of document I was notarizing, not the exact textual meaning of the content or whether it might or might not allege something that is untrue. The description of the document in my log book was always relatively short, as there was only space for about 20 words.

The requirements are simple: see that the document you are notarizing has as many pages as it says it does, so that the count can't be changed without arousing suspicion; and that the person signing the paper is identified by enough documentation that I could be assured, within the limits of a superficial (not expert) perusal of less than ten minutes, that the identification documents presented match the person presenting them, to the best of my ability to judge.

Best, Allen

It always was a good faith certification, not a proof beyond challenge.
Re: PlayStation 3 predicts next US president
James A. Donald wrote: This attack does not require the certifier to be compromised.

You are referring to a different page (that I did not reference). Nevertheless, both attacks require the certifier to be compromised!

The attack was to generate a multitude of predictions for the US election, each of which has the same MD5 hash. If the certifier certifies any one of these predictions, the recipient can use the certificate for any one of these predictions.

That's a mighty big if -- as in infinite improbability. Therefore, a parlor trick, not cryptography. There are no circumstances in which any reputable certifier will ever certify any of the multitude containing a hidden pdf image, especially where generated by another party. The attack requires the certifier to be compromised, either to certify documents that the certifier did not generate, or to include the chosen text (hidden image) in its documents in exactly the correct location. While there are plenty of chosen text attacks in cryptography, this one is highly impractical. The image is hidden. It will not appear, and thus would not be accidentally copied by somebody (cut-and-paste). The parlor trick demonstrates a weakness of the pdf format, not MD5.

This attack renders MD5 entirely worthless for any use other than as an error check like CRC - and CRC does it better and faster.

To be as weak as CRC, the strength would be 2**8. I've seen no papers that reduce MD5 complexity to 2**8. Please present your proofs and actual vulnerabilities, including specific examples of actual PPP CHAP compromised traffic -- and for extra credit, actual compromise of netbsd and/or openbsd software distribution.
Re: PlayStation 3 predicts next US president
Weger, B.M.M. de wrote: The parlor trick demonstrates a weakness of the pdf format, not MD5. I disagree. We could just as easy have put the collision blocks in visible images.

Parlor trick.

... We could just as easy have used MS Word documents, or any document format in which there is some way of putting a few random blocks somewhere nicely.

Parlor trick.

... We say so on the website. We did show this hiding of collisions for other data formats, such as X.509 certificates

More interesting. Where on your web site? I've long abhorred the X.509 format, and was a supporter of a cleaner alternative.

... and for Win32 executables.

Parlor trick. So far, all the things you mention require the certifier to be suborned.

Our real work is chosen-prefix collisions combined with multi-collisions. This is crypto, it has not been done before,

Certainly it was done before! We talked about it more than a decade ago. We knew that what was computationally infeasible would become feasible. Every protocol I've designed or formally reviewed is protected against the chosen-prefix attack. (To qualify: where I had final say. I've reviewed badly designed protocols, such as IKE/ISAKMP. And I've been overruled by committee from time to time.)

What *would* be crypto is the quantification of where MDx currently falls on the computational spectrum.
Re: Hushmail in U.S. v. Tyler Stumbo
StealthMonger wrote: [snip] The larger truth is that a consequence of using Hushmail is that a record of when, with whom, and the size of each communication is available to Hush, even though the content is concealed.

So the obvious point is that Hushmail, and systems like it, become concentrators and possible single points of failure. If, on the other hand, you handled your own PKI to send symmetric keys to your correspondents and managed the keys with something like StrongKey, then one could use a vast number of ISPs/SMTP points so that they may never get a clear path of send and reply through a single ISP.

As Jon Callas said, If the system is strong, it all comes down to your operational security. Security is not a thing, it is a process that uses tools and procedures to accomplish the goal. As I like to say, Security is a lot like democracy - everyone's for it but few understand that you have to work at it constantly.

Best, Allen
Re: Linus: Security is people wanking around with their opinions
I often say, Rub a pair of cryptographers together, and you'll get three opinions. Ask three, you'll get six opinions. :-)

However, he's talking about security, which often isn't quantifiable! And don't get me ranting about provable security. Had a small disagreement with somebody at Google the other week, as he complained that variable moduli ruined the security proof (attempts) for SSH.
Re: flavors of reptile lubricant, was Another Snake Oil Candidate
[EMAIL PROTECTED] wrote: The below USB drive manufacturer claims FIPS 140-2 certification. Encryption is now required for USB thumb drives used on DoD computers. This one is being used by the military. http://www.kanguru.com/kanguruusbflash.html

See: http://csrc.nist.gov/cryptval/140-1/1401val2006.htm#682

Best, Allen
In all the talk of super computers there is not...
There does not seem to be much consideration about what is computationally infeasible, even with rainbow tables. If I remember correctly, an 8-character table over a 94-character key space is about 300 MB. How big would it be if it covered 12 characters? How long would it take to compute, assuming 1,000 3 GHz CPUs on a botnet?

Now take the phrase Mary had a lamb, and its fleece was as white as snow. Not counting the quotes, it is 52 characters and has both upper and lower case characters, spaces, and two specials, for a total key space of 55. How big would the rainbow table be to contain that? How long would it take to compute with 1,000 3 GHz CPUs?

Of course one could not assume that the pass phrase would only use the 55 above, so what about the 94-character key space table for 52 characters? How big? How long to compute? The spreadsheet I have runs out of space to calculate and just renders errors. I'm guessing that even the botnets in current use couldn't do it in any reasonable time frame, nor is the storage space available at an affordable price for any but three letter agencies. Am I correct?

Allen
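The sizes asked about above follow from straightforward arithmetic. The hash rate below (10^7 trial hashes per second per 3 GHz CPU) is a rough assumption, not a benchmark, but the conclusion is insensitive to it by many orders of magnitude.

```python
# Enumerate the key-space sizes from the questions above.
def keyspace(alphabet: int, length: int) -> int:
    return alphabet ** length

RATE_PER_CPU = 10**7            # assumed trial hashes/second on a 3 GHz CPU
BOTNET_CPUS = 1000
SECONDS_PER_YEAR = 365 * 24 * 3600

def years_to_enumerate(entries: int) -> float:
    return entries / (RATE_PER_CPU * BOTNET_CPUS * SECONDS_PER_YEAR)

for label, (a, n) in {"94^8": (94, 8), "94^12": (94, 12),
                      "55^52": (55, 52), "94^52": (94, 52)}.items():
    size = keyspace(a, n)
    print(f"{label}: {size:.2e} entries, "
          f"{years_to_enumerate(size):.1e} years to enumerate")
```

Even 94^12 (about 5 x 10^23 entries) takes on the order of a million botnet-years at this assumed rate, and 55^52 is around 10^90, far beyond any conceivable storage or compute. So yes: correct.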
Re: a new way to build quantum computers?
Steven M. Bellovin wrote: http://www.tgdaily.com/content/view/33425/118/ Ann Arbor (MI) - University of Michigan scientists have discovered a breakthrough way to utilize light in cryptography. The new technique can crack even complex codes in a matter of seconds. Scientists believe this technique offers much advancement over current solutions and could serve to foil national and personal security threats if employed. I'll let those who know more physics comment in detail; from reading the article, it appears to lead to a way to construct quantum computers.

Which means, if Moore's Law still applies, that in a few years no current code created by one of the three letter agencies will be safe from prying. So what is the statute of limitations on invasion of privacy suits? Or, if it has expired, then we may have proof available that people weren't crying wolf.

I've always loved the old saw, Be careful what you wish for, you just might get it. My addendum is that you will probably not like the unintended consequences.

Best, Allen
Backdoor Man...
Hi gang, Apparently Backdoor Man is still popular, but not as a blues. http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9025436

So, the question is, can you trust *any* commercial vendor where you can't verify the code? We have no clue if this was done for Intuit itself or if it was done at the request of some _agency_; however, even if it was only done for Intuit, it does leave a rather sour taste, because this is yet another proof that security by obscurity does not work and will eventually be exploited.

BTW, does anyone have a conversion metric for, say, John the Ripper on a P4 3GHz with 1GB of memory (or some other commodity-level computer) to the tera (soon to be peta, it looks like) flop ratings on supercomputers?

Thanks, Allen
Re: question re practical use of secret sharing
Actually I worked on a project recently that had this scenario. A paramedic team picks up a heart attack/stroke/serious accident patient. The paramedic tending the patient is using a laptop to record EKG or other electronic medical data. Even with the siren on, they get in a serious accident that puts the paramedic in a coma due to a concussion. The laptop with the data is broken. At the hospital they yank the hard drive and, using an adapter cable, mount it on another computer. However, since medical data is considered personal and private data, the hard drive is encrypted.

The patient, especially if a stroke victim, needs to have his condition understood immediately. Yes, they can do the same tests again, but that does not give them a baseline to compare to: is the patient getting worse, staying the same, or maybe even improving? With a stroke victim there is a very short window for doing some types of treatment. How do you recover the data?

Two solutions were considered: one was secret sharing and the other was StrongAuth's commercial version of the open source StrongKey. The StrongAuth approach was better than the secret sharing, but both were way ahead of the next possible choice. The primary reason that the StrongAuth approach would work better is that the medical data would be stored in a folder/partition that every person with the same level of access rights or higher could access with their own authentication via a stored certificate. This would mean many people's certificates would be stored on the drive, but being relatively small, this would not pose a problem.

The secret sharing was next best, because anyone at the hospital could call a central paging system that would page all security people with a number to log in to. If enough shares were created - we were thinking 99+ for a major medical system - then the minimum needed - we were thinking three - to recover the key would be available 24/7/366 to generate the needed key to allow access.
Both would work, but in this scenario, the local certificate would be faster by several minutes. If StrongAuth did not exist, then the secret sharing approach would be the only approach that could be made to work fast enough. Granted this seems like a corner case, but, trust me, this scenario happens several times a year in the USA. What with medical diagnosis and treatment being pushed closer to the scene of the emergency this is likely to become more common. Except for time critical events, secret sharing is the easiest to deploy and use in a robust way but there are very few, none that I could find, implementations of it that would have enough shares to cover vacations, out of range, and other vagaries of human existence. BTW, on the net is a demo of secret sharing: http://point-at-infinity.org//demo.html Allen Peter Gutmann wrote: Charles Jackson [EMAIL PROTECTED] writes: Is anyone aware of a commercial product that implements secret sharing? If so, can I get a pointer to some product literature? It's available as part of other products (e.g. nCipher do it for keying their HSMs), but I don't know of any product that just does... secret sharing. What would be the user interface for such an application? What would be the target audience? (I mean a real target audience, not some hypothesised scenario). (This is actually a serious question. I talked with some crypto guys a few years ago about doing a standard for secret sharing, but to do that we had to come up with some general usage model for it rather than just one particular application-specific solution, and couldn't). Besides that, user demand for it was practically nonexistent... no, it was completely nonexistent, apart from a few highly specialised custom uses we couldn't even find someone to use as a guinea pig for testing, and the existing specialised users already had specialised solutions of their own for handling it. Peter. 
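The 99-plus-shares, any-three-recover arrangement described in this thread is exactly Shamir secret sharing. A minimal sketch over a prime field, for illustration only (a real deployment needs authenticated shares, secure distribution, and a vetted implementation):

```python
import random

P = 2**127 - 1  # a Mersenne prime; the field must be larger than the secret

def split_secret(secret: int, n: int, k: int):
    """Split `secret` into n shares; any k of them recover it."""
    rng = random.SystemRandom()
    # Random polynomial of degree k-1 with the secret as constant term.
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    def poly(x: int) -> int:
        acc = 0
        for c in reversed(coeffs):   # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover_secret(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

With `shares = split_secret(key, 99, 3)`, any three of the on-call security staff can pool their shares to rebuild the drive key, while any two shares alone reveal nothing about it.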
Re: A crazy thought?
Two birds with one shot. :)

Ali, Saqib wrote: I am not sure what you are trying to achieve. The CA never has your private key. They are just signing a X.509 certificate that holds your public key. This way they are vouching that you own the public key. Even if you subpoena a CA they won't be able to decrypt any information encrypted with your public key. So having a separation-of-duty is not providing any additional security. Can you please elaborate on what you are trying to achieve?

I never said that the CA had your private key, only that they could validate that an open message came from whomever held the private key associated with a given public key.

I like going back to historical instances to illustrate issues, because people can read about them from second sources and perhaps get clues about the issue they might not have otherwise. In this case I'll refer to a commonly acknowledged observation that the biggest financial backer of the Communist Party, USA, in the 1950s was the FBI. Another instance of a similar sort is that in many cases during the anti-Vietnam war years, the people advocating violent actions turned out to be paid agents of the FBI and other government agencies. And a third scenario to consider is the capture of German spies by the British, who then used them to send both bogus and real intelligence back to their masters.

PKI and other similar structures are an attempt to maintain confidentiality between two parties that are not present in the same room, while at the same time assuring each other that they are indeed talking to whom they think they are. In the case of the FBI agents, they were not talking to whom they thought they were. With the German spies, they were, but the spies had been suborned with threats of the noose if they did not comply. Same problem, two different expressions. How do you trust that the person you are talking to is who they represent themselves to be?
It is almost a side issue whether anyone else is privy to the contents of the conversation; important to prevent misuse and fraud by others, but not central to the first point: identification. In a private e-mail, a suggestion was made that it might be possible for a CA to issue a second certificate alleging it to be yours when in fact it belonged to someone else. In this case, which is the real you, as represented by the conflicting certificates?

Then Ian G wrote: [snip] As a side note, outside the cryptography layer, there are legal, contractual, customary defences against the attacks that you outline.

Ah, yes, the rule of law. Well, I think we've seen enough, with the Innocence Project validating that people are put to death under customary legal processes, and with the existence of Guantanamo Bay, to say that if the law is your only protection, you need help in a big way if someone gets a burr up their butt about you.

My goal in this discussion is to examine how we can keep the underlying issues clear and utilize tools, like cryptography, to assist us in achieving well-founded trust relationships.

Best, Allen
Re: A crazy thought?
Jim Dixon wrote: [snip] The CA certifies that X is your public key.

Who is you? That is the real question. To leave CAs out for the moment, imagine J. Doe and J. Doe, two different people, each put a public key on a server, and you get a message created with a private key. You get the public key and validate that it comes from one of the two J. Does. The question is: who is the real J. Doe? Is one real and the other a repudiated key? Is one real and the other trying to steal the identity of the other? Or is it simply that there are, indeed, two people with the same name? Adding a CA merely adds one layer of obfuscation and opportunity for false certification.

If the CA starts handing out false public keys - which is the worst that it could do, right? - it will find itself instantly distrusted. Everybody in the world will be able to see that the CA used its private key to sign a false statement.

Will they? What evidence do you have that proves the certificate is bogus? Say that the person whose identity is being stolen, for whatever purpose, discovers that there is a second certificate with his name on it but a different public key; what can he do, yell loudly, No, I'm the real me!? How do we know that it isn't someone who is trying to muddy the waters and that the certificate holder is the real person?

The offended party need only put the false declaration up on the Web.

How many The Boy Who Cried Wolf cases would have to happen before we wouldn't trust *any* public key to represent who we think it does? How will dissident groups keep from getting compromised when fighting oppression?

Best, Allen
A crazy thought?
Hi Gang, In a class I was in today, a statement was made that there is no way anyone could present someone else's digital signature as their own, because no one else has their private key to sign with. This was in the context of a CA certificate which had the signature inside. I tried to suggest that there might be scenarios that could accomplish this, but was told impossible. Not being totally clear on all the methods that bind a digital signature to an identity, I let it be; however, the impossible mantra got me to thinking about it and wondering what vectors might make this possible.

Validating a digital signature requires getting the public key from some source, like a CA or a publicly accessible database, and decrypting the signature to validate that the private key associated with the public key created the digital signature, or open message. Which led me to the thought of trust in the repository for the public key.

Here in the USA, there is a long history of behind-the-scenes cooperation by various large companies with the forces of the law, like the wiretap in the ATT wire room, etc. What is to prevent this from happening at a CA, and it not being known for a lengthy period of time? Jurors have been suborned for political reasons, why not CAs? Would you, could you, trust a CA based in a country with a low ethics standard or a low regard for human rights?

Which led me to the thought that if it is possible, what could be done to reduce the risk of it happening? It occurred to me that perhaps some variation of separation of duties, like two CAs located in different political environments, might accomplish this, with each cross-signing the certificate so that the compromise of one CA would trigger an invalid certificate. This might work if the compromise of the CA happened *after* the original certificate was issued, but what if the compromise was long-standing? Is there any way to accomplish this? Thoughts?
Best to all, Allen
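The dual-CA cross-signing idea in the message above can be sketched as a verification rule that accepts a certificate only when two independent authorities both vouch for the same (name, public key) binding. HMAC-SHA256 stands in for the CAs' real public-key signatures purely to keep the sketch self-contained; the names and keys are hypothetical.

```python
import hashlib
import hmac

def ca_sign(ca_key: bytes, subject: bytes, pubkey: bytes) -> bytes:
    # Stand-in for an X.509 signature over the (subject, public key)
    # binding; a real CA would use RSA/ECDSA, modeled here with HMAC.
    return hmac.new(ca_key, subject + b"|" + pubkey, hashlib.sha256).digest()

def verify_dual(subject: bytes, pubkey: bytes, sig1: bytes, sig2: bytes,
                ca1_key: bytes, ca2_key: bytes) -> bool:
    # Both CAs, in different jurisdictions, must agree: suborning one
    # CA is no longer enough to forge the binding.
    ok1 = hmac.compare_digest(sig1, ca_sign(ca1_key, subject, pubkey))
    ok2 = hmac.compare_digest(sig2, ca_sign(ca2_key, subject, pubkey))
    return ok1 and ok2
```

As the message itself notes, this only helps against a compromise occurring after issuance; if a CA was suborned before signing, both signatures can still cover a false binding.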
Re: Ross Anderson paper on fraud, risk and nonbank payment systems
Steve Schear wrote: [snip] In real life, following the money is just as important as following the man. It's time for the system to be rebalanced. In fact, I believe, it is even more important because it is the snail trail that connects the people involved. Significant sized anti-social activities are very rarely one-man bands. Given this, rather than requiring proof of identity to open bank accounts, etc, we should encourage transactions through the normal channels in order to better follow the money, if we are truly after criminals. All the extra controls do is force ordinary people who can't, for whatever reason, meet the proof of identity standards to create covert channels to transact their business. These then become the means the real crooks then use to commit whatever it is they do. The best parallels I can think of are Prohibition and the War on Drugs. Look at the total chaos brought on by Prohibition. Fortunately we were wise enough to put a stop to that relatively, for social controls, quickly. The War on Drugs; however, we have not been as smart about, and now, just over 100 years later we are spending multi-billions to bring forth an occasional mouse displayed in screaming headlines. Both Prohibition and the War on Drugs responded to each new general control by creating covert channels for transacting business. The $10,000 alert system created smurfing where deposits were always less. Now that they have instituted controls on transfers of $5,000 or more, guess what? I think you can see the trend. In addition by imposing general controls what they do is spread the work around. The crooks have to hire more people to do the work which creates a mindset in a larger number of people that laws oppress and that you are better off living outside the law. To bring it back to encryption, what are the goals we are trying to achieve by using encryption? Are they goals whereby we create barriers between people? 
Or are the goals to assist people in creating connections that are secure and enhance trust? The tools themselves are neutral. Best, Allen
Re: Was a mistake made in the design of AACS?
Hal Finney wrote: [snip] http://www.freedom-to-tinker.com/?p= By this point in our series on AACS (the encryption scheme used in HD-DVD and Blu-ray) it should be clear that AACS creates a nontrivial strategic game between the AACS central authority (representing the movie studios) and the attackers who want to defeat AACS. Today I want to sketch a model of this game and talk about who is likely to win...

Felten focuses on the loss of revenue due to extraction of device keys and subsequent file sharing of decrypted content. AACS has a mechanism called sequence keys to watermark content and allow it to be traced back to the player that created it. Felten assumes that attackers would publish decrypted movies, AACSLA would then trace them back to the broken device, and revoke that device in future releases.

I know I'm in over my head on this, so my apologies, but if the key is used in one machine in a product line - Sony DVD players, say - then if they find the one machine that it came from and disable it, wouldn't figuring out the key for the next machine in the production run be relatively trivial, as the algorithm and hardware implementation used by all machines of a given run are the same? Therefore, couldn't one buy several of them and use them one after another as they are discovered and disabled? So, in order to prevent any of those machines from being used, they'd have to disable a whole lot of machines owned by ordinary individuals, right? What are the downside risks for Sony in doing this? What am I missing in this picture?

Thanks, Allen
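The revocation question above can be made concrete with a toy media-key-block model: the disc carries the content key wrapped once per device, and revoking a player just means omitting its entry from future discs. This is a drastic simplification of AACS (which uses subset-difference trees so the block stays small, and gives each player its own device-key set), and the SHA-256 keystream below is a stand-in cipher, not the real wrapping.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Counter-mode stream derived from SHA-256; a stand-in for the
    # AES-based key wrapping a real system would use.
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_mkb(device_keys: dict, content_key: bytes, revoked: set) -> dict:
    # Media key block: the content key wrapped under every device key
    # *except* the revoked ones. Each of the "several machines" in the
    # question keeps working only until its key lands in `revoked`.
    return {dev: xor(content_key, keystream(k, len(content_key)))
            for dev, k in device_keys.items() if dev not in revoked}

def unwrap(mkb: dict, dev: str, dev_key: bytes, n: int) -> bytes:
    return xor(mkb[dev], keystream(dev_key, n))  # KeyError if revoked
```

Because real AACS assigns distinct device-key sets per player rather than per production run, revoking one extracted key need not disable every machine of that model; sharing one key across a run would create exactly the mass-revocation problem the question describes.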
Re: Was a mistake made in the design of AACS?
Ian G wrote: Hal Finney wrote: Perry Metzger writes: Once the release window has passed, the attacker will use the compromise aggressively and the authority will then blacklist the compromised player, which essentially starts the game over. The studio collects revenue during the release window, and sometimes beyond the release window when the attacker gets unlucky and takes a long time to find another compromise. This seems to assume that when a crack is announced, all revenue stops. This would appear to be false. When cracks are announced in such systems, normally revenues aren't strongly affected. C.f. DVDs.

However, the money spent in trying to enforce control comes straight from the bottom line, and is therefore limited if they want to stay profitable in the long run. True, they do have deep pockets, but they could be nibbled to death by ducks, as they are very big targets and the ducks are small and have wings.

Best, Allen
Re: Cryptome cut off by NTT/Verio
Hi Bill, Tried that and got: Your search - http://cryptome.org/cryptome-shut.htm - did not match any documents. I have had some luck at getting the page by attempting several times, but then I got stupid and forgot to save it! Currently I've got http://www.eyeball-series.org/. I'll save it as HTML and send it to anyone who wants it. Does anyone have an alternate e-mail address for John Young? My provider is willing to host and is cheap - 500MB/unlimited bandwidth for less than $6/month and unlimited storage/bandwidth for less than $50 a month. Yeah, this is a bit of a plug, but I'm not getting anything for it, just letting people know that there are good folk out there. Best, Allen Bill Squier wrote: On Apr 29, 2007, at 11:47 AM, Perry E. Metzger wrote: Slightly off topic, but not deeply. Many of you are familiar with John Young's Cryptome web site. Apparently NTT/Verio has suddenly (after many years) decided that Cryptome violates the ISP's AUP, though they haven't made it particularly clear why. The following link will work for at least a few days I imagine: http://cryptome.org/cryptome-shut.htm It appears to already be dead, but still exists in Google's cache: http://tinyurl.com/yvc8k4 -wps - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
MORE Re: Cryptome cut off by NTT/Verio
Perry E. Metzger wrote: Slightly off topic, but not deeply. Many of you are familiar with John Young's Cryptome web site. Apparently NTT/Verio has suddenly (after many years) decided that Cryptome violates the ISP's AUP, though they haven't made it particularly clear why. The following link will work for at least a few days I imagine: http://cryptome.org/cryptome-shut.htm Okay gang, The URL/URI is http://www.sound-by-design.com/cryptome/Cryptome.htm It has a lot of the shut down stuff down the page a bit. Sorry, no internal links and no images. Allen - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
STILL MORE Re: Cryptome cut off by NTT/Verio
Perry E. Metzger wrote: Slightly off topic, but not deeply. Many of you are familiar with John Young's Cryptome web site. Apparently NTT/Verio has suddenly (after many years) decided that Cryptome violates the ISP's AUP, though they haven't made it particularly clear why. The following link will work for at least a few days I imagine: http://cryptome.org/cryptome-shut.htm Okay gang, I've loaded it at: http://www.sound-by-design.com/cryptome/cryptome-shut.htm Sorry, no images and internal links but at least the bulk is there. Best, Allen - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: More info in my AES128-CBC question
Aram Perez wrote: Another response was you haven't heard of anyone breaking SD cards have you? I love responses like this. In the physical world there are the examples of the Kryptonite lock and the Master combination lock. By the time you hear about the methodology of the attack, someone has lost their $16,000+ motorcycle or had their wallet with $1,000 and identity papers stolen from their gym locker - and they really were telling the truth when they said they locked it up properly. My counter to this sort of response is: How many people are attacking it that you don't know about yet? For one, I can almost (not being on staff I can't be absolutely sure) guarantee that the NSA is hard at work at cracking SD cards. Why, you might ask? Simple. What would be the easiest way for a spy to smuggle critical information out of a country? As an ostensible tourist with a camera and multiple SD cards. Even easier would be to give the camera to a real tourist as a gift and then steal it back when they get home. There is a very fine balancing act between confidentiality (or secrecy, if you'd rather) and an open society with accountability. America's existence is partly a result of people objecting to a Star Chamber legal system, and yet the security of democracy rests on having truly secure and private elections that cannot be tampered with without it becoming known. This is where cryptography can play a critical role in maintaining trust in our system of governance and protecting people who hold divergent views or beliefs from intimidation. Best, Allen - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Additional Re: More info in my AES128-CBC question
Sorry gang. In my response to David I forgot to provide the link to a brief history of ulcers from the CDC which is very interesting from the point of view of how long it takes for experts to accept evidence. http://www.cdc.gov/ulcer/history.htm Have fun. Allen - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Cracking the code?
Hi gang, On a recent consulting gig, I came across what I think is a potential vulnerability and wanted to see how crazy my thinking is. Without mentioning the exact place or piece of software because of NDAs, here is the basic scenario. The tool stores the hex version of the remote access password in a field that is visible to the end user. The default setting of the software is that if you enter ASCII into the field, it will calculate the hex version and display it. At this site the sys admins have decided that this is not a user-settable field, so once set the user cannot change it except with the help of an admin. There is also no policy in place to require periodic password changes. Also, every user in the entire enterprise has this field visible in their LDAP address information that anyone in the company can access at any time. The address info also contains the user name for logging onto the network. The password for remote access appears to be the same as the password for logging onto the machine even when it is not connected to the domain. Next, trial versions of the software are available that still have the default setting where the user can enter any password and the hex value will be shown. As to the password algorithm itself, I don't know what it is. I don't know if it uses an IV that changes for every password that is entered, but that would be easy to check with the trial version. What research I've done says that it is derived from AES128 and it is a fixed field length. There is more than a bit of security by obscurity at play here. So it seems to me this is vulnerable to a known-text attack: i.e., enter known password 1, get back hex value 1, etc. By hand it would take a while to build a list of equivalences, but I assume that a clever perl hacker, which I'm not, could code a widget that would automate this, taking a common dictionary such as from Cain & Abel, John the Ripper or some such, and fairly quickly build a list of password/hex pairs. 
With this list in hand, an insider bent on industrial espionage could find the weak passwords of sys admins, log on as them, and do whatever nefarious deeds they wish. My questions are: A) Is this as vulnerable as it seems at first blush? B) How many password/hex pairs would be needed to deduce the underlying algorithm? C) If one could deduce the algorithm, could the attack be generalized so that it could be used against other enterprises that use the same software? (It is very(!) widely deployed.) And D) am I missing something in my thinking? Thanks, Allen - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
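A sketch of the lookup-table attack described above. Since the tool's actual algorithm is unknown, `password_to_hex` below is a hypothetical MD5 placeholder standing in for whatever unsalted, deterministic transform the product uses; the point is only the structure of the attack:

```python
import hashlib

def password_to_hex(password: str) -> str:
    # Hypothetical stand-in for the tool's undisclosed password-to-hex
    # transform; MD5 is used here purely as a placeholder.
    return hashlib.md5(password.encode()).hexdigest()

def build_lookup(wordlist):
    # Map each observed hex value back to the password that produced it.
    return {password_to_hex(pw): pw for pw in wordlist}

table = build_lookup(["password1", "letmein", "admin123"])

# An insider who later sees a hex value in the visible LDAP field
# recovers the password with a single dictionary lookup.
observed = password_to_hex("letmein")
print(table.get(observed))
```

Note this only works if the transform is deterministic and unsalted; a per-entry IV (which the post says is unverified) would defeat the precomputed table, though not online trial of guesses.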
Re: padlocks with backdoors - TSA approved
Hi Hadmut, Welcome to the world of total stupidity. I was in the hardware store the other day and looked at those cheap luggage locks and thought about how thieves might be able to utilize the weakness of the system to rip off people, but then..., well, I looked at the Master brand, generally a good brand, and a couple of other combination lock brands in the $30 to $45 USD range where you can set the combination to whatever you want. Guess what? They all seemed to use the same key to enable setting the combination. Now, granted, you have to open the lock first, then you use the key to release the cylinders to set the combination, but it seems to me that with a little work one could figure out how to bypass the security mechanism to open the lock quickly. Then, too, there are some great lock picking sites on the net that will teach you how to pick even so-called security locks. Much like DES slowed people down until they developed the technology to overcome the encryption, locks are only as good as the lack of knowledge that the average crook has. Look up the Kryptonite motorcycle lock that was about $65 USD and a kid in a bike shop figured out how to hack the lock with a $0.19 USD BIC pen. The lock had been made and sold for twenty-plus years with the same weakness in design. That was truly a zero day exploit. Oh, and another story for you on failure in design. We are thinking of re-financing our house. The mortgage company keeps all the personal identifiable data in encrypted form in their offices, but when they send me the quote it's in plain text in an e-mail! Thinking through all aspects of the design and application of a security model is mostly lacking as far as I can tell. Best, Allen Hadmut Danisch wrote: Hi, has this been mentioned here before? I just had my crypto nightmare experience. I was in a (german!) outdoor shop to complete my equipment for my next trip, when I came to the rack with luggage padlocks (used to lock the zippers). 
While the german brand locks were as usual, all the US brand locks had a sticker Can be opened and re-locked by US luggage inspectors. Each of these (three digit code) locks had a small keyhole for the master key to open. Obviously there are different key types (different size, shape, brand), as the locks had numbers like TSA005 to tell the officer which key to use to open that lock. Never seen anything in the real world which is such a precise analogon of a crypto backdoor for governmental access. Ironically, they advertise it as a big advantage and important feature, since it allows the luggage to arrive with the lock intact and in place instead of cut off. This is the point where I decided to have nightmares from now on. regards Hadmut - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: data under one key, was Re: analysis and implementation of LRW
Vlad SATtva Miller wrote: Allen wrote on 31.01.2007 01:02: I'll skip the rest of your excellent, and thought provoking post as it is future and I'm looking at now. From what you've written and other material I've read, it is clear that even if the horizon isn't as short as five years, it is certainly shorter than 70. Given that, it appears what has to be done is the same as the audio industry has had to do with 30 year old master tapes when they discovered that the binder that held the oxide to the backing was becoming gummy and shedding the music as the tape was playing - reconstruct the data and re-encode it using more up to date technology. I guess we will have grunt jobs for a long time to come. :) I think you underestimate what Travis said about assurance on long-term encrypted data. If an attacker can (and it is very likely) now obtain your ciphertext encrypted with a scheme that isn't strong in a 70-year perspective, he will be able to break the scheme in the future when technology and science allow it, effectively compromising [part of] your clients' private data, despite your efforts to re-encrypt it later with an improved scheme. The point is that an encryption scheme for long-term secrets must be strong for the entire period the data needs to stay secret. Imagine this, if you will. You have a disk with encrypted data and the key to decrypt it. You can take two paths that I can see: 1. Encrypt the old data and its key with the new, more robust, encryption algorithm and key as you migrate it from the now aged HD which is nearing the end of its lifespan. Then use the then current disk wiping technology of choice to destroy the old data. I think a blast furnace might be a great choice for a long time to come. 2. Decrypt the data using the key and re-encrypt it with the new algorithm using a new key, then migrate it to a new HD. Afterward destroy the old drive/data by your favorite method at the time. I still like the blast furnace as tool of choice. 
Both approaches suffer from one defect in common - there is the assumption that the old disk you have the data on is the only copy in existence, clearly a *bad* idea if you should have a catastrophic failure of the HD or other storage device. So then it boils down to finding all known and unknown copies of the encrypted data and securely destroying them as well. Not a safe assumption, as we know from looking at the history of papers dug up hundreds of years after the original appears to be lost forever. Approach 1 also suffers from the problem that we may not have the software readily available waaay down the road to decrypt the many layers of the onion. And that will surely bring tears to our eyes. Since we know that we can not protect against future developments in cryptanalysis - just look at both linear and differential analysis versus earlier tools - how do we create an algorithm that is proof against the future? Frankly I don't think it is possible, and storing all those one-time pads is too much of a headache, as well as risky, to bother with. So what do we do? This is where I think we need to set our sights on "good enough given what we know now." This does not mean sloppy thinking, just that at some point you have done the best humanly possible to assess and mitigate risks. Anyone got better ideas? Best, Allen - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Entropy of other languages
Hi gang, An idle question. English has a relatively low entropy as a language. Don't recall the exact figure, but if you look at words that start with q it is very low indeed. What about other languages? Does anyone know the relative entropy of other alphabetic languages? What about the entropy of ideographic languages? Pictographic? Hieroglyphic? Thanks, Allen - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
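For anyone who wants a first-order figure for a given text: the standard Shannon measure computed from symbol frequencies is a few lines of Python. Note this unigram estimate ignores context between symbols, so it overstates the true entropy of real language (English is roughly 4.1 bits per letter by frequency alone, but only on the order of 1-1.5 bits per letter once context is accounted for):

```python
import math
from collections import Counter

def shannon_entropy(text: str) -> float:
    """Bits of entropy per symbol, estimated from single-symbol frequencies."""
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# A uniform 4-symbol alphabet gives the maximum: log2(4) = 2 bits/symbol.
print(shannon_entropy("abcd"))   # 2.0
# Repetition drives the estimate down.
print(shannon_entropy("aaab"))   # ~0.81
```

The same code runs unchanged on ideographic text, since it treats each character as one symbol; the interesting question for such languages is how the per-symbol figure trades off against symbols-per-word.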
Re: Private Key Generation from Passwords/phrases
Alexander Klimov wrote: [snip] (Of course, with 60K passwords there is almost for sure at least one password1 or Steven123 and thus the salts are irrelevant.) I'm not sure I understand this statement, as I just calculated the HMAC-MD5 for password1 using a salt of 7D00 (32,000 decimal) and got the result of 187de1db3348592a3595905a66cae418. Then I calculated the HMAC-MD5 with a salt of 61A8 (25,000 decimal) and got a result of 9cad6ac9fd6c09fd8e99e478381f. Are you saying that the salt is irrelevant because a dictionary attack is fast and common dictionary words would allow an easy attack? Thanks, Allen - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
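The calculation described above can be reproduced with Python's hmac module (assuming, as in the post, that the salt is used as the HMAC key). Different salts do give unrelated digests - but that only defeats precomputed tables; it does nothing to slow a dictionary trial against a single known-salt hash, which seems to be the point being made:

```python
import hashlib
import hmac

def salted_digest(password: str, salt: bytes) -> str:
    # HMAC-MD5 with the salt as the HMAC key, as in the calculation above.
    return hmac.new(salt, password.encode(), hashlib.md5).hexdigest()

d1 = salted_digest("password1", (32000).to_bytes(2, "big"))  # salt 0x7D00
d2 = salted_digest("password1", (25000).to_bytes(2, "big"))  # salt 0x61A8
print(d1 != d2)  # different salts -> different digests for the same password
```

An attacker who has a target entry (and its salt) simply runs the wordlist through `salted_digest` with that salt; weak passwords fall immediately, salt or no salt.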
Re: analysis and implementation of LRW
David Wagner wrote: [snip] Another possible interpretation of (2) is that if you use LRW to encrypt close to 2^64 blocks of plaintext, and if you are using a 128-bit block cipher, then you have a significant chance of a birthday collision, Am I doing the math correctly that 2^64 blocks of 128 bits is 2^32 bytes, or about 4 gigs of data? Or am I looking at this the wrong way? If 4 gigs is right, would the records to look for in order to break the code via birthday attacks be things like seismic data, which tend to be very large? Feed a known file in and look at the output and use that to find the key for the unknown files? As you can tell, my interests are often the vectors, not the exact details of how to achieve the crack. Currently I'm dealing with very large - though not as large as 4 gig - x-ray, MRI, and similar files that have to be protected for the lifespan of the person, which could be 70+ years after the medical record is created. Think of the MRI of a kid to scan for some condition that may be genetic in origin and has to be monitored and compared with more recent results their whole life. Thanks, Allen - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
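For what it's worth, a quick sanity check of the block arithmetic in the question. A 128-bit block is 16 bytes, so 2^64 blocks comes to 2^68 bytes, vastly more than 4 GiB (which is 2^32 bytes, i.e. only 2^28 such blocks):

```python
blocks = 2 ** 64
block_bytes = 128 // 8              # a 128-bit block is 16 bytes
total_bytes = blocks * block_bytes

print(total_bytes == 2 ** 68)       # 2^64 blocks * 2^4 bytes = 2^68 bytes
print(4 * 1024 ** 3 == 2 ** 32)     # 4 GiB is 2^32 bytes
print((2 ** 32) // block_bytes)     # ... which is only 2^28 blocks
```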
Attacking the hash (WAS: Private Key Generation from Passwords/phrases)
Hi gang, As an outsider, sort of, looking in, I had an interesting thought about this. Since insider threats are the biggest problem, what vector could an insider use against password hashes to gain root password access? The problem with Rainbow tables is that they would be too massive to be practical when the salt space is 4096 values, unless you had the power of the NSA or an equivalent supporting your efforts. However, what about attacking the salt? How good is the PRNG for the salt? Is it at all predictable? Here is one approach that might work. Keep entering the same password(s) and collecting the resultant hashes until you get several duplicates. Then analyze the results to see if there is a pattern to the repetition that would allow for a birthday attack against the salt that would allow an attack against the root password hash or other administrative rights password hashes that could be collected. I suspect this would be somewhat difficult to code, but once done almost the entire attack could be done off-line on a machine that uses the same password hash creation mechanism, so you wouldn't trigger an IDS or similar audit process on the network under attack. Given the long history of industrial espionage in the corporate world, I'm sure that there are probably small teams working to collect information that have somewhat more resources than an individual or outsider group might have, making the effort required feasible. Thoughts? Best, Allen Leichter, Jerry wrote: | ...One sometimes sees claims that increasing the salt size is important. | That's very far from clear to me. A collision in the salt between | two entries in the password file lets you try each guess against two | users' entries. Since calculating the guess is the hard part, | that's a savings for the attacker. With 4K possible salts, you'd need a [snipped] - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: Private Key Generation from Passwords/phrases
Joseph, The whole issue of entropy is a bit vague for me - I don't normally work at that end of things - so could you point to a good tutorial on the subject, or barring having a reference handy, could you give an overview? Thanks, Allen Joseph Ashwood wrote: - Original Message - From: Matthias Bruestle [EMAIL PROTECTED] Subject: Private Key Generation from Passwords/phrases What do you think about this? I think you need some serious help in learning the difference between 2^112 and 112, and that you really don't seem to have much grasp of the entire concept. 112 bits of entropy is 112 bits of entropy, not 76 bits of entropy, 27 bits of bull, 7 bits of cocaine, and a little bit of alcohol, and the 224 bits of ECC is approximate anyway, as you noted the time units are inconsistent. Basically just stop fiddling around trying to convince yourself you need less than you do, and locate 112 bits of apparent entropy; anything else and you're into the world of trying to prove equivalence between entropy and work, which works in physics but doesn't work in computation, because next year the work level will be different and you'll have to redo all your figures. Joe - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
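As a very rough overview of what the quoted numbers mean: "bits of entropy" is just log2 of the number of equally likely possibilities, so the 112-bit target translates directly into password-space sizes. For example:

```python
import math

def entropy_bits(alphabet_size: int, length: int) -> float:
    """Entropy of a secret drawn uniformly from alphabet_size**length choices."""
    return length * math.log2(alphabet_size)

# 112 bits means 2**112 equally likely secrets. Uniformly random choices:
print(entropy_bits(2, 112))    # 112.0 -- 112 coin flips
print(entropy_bits(95, 17))    # ~111.7 -- 17 chars over 95 printable ASCII
print(entropy_bits(26, 24))    # ~112.8 -- 24 lowercase letters
```

The catch, and the point of the quoted complaint, is that human-chosen passphrases are far from uniform, so their *apparent* length wildly overstates their actual entropy; only the size of the space the secret is genuinely drawn from counts.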
Re: It's a Presidential Mandate, Feds use it. How come you are not using FDE?
Saqib Ali wrote: Since when did AES-128 become snake-oil crypto? How come I missed that? Compusec uses AES-128. And as far as I know AES is NOT snake-oil crypto. Saqib, I believe you are correct as to the algorithm, but the snake-oil is in the implementation. As I have often said, a misplaced comma in an English sentence will merely get you a bad reputation as a writer; however, a misplaced comma in a nuclear weapons project may leave an enduring mark on the world. Algorithms can be perfect and implementation sloppy. If you can review the code you might find the problem, but with proprietary code, fergetit. Closed-source doesn't mean that it is snake-oil. If that was the case, then Microsoft's EFS and Kerberos implementations would be snake oil too. As I recall there have been a few problems with Kerberos in the past. Best, Allen - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
(Short) Intro and question
Hi everyone, I'm Allen Schaaf and I'm primarily an information security analyst - I try to look at things like a total stranger and ask all the dumb questions, hoping to stumble on one or two that hadn't been asked before that will reveal a potential risk. I'm currently consulting at a very large HMO and finding that there are lots of questions that have not been asked, so I'm having fun. One of the questions that I have been raising is trust and how to ensure that it is not misplaced or eroded over time. Which leads me to my question for the list: I can see easily how to do split key for 2 out of x for key recovery, but I can't seem to find a reference to the 3 out of x problem. In case I have not been clear enough, it is commonly known that it is harder to get collusion when three people need to act together than when there are just two. For most encryption 2 out of x is just fine, but some things need a higher level of security than 2 out of x can provide. Thanks for any tips, ideas, solutions, or pointers. Allen Schaaf Information Security Analyst Certified Network Security Analyst and Intrusion Forensics Investigator - CEH, CHFI Certified EC-Council Instructor - CEI Security is a lot like democracy - everyone's for it but few understand that you have to work at it constantly. - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
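The usual answer to the 3-out-of-x question is Shamir's threshold secret sharing, which handles any k-of-n: the secret is the constant term of a random degree-(k-1) polynomial over a prime field, each party gets one point on the curve, and any k points reconstruct the polynomial while k-1 reveal nothing. A minimal sketch (illustrative only - a real deployment needs a cryptographic RNG and side-channel care):

```python
import random

P = 2 ** 127 - 1  # a Mersenne prime, large enough for a 16-byte secret

def make_shares(secret: int, threshold: int, n: int):
    # Random polynomial of degree threshold-1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # den^-1 mod P
    return total

shares = make_shares(123456789, threshold=3, n=5)
print(recover(shares[:3]) == 123456789)  # any 3 of the 5 shares suffice
```

The collusion property asked about falls out directly: with threshold=3, any two conspirators hold two points on a degree-2 polynomial, which is consistent with every possible secret.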
Re: Security Implications of Using the Data Encryption Standard (DES)
Leichter, Jerry wrote: | note that there have been (at least) two countermeasures to DES brute-force | attacks ... one is 3DES ... and the other ... mandated for some ATM networks, | has been DUKPT. while DUKPT doesn't change the difficulty of brute-force | attack on single key ... it creates a derived unique key per transaction and | bounds the life-time use of that key to relatively small window (typically | significantly less than what even existing brute-force attacks would take). | The attractiveness of doing such a brute-force attack is further limited | because the typical transaction value is much less than the cost of typical | brute-force attack Bounds on brute-force attacks against DESX - DES with pre- and post-whitening - were proved a number of years ago. They can pretty easily move DES out of the range of reasonable brute force attacks, especially if you change the key reasonably often (but you can safely do thousands of blocks with one key). One can apply the same results to 3DES. Curiously, as far as I know there are to this day no stronger results on the strength of 3DES! I find it interesting that no one seems to have actually made use of these results in fielded systems. Today, we can do 3DES at acceptable speeds in most contexts - and one could argue that it gives better protection against unknown attacks. But it hasn't been so long since 3DES was really too slow to be practical in many places, and straight DES was used instead, despite the vulnerability to brute force. DESX costs you two XOR's - very cheap for what it buys you. The IETF/IESG refused to publish the ESP DES-XEX3-CBC Transform submitted as draft-ietf-ipsec-ciph-desx-00 (1997) and draft-simpson-desx-01 and draft-simpson-desx-02 (1998). Of course, they also refused to publish draft-simpson-des-as-00 (1998) and draft-simpson-des-as-01 (1999) that deprecated DES -- despite strong votes of support at SAAG and PPP meetings. 
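The "two XOR's" being referred to is the DESX construction: pre-whiten the block with one key, encrypt under the DES key, post-whiten with another. A sketch of that structure follows; since Python's standard library has no DES, the `toy_block_cipher` below is an explicitly fake placeholder (a key-derived XOR pad) used only to show where the whitening sits:

```python
import hashlib

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def toy_block_cipher(key: bytes, block: bytes) -> bytes:
    # Placeholder for DES on an 8-byte block. NOT a real cipher --
    # XOR with a key-derived pad, chosen only so the example runs.
    return xor(block, hashlib.sha256(key).digest()[:8])

def desx_encrypt(k: bytes, k1: bytes, k2: bytes, block: bytes) -> bytes:
    # DESX: C = k2 XOR E_k(M XOR k1) -- two XORs around the core cipher.
    return xor(k2, toy_block_cipher(k, xor(block, k1)))

def desx_decrypt(k: bytes, k1: bytes, k2: bytes, block: bytes) -> bytes:
    # With a real cipher this step would use the DES *decryption* D_k;
    # the toy XOR pad happens to be its own inverse.
    return xor(toy_block_cipher(k, xor(block, k2)), k1)

m = b"8bytemsg"
c = desx_encrypt(b"K", b"11111111", b"22222222", m)
print(desx_decrypt(b"K", b"11111111", b"22222222", c) == m)
```

The whitening keys k1 and k2 cost almost nothing per block but force a brute-force attacker to deal with the extra key material, which is the result being discussed.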
There was an Appeal of IESG inaction, decisions of 13 Oct 1999 and 16 Feb 1999. http://www1.ietf.org/mail-archive/web/ietf/current/msg11160.html The NSA and Cisco folks that were involved in IKE/ISAKMP advocated DES, refusing to assign code points for DESX. Gosh, I wonder why - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
IKE resource exhaustion at 2 to 10 packets per second
http://www.nta-monitor.com/posts/2006/07/cisco-concentrator-dos.html The vulnerability allows an attacker to exhaust the IKE resources on a remote VPN concentrator by starting new IKE sessions faster than the concentrator expires them from its queue. By doing this, the attacker fills up the concentrator's queue, which prevents it from handling valid IKE requests. The exploit involves sending IKE Phase-1 packets containing an acceptable transform. It is not necessary to have valid credentials in order to exploit this vulnerability, as the problem occurs before the authentication stage. The vulnerability affects both Main Mode and Aggressive Mode, and both normal IKE over UDP and Cisco proprietary TCP-encapsulated IKE. In order to exploit the vulnerability, the attacker needs to send IKE packets at a rate which exceeds the Concentrator's IKE session expiry rate. Tests show that the target concentrator starts to be affected at a rate of 2 packets per second, and becomes unusable at 10 packets per second. As a minimal Main Mode packet with a single transform is 112 bytes long, 10 packets per second corresponds to a data rate of slightly less than 9,000 bits per second. ... The vulnerability was first discovered on 4th July 2005, and was reported to Cisco's security team (PSIRT) the same day. Cisco responded on 9th August 2005, but no further progress has been made, over a year after finding the flaw. Gosh and golly gee, how could this vulnerability slip past them without anybody noticing? ... other than the person posting an internet-draft that the IESG refused to publish as an RFC, that was instead published in ;login: December 1999. ... that attack threat was mentioned in the design principles of Photuris circa 1995, that the IESG also refused to publish until after the NSA-originated and approved IKE/ISAKMP protocol. 
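The quoted data-rate arithmetic checks out:

```python
packet_bytes = 112          # minimal Main Mode packet with one transform
packets_per_second = 10     # rate at which the concentrator becomes unusable

bits_per_second = packet_bytes * 8 * packets_per_second
print(bits_per_second)      # 8960 -- "slightly less than 9,000 bits per second"
```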
It's particularly amusing that Photuris was overwhelmingly approved in a straw poll conducted by John Gilmore at the 36th IETF in Montreal, 1996, but Cisco issued a press release that they had chosen the NSA-designed protocol instead. Protocol adoption by press release, such a good choice. They just had the 66th IETF in Montreal a week ago. Full circle. Anybody ready to order Photuris from your vendors? - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: NSA knows who you've called.
Perry E. Metzger wrote: http://www.usatoday.com/news/washington/2006-05-10-nsa_x.htm Legal analysis from Center for Democracy and Technology at: http://www.cdt.org/publications/policyposts/2006/8 -- William Allen Simpson Key fingerprint = 17 40 5E 67 15 6F 31 26 DD 0D B9 9B 6A 15 2C 32 - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: what's wrong with HMAC?
Hal Finney wrote: Travis H. writes: Ross Anderson once said cryptically, HMAC has a long story attached to it - the triumph of the theory community over common sense. He wouldn't expand on that any more... does anyone have an idea of what he is referring to? I might speculate, based on what you write here, that he believed that the simpler, ad hoc constructions often used in the days preceding HMAC were good enough in practice, and that the theoretical proofs of security for HMAC were given too much weight. The original HMAC paper is at http://www-cse.ucsd.edu/~mihir/papers/kmd5.pdf and the authors show in section 6 various attacks on ad hoc constructions, but some of them are admittedly impractical. Actually, that paper really describes version-2 (or even version-N) of HMAC, as the original design paper had some serious flaws. And the other constructions were not so much /ad hoc/ (they had been proposed by various established security folks with varying amounts of accompanying math) as *incompletely analyzed*. A part of the problem is that independent analysis wasn't forthcoming until long after implementation. The problem wasn't considered enough of a hot topic at the time. Another part of the problem was that the publication lag of RFCs was (is) so ridiculously long. The envelope method published in RFC 1828 was a variant of the original developed as part of the IPv6 design circa 1993: key, fill, datagram, key, fill but had been replaced circa 1995 by IP-MAC (in Photuris): key, fill, datagram, fill, key, fill yet was not officially published (due to politics) for MD5 until: * RFC 2522, Photuris: Session-Key Management Protocol, March 1999. and SHA1 even later (took so long it was published as Historic): * RFC 2841, IP Authentication using Keyed SHA1 with Interleaved Padding (IP-MAC), November 2000. Filling (padding to the natural block boundary of the algorithm) was/is accomplished by the usual M-D strengthening technique. 
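A rough sketch of the two layouts being compared: the envelope/IP-MAC style (key, fill, datagram, fill, key, fill) versus the nested HMAC construction. The fill step here is a simplified zero-pad to the block boundary - real M-D strengthening also encodes the length, which this placeholder omits - so the envelope function is structural illustration only, not the RFC 2522 algorithm:

```python
import hashlib
import hmac

BLOCK = 64  # MD5/SHA-1 block size in bytes

def fill(data_len: int) -> bytes:
    # Simplified pad to the next block boundary (stand-in for real
    # Merkle-Damgard strengthening, which also encodes the length).
    return b"\x00" * ((-data_len) % BLOCK)

def envelope_mac(key: bytes, msg: bytes) -> str:
    # IP-MAC layout: key, fill, datagram, fill, key, fill.
    # The trailing key means the internal chaining variables are
    # never directly exposed in the output.
    data = key + fill(len(key)) + msg
    data = data + fill(len(data)) + key
    data = data + fill(len(data))
    return hashlib.md5(data).hexdigest()

def nested_mac(key: bytes, msg: bytes) -> str:
    # HMAC layout: H(key ^ opad || H(key ^ ipad || msg)).
    return hmac.new(key, msg, hashlib.md5).hexdigest()

print(envelope_mac(b"k", b"datagram"))
print(nested_mac(b"k", b"datagram"))
```

The structural difference matches the argument that follows: HMAC's outer hash runs over a truncated inner *output*, while the envelope method carries full-width chaining variables until the single final output.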
I had a preliminary paper showing that the nested N-MAC/H-MAC design was actually *weaker* than envelope style IP-MAC, but at the request of some colleagues saved it for a book they were putting together. Sadly, that book was never published. The basic problem is that the nested method truncates the internal chaining variables, while the envelope method preserves them and truncates only upon final output. Of course, AFAICT, the trailing key makes the various recent attacks on MD5 and SHA1 entirely inapplicable. -- William Allen Simpson Key fingerprint = 17 40 5E 67 15 6F 31 26 DD 0D B9 9B 6A 15 2C 32 - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: Linux RNG paper
Had a bit of time waiting for a file to download, and just read the paper that's been sitting on my desktop. The analysis of the weakness is new, but sadly many of the problems were already known, and several previously discussed on this list! The forward secrecy problem was identified circa 1995 by Phil Karn, who therefore saved the changed state after generating each random key -- something similar to the paper's suggestion. The lack of jitter in millisecond event time was also identified by Karn, and he developed i386 code to determine microseconds from processor timing. Sorry, I cannot remember whether it only worked on 386 and above, or also the 186/286 we were using in cell phones at the time. But I certainly used it in a number of routers over the years. We also noticed the event jitter was more important for unpredictability than the actual event values, and all my code just added the value to the microsecond time. The code was fast enough to handle very rapid interrupt-time events by leaving complex functions for later. This assumes a cryptographically strong output function will sufficiently hash the bits that calculating and saving the jitter itself is a waste of effort. We also always used any network checksum that came across the transom, including packets, IP, UDP, and TCP. Yes, it is externally visible, but the microsecond time is not, and adding them makes the actual pool values less predictable (although within a constrained range). Also, rather than deciding the pool was full of entropy, we just kept XOR'ing the new values with the old, as a circular buffer (again similar to the paper's suggestion). Finally, a lot of this was discussed in public, and both Karn's and my code variants were publicly available. I don't have my old email backups online, but I'm sure it was discussed at places such as the tcp-group and ipsec circa 1995. 
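The mixing scheme described - XOR new samples over a circular buffer rather than ever declaring the pool "full," with a strong hash hiding the pool on output - can be sketched as follows (sample values and pool size are arbitrary for illustration):

```python
import hashlib

class XorPool:
    """Circular entropy pool: new samples are XOR'ed over old ones,
    rather than the pool ever being declared 'full'."""
    def __init__(self, size: int = 32):
        self.buf = bytearray(size)
        self.pos = 0

    def mix(self, sample: int):
        # e.g. a microsecond event time with the event value added in,
        # as described above; only the low byte is folded in here.
        self.buf[self.pos] ^= sample & 0xFF
        self.pos = (self.pos + 1) % len(self.buf)

def output(pool: XorPool) -> bytes:
    # A cryptographically strong output function hides the raw pool state.
    return hashlib.sha256(bytes(pool.buf)).digest()

pool = XorPool()
for t in (123456, 123789, 124001):   # stand-ins for interrupt timings
    pool.mix(t)
print(output(pool).hex())
```

Because mixing is cheap (one XOR and an index bump), it can safely run at interrupt time, with the expensive hashing deferred to output, matching the design choice described in the post.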
After the first Yarrow draft, it was discussed on the old linux-ipsec list circa 1999 April 22, and on this list circa 1999 August 17. After much discussion, Theodore Y. Ts'o wrote ([EMAIL PROTECTED]): Date: Sun, 15 Aug 1999 10:00:01 -0400 From: William Allen Simpson [EMAIL PROTECTED] Catching up, and after talking with John Kelsey and Sandy Harris at SAC'99, it seems clear that there is some consensus on these lists that the semantics of /dev/urandom need improvement, and that some principles of Yarrow should be incorporated. I think that most posters can be satisfied by making the functionality of /dev/random and /dev/urandom more orthogonal. Bill, you're not the IETF working group chairman on /dev/random, and /dev/random isn't a working group subject to consensus. I'm the author, with the sole responsibility to make decisions about what's best for the device driver. Of course, if someone else wants to make an alternative /dev/random driver, they're free to use it in their system. They can even petition Linus Torvalds to replace theirs with mine, although I doubt they'd get very far. Unfortunately, the fact that Linux remains vulnerable to the iterative guessing attack was really due to Ted's intransigence, and some personal relationship that he enjoys with Linus. Thank you for the independent analysis once again bringing this topic to everybody's attention. Hard to believe that another 7 years have passed. -- William Allen Simpson
Re: ISAKMP flaws?
Florian Weimer wrote: Even back then, the integer encoding was considered to be a mistake. | I concur completely. I once got so fed up with this habit that I | tromped around the office singing, Every bit is sacred / Every bit | is great / When a bit is wasted / Phil gets quite irate. | Ah yes, Phil really _is_ like that, but then he was often working with 2400 bps satellites and ARRL links. Did bring a smile :-) But as another point, Phil was against having length fields in most cases. The transform parameters started as a list of single bytes, but the working group (a misnomer) insisted on length fields. I remember Phil slumping down in his seat. I convinced him we could treat them all as 2-byte constants, since the length didn't actually vary. ;-) | Consider this to be one of the prime things to correct. Personally, | I think that numbers should never (well, hardly ever) be smaller | than 32 bits. (Jon Callas, 1997-08-08) Ah yes, a couple of years after Photuris. And wasn't Jon the _author_ of the PGP variable length integer specification? Hoist by his own petard? I did use all 32-bit length fields in RADIUS. Different environment. And finally, the reason for the extra specification of an extension beyond 32-bit lengths was provided by an obscure fellow by the name of Rivest. He argued (insisted vehemently) that security specifications should take all possible future-proofing precautions, even though we currently don't see the need, so that the specification is _never_ ambiguous. (I'm paraphrasing; he was far more loquacious.) Variable-length integers within other fields, for example. You can't avoid this phenomenon in its entirety, of course, without sacrificing some of the advantages of a binary encoding. There aren't any. I could, and did. Have you actually read all (or any) of the specifications? I like ISAKMP as much as the next guy, but somehow I doubt that simpler protocols necessarily lead to more robust software. 
Sure, less effort is needed to implement them, but writing robust code still comes at an extra cost. *sigh* It's a sad day when reliability greater than that provided by M$ or Netscape is considered extra cost. I've always believed robust security merits the same attention to detail that is needed in a device driver. And I came of age programming communication device drivers when there was no guarantee that the backplane would successfully carry the interrupt saying a byte had been transferred, so you had a software timer to initiate a task to separately query the hardware for lost characters or overruns. And another hardware (watchdog timer) interrupt just in case the software got stuck. IBM 1800. Alpha Micro. HP 21MX. IBM PC PIC. Zilog. Embedded process control. Electronically and physically noisy factory floor environments. But it helps a lot when the specification is written hand-in-hand with the code, so that every opportunity is taken to simplify the code. So, where is the community to replace ISAKMP with something more robust? Provos' Photuris code could be running on all the BSDs in a few months. Maybe sooner, were payment involved. -- William Allen Simpson
Re: ISAKMP flaws?
Florian Weimer wrote: Photuris uses a baroque variable-length integer encoding similar to that of OpenPGP, a clear warning sign. 8-/ On the contrary: + a VERY SIMPLE variable-length integer encoding, where every number has EXACTLY ONE possible representation (unlike ASN.1, which even the spell-checker wants to replace with assinine). + similar to that of OpenPGP, the most common Open Source security software of the era, where the code could be easily reused (as it was in the initial implementation). The protocol also contains nested containers which may specify conflicting lengths. This is one common source of parser bugs. On the contrary: where are the internal nested containers in the protocol? However, like most things that cross the INTER-net, the packets are encapsulated in UDP, IP, and some media frame, all of which may have their own length. That's why there are copious implementation notes, saying for example: When processing datagrams containing variable size values, the length must be checked against the overall datagram length. An invalid size (too long or short) that causes a poorly coded receiver to abort could be used as a denial of service attack. I remember some observers complaining about the 17 warnings concerning comparing the variable length to the UDP length, saying it cluttered the specification. I remember some implementers cheering about the 17 warnings concerning comparing the variable length to the UDP length, saying they helped clarify the specification as they wrote the code. I defy you to find an INTER-net protocol without RTP/TCP/UDP, IP, and media framing. At the time, I only had 17 years of protocol implementation experience. Another decade later, it still seems (to me) one of my better efforts. Again, the ISAKMP flaws were foreseeable and avoidable. And Photuris was written before the existence of ISAKMP. 
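As a sketch of both points -- exactly one representation per number, and the repeated warnings about checking lengths against the overall datagram -- here is a minimal Python encoder/decoder. The format (one size byte, then minimal big-endian bytes) is illustrative only, not the actual Photuris or OpenPGP wire encoding:

```python
def encode_varlen(n: int) -> bytes:
    """Canonical encoding: one size byte, then the minimal big-endian
    representation. Zero encodes as a bare zero size byte, so every
    number has exactly one encoding."""
    if n < 0:
        raise ValueError("negative")
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")  # b"" for 0
    if len(body) > 255:
        raise ValueError("too large for this sketch")
    return bytes([len(body)]) + body

def decode_varlen(datagram: bytes, offset: int):
    """Returns (value, next_offset). The variable size is checked
    against the overall datagram length, exactly as the
    implementation notes warn; non-canonical encodings are rejected."""
    if offset >= len(datagram):
        raise ValueError("truncated: no size byte")
    size = datagram[offset]
    end = offset + 1 + size
    if end > len(datagram):
        raise ValueError("size exceeds datagram length")
    body = datagram[offset + 1:end]
    if size > 0 and body[0] == 0:
        raise ValueError("non-canonical: leading zero byte")
    return int.from_bytes(body, "big"), end
```

A poorly coded receiver that skipped the `end > len(datagram)` check is exactly the denial-of-service opening the specification's 17 warnings were guarding against.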
-- William Allen Simpson
Re: ISAKMP flaws?
Paul Hoffman wrote: At 2:29 PM -0500 11/15/05, Steven M. Bellovin wrote: I mostly agree with you, with one caveat: the complexity of a spec can lead to buggier implementations. Well, then we fully agree with each other. Look at the message formats used in the protocols they have attacked successfully so far. Humorously, security folks seem to have ignored this when designing our protocols. Later, Peter Gutmann wrote: In this particular case if the problem is so trivial and easily avoided, why does almost every implementation (according to the security advisory) get it wrong? Quoting draft-simpson-danger-isakmp-01.txt, published (after being blocked by the IETF for years) as: http://www.usenix.org/publications/login/1999-12/features/harmful.html A great many of the problematic specifications are due to the ISAKMP framework. This is not surprising, as the early drafts used ASN.1, and were fairly clearly ISO inspired. The observations of another ISO implementor (and security analyst) appear applicable: The specification was so general, and left so many choices, that it was necessary to hold implementor workshops to agree on what subsets to build and what choices to make. The specification wasn't a specification of a protocol. Instead, it was a framework in which a protocol could be designed and implemented. [Folklore-00] [Folklore-00] Perlman, R., Folklore of Protocol Design, draft-iab-perlman-folklore-00.txt, Work In Progress, January 1998. Quoting Photuris: Design Criteria, LNCS, Springer-Verlag, 1999: The hallmark of successful Internet protocols is that they are relatively simple. This aids in analysis of the protocol design, improves implementation interoperability, and reduces operational considerations. 
Compare with Photuris [RFC-2522], where undergraduate (Keromytis) and graduate (Spatscheck, Provos) students independently were able to complete interoperable implementations (in their spare time) in a month or so. So, no, some security folks didn't ignore this ;-) -- William Allen Simpson
European country forbids its citizens from smiling for passport photos
Do you really need to click on this link to know which one it is? http://cbs5.com/watercooler/watercooler_story_258152613.html I guess we should give neutral facial expressions for the photo, then smile (or frown) while in the airport. Sounds like the technology (still) isn't ready for prime time. (seen at http://isthatlegal.org) -- William Allen Simpson
Re: The summer of PKI love
James A. Donald wrote: -- From: Stephan Neuhaus [EMAIL PROTECTED] So, the optimism of the article's author aside, where *do* we stand on PKI deployment? PKI's deployment to identify ssl servers is near one hundred percent. PKI's deployment to sign and secure email, and to identify users, is near zero and seems unlikely to change. PGP has substantially superior penetration. I would rank it closer to 0% myself. Don't get me wrong, we have plenty of PK deployment with SSL servers, just no I. Anyone doing revocation checking? How do you even do it? CRL? Delta CRL? OCSP? Do any browsers really support these things? For those that do, does any user actually know how to do it? PKI is a massive undertaking that many seem to confuse with just public key cryptography. Public key crypto is just one component of PKI, and frankly I know VERY few groups that are actually doing PKI and doing it right. What we have are a couple dozen certificate authorities that were deemed trustworthy by Microsoft that do not pop up warnings, and the rest that do pop up warnings that most people blissfully ignore. HTTPS is really good for encryption, but absolutely sucks in practice for trust. -- Mark Allen Earnest Lead Systems Programmer Emerging Technologies The Pennsylvania State University KB3LYB
Re: [Clips] Does Phil Zimmermann need a clue on VoIP?
I've personally designed and deployed many PKI solutions for large corporations for all sorts of security applications ranging from remote VPN access to wireless LAN security, and I can attest that the technology is simple, scalable, and reliable. *yawn* Yet another person who confuses PK with PKI. Almost NOBODY has ever done PKI right. The I is the part everyone conveniently forgets when they claim otherwise. -- Mark Allen Earnest Lead Systems Programmer Emerging Technologies The Pennsylvania State University KB3LYB
Re: encrypted tapes (was Re: Papers about Algorithm hiding ?)
Steven M. Bellovin wrote: The bigger issue, though, is more subtle: keeping track of the keys is non-trivial. These need to be backed up, too, and kept separate from (but synchronized with) the tapes. Worse yet, they need to be kept secure. That may mean storing the keys with a different escrow company. A loss of either piece, the tape or the key, renders the backup useless. Basically, expensive or not, security is very hard to get right. When you look at ChoicePoint, Bank of America, and Citigroup (not to mention universities and smaller businesses), they have little to no incentive to keep your personal data secure. YOU bear the cost of data compromise, not them. The worst they get is some bad publicity, and only if it affects CA residents; otherwise it can be kept quiet. The threat of bad publicity does not mean much when next week your compromise due to bad security will be forgotten as the media moves on to the next one. As it stands today, the cost/benefit analysis easily directs them away from taking strong measures to protect customers' financial data. Doing so is time consuming, opens up potential for problems, and gets them next to nothing in return. -- Mark Allen Earnest Lead Systems Programmer Emerging Technologies The Pennsylvania State University Lt Commander Centre County Sheriff's Office Search and Rescue KB3LYB
Re: Dell to Add Security Chip to PCs
Trei, Peter wrote: It could easily be leveraged to make motherboards which will only run 'authorized' OSs, and OSs which will run only 'authorized' software. And you, the owner of the computer, will NOT necessarily be the authority which gets to decide what OS and software the machine can run. If you 'take ownership' as you put it, the internal keys and certs change, and all of a sudden you might not have a bootable computer anymore. Goodbye Linux. Goodbye Freeware. Goodbye independent software development. It would be a very sad world if this comes to pass. Yes it would. Many governments are turning to Linux and other freeware. Many huge companies make heavy use of Linux and freeware; suddenly losing this would have a massive effect on their bottom line, possibly enough to impact the economy as a whole. Independent software developers are a significant part of the economy as well, and most politicians do not want to associate themselves with the concept of hurting small business. Universities and other educational institutions will fight anything that resembles what you have described tooth and nail. To think that this kind of technology would be mandated by a government is laughable. Nor do I believe there will be any conspiracy on the part of ISPs to require it in order to get on the Internet. As it stands now, most people are running 5+ year old computers and Windows 98/ME, and I doubt this is going to change much, because for most people this does what they want (minus all the security vulnerabilities, but with NAT appliances those are not even that big a deal). There is no customer demand for this technology to be mandated, and there is no reason why an ISP or vendor would want to piss off a significant percentage of their clients in this way. The software world is becoming MORE open. 
Firefox and OpenOffice are becoming legitimate in the eyes of government and businesses, Linux is huge these days, and the open source development method is being talked about in business mags, board rooms, and universities everywhere. The government was not able to get the Clipper chip passed, and that was backed with the horror stories of rampant pedophilia, terrorism, and organized crime. Do you honestly believe they will be able to destroy open source, Linux, independent software development, and the like with just the fear of movie piracy, mp3 sharing, and such? Do you really think they are willing to piss off large sections of the voting population, the tech segment of the economy, universities, small businesses, and the rest of the world just because the MPAA and RIAA don't like customers owning devices they do not control? It is entirely possible that a machine like you described will be built. I wish them luck, because they will need it. It is attempted quite often, and yet history shows us that there is really no widespread demand for iOpeners, WebTV, and their ilk. I don't see customers demanding this, therefore there will probably not be much of a supply. Either way, there is currently a HUGE market for general use PCs that the end user controls, so I imagine there will always be companies willing to supply them. My primary fear regarding TCPA is the remote attestation component. I can easily picture Microsoft deciding that they do not like Samba and making it so that Windows boxes simply cannot communicate with it for domain, filesystem, or authentication purposes. All they need do is require that the piece on the other end be signed by Microsoft. Heck, they could render HTTP agent spoofing useless if they decided to make it so that only IE could connect to IIS. Again though, doing so would piss off a great many of their customers, some of whom are slowly jumping ship to other solutions anyway. 
-- Mark Allen Earnest Lead Systems Programmer Emerging Technologies The Pennsylvania State University
Re: Simson Garfinkel analyses Skype - Open Society Institute
Adam Shostack wrote: I hate arguing by analogy, but: VOIP is a perfectly smooth system. Its lack of security features means there isn't even a ridge to trip you up as you wiretap. Skype has some ridge. It may turn out that it's very very low, but it's there. Even if that's just the addition of an openssl decrypt line to a reconstruct shell script. In that case, the value of 'better' is vanishingly small, but it will still take an attacker at least 5 minutes to figure that out. I would contend that a false sense of security is worse than no security at all. Someone's behavior may be different if they are wrongfully assuming that their communications are protected by what they believe is strong encryption, when in fact that ridge may be very very low. -- Mark Allen Earnest Lead Systems Programmer Emerging Technologies The Pennsylvania State University
Re: entropy depletion
Ian G wrote: The *requirement* is that the generator not leak information. This requirement applies equally well to an entropy collector as to a PRNG. Now here we disagree. It was long my understanding that the reason the entropy device (/dev/random) could be used for both output and input, and blocked awaiting more entropy collection, was the desire to be able to quantify the result. Otherwise, there's no need to block. For an entropy collector there are a number of ways of meeting the requirement. 1. Constrain access to the device and audit all users of the device. 2. set the contract in the read() call such that the bits returned may be internally entangled, but must not be entangled with any other read(). This can trivially be met by locking the device for single read access, and resetting the pool after every read. Slow, but it's what the caller wanted! Better variants can be experimented on... Now I don't remember anybody suggesting that before! Perfect, except that an attacker knows when to begin watching, and is assured that anything before s/he began watching was tossed. In my various key generation designs using MD5, I've always used MD-strengthening to minimize the entanglement between keys. There was MD5 code floating around for many many years that I wrote with a NULL argument to force the MD-strengthening phase between uses. I never liked designs with bits for multiple keys extracted from the same digest iteration output. And of course, my IPsec authentication RFCs all did the same. See my IP-MAC design at RFC-1852 and RFC-2841. We are still left with the notion as Bill suggested that no entropy collector is truly clean, in that the bits collected will have some small element of leakage across the bits. But I suggest we just cop that one on the chin, and stick in the random(5) page the description of how reliable the device meets the requirement. (This might be a resend, my net was dropping all sorts of stuff today and I lost the original.) 
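A sketch of that discipline -- one finalized (MD-strengthened) digest per key, never several keys sliced from a single digest output -- might look like this. The counter input and the use of MD5 here are illustrative assumptions; this is not the old NULL-argument MD5 code itself:

```python
import hashlib

def derive_keys(secret: bytes, count: int, keylen: int = 16) -> list:
    """Each key gets its own finalized hash, so the MD-strengthening
    padding falls between keys, minimizing entanglement between them."""
    keys = []
    for i in range(count):
        # Distinguish each key by a counter, then finalize (which
        # applies the MD-strengthening length padding) before use.
        h = hashlib.md5(secret + i.to_bytes(4, "big"))
        keys.append(h.digest()[:keylen])
    return keys
```

Contrast this with the design being criticized, where a single digest output would be carved into slices for multiple keys, so knowledge of one key constrains its siblings.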
That's OK, the writing was clearer the second time around. -- William Allen Simpson
Re: entropy depletion
Ian G wrote: (4A) Programs must be audited to ensure that they do not use /dev/random improperly. (4B) Accesses to /dev/random should be logged. I'm confused by this aggressive containment of the entropy/random device. I'm assuming here that /dev/random is the entropy device (better renamed as /dev/entropy) and urandom is the real good PRNG which doesn't block post-good-state. Yes, that's my assumption (and practice for many years). If I take out 1000 bits from the *entropy* device, what difference does it make to the state? It has no state, other than a collection of unused entropy bits, which aren't really state, because there is no relationship from one bit to any other bit. By definition. They get depleted, and more gets collected, which by definition are unrelated. If we could actually get such devices, that would be good. In the real world, /dev/random is an emulated entropy device. It hopes to pick up bits and pieces of entropy and mashes them together. In common implementations, it fakes a guess of the current level of entropy accumulated, and blocks when depleted. If there really were no relation to the previous output -- that is, a _perfect_ lack of information about the underlying mechanism, such as the argument that Hawking radiation conveys no information out of black holes -- then it would never need to block, and there would never have been a need for /dev/urandom! (Much smarter people than I have been arguing about the information theoretic principles of entropy in areas of physics and mathematics for a very long time.) All I know is that it's really hard to get non-externally-observable sources of entropy in embedded systems such as routers, my long-time area of endeavor. I'm happy to add in externally observable sources such as communications checksums and timing, as long as they can be mixed in unpredictable ways with the internal sources, to produce the emulated entropy device. Because it blocks, it is a critical resource, and should be logged. 
After all, a malicious user might be grabbing all the entropy as a denial of service attack. Also, a malicious user might be monitoring the resource, looking for cases where the output isn't actually very random. In my experience, rather a lot of supposed sources of entropy aren't very good. Why then restrict it to non-communications usages? Because we are starting from the postulate that observation of the output could (however remotely) give away information about the underlying state of the entropy generator(s). What does it matter if an SSH daemon leaks bits used in its *own* key generation if those bits can never be used for any other purpose? I was thinking about cookies and magic numbers, generally transmitted verbatim. However, since we have a ready source of non-blocking keying material in /dev/urandom, it seems to be better to use that instead of the blocking critical resource. -- William Allen Simpson
US Court says no privacy in wiretap law
Switches, routers, and any intermediate computers are fair game for warrantless wiretaps. That is, at any time (the phrase seconds or mili-seconds [sic]) that the transmission is not actually on a wire. Most important, read the very nicely written dissent. The dissenting judge used the correct terms, referenced RFCs, and in general knew what he was talking about -- unlike the 2:1 majority! http://www.ca1.uscourts.gov/pdf.opinions/03-1383-01A.pdf ... Under Councilman's narrow interpretation of the Act, the Government would no longer need to obtain a court-authorized wiretap order to conduct such surveillance. This would effectuate a dramatic change in Justice Department policy and mark a significant reduction in the public's right to privacy. Such a change would not, however, be limited to the interception of e-mails. Under Councilman's approach, the government would be free to intercept all wire and electronic communications that are in temporary electronic storage without having to comply with the Wiretap Act's procedural protections. That means that the Government could install taps at telephone company switching stations to monitor phone conversations that are temporarily stored in electronic routers during transmission. [page 51-52] As this is a US Court of Appeals, it sets precedent that other courts will use, and directly applies to all ISPs in the NE US. -- William Allen Simpson
Feds admit error in hacking conviction
http://news.com.com/2100-7348_3-5092697.html?tag=st_lh Federal prosecutors asked a San Francisco appeals court this week to reverse a computer-crime conviction that punished a California man for notifying a company's customers of a flaw in the company's e-mail service. Filed on Tuesday in San Francisco's Ninth District Court of Appeals, the unusual request conceded that federal prosecutors in Los Angeles erred in bringing a criminal case against, and obtaining the conviction of, 30-year-old Bret McDanel. The one-time system administrator has already served his 16-month sentence and is currently on supervised release, during which time his access to computers is curtailed. ... If the court agrees to overturn the conviction, it will remove a precedent that could have squelched the research of many security experts. The original conviction by U.S. District Judge Lourdes G. Baird determined that, by revealing a flaw in a system's security, a researcher could be accused of harming the system, a violation of computer crime laws. ... Thom Mrozek, a spokesman for the U.S. attorney's office for the Central District of California said that prosecutors rarely ask for a reversal. It's pretty damn rare, he said. I have never seen it happen. ... -- William Allen Simpson
Re: Reliance on Microsoft called risk to U.S. security
Jeroen C. van Gelderen wrote: On Saturday, Sep 27, 2003, at 15:48 US/Eastern, [EMAIL PROTECTED] wrote: You have not met my users! Indeed, but I'm here to learn :) ... something is wrong. Why would she click YES? ... Because I'm an optimist, I believe that Alice will read the dialog and err on the side of caution. Maybe that isn't realistic. ... I agree that such composition must be intuitive or we cannot expect it to work. I think that CapDesk is a nice publicly available prototype of a workable capability desktop. It would be very interesting to see your assessment on whether a CapDesk approach would be workable for your users. And if it isn't, why not. I hope you can lend your experience. OK, I'll lend mine. With my ISP hat on, the vast majority of support calls have to do with users ignoring the content of M$ dialog boxes, hitting YES or OK, then calling when things don't work. Admittedly, the text in those dialog boxes isn't particularly useful. But this costs us a lot of good old hard cash. Or with my personal hat on, my 15-year-old niece had an infected machine. Actually, a multiply infected machine. Took me several hours to clean up. And then I watched her check her Yahoo mail, and click yes on the very next Norton/McAfee dialog box, reinfecting her Comcast-connected machine before my very eyes. Why, I asked? I just spent a lot of time fixing your machine, and explained what had gone wrong. She says, That message came from my best friend at school. Of course it didn't. But it probably came from another friend with them both in the address book. And social engineering is a lot more powerful than any amount of training, no matter how very recent! The answer to a technical problem is _not_ depending on user caution! -- William Allen Simpson
Re: Attacking networks using DHCP, DNS - probably kills DNSSEC
Steven M. Bellovin wrote: In message [EMAIL PROTECTED], Simon Josefsson writes: Of course, everything fails if you ALSO get your DNSSEC root key from the DHCP server, but in this case you shouldn't expect to be secure. I wouldn't be surprised if some people suggest pushing the DNSSEC root key via DHCP though, because alas, getting the right key into the laptop in the first place is a difficult problem. I can pretty much guarantee that the IETF will never standardize that, except possibly in conjunction with authenticated dhcp. Would this be the DHCP working group that, on at least 2 occasions when I was there, insisted that secure DHCP wouldn't require a secret, since DHCP isn't supposed to require configuration? And all I was proposing at the time was username, challenge, MD5-hash response (very CHAP-like). They can configure ARP addresses for security, but having both the user and administrator configure a per-host secret was apparently out of the question. -- William Allen Simpson
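For reference, the CHAP-like exchange proposed above has roughly this shape, patterned on the RFC 1994 CHAP computation (MD5 over the identifier, the shared per-host secret, and a fresh challenge). The function names are mine, for illustration:

```python
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    # Response = MD5(identifier || secret || challenge), as in CHAP.
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

def server_verify(identifier: int, secret: bytes,
                  challenge: bytes, response: bytes) -> bool:
    # The server knows the per-host secret and recomputes the hash.
    return chap_response(identifier, secret, challenge) == response

def new_challenge() -> bytes:
    # A fresh random challenge per attempt prevents replay.
    return os.urandom(16)
```

The whole point of the objection is the `secret` parameter: without a configured per-host secret, there is nothing for the hash to prove.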