Re: mother's maiden names...
On Wednesday 13 July 2005 18:29, Mike Owen wrote:
> Back in 2000, I opened an account with BofA, and they took a photo of
> me, and added it to my debit/check card. Around that same time,
> American Express was doing the same with their Costco branded cards.
> I'm sure others are doing it, those are just the ones I have
> experience with.

FYI, that's a feature of Costco, not AmEx. Costco requires a picture because the card is used in place of a normal Costco card to get admitted into the store. They are somewhat ruthless about preventing card sharing on personal memberships.

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: /dev/random is probably not
On Sunday 03 July 2005 05:21, Don Davis wrote:
> > From: "Charles M. Hannum" <[EMAIL PROTECTED]>
> > Date: Fri, 1 Jul 2005 17:08:50 +
> >
> > While I have found no fault with the original analysis,
> > ...I have found three major problems with the way it
> > is implemented in current systems.
>
> hi, mr. hannum -
>
> i'm sorry, but none of your three "problems" is substantial.
>
> > a) Most modern IDE drives... ship with write-behind
> >    caching enabled.
>
> i've addressed this caching question quite a bit
> over the years. for an early mention of the issue,
> please see:
> http://www.cs.berkeley.edu/~daw/rnd/disk-randomness
> anyway, to deal with caching controllers, any disk rng
> needs to discard sub-millisecond access-times, or at
> least needs not to count such fast accesses as contributing
> any entropy to the RNG's entropy-pool. otherwise, the
> rng will tend to overestimate how much entropy is in
> the entropy pool, and /dev/random will tend to become
> no more secure than /dev/urandom.

Remember that I specifically stated that I'm talking about problems with real-world implementations, not your original analysis. Unfortunately, a few implementations (FreeBSD's implementation of "Yarrow" and NetBSD's "rnd" come to mind immediately) do not appear to implement the behavior you describe -- they simply always count disk I/O as contributing some entropy (using the minimum of the first-, second- and third-order differentials, which is likely to be non-0, but small and predictable, due to other timing variance).

> > b) At least one implementation uses *all* "disk" type
> >    devices...
>
> yes, that would be broken, though it's not a total
> security loss, as long as the machine has at least one
> hard drive. this memory-disk question too was raised
> and answered, long ago.

Again, this problem exists in real-world implementations.

> > By timing how long this higher-level operation (read(),
> > or possibly even a remote request via HTTP, SMTP, etc.)
> > takes, we can apply an adjustment factor and determine
> > with a reasonable probability how long the actual disk
> > I/O took.
>
> this remote-timing approach won't work in any useful way.
>
> you'd need to get the same timing accuracy as the
> /dev/random driver gets;

No, you just need to be able to estimate it with a high probability. I don't see any reason this is not possible, given that response times are directly proportional to the interrupt timing. This may be especially bad in implementations such as OpenBSD and NetBSD which limit the precision of the time samples to 1 microsecond.

Also, I don't buy for a picosecond that you have to gather "all" timings in order to predict the output. As we know from countless other attacks, anything that gives you some bits will reduce the search space and therefore weaken the system, even if it does not directly give you the result.
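[To make the disputed behavior concrete, here is a minimal sketch, not from either poster, of a disk-timing entropy estimator that implements both ideas above: Davis's rule of crediting nothing for sub-millisecond (presumably cached) accesses, and the minimum-of-differentials heuristic that Hannum says NetBSD's "rnd" applies unconditionally. All names and thresholds are hypothetical.]

```python
# Hypothetical sketch of a disk-timing entropy estimator, combining:
#  - discard sub-millisecond samples (likely cache hits, per Davis), and
#  - otherwise credit at most log2 of the minimum of the first-, second-
#    and third-order differentials (the heuristic Hannum criticizes
#    when it is applied without the sub-millisecond cutoff).

import math

class DiskTimingEstimator:
    MIN_USEFUL_NS = 1_000_000  # 1 ms: anything faster is assumed cached

    def __init__(self):
        # previous sample, previous first delta, previous second delta
        self.prev = [0, 0, 0]

    def credit_bits(self, sample_ns):
        d1 = abs(sample_ns - self.prev[0])
        d2 = abs(d1 - self.prev[1])
        d3 = abs(d2 - self.prev[2])
        self.prev = [sample_ns, d1, d2]
        if sample_ns < self.MIN_USEFUL_NS:
            return 0  # probable cache hit: no air-turbulence contribution
        m = min(d1, d2, d3)
        return 0 if m == 0 else min(8, int(math.log2(m)))
```

Note that without the `MIN_USEFUL_NS` cutoff, a stream of fast cached accesses with small timing variance would still be credited a few bits each, which is exactly the overestimation problem described above.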
/dev/random is probably not
Most implementations of /dev/random (or so-called "entropy gathering daemons") rely on disk I/O timings as a primary source of randomness. This is based on a CRYPTO '94 paper[1] that analyzed randomness from air turbulence inside the drive case.

I was recently introduced to Don Davis and, being the sort of person who rethinks everything, I began to question the correctness of this methodology. While I have found no fault with the original analysis (and have not actually considered it much), I have found three major problems with the way it is implemented in current systems. I have not written exploits for these problems, but I believe it is readily apparent that such exploits could be written.

a) Most modern IDE drives, at least, ship with write-behind caching enabled. This means that a typical write returns a successful status after the data is written into the drive's buffer, before the drive even begins the process of writing the data to the medium. Therefore, if we do not overflow the buffer and get stuck waiting for previous data to be flushed, the timing will not include any air turbulence whatsoever, and should have nearly constant time.

b) At least one implementation uses *all* "disk" type devices -- including flash devices, which we expect to have nearly constant time -- for timing. This is obviously a bogus source of entropy.

c) Even if we turned off write-behind caching, and so our timings did include air turbulence, consider how a typical application is written. It waits for, say, a read() to complete and then immediately does something else. By timing how long this higher-level operation (read(), or possibly even a remote request via HTTP, SMTP, etc.) takes, we can apply an adjustment factor and determine with a reasonable probability how long the actual disk I/O took.
Using any of these strategies, it is possible for us to know the input data to the RNG -- either by measurement or by stuffing -- and, therefore, quite possibly determine the future output of the RNG.

Have a nice holiday weekend.

[1] D. Davis, R. Ihaka, P.R. Fenstermacher, "Cryptographic Randomness from Air Turbulence in Disk Drives", in Advances in Cryptology -- CRYPTO '94 Conference Proceedings, edited by Yvo G. Desmedt, pp. 114--120. Lecture Notes in Computer Science #839. Heidelberg: Springer-Verlag, 1994.
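[A toy simulation, not from the original post and not an exploit, of the estimation step in point (c): if an observed request time is disk time plus roughly constant software overhead plus small jitter, an observer who calibrates the overhead recovers the disk time to within the jitter. All constants are invented for illustration.]

```python
# Toy model of point (c): observed time = disk time + roughly-constant
# software overhead + small unmodelable jitter. An observer who can
# calibrate the overhead (e.g. against requests known to hit the cache)
# recovers the disk I/O time -- the quantity the RNG treats as secret --
# to within the jitter. All numbers here are made up for illustration.

import random

random.seed(42)

OVERHEAD_US = 120.0   # assumed roughly-constant software path cost
JITTER_US = 2.0       # residual noise the observer cannot model

def observed_request_time(disk_time_us):
    return disk_time_us + OVERHEAD_US + random.uniform(-JITTER_US, JITTER_US)

# Calibrate the overhead on requests with known (~0) disk time.
calibration = [observed_request_time(0.0) for _ in range(1000)]
overhead_est = sum(calibration) / len(calibration)

# Estimate an unknown disk I/O time from a single observation.
secret_disk_time = 8437.0  # microseconds, unknown to the observer
estimate = observed_request_time(secret_disk_time) - overhead_est
```

Under these (charitable) assumptions the estimate lands within a few microseconds of the true value, which is far tighter than an entropy estimator crediting several bits per sample can tolerate.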
Re: encrypted tapes (was Re: Papers about "Algorithm hiding" ?)
On Thursday 09 June 2005 17:37, Charles M. Hannum wrote:
> If we assume that the last 4 digits have been exposed somewhere -- and they
> usually are -- then this gives you at most 38 bits -- i.e. 2^38 hashes to
> test -- to search (even a couple less if you know a priori which *brand* of
> card it is). How long do you suppose this would take?

On reconsideration, given the presence of the check digit, I think you have at most 2^34 tests (or 2^32 if you know the brand of card). And this assumes there aren't additional limitations on the card numbering scheme, which there always are.

I guess you could use a keyed hash. Remember, though, you can't use random padding if this is going to be searchable with a database index, so the amount of entropy you're putting in is pretty limited.
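[A minimal sketch of the "keyed hash" idea floated above, not from the original post: index HMAC(k, PAN) instead of a plain hash. It is deterministic, so it still works as a database index with no random padding, and the small PAN space is useless to an attacker who lacks the key. The function and key names are hypothetical.]

```python
# Hypothetical sketch of a keyed-hash index token for card numbers.
# Deterministic (same PAN -> same token, so equality lookups work),
# but brute-forcing the ~2^34 PAN space gets an attacker nowhere
# without the server-side key. The 4111... number is the classic
# test card number, not a real account.

import hmac
import hashlib

def cc_index_token(key: bytes, pan: str) -> str:
    return hmac.new(key, pan.encode(), hashlib.sha256).hexdigest()

key = b"server-side secret, kept apart from the database"
token = cc_index_token(key, "4111111111111111")
```

The caveat in the post still applies: the scheme only helps while the key stays secret, and anyone who steals both the database and the key is back to a 2^34-ish offline search.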
Re: encrypted tapes (was Re: Papers about "Algorithm hiding" ?)
On Thursday 09 June 2005 16:41, you wrote:
> From: "Charles M. Hannum" <[EMAIL PROTECTED]>
>
> > I can name at least one obvious case where "sensitive" data -- namely
> > credit card numbers -- is in fact something you want to search on: credit
> > card billing companies like CCbill and iBill. Without the ability to
> > search by CC#, customers are pretty screwed.
>
> Is there a good reason for not searching by the hash of a CC# ?

Are you joking? If we assume that the last 4 digits have been exposed somewhere -- and they usually are -- then this gives you at most 38 bits -- i.e. 2^38 hashes to test -- to search (even a couple less if you know a priori which *brand* of card it is). How long do you suppose this would take?

(Admittedly, it's pretty sketchy even if you have to search the whole CC# space -- but this is why you need to prevent the data being accessed in any form!)
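[A toy demonstration of the attack sketched above, not from the original post: with an unkeyed hash, anyone holding the hash enumerates candidate card numbers offline, using the exposed digits and the Luhn check digit to prune. Real cards leave roughly 2^34-2^38 candidates; this sketch searches a deliberately tiny space (6 unknown digits) so it runs quickly, and uses the classic 4111... test number rather than a real one.]

```python
# Toy brute force against an unkeyed hash of a card number. The
# attacker knows the BIN prefix and the last four digits; the Luhn
# check digit prunes 90% of the remaining candidates for free.

import hashlib

def luhn_checksum_ok(pan: str) -> bool:
    total = 0
    for i, ch in enumerate(reversed(pan)):
        d = int(ch)
        if i % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# The "database" stores only the plain hash of the PAN.
stored_hash = hashlib.sha256(b"4111111111111111").hexdigest()

# Enumerate the unknown middle digits offline.
prefix, suffix = "411111", "1111"
recovered = None
for middle in range(10**6):
    candidate = f"{prefix}{middle:06d}{suffix}"
    if not luhn_checksum_ok(candidate):
        continue
    if hashlib.sha256(candidate.encode()).hexdigest() == stored_hash:
        recovered = candidate
        break
```

Scaled up to the full 2^34-2^38 space, this is a few hours of commodity CPU time per card even at 2005 hash rates, which is the point being made above.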
Re: encrypted tapes (was Re: Papers about "Algorithm hiding" ?)
On Wednesday 08 June 2005 21:20, [EMAIL PROTECTED] wrote:
> Yes, encrypting indexed columns for example is a problem. But if you
> limit yourself to encrypting sensitive information (I'm talking about
> stuff like SIN, bank account numbers, data that serves as an index to
> external databases and is sensitive with respect to identity theft),
> this sensitive information should not be the basis of searches.
> If it is not the basis of searches, there will be no performance
> problems related to encrypting it.

I can name at least one obvious case where "sensitive" data -- namely credit card numbers -- is in fact something you want to search on: credit card billing companies like CCbill and iBill. Without the ability to search by CC#, customers are pretty screwed.

That said, I will never buy the "only encrypt sensitive data" argument. In my experience, you *always* end up leaking something that way.
New cipher used by iTunes
I took a look at the new cipher used in iTunes 4.7, and spent some time reducing it. The algorithm appears to have a similar structure to a 10-round Twofish variant with fixed S-boxes, optimized via precomputed tables. I have not fully analyzed what the permutation matrix and polynomial are, though.

There are a couple of strange changes. E.g., they had put the IV mixing between the pre-whitening and post-whitening, but this turned out to effectively cancel out and be equivalent to an altered version with a more traditional CBC structure.

I'm including the current working implementation, along with some test vectors, if anyone else wants to take a look at it.
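[A toy algebraic illustration, not the actual iTunes cipher, of the kind of cancellation described above: because XOR whitening commutes with XOR chaining, a mode that taps its chaining value between the cipher core and the post-whitening collapses to ordinary CBC over a cipher with folded whitening constants. The block function E() here is a stand-in keyed permutation; all constants are invented.]

```python
# Toy demonstration that "IV/chaining mixing between the whitening
# steps" can be rewritten as plain CBC with altered constants.
# E() is a toy keyed function standing in for the real block cipher;
# this is NOT the iTunes algorithm.

import hashlib

BLOCK = 16
PRE = bytes(range(16))        # made-up pre-whitening constant
POST = bytes(range(16, 32))   # made-up post-whitening constant
KEY = b"toy key"

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def E(block):                 # toy "block cipher": keyed hash truncation
    return hashlib.sha256(KEY + block).digest()[:BLOCK]

def mode_a(blocks, iv):
    # Chaining value tapped *inside* the whitening (before POST is applied).
    chain, out = iv, []
    for p in blocks:
        inner = E(xor(xor(p, PRE), chain))
        out.append(xor(inner, POST))
        chain = inner
    return out

def mode_b(blocks, iv):
    # Ordinary CBC (chain on the ciphertext) with the POST constant
    # folded into the pre-whitening and the IV.
    chain, out = xor(iv, POST), []
    for p in blocks:
        c = xor(E(xor(xor(p, xor(PRE, POST)), chain)), POST)
        out.append(c)
        chain = c
    return out
```

Since inner_i = C_i XOR POST, substituting into mode_a's recurrence gives exactly mode_b's: the "strange" placement cancels out, leaving a traditional CBC structure with altered constants.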