Re: The bank fraud blame game
[EMAIL PROTECTED] (Peter Gutmann) writes:

> The usage model is that you do the UI portion on the PC, but perform the
> actual transaction on the external device, which has a two-line LCD display
> for source and destination of transaction, amount, and purpose of the
> transaction. All communications enter and leave the device encrypted, with
> the PC acting only as a proxy. [...]

On Sun, 1 Jul 2007, Hal Finney wrote:

> In theory the TPM was supposed to allow this kind of thing. [...] This was
> one of the main goals of the TPM as I understood the concept. Unfortunately,
> everyone got focused on the DRM aspect, and that largely torpedoed the
> whole idea.

There is a big difference between a TPM providing this kind of service and
Peter's device. The TPM is supposed to be hard-wired into a PC -- so if you
are using it to secure your banking applications, you can do your banking at
one single PC only. Peter's device, on the other hand, is portable: you can
use it to do secure banking from your PC at home, in the office (only during
lunch breaks, and with the employer's permission, of course), or even at a
public internet cafe. To this end, Peter's device would be much more useful
for the customer than a TPM could ever be.

BTW, Peter, are you aware that your device looks similar to the one proposed
in the context of the CAFE project? See http://citeseer.ist.psu.edu/48859.html

That was a more ambitious project, supporting not just secure banking
applications on an insecure host PC but a full digital wallet. Nevertheless,
it may be interesting to study why the project failed (or ended without
follow-on projects). I have no quick answer to this question, but as far as
I understand, the banks were simply not interested in deploying such a
device. I guess it was much too expensive at that time. Instead, in Germany
we got the Geldkarte, a simple and very cheap smartcard for payment purposes
with neither a display nor a keyboard.
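Peter's split of duties, as quoted above, can be sketched as a toy protocol. Everything here is my own simplification: the shared key, the message format, and the use of a MAC in place of the full encryption Peter describes -- the point is only that the PC never sees anything it could usefully tamper with.

```python
import hashlib
import hmac
import json

# Hypothetical long-term secret shared between the device and the bank.
DEVICE_KEY = b"shared-secret-between-device-and-bank"

def device_confirm(src: str, dst: str, amount: str, purpose: str):
    """The device shows src, dst, amount, and purpose on its OWN two-line
    display, waits for the user's confirmation button, and only then
    authenticates the transaction.  The PC merely relays the result."""
    tx = json.dumps(
        {"src": src, "dst": dst, "amount": amount, "purpose": purpose},
        sort_keys=True,
    ).encode()
    tag = hmac.new(DEVICE_KEY, tx, hashlib.sha256).digest()
    return tx, tag

def bank_verify(tx: bytes, tag: bytes) -> bool:
    """The bank recomputes the MAC; a PC-side modification of any field
    (e.g., the destination account) makes verification fail."""
    expected = hmac.new(DEVICE_KEY, tx, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

A compromised PC acting as the proxy can drop the message or refuse to forward it, but it cannot alter the amount or destination without the bank noticing.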
The Geldkarte has been around for about ten years now, and, as far as I can
tell, hardly any customer is interested in using it.

So long

-- 
Stefan Lucks (moved to Bauhaus-University Weimar, Germany)
Stefan.Lucks(at)medien.uni-weimar.de
------  I love the taste of Cryptanalysis in the morning!  ------

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: passphrases with more than 160 bits of entropy
> Does anyone have a good idea on how to OWF passphrases without reducing
> them to lower entropy counts? That is, I've seen systems which hash the
> passphrase, then use a PRF to expand the result --- I don't want to do
> that. I want to have more than 160 bits of entropy involved.

What kind of application are you running, for which 160 bits of entropy are
insufficient?

> I was thinking that one could hash the first block, copy the intermediate
> state, finalize it, then continue the intermediate result with the next
> block, and finalize that. Is this safe? Is there a better alternative?

As I understand your proposal, you split the passphrase P into L blocks
P_1, ..., P_L (padding the last block P_L as necessary) and then output
L*160 bits like this:

  F(P) = ( H(P_1), H(P_1,P_2), ..., H(P_1,P_2,...,P_L) ).

This does not look like a good idea to me:

1. If the size of the P_i is the internal block size of the hash function
   (e.g., 512 bits for SHA-1, SHA-256, or RIPEMD-160) and your message
   P = P_1 is just one block long, you definitely end up with (at most)
   160 bits of entropy, however large the entropy of P_1 itself may be
   (it could be 512 bits).

2. If the local entropy in each block P_i (or, rather, the conditional
   entropy of P_i, given P_1, ..., P_{i-1}) is low, you can recover P step
   by step: first find P_1 by brute force from H(P_1), then P_2 from
   H(P_1,P_2), and so on. This function F(P) is thus *much* *worse* than
   its simple and straightforward counterpart H(P).

3. In fact, this is how to estimate the security of F(P): the effort to
   invert it is governed by the *maximum* of these conditional entropies,
   not by their sum.

Any better solution I can think of will be significantly slower than just
applying H(P). One idea of mine would be the function G:

  * Let i be a counter of some fixed size, say 32 bits.
  * Let J+1 be the number of 160-bit values you need (e.g., J = 4*L).
  * G(P) = ( H(P_1,0,P_1, P_2,0,P_2, ..., P_L,0,P_L),
             H(P_2,1,P_2, ..., P_L,1,P_L, P_1,1,P_1),
             ...
             H(P_L,J,P_L, P_1,J,P_1, ..., P_{L-1},J,P_{L-1}) )

Would that be OK for your application?
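The construction G above can be sketched in a few lines. This is a rough reading of the proposal, not a vetted design: I assume SHA-1 as H, 64-byte (512-bit) blocks, zero-padding of the last block, and a big-endian 32-bit encoding of the counter -- the proposal fixes none of these details.

```python
import hashlib
import struct

BLOCK = 64  # internal block size of SHA-1 (512 bits), in bytes

def g(passphrase: bytes, num_outputs: int) -> list:
    """Sketch of the proposed G: the j-th 160-bit output hashes every
    block twice, wrapped around the 32-bit counter j, with the block
    rotation starting one position later for each output."""
    # Split P into blocks P_1, ..., P_L, zero-padding the last one.
    blocks = [passphrase[i:i + BLOCK].ljust(BLOCK, b"\0")
              for i in range(0, max(len(passphrase), 1), BLOCK)]
    L = len(blocks)
    out = []
    for j in range(num_outputs):
        h = hashlib.sha1()
        ctr = struct.pack(">I", j)          # counter i = j, 32 bits
        for k in range(L):
            p = blocks[(j + k) % L]         # rotated block order
            h.update(p + ctr + p)           # P_x, j, P_x
        out.append(h.digest())
    return out
```

Note that every output depends on *all* blocks of P, which is exactly what the stepwise F above fails to guarantee for its early outputs.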
In any case, I think that using a 160-bit hash function as a building block
for a universal one-way function with (potentially) much more than 160 bits
of entropy is a bit shaky.

-- 
Stefan Lucks
Th. Informatik, Univ. Mannheim, 68131 Mannheim, Germany
e-mail: [EMAIL PROTECTED]
home: http://th.informatik.uni-mannheim.de/people/lucks/
------  I love the taste of Cryptanalysis in the morning!  ------
Collisions for hash functions: how to explain them to your boss
Magnus Daum and I have generated MD5 collisions for PostScript files:

  http://th.informatik.uni-mannheim.de/people/lucks/HashCollisions/

This work is somewhat similar to that of Mikle and Kaminsky, except that our
colliding files are not executables, but real documents.

We hope to demonstrate how seriously hash function collisions should be
taken -- even by people without much technical background. And to help you
explain these issues
  - to your boss or your management,
  - to your customers,
  - to your children ...

Have fun

Stefan

-- 
Stefan Lucks
Th. Informatik, Univ. Mannheim, 68131 Mannheim, Germany
e-mail: [EMAIL PROTECTED]
home: http://th.informatik.uni-mannheim.de/people/lucks/
------  I love the taste of Cryptanalysis in the morning!  ------
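The shape of such a pair of documents can be illustrated with a sketch. The idea: both files are byte-identical except for one embedded string, and PostScript's `eq`/`ifelse` operators render a different document depending on which of the two colliding strings is present. This toy constructor glosses over the real work -- finding the collision blocks c1, c2 (here mere placeholders), aligning them to the compression-function block boundary, and escaping them for PostScript string syntax:

```python
def make_colliding_ps(c1: bytes, c2: bytes,
                      doc_a: bytes, doc_b: bytes):
    """Build two PostScript files that differ only where c1/c2 sit.

    If c1 and c2 were a real MD5 collision pair for this prefix, the
    iterated (Merkle-Damgard) structure of MD5 would make the two
    complete files collide, since their suffixes are identical."""
    assert len(c1) == len(c2)
    prefix = b"%!PS-Adobe-2.0\n("
    # (embedded)(c1) eq  -> true in file A, false in file B
    middle = b")(" + c1 + b") eq\n{ "
    file_a = prefix + c1 + middle + doc_a + b" } { " + doc_b + b" } ifelse\n"
    file_b = prefix + c2 + middle + doc_a + b" } { " + doc_b + b" } ifelse\n"
    return file_a, file_b
```

Both files carry both document bodies; a viewer only ever executes one branch, so the same MD5 value vouches for two entirely different visible documents.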
Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)
[...] the chances of ever getting independent (and possibly verified)
implementations are much better than in the case of TCPA.

As I wrote in my response to Carl Ellison, one of the main advantages of the
Chaum/Pedersen-style approach is a clear separation of duties. The TCPA
lacks this separation, and that is a sign of bad security design.

-- 
Stefan Lucks
Th. Informatik, Univ. Mannheim, 68131 Mannheim, Germany
e-mail: [EMAIL PROTECTED]
home: http://th.informatik.uni-mannheim.de/people/lucks/
------  I love the smell of Cryptanalysis in the morning!  ------