Hey, in "Practical Cryptography", Schneier mentions a number of general principles that he considers wise when writing code that uses or implements cryptographic routines.
Bear with me as I try to remember them:

1) When using user input, run it through an OWF (one-way function) first. NB: this is a possible DoS vector.

2) When using a cryptographic hash that suffers from the length-extension problem (which the most popular hashes, and all the ones I know of, do), hash twice to eliminate the problem.

3) Authenticate the plaintext, not the ciphertext. This is an application of the general rule "use semantically appropriate constructs": the point of signing is to authenticate the plaintext, not an encrypted version of it. The drawback is that decryption must occur before authentication, which is a possible DoS vector.

4) Try to eliminate security dependencies between modules. That is, when you read from the RNG subsystem, check the values for signs of a broken RNG. (Let's set aside the obvious objection that all outputs are equally likely, so no sequence is more "random" than any other; in his code he simply aborts a probabilistic algorithm if it fails too many times.)

5) When there is any indication that the system is being attacked or abused, abort without telling the attacking party why, because detailed feedback helps the attacker (keep the system's feedback closed). If a response is absolutely unavoidable, send back a generic error message (think "Syntax Error").

6) When pumping events into an RNG subsystem, uniquely identify them; that is, the input to the RNG subsystem should be unambiguous. This can be done by including a source identifier and a timestamp in the event data. Ambiguous input means two different event streams can have the same representation and thus hash to the same thing, shrinking the input space an attacker would have to search.

7) Use assert checks, and leave your assertions in the binary you ship! Turning assertions off in the public release is like taking the seat belts out of a car before handing the keys to a customer.
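Point 6's unambiguous framing is easy to sketch. This is a minimal illustration, not Schneier's actual code; the helper names (`encode_event`, `mix_into_pool`) and the choice of SHA-256 as the pool hash are my own assumptions:

```python
import hashlib
import struct
import time

def encode_event(source_id: int, data: bytes) -> bytes:
    """Unambiguously frame one entropy event: fixed-width source ID,
    timestamp, and a length prefix, so no two distinct event streams
    can share a byte-level representation."""
    ts = time.time_ns()
    return struct.pack(">IQI", source_id, ts, len(data)) + data

def mix_into_pool(pool: bytes, source_id: int, data: bytes) -> bytes:
    """Hash the framed event into the pool; the framing means distinct
    (source, time, data) triples never collide at the input stage."""
    return hashlib.sha256(pool + encode_event(source_id, data)).digest()
```

Without the length prefix and source ID, the concatenated events ("mou", "se") and ("mo", "use") from two sources would feed identical bytes to the hash, which is exactly the ambiguity the rule forbids.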
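On point 7, note that in some languages the stock assertion is exactly the seat belt that gets removed: Python's `assert` is stripped under `python -O`, and C's `assert()` under `-DNDEBUG`. A hypothetical `require` helper (my name, not Schneier's) that ships with the program no matter how it is invoked:

```python
def require(condition: bool, message: str = "internal invariant violated") -> None:
    """Like assert, but survives `python -O`: the check is an ordinary
    branch, so release builds keep their seat belts."""
    if not condition:
        # Die loudly rather than continue from an inconsistent state.
        raise AssertionError(message)

def unpad(block: bytes, pad_len: int) -> bytes:
    """Example use: validate an invariant before acting on the data."""
    require(0 < pad_len <= len(block), "padding length out of range")
    return block[:-pad_len]
```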
In the lab, a failed assertion means more debugging, but in the field, a failed assertion means the software is in an inconsistent state and cannot give a correct answer except by chance. Far better to die loudly than to fail silently, a failure mode whose cost is essentially unbounded.

8) I'm not sure whether he mentioned this one, but I will: try to keep the output of one part of the program from being usable in another, potentially related part. That is, use random padding, or pad user input with a unique identifier for that part of the program before hashing and using it.

Then there's one I'm not sure I understand:

A) Allow a factor of two of strength increase. That is, if you generate and use 1024-bit keys in the application, allow interoperability with 512- to 2048-bit keys. I think this violates the interoperability rule "be liberal in what you accept and conservative in what you generate". In particular, some of my sshd's don't accept 4096-bit keys, which annoys me greatly every time I try to ssh to them and it barfs into syslog.

That's all I remember right now... any other advice people can think of?

-- 
"Whosoever is delighted in solitude is either a wild beast or a god."
 -><- http://www.lightconsulting.com/~travis/
GPG fingerprint: 50A1 15C5 A9DE 23B9 ED98 C93E 38E9 204A 94C2 641B

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]