Intel microcode update encryption
http://microcodes.sourceforge.net/ There you can find a PDF reviewing the microcode update feature. Apparently the updates from Intel are 2048 bytes long overall, have a 4-byte checksum, and are "encrypted" using some kind of mechanism on the processor. Since the processors don't (to my knowledge) expose any native instructions for doing encryption, they likely don't have any just for the microcode update, so it *should* be something simple, relying more on obscurity and the small size of the updates than on cryptographic strength. Still, most of the details remain unknown to all but about ten guys at Intel.

Writing your own "jump to ring zero" instruction is left as an exercise for the reader.

-- 
"Curiosity killed the cat, but for a while I was a suspect" -- Steven Wright
Security Guru for Hire http://www.lightconsulting.com/~travis/ -><-
GPG fingerprint: 9D3F 395A DAC5 5CCC 9066 151D 0A6B 4098 0C55 1484

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
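For what it's worth, the 4-byte checksum on the classic 2048-byte update is usually described as a simple additive one: the 32-bit sum of all little-endian dwords in the image comes out to zero. A minimal checker under that assumption (the convention is taken from publicly circulated format descriptions, not from the post itself):

```python
import struct

def microcode_checksum_ok(update: bytes) -> bool:
    """Check a classic 2048-byte Intel microcode update.

    Assumes the convention that the 32-bit sum of all little-endian
    dwords in the update is zero modulo 2**32.
    """
    if len(update) % 4 != 0:
        return False
    total = sum(struct.unpack("<%dI" % (len(update) // 4), update))
    return total % 2**32 == 0

# Build a toy 2048-byte "update" whose dwords sum to zero mod 2^32,
# purely to exercise the checker -- this is NOT a real update image.
body = struct.pack("<511I", *range(511))
fixup = (-sum(range(511))) % 2**32
update = body + struct.pack("<I", fixup)
```

A checksum like this detects transmission damage but, of course, says nothing about the "encryption" layer, which is exactly the part that remains undocumented.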
Re: what's wrong with HMAC?
On Tue, 2 May 2006, William Allen Simpson wrote:
> I had a preliminary paper showing that the nested N-MAC/H-MAC design was
> actually *weaker* than envelope style IP-MAC, [...]

But then again, Paul van Oorschot and myself found a key recovery attack on envelope MAC that presents a certificational weakness of envelope MAC as described in RFC 1828 (our Eurocrypt'96 paper). Once a collision is found, one has both forgeries and key recovery, which is not the case for HMAC.

I must say that I don't understand this claim:

> The basic problem is that the nested method truncates the internal
> chaining variables, while the envelope method preserves them and
> truncates only upon final output.

...but of course I would like to see your preliminary paper (even after 10 years).

What we know now is that keying MDx-type compression functions through the IV/H_i input is more secure than through the message input; this has no immediate implication for the discussion of HMAC versus envelope MAC, however.

I still maintain that I would prefer to key the compression function in both inputs (a la MDx-MAC) - maybe that is the common sense approach that is better than both HMAC and envelope MAC.

Finally, I want to strongly defend theoretical analysis to improve the understanding of a scheme; but it is important to understand the model and assumptions of the reduction proof, and the implications and limitations of the analysis, and not to overclaim.

--Bart

---
Katholieke Universiteit Leuven
Dept. Electrical Engineering-ESAT / COSIC
Kasteelpark Arenberg 10, B-3001 Leuven-Heverlee, BELGIUM
---

On Tue, 2 May 2006, William Allen Simpson wrote:
> Hal Finney wrote:
> > Travis H. writes:
> >> Ross Anderson once said cryptically,
> >>> HMAC has a long story attached to it - the triumph of the
> >>> theory community over common sense
> >> He wouldn't expand on that any more... does anyone have an idea of
> >> what he is referring to?
> >
> > I might speculate, based on what you write here, that he believed that
> > the simpler, ad hoc constructions often used in the days preceding
> > HMAC were good enough in practice, and that the theoretical proofs of
> > security for HMAC were given too much weight.  The original HMAC paper
> > is at http://www-cse.ucsd.edu/~mihir/papers/kmd5.pdf and the authors
> > show in section 6 various attacks on ad hoc constructions, but some of
> > them are admittedly impractical.
>
> Actually, that paper really describes "version-2" (or even version-N) of
> HMAC, as the original design paper had some serious flaws.
>
> And the other constructions were not so much /ad hoc/ (they had been
> proposed by various established security folks with varying amounts of
> accompanying math) as *incompletely analyzed*.  Part of the problem is
> that independent analysis wasn't forthcoming until long after
> implementation.  The problem wasn't considered enough of a "hot topic"
> at the time.
>
> Another part of the problem was that the publication lag of RFCs was (is)
> so ridiculously long.  The envelope method published in RFC 1828 was a
> variant of the original developed as part of the IPv6 design circa 1993:
>
>    key, fill, datagram, key, fill
>
> but had been replaced circa 1995 by IP-MAC (in Photuris):
>
>    key, fill, datagram, fill, key, fill
>
> yet was not officially published (due to politics) for MD5 until:
>  * RFC 2522, "Photuris: Session-Key Management Protocol", March 1999.
>
> and SHA1 even later (it took so long it was published as "Historic"):
>  * RFC 2841, "IP Authentication using Keyed SHA1 with Interleaved
>    Padding (IP-MAC)", November 2000.
>
> Filling (padding to the natural block boundary of the algorithm) was/is
> accomplished by the usual M-D strengthening technique.
>
> I had a preliminary paper showing that the nested N-MAC/H-MAC design was
> actually *weaker* than envelope style IP-MAC, but at the request of some
> colleagues saved it for a book they were putting together.  Sadly, that
> book was never published.
>
> The basic problem is that the nested method truncates the internal
> chaining variables, while the envelope method preserves them and
> truncates only upon final output.
>
> Of course, AFAICT, the trailing key makes the various recent attacks
> on MD5 and SHA1 entirely inapplicable.
> --
> William Allen Simpson
>     Key fingerprint =  17 40 5E 67 15 6F 31 26  DD 0D B9 9B 6A 15 2C 32

---
Katholieke Universiteit Leuven             tel. +32 16 32 11 48
Dept. Electrical Engineering-ESAT / COSIC  fax. +32 16 32 19 69
Kasteelpark Arenberg 10, B-3001 Leuven-Heverlee, BELGIUM
---
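To make the two envelope layouts being discussed concrete, here is a toy sketch. The `fill` here is a simple zero pad to the 64-byte MD5 block, standing in for the M-D strengthening fill the post describes; the padding bytes and the treatment of the final trailing fill (left to MD5's own internal padding) are simplifying assumptions for illustration, not the exact RFC 1828 / RFC 2841 wire formats.

```python
import hashlib

BLOCK = 64  # MD5/SHA-1 input block size in bytes

def fill(data: bytes) -> bytes:
    """Pad to the next block boundary (zero fill: an assumption,
    standing in for the M-D strengthening fill of the RFCs)."""
    return data + b"\x00" * ((-len(data)) % BLOCK)

def envelope_rfc1828(key: bytes, datagram: bytes) -> bytes:
    # key, fill, datagram, key, fill
    # (the trailing fill is supplied by MD5's own padding)
    return hashlib.md5(fill(key) + datagram + key).digest()

def ip_mac(key: bytes, datagram: bytes) -> bytes:
    # key, fill, datagram, fill, key, fill
    return hashlib.md5(fill(key) + fill(datagram) + key).digest()
```

The only structural difference is the extra fill after the datagram, which forces the trailing key into a fresh compression-function block -- the detail IP-MAC changed relative to RFC 1828.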
Re: fyi: Deniable File System - Rubberhose
On 5/2/06, Ivan Krstic <[EMAIL PROTECTED]> wrote:
> I spent some time thinking about this a few years back:
> http://diswww.mit.edu/bloom-picayune/crypto/15520
>
> Rubberhose was one of the things that came up, along with StegFS and
> BestCrypt. Unfortunately, it seems like Rubberhose hasn't seen work in
> over 5 years.

Don't forget http://www.truecrypt.org/

The Rubberhose web site disappeared a while back, but you can google and find an archive. I too have a mirror, should that one be out of date.

I once ported a crypted file system, and indeed it is quite difficult with monolithic kernels. And you are really putting your data at risk, so be sure to include backups in your implementation. And test those backups, especially if you are backing up the crypted image, as opposed to encrypting your backups.

-- 
"Curiosity killed the cat, but for a while I was a suspect" -- Steven Wright
Security Guru for Hire http://www.lightconsulting.com/~travis/ -><-
GPG fingerprint: 9D3F 395A DAC5 5CCC 9066 151D 0A6B 4098 0C55 1484
Re: encrypted file system issues (was Re: PGP "master keys")
[A bit off topic but I thought I'd let it through anyway. Those uninterested in OS design should skip the rest of this message. --Perry]

On 5/1/06, [EMAIL PROTECTED] (Perry E. Metzger) wrote:
> Disk encryption systems like CGD work
> on the block level, and do not propagate CBC operations across blocks,
> so if the atomic disk block write assumption is correct (and almost
> all modern file systems operate on that assumption), you have no more
> real risk of corruption than you would in any other application.

I haven't seen the failure specs on modern disk systems, but the KeyKOS developers ran into an interesting (and documented) failure mode on IBM disks about 20 years ago.

Those IBM systems connected disks to a "controller", which was connected to a "channel", which was a specialized processor with DMA access to the main storage of the system. Note that these systems were designed in the days when memory was expensive, so there was an absolute minimum of buffering in the channel, controller, and disk.

There are many possible failure modes, including power failure in the individual components, hardware or microprogram failure in the components, etc. The failure we experienced was a microcode hang in the channel (probably caused by a transient hardware failure), which also stopped the CPU. The failure occurred while the controller and disk were writing a block, and the channel ceased providing data. The specification for the controller said that if the channel failed to provide data, it filled the rest of the block with the last byte received from the channel. If the channel and CPU had been running, the overrun would have been reported back to the OS with an interrupt. As it was, all we had was a partially clobbered disk block.

Since KeyKOS was supposed to be a high reliability OS, we needed to code for this situation. Because of the design of the disk I/O system, there were only two disk blocks (copies of each other) where this kind of failure could cause a problem.

We defined the format of these blocks so the last two bytes were 0xFF00. By checking for this pattern, we could determine whether the block had been partially clobbered. We then had to ensure that we had checked for a correct write of one of the blocks before starting to write the other.

Does anyone have any idea how modern disks and computers handle similar situations?

Cheers - Bill

---
Bill Frantz        | gets() remains as a monument  | Periwinkle
(408)356-8506      | to C's continuing support of  | 16345 Englewood Ave
www.pwpconsult.com | buffer overruns.              | Los Gatos, CA 95032
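The 0xFF00 trick works because a controller that pads out an aborted write repeats a single byte, and a run of one repeated byte can never end in two *distinct* bytes. A toy model of the scheme (the block size and payload layout are made up for illustration; only the two-byte sentinel is from the post):

```python
SENTINEL = b"\xff\x00"   # the 0xFF00 tail from the post
BLOCK_SIZE = 4096        # assumed block size for this sketch

def make_block(payload: bytes) -> bytes:
    """Lay out a critical block so its last two bytes are 0xFF 0x00."""
    assert len(payload) <= BLOCK_SIZE - 2
    return payload.ljust(BLOCK_SIZE - 2, b"\x00") + SENTINEL

def block_intact(block: bytes) -> bool:
    """A fill-with-last-byte failure leaves an identical-byte run at
    the end of the block, which cannot match the two distinct
    sentinel bytes."""
    return block[-2:] == SENTINEL

def simulate_partial_write(block: bytes, received: int) -> bytes:
    """Model the controller: it got `received` bytes from the channel,
    then filled the remainder of the block with the last byte seen."""
    tail = block[received - 1:received] * (len(block) - received)
    return block[:received] + tail
```

Note the scheme only detects this one documented failure mode; it is not a general integrity check, which is presumably why KeyKOS also kept two copies and ordered the writes.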
Re: what's wrong with HMAC?
Hal Finney wrote:
> Travis H. writes:
>> Ross Anderson once said cryptically,
>>> HMAC has a long story attached to it - the triumph of the
>>> theory community over common sense
>> He wouldn't expand on that any more... does anyone have an idea of
>> what he is referring to?
>
> I might speculate, based on what you write here, that he believed that
> the simpler, ad hoc constructions often used in the days preceding
> HMAC were good enough in practice, and that the theoretical proofs of
> security for HMAC were given too much weight.  The original HMAC paper
> is at http://www-cse.ucsd.edu/~mihir/papers/kmd5.pdf and the authors
> show in section 6 various attacks on ad hoc constructions, but some of
> them are admittedly impractical.

Actually, that paper really describes "version-2" (or even version-N) of HMAC, as the original design paper had some serious flaws.

And the other constructions were not so much /ad hoc/ (they had been proposed by various established security folks with varying amounts of accompanying math) as *incompletely analyzed*. Part of the problem is that independent analysis wasn't forthcoming until long after implementation. The problem wasn't considered enough of a "hot topic" at the time.

Another part of the problem was that the publication lag of RFCs was (is) so ridiculously long. The envelope method published in RFC 1828 was a variant of the original developed as part of the IPv6 design circa 1993:

   key, fill, datagram, key, fill

but had been replaced circa 1995 by IP-MAC (in Photuris):

   key, fill, datagram, fill, key, fill

yet was not officially published (due to politics) for MD5 until:
 * RFC 2522, "Photuris: Session-Key Management Protocol", March 1999.

and SHA1 even later (it took so long it was published as "Historic"):
 * RFC 2841, "IP Authentication using Keyed SHA1 with Interleaved Padding (IP-MAC)", November 2000.

Filling (padding to the natural block boundary of the algorithm) was/is accomplished by the usual M-D strengthening technique.
I had a preliminary paper showing that the nested N-MAC/H-MAC design was actually *weaker* than envelope style IP-MAC, but at the request of some colleagues I saved it for a book they were putting together. Sadly, that book was never published.

The basic problem is that the nested method truncates the internal chaining variables, while the envelope method preserves them and truncates only upon final output.

Of course, AFAICT, the trailing key makes the various recent attacks on MD5 and SHA1 entirely inapplicable.

-- 
William Allen Simpson
    Key fingerprint =  17 40 5E 67 15 6F 31 26  DD 0D B9 9B 6A 15 2C 32
Re: fyi: Deniable File System - Rubberhose
Owen Blacker wrote:
> I wanted to create a file system that was
> deniable: where encrypted files looked like random noise, and where it
> was impossible to prove either the existence or non-existence of
> encrypted files.

I spent some time thinking about this a few years back:
http://diswww.mit.edu/bloom-picayune/crypto/15520

Rubberhose was one of the things that came up, along with StegFS and BestCrypt. Unfortunately, it seems like Rubberhose hasn't seen work in over 5 years.

-- 
Ivan Krstic <[EMAIL PROTECTED]> | GPG: 0x147C722D
Re: what's wrong with HMAC?
Travis H. writes:
> Ross Anderson once said cryptically,
>> HMAC has a long story attached to it - the triumph of the
>> theory community over common sense
> He wouldn't expand on that any more... does anyone have an idea of
> what he is referring to?

I might speculate, based on what you write here, that he believed that the simpler, ad hoc constructions often used in the days preceding HMAC were good enough in practice, and that the theoretical proofs of security for HMAC were given too much weight. The original HMAC paper is at http://www-cse.ucsd.edu/~mihir/papers/kmd5.pdf and the authors show in section 6 various attacks on ad hoc constructions, but some of them are admittedly impractical.

Hal Finney