Re: [Cryptography] Key stretching

2013-10-12 Thread William Allen Simpson

On 10/11/13 7:34 PM, Peter Gutmann wrote:

Phillip Hallam-Baker hal...@gmail.com writes:


Quick question, anyone got a good scheme for key stretching?


http://lmgtfy.com/?q=hkdf&l=1


Yeah, that's a weaker simplification of the method I've always
advocated: stopping the hash function before the final
MD-strengthening and repeating the input, doing the
MD-strengthening only for the last step of each key.  I used this in
many of my specifications.

In essence, the MD-strengthening counter is the same as the 0xnn
counter they used, although longer and stronger.

This assures there are no related-key attacks, as the internal
chaining variables aren't exposed.
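
For reference, a minimal sketch of the HKDF-Expand counter construction
being compared against (RFC 5869), assuming HMAC-SHA-256; the 0xnn counter
is the single byte appended to each HMAC input:

    import hashlib, hmac

    def hkdf_expand(prk, info, length, hashmod=hashlib.sha256):
        # RFC 5869 expand step: T(i) = HMAC(PRK, T(i-1) || info || counter)
        out, t, counter = b"", b"", 1
        while len(out) < length:
            t = hmac.new(prk, t + info + bytes([counter]), hashmod).digest()
            out += t
            counter += 1
        return out[:length]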

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Evaluating draft-agl-tls-chacha20poly1305

2013-09-11 Thread William Allen Simpson

On 9/11/13 6:00 AM, Alexandre Anzala-Yamajako wrote:

ChaCha20 being a stream cipher, the only requirement we have on the ICV is
that it doesn't repeat, isn't it?


You mean IV, the Initialization Vector.  ICV is the Integrity Check Value,
usually 32-64 bits appended to the packet.  Each is separately keyed.



This means that if there's a problem with setting a 'mostly zeroed out' ICV
for ChaCha20, we shouldn't use it at all, period.


I strongly disagree.  In my network protocol security designs, I always
try to think about weaknesses in the implementation and potential future
attacks on the algorithm -- and try to strengthen the security margin.

For example, IP-MAC fills every available zero space with randomness,
while H-MAC (defined more than a year later) uses constants instead.
IP-MAC was proven stronger than H-MAC.

Sadly, in the usual standards committee-itis, newer is often assumed to
be improved and better.  So H-MAC was adopted instead.  Of course, we
know that H-MAC was chosen by an NSA mole in the IETF, so I don't trust it.

Also, there's a certain silliness in formal cryptology that assumes we
shouldn't use keying randomness longer than the formal strength of the
algorithm.  That might have been true in the days of silk and cyanide,
where keying was a hard problem, but modern computing can generate
longer nonces without much effort.

In reality, adding longer nonces may not improve the strength of the
algorithm itself, but it improves the margin against attack.  A nearly
practical attack of order 2**80 could be converted to an impractical
attack of order 2**96.



As far as your proposition is concerned, the performance penalty seems to 
largely depend on the target platform. Wouldn't using the same set of 
operations as Chacha prevent an unexpected performance drop in case of lots of 
short messages?


I don't understand this part of your message.  My ancient CBCS
formulation that I'll probably use for PPP (XORing a per-session key
with a per-packet unique value) is demonstrably much faster than using
ChaCha itself to do that same thing.

We've been using stream ciphers and pseudo-stream ciphers (made by
chaining MACs or chaining block ciphers) to create per-packet nonces
for as long as I can remember (over 20 years).  You'll see that in CHAP
and Photuris and CBCS.

So I'm not arguing with Adam's use of ChaCha for it.  It just bugs me
that we aren't filling in as much randomness as we could!

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Evaluating draft-agl-tls-chacha20poly1305

2013-09-11 Thread William Allen Simpson

On 9/11/13 10:27 AM, Adam Langley wrote:

[attempt two, because I bounced off the mailing list the first time.]

On Tue, Sep 10, 2013 at 9:35 PM, William Allen Simpson
william.allen.simp...@gmail.com wrote:

Why generate the ICV key this way, instead of using a longer key blob
from TLS and dividing it?  Is there a related-key attack?


The keying material from the TLS handshake is per-session information.
However, for a polynomial MAC, a unique key is needed per-record and
must be secret.


Thanks, this part I knew, although it would be good explanatory text to
add to the draft.

I meant a related-key attack against the MAC-key generated by TLS?

Thereby causing you to discard it and not key the ICV with it?


Using stream cipher output as MAC key material is a
trick taken from [1], although it is likely to have more history than
that. (As another example, UMAC/VMAC runs AES-CTR with a separate key
to generate the per-record keys, as did Poly1305 in its original
paper.)


Oh sure.  We used hashes long ago.  Using AES is insane, but then
UMAC is -- to be kind -- not very efficient.

My old formulation from CBCS was developed during the old IPsec
discussions.  It's just simpler and faster to xor the per-packet counter
with the MAC-key than using the ChaCha cipher itself to generate
per-packet key expansion.

I was simply wondering about the rationale for doing it yourself.  And
worrying a little about the extra overhead on back-to-back packets.



If AEAD, aren't the ICV and cipher text generated in parallel?  So how do
you check the ICV first, then decipher?


The Poly1305 key (ICV in your terms?) is taken from a prefix of the
ChaCha20 stream output. Thus the decryption proceeds as:

1) Generate one block of ChaCha20 keystream and use the first 32 bytes
as a Poly1305 key.
2) Feed Poly1305 the additional data and ciphertext, with the length
prefixing as described in the draft.
3) Verify that the Poly1305 authenticator matches the value in the
received record. If not, the record can be rejected immediately.
4) Run ChaCha20, starting with a counter value of one, to decrypt the
ciphertext.


ICV = Integrity Check Value at the end of the packet.  So ICV-key.
Sometimes MAC-key.

Anyway, good explanation!  Please add it to the draft.
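
For concreteness, a rough sketch of that decrypt order, using the
pyca/cryptography primitives (which take an RFC-style 16-byte counter||nonce
input rather than this draft's 8-byte nonce; the exact Poly1305 length
framing from the draft is elided here):

    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms
    from cryptography.hazmat.primitives.poly1305 import Poly1305

    def open_record(key, nonce, aad, ciphertext, tag):
        # Step 1: keystream block zero; its first 32 bytes become the Poly1305 key.
        block0 = (0).to_bytes(4, "little") + nonce
        poly_key = Cipher(algorithms.ChaCha20(key, block0),
                          mode=None).encryptor().update(b"\x00" * 64)[:32]

        # Steps 2-3: authenticate the additional data and ciphertext (length
        # framing per the draft omitted), rejecting forgeries before any
        # decryption work is done.
        mac = Poly1305(poly_key)
        mac.update(aad)
        mac.update(ciphertext)
        mac.verify(tag)              # raises InvalidSignature on mismatch

        # Step 4: decrypt, with the block counter starting at one.
        block1 = (1).to_bytes(4, "little") + nonce
        return Cipher(algorithms.ChaCha20(key, block1),
                      mode=None).decryptor().update(ciphertext)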



An alternative implementation is possible where ChaCha20 is run in one
go on a buffer that consists of 64 zeros followed by the ciphertext.
The advantage of this is that it may be faster because the ChaCha20
blocks can be pipelined. The disadvantage is that it may need memory
copies to setup the input buffer correctly. A moot advantage, in the
case of TLS, of the steps that I outlined is that forgeries are
rejected faster.


Depends on how swamped the processor is.  I'm a big fan of rejecting
forgeries (and replay attacks) before decrypting.  Not everybody is
Google with unlimited processing power. ;)



Needs a bit more implementation detail.  I assume there's an
implementation in the works.  (It always helps to define things with
something concrete.)


I currently have Chrome talking to OpenSSL, although the code needs
cleanup of course.


Excellent!

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Evaluating draft-agl-tls-chacha20poly1305

2013-09-11 Thread William Allen Simpson

On 9/11/13 10:37 AM, Adam Langley wrote:

On Tue, Sep 10, 2013 at 10:59 PM, William Allen Simpson
william.allen.simp...@gmail.com wrote:

Or you could use 16 bytes, and cover all the input fields.  There's no
reason the counter part has to start at 1.


It is the case that most of the bottom row bits will be zero. However,
ChaCha20 is assumed to be secure at a 256-bit security level when used
as designed, with the bottom row being counters. If ChaCha/Salsa were
not secure in this formulation then I think they would have to be
abandoned completely.


I kinda covered this in a previous message.  No, we should design with
the expectation that there's something wrong with every cipher (and
every implementation), and strengthen it as best we know how.

It's the same principle we learned (often the hard way) in school:
 * Software designers, assume the hardware has intermittent failures.
 * Hardware designers, assume the software has intermittent failures.



Taking 8 bytes from the initial block and using it as the nonce for
the plaintext encryption would mean that there would be a ~50% chance
of a collision after 2^32 blocks. This issue affects AES-GCM, which is
why the sequence number is used here.


Sorry, you're correct there -- my mind is often still thinking of DES
with its unicity distance of 2**32, so you had to re-key anyway.
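
For reference, the birthday arithmetic behind that ~50% figure (a rough
approximation):

    import math

    n, space = 2**32, 2**64          # records drawn, 8-byte nonce space
    p = 1 - math.exp(-n * (n - 1) / (2 * space))
    print(round(p, 2))               # ~0.39, i.e. roughly a coin flip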



Using 16 bytes from the initial block as the full bottom row would
work, but it still assumes that we're working around a broken cipher
and it prohibits implementations which pipeline all the ChaCha blocks,
including the initial one. That may be usefully faster, although it's
not the implementation path that I've taken so far.


OK.  I see the pipeline stall.  But does Poly1305 pipeline anyway?



There is an alternative formulation of Salsa/ChaCha that is designed
for random nonces, rather than counters: XSalsa/XChaCha. However,
since we have a sequence number already in TLS I've not used it.


Aha, I hadn't found this (XSalsa, there doesn't seem to be an XChaCha).
Good reading, and some of the same points I was trying to make here.

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Evaluating draft-agl-tls-chacha20poly1305

2013-09-10 Thread William Allen Simpson

It bugs me that so many of the input words are mostly zero.  Using the
TLS Sequence Number for the nonce is certainly going to be mostly zero
bits.  And the block counter is almost all zero bits, as you note,

   (In the case of the TLS, limits on the plaintext size mean that the
   first counter word will never overflow in practice.)

Heck, since the average IP packet length is 43 bytes, the average TLS record
is likely shorter than that!  In at least half the connection directions,
it's going to be rare that the counter itself exceeds 1!

In my PPP ChaCha variant of this that I started several months ago, the
nonce input words were replaced with my usual CBCS formulation.  That is,
   invert the lower 32-bits of the sequence number,
   xor with the upper 32-bits,
   add (mod 2**64) both with a 64-bit secret IV,
   count the bits, and
   variably rotate.

This gives more diffusion: at least 2 bits change for every packet, it
ensures a bit changes in the first 32 bits (the most predictable and
vulnerable), and it varies the bits affected among 64 positions.
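
A minimal sketch of that per-packet mixing; this is one plausible reading of
the steps above, not the exact CBCS arithmetic, and the 64-bit sequence
number and secret IV handling are assumptions:

    def rotl64(x, n):
        n %= 64
        return ((x << n) | (x >> (64 - n))) & 0xFFFFFFFFFFFFFFFF

    def cbcs_mix(seq, secret_iv):
        low  = (~seq) & 0xFFFFFFFF               # invert the lower 32 bits
        high = (seq >> 32) & 0xFFFFFFFF
        mixed = (high << 32) | (low ^ high)      # xor with the upper 32 bits
        mixed = (mixed + secret_iv) & 0xFFFFFFFFFFFFFFFF   # add the secret IV mod 2**64
        return rotl64(mixed, bin(mixed).count("1"))        # rotate by the bit count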

Note that I use a secret IV, a cipher key, and an ICV key for CBCS.

However, to adapt your current formulation for making your ICV key,

   ChaCha20 is run with the given key and nonce and with the two counter
   words set to zero.  The first 32 bytes of the 64 byte output are
   saved to become the one-time key for Poly1305.  The remainder of the
   output is discarded.

I suggest:

   ChaCha20 is run with the given key and sequence number nonce and with
   the two counter words set to zero.  The first 32 bytes of the 64 byte
   output are saved to become the one-time key for Poly1305.  The next 8
   bytes of the output are saved to become the per-record input nonce
   for this ChaCha20 TLS record.

Or you could use 16 bytes, and cover all the input fields.  There's no
reason the counter part has to start at 1.

Of course, this depends on not having a related-key attack, as mentioned
in my previous message.

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: Possibly questionable security decisions in DNS root management

2009-10-20 Thread William Allen Simpson

Nicolas Williams wrote:

Getting DNSSEC deployed with sufficiently large KSKs should be priority #1.


I agree.  Let's get something deployed, as that will lead to testing.



If 90 days for the 1024-bit ZSKs is too long, that can always be
reduced, or the ZSK keylength be increased -- we too can squeeze factors
of 10 from various places.  In the early days of DNSSEC deployment the
opportunities for causing damage by breaking a ZSK will be relatively
meager.  We have time to get this right; this issue does not strike me
as urgent.


One of the things that bothers me about the latest presentation is that
only dummy keys will be used.  That makes no sense to me!  We'll have
folks that get used to hitting the Ignore key in their browsers...
http://nanog.org/meetings/nanog47/presentations/Lightning/Abley_light_N47.pdf

Thus, I'm not sure we have time to get this right.  We need good keys, so
that user processes can be tested.



OTOH, will we be able to detect breaks?  A clever attacker will use
breaks in very subtle ways.  A ZSK break would be bad, but something
that could be dealt with, *if* we knew it'd happened.  The potential
difficulty of detecting attacks is probably the best reason for seeking
stronger keys well ahead of time.


Agreed.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: CPRNGs are still an issue.

2008-12-16 Thread William Allen Simpson

Perry E. Metzger wrote:
[Snip admirably straightforward threat and requirements analysis]



Yes, you can attempt to gather randomness at run time, but there are
endless ways to screw that up -- can you *really* tell if your random
numbers are random enough? -- and in a cheap device with low
manufacturing tolerances, can you really rely on how consistent things
like clock skew will be?


Given the previous discussion on combining entropy, it shouldn't hurt, as
long as it's testable during manufacture.



Lets contrast that with AES in counter mode with a really good factory
installed key. It is trivial to validate that your code works
correctly (and do please do that!)  It is straightforward to build a
device to generate a stream of good AES keys at the factory, and you
need only make sure that one piece of hardware is working correctly,
rather than all the cheap pieces of hardware you're churning out.


Ah, here's the rub.  I like this testing requirement.  The recent FreeBSD
Security Advisory was merely a simple failure of initialization -- yet
wasn't caught for the longest time, because it wasn't readily testable.



One big issue might be that if you can't store the counter across
device resets, you will need a new layer of indirection -- the obvious
one is to generate a new AES key at boot, perhaps by CBCing the real
time clock with the permanent AES key and use the new key in counter
mode for that session.


As long as the testing procedure validates the key and key+RTC separately.
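
Not the poster's exact design, but a toy sketch of the idea under discussion,
assuming the pyca/cryptography AES primitives (a single-block CBC of the RTC
under the factory key reduces to one ECB encryption):

    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def session_prng(factory_key, rtc):
        # Derive a per-boot key by encrypting the real-time clock value
        # under the permanent factory-installed key.
        rtc_block = rtc.to_bytes(16, "big")
        session_key = Cipher(algorithms.AES(factory_key),
                             modes.ECB()).encryptor().update(rtc_block)

        # Run AES in counter mode under the derived key for this session.
        ks = Cipher(algorithms.AES(session_key),
                    modes.CTR(b"\x00" * 16)).encryptor()
        return lambda nbytes: ks.update(b"\x00" * nbytes)   # next nbytes of output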



This does necessitate an extra manufacturing step in which the device
gets individualized, but you're setting the default password to a
per-device string and having that taped to the top of the box anyway,
right? If you're not, most of the boxes will be vulnerable anyway and
there's no point...


Recently, I was pleasantly surprised that the AT&T U-verse box had this!
Unlike the AT&T 2Wire boxes we were installing just this summer.

If we could only get Linksys, et alia on board...

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: AES HDD encryption was XOR

2008-12-08 Thread William Allen Simpson

Jerry Leichter wrote:

...
accurately states that AES-128 is thought to be secure within the state 
of current and expected cryptographic knowledge, it propagates the meme 
of the short key length of only 128 bits.  A key length of 128 bits is 
beyond any conceivable brute force attack - in and of itself the only 
kind of attack for which key length, as such, has any meaning.  But, as 
always, bigger *must* be better - which just raises costs when it 
leads people to use AES-256, but all too often opens the door for the 
many snake-oil super-secure cipher systems using thousands of key bits.



Oh, say it ain't so! ;-)

In the NBC TV episode of /Chuck/ a couple of weeks ago, the NSA cracked
a 512-bit AES cipher on a flash drive by trying every possible key.
Could be hours, could be days.  (Only minutes in TV land.)

http://www.nbc.com/Chuck/video/episodes/#vid=838461
(Chuck Versus The Fat Lady, 4th segment, at 26:19)

It's no wonder that folks are deluded, pop culture reinforces this.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: once more, with feeling.

2008-09-10 Thread William Allen Simpson

James A. Donald wrote:

Peter Gutmann wrote:
Unfortunately I think the only way it (and a pile of other things as 
well) may get stamped out is through a multi-pronged approach that 
includes legislation, and specifically properly thought-out 
requirements 



I agree.   I'm sure this is a world-wide problem, and head-in-the-sand
cyber-libertarianism has long prevented better solutions.  The market
doesn't work for this, as there is a competitive *disadvantage* to
providing improved security, and it's hard to quantify safety.

Remember automotive seat-belts?  Air-bags?  Engineers developed them,
but the industry wouldn't deploy them, because the market failed to demand
safety.  That is, long-term safety would cut into short-term profits.

The corporate world actually led the public to believe (through
advertising) that they were sufficiently safe without them.  Only
legislation and regulation resulted in measurably greater safety.

M$ has long advertised (falsely) that safety was their concern, and their
systems were already safe.  We all know how that worked out


The average cryptographic expert finds it tricky to set up something 
that is actually secure.  The average bureaucrat could not run a pie 
stand.  Legislation and so forth requires wise and good legislators and 
administrators, which is unlikely.



So, what campaigns are you working on currently to improve this?

I've educated dozens of U.S. legislators over the years.  Indeed, my
NSFnet work 20 years ago was originally funded by the Michigan House
Fiscal Agency, and my early IETF work was funded by the Levin (Senate)
and Carr (House) campaigns.


Visualize Obama, McCain, or Sarah Palin setting up your network 
security.  Then realize that whoever they appoint as Czar in charge of 
network security is likely to be less competent than they are.



The problem, as always, is enough folks that are competent in both
computer security *and* political action.

Cannot say much about McCain/Palin, but the Obama folks have been fairly
computer literate from the beginning.  Not always as security conscious as
I'd like, but some seem to be receptive.  Unlike McCain (who needs help to
get his email), Obama himself seems from reports to be tech-savvy.

We either have to educate more political folks about computer security,
or more security folks have to become active in politics.  The former is
the never-ending long-term problem, while the latter is an effective
short-term solution.

At the IETF, we used to have a t-shirt, with 9 layers instead of 7.  The
top was "Political", with "you are here" next to it.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: On the unpredictability of DNS

2008-08-09 Thread William Allen Simpson

It seems like enough time has passed to post publicly, as some of these
are now common knowledge:

Ben Laurie wrote:

William Allen Simpson wrote:

Keep in mind that the likely unpredictability is about 2**24.  In many
or most cases, that will be implementation limited to 2**18 or less.


Why?


Remember, this is the combination of the 16-bit DNS header identifier
and the 16-bit UDP port number.

The theoretical maximum is less than 2**(16+16), as the ports less than
4096 are reserved.

Many or most implementations use only a pool of ports: 2**[9, 10, 12]
have been reported.

Some implementations (incorrectly) use positive signed integers for the
DNS identifier: 2**15.
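
A rough back-of-the-envelope for those figures, assuming the identifier and
port choices are independent:

    import math

    ids = 2**16                                  # DNS header identifier space
    for pool in (2**16 - 4096, 2**12, 2**10, 2**9):
        print(pool, round(math.log2(ids * pool), 1), "bits")
    # the full port range gives just under 32 bits; a small port pool
    # drops the combined unpredictability into the mid-20s.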

And in this week's Hall of Shame, the MacOSX Leopard patch for servers seems
to randomize the BIND request port within a small pool.  The ports of
subsequent UDP packets are sequential, so the range to guess is very small,
simply by looking at the following UDP packets.  Very strange reports coming
out!  MacOSX total: about 2**18.

Worse, Apple didn't fix Leopard clients, didn't patch the stub resolver
library (neither did BIND), didn't patch earlier versions such as Panther.
Many, many MacOS systems are still vulnerable.

And in case you don't think this matters, once upon a time I helped build
an ISP entirely with Macs, resistant to most compromises.  There are far
more Macs used as resolvers than any other flavor of *nix.


I don't see why. A perfectly reasonable threat is that the attacker 
reverse engineers the PRNG (or just checks out the source). It doesn't 
need to be common to be predictable.



I don't understand this comment.

When MD5 is used as a PRNG, in this case the upper 32 bits of its 128-bit
output cycle, how many samples will reveal the seed, or the current
internal state of the sequence?

When ARC4 is used as a PRNG, how many samples will reveal the seed or
the current state?

Are you only referring to reverse engineering trivially poor PRNG?

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: On the unpredictability of DNS

2008-07-31 Thread William Allen Simpson

I've changed the subject.  Some of my own rants are about mathematical
cryptographers who are looking for the perfect solution, instead of a
practical security solution.  Always think about the threat first!

In this threat environment, the attacker is unlikely to have perfect
knowledge of the sequence.  Shared resolvers are the most critical
vulnerability, but the attacker isn't necessarily in the packet path, and
cannot discern more than a few scattered numbers in the sequence.  The
more sharing (and greater impact), the more sparse the information.

In any case, the only perfect solution is DNS-security.  Over many
years, I've given *many* lectures to local university, network, and
commercial institutions about the need to upgrade and secure our zones.
But the standards kept changing, and the roots and TLDs were not secured.

Now, the lack of collective attention to known security problems has
bitten us collectively.

Nevertheless, with rephrasing, Ben has some good points...

Ben Laurie wrote:
But just how GREAT is that, really? Well, we don't know. Why? 
Because there isn't actually a way test for randomness. ...


While randomness is sufficient for perfect unpredictability, it isn't
necessary in this threat environment.

Keep in mind that the likely unpredictability is about 2**24.  In many
or most cases, that will be implementation limited to 2**18 or less.


Your DNS resolver could be using some easily predicted random number 
generator like, say, a linear congruential one, as is common in the 
rand() library function, but DNS-OARC would still say it was GREAT.


In this threat environment, a better test would be to determine a
possible seed for any of several common PRNGs, or the lack of a PRNG.

How many samples would be needed?  That's the mathematical limitation.

Is it less than 2**9 (birthday attack on 2**18)?
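
As a rough bound, collisions become likely once the number of samples
approaches the square root of the space:

    import math

    space = 2**18                    # distinct observable values
    print(math.isqrt(space))         # 512, i.e. about 2**9 samples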


It is an issue because of NAT. If your resolver lives behind NAT (which 
is probably way more common since this alert, as many people's reactions 
[mine included] was to stop using their ISP's nameservers and stand up 
their own to resolve directly for them) and the NAT is doing source port 
translation (quite likely), then you are relying on the NAT gateway to 
provide your randomness. But random ports are not the best strategy for 
NAT. They want to avoid re-using ports too soon, so they tend to use an 
LRU queue instead. Pretty clearly an LRU queue can be probed and 
manipulated into predictability.



Agreed!  All my tests of locally accessible NATs (D-Link and Linksys) show
that the sequence is fairly predictable.  And no code updates are available...


Incidentally, I'm curious how much this has impacted the DNS 
infrastructure in terms of traffic - anyone out there got some statistics?



Some are coming in on another private security list where I and some
others here are vetted, but everything is very preliminary.

In addition to the publicized attacks on major ISP infrastructure, there
are verified scans and attacks against end user home NATs.


Oh, and I should say that number of ports and standard deviation are not 
a GREAT way to test for randomness. For example, the sequence 1000, 
2000, ..., 27000 has 27 ports and a standard deviation of over 7500, 
which looks pretty GREAT to me. But not very random.



Again, the question is not randomness, but unpredictability.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: RNG for Padding

2008-03-16 Thread William Allen Simpson

We had many discussions about this 15 years ago...

You usually have predictable plaintext.  A cipher that isn't strong enough
against a chosen/known plaintext attack has too many other protocol
problems to worry about mere padding!

For IPsec, we originally specified random padding with 1 trailing byte of
predictable trailing plaintext (the amount of padding). Together with the
(encapsulated) protocol number, that actually made 2 bytes of predictable
trailing plaintext.

Due to my work in other groups, everything that I've specified afterward
uses self-describing-padding.  That is, the last byte indicates how much
padding (just as before), but each byte of the padding indicates its
position in the padding sequence.

0 ::= never used.
1 ::= 1 byte of padding (itself)
1 2 ::= 2 bytes of padding
...
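
A minimal sketch of that self-describing padding (hypothetical helper names;
the block size is an assumption):

    def add_padding(data, block_size=16):
        # Pad to the next block boundary; each pad byte is its 1-based
        # position, so the final byte doubles as the pad length.
        n = block_size - (len(data) % block_size)
        return data + bytes(range(1, n + 1))

    def check_and_strip(padded):
        n = padded[-1]
        if n == 0 or padded[-n:] != bytes(range(1, n + 1)):
            raise ValueError("bad padding")      # the modest integrity check
        return padded[:-n]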

The original impetus was hardware manufacturers of in-line cipher devices,
that don't usually have a good source of randomness.

Also, this provides a modest amount of integrity protection.  After
decryption, the trailing padding must be the correct sequence.  Of course,
this should be in addition to integrity protection over the whole packet!

Additionally, this avoids a possible covert channel for compromised data,
whether by accident (revealing a poor RNG or the current state of the RNG),
or trojan process communication.  Note that I've said "avoids", as varying
the amount of padding would give a lower bandwidth channel for the latter.

When designing, it's always best to defend in depth.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: PlayStation 3 predicts next US president

2007-12-10 Thread William Allen Simpson

Francois Grieu wrote:

That's because if Tn is known (including chosen) to some person,
then (due to the weakness in MD5 we are talking about), she can
generate Dp and Dp' such that
  S( MD5(Tn || Dp || Cp || Cn) ) = S( MD5(Tn || Dp' || Cp || Cn) )
whatever Cp, Cn and S() are.


First of all, the weakness in MD5 (computational feasibility over time)
that we are talking about is not (yet) a preimage or second preimage
attack.  Please don't extrapolate your argument.

Second of all, you need to read my messages more carefully.  No good
canonical format allows random hidden fields or images.

Third of all, that's not a weakness of a notary protocol -- it's a trap!
The whole point of a notary is to bind a document to a person.  That the
person submitted two or more different documents at different times is
readily observable.  After all, the notary has the document(s)!

Remember, the notary is not vouching for the validity of the content of
the document.  A notary only certifies that something was submitted by
some person at some time.  And that cannot be broken by making multiple
submissions, or submissions that themselves have the same hash.

That's one reason I'm much more interested in the attack on X.509.



If Tn was hashed after Dp rather than before, poof goes security.


But since it's not, that's a ridiculous strawman.  I was remembering PGP
off the top of my head.  Fairly certain that Kerberos does, too.  Not
everybody is naive!

And since the timestamp is predictable (within some range, although
picoseconds really aren't very predictable), the protocols that I've
designed include message identifiers, nonces, and sequence numbers, too.

As you may recall, I mentioned that there were other fields...

He asked for an explanation about how a document is identified, and he got
one.  Don't expect me to redesign an entire notary (or even a timestamp)
protocol on a Sunday evening for a mailing list.  Really, there are
fairly secure standards already available.

However, the actual topic of this thread is code distribution.  In that
case, there is no other party certifying the documents.  The code
packager is also the certifier.  There is (as yet) no weakness in the
MD4 family (including MD5 and SHA1) that allows this attack by another
party.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: PlayStation 3 predicts next US president

2007-12-09 Thread William Allen Simpson

Personally, I thought this horse was well drubbed, but the moderator let
this message through, so he must think it important to continue

James A. Donald wrote:

William Allen Simpson wrote:
  The notary would never sign a hash generated by
  somebody else.  Instead, the notary generates its own
  document (from its own tuples), and signs its own
  document, documenting that some other document was
  submitted by some person before some particular time.

And how does it identify this other document?


Sorry, obviously I incorrectly assumed that we're talking to somebody
skilled in the art...

Reminding you that several of us have told you that a notary has the
document in her possession; and binds the document to a person; and that
we have rather a lot of experience in identifying documents (even for
simple things like email), such as the PGP digital timestamping service.

Assuming,
  Dp := any electronic document submitted by some person, converted to its
canonical form
  Cp := an electronic certificate irrefutably identifying the other person
submitting the document
  Cn := certificate of the notary
  Tn := timestamp of the notary
  S() := signature of the notary

  S( MD5(Tn || Dp || Cp || Cn) ).
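
In code form, that formula is simply (the signature primitive S() is a
hypothetical stand-in here):

    import hashlib

    def notary_token(S, Tn, Dp, Cp, Cn):
        # S( MD5(Tn || Dp || Cp || Cn) ), with all inputs as byte strings
        return S(hashlib.md5(Tn + Dp + Cp + Cn).digest())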

Of course, I'm sure the formula could be improved, and there are
traditionally fields identifying the algorithms used, etc. -- or something
else I've forgotten off the top of my head -- but please argue about the
actual topic of this thread, instead of incessant strawmen.



The notary is only safe from this flaw in MD5 if you


Another statement with no proof.  As the original poster admitted, there is
not a practical preimage or second preimage attack on MD5 (yet).


assume he is not using MD5 for its intended purpose.


As to its intended purpose, rather than making one up, I've always relied
upon the statement of the designer:

   ... The MD5
   algorithm is intended for digital signature applications, where a
   large file must be compressed in a secure manner before being
   encrypted with a private (secret) key under a public-key cryptosystem
   such as RSA.



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: PlayStation 3 predicts next US president

2007-12-03 Thread William Allen Simpson

James A. Donald wrote:

 Not true.  Because they are notarizing a signature, not
a document, they  check my supporting identification,
but never read the document being signed.


This will be my last posting.  You have refused several requests to stick
to the original topic at hand.

Apparently, you have no actual experience with the legal system, or
are from such a different legal jurisdiction that your scenario is
somehow related to MD5 hashes of software and code distribution.

Because human beings often try to skirt the rules, there's a long
history of detailed notarization requirements.  How it works here:

(1) You prepare the document(s).  They are in the form prescribed by law
-- for example, Michigan Court Rule (MCR 2.114) SIGNATURES OF ATTORNEYS
AND PARTIES; VERIFICATION; EFFECT; SANCTIONS

(2) The clerk checks for the prescribed form and content.

(3) You sign and date the document(s) before the notary (using a pen
supplied by the notary, no disappearing ink allowed).

(4) The notary signs and dates their record of your signature, optionally
impressing the document(s) with an embossing stamp (making it physically
difficult to erase).

You have now attested to the content of the documents, and the notary has
attested to your signature (not the veracity of the documents).  Note
that we get both integrity and non-repudiation...

The only acceptable computer parallel would require you to bring the
documents to the notary, using a digital format supplied by the notary,
generate the digital signature on the notary's equipment, and then the
notary independently certifies your signature (on the same equipment).

In the real world, the emphasis is on binding a document to a person, and
vice versa.  Any digital system that does not tie the physical person to
the virtual document is not equivalent.

This is simply not equivalent to a site producing its own software and
generating a hash of its own content.  There should be no third party
involved as a certifier.



If they were to generate an MD5 hash of documents
prepared by someone else, then the attack described
(eight different human readable documents with the same
MD5 hash) works.


If a notary were to do that, they'd be looking at a fairly severe penalty.
By definition, such a notary was compromised.

But nothing like the prison sentence that you'd be facing for presenting
the false documents to the court.  And I'd be pushing the prosecutor for
consecutive sentences for all 8 fraudulent documents, with enhancements.

Nobody has given any examples of human readable documents that will
produce the same hash when re-typed into the system.  All those proposed
require an invisible component.  They are machine readable only.

That's why we, as security analysts, don't design or approve such systems.
We're not (supposed to be) fooled by parlor tricks.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: PlayStation 3 predicts next US president

2007-12-03 Thread William Allen Simpson

Weger, B.M.M. de wrote:

See http://www.win.tue.nl/hashclash/TargetCollidingCertificates/
...
Our first chosen-prefix collision attack has complexity of about
2^50, as described in our EuroCrypt 2007 paper. This has been
considerably improved since then. In the full paper that is in
preparation we'll give details of those improvements.


Much more interesting.  Looks like the death knell of X.509.  Why
didn't you say so earlier?

(It's a long known design flaw in X.509 that it doesn't provide
integrity for all its internal fields.)

Where are MD2, MD4, SHA1, and others on this continuum?

And based on the comments in the page above, the prefix is quite large!
Optimally, shouldn't it be >= the internal chaining variables?  512 bits
for MDx.  So, the attacks need two values for comparison: the complexity
versus the length of the chosen prefix.

Let me know when you get the chosen prefix down to 64 bits, so I can say
"I told you so" to Bellovin again.  I was strongly against adding the
random IV field to IPsec...

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: PlayStation 3 predicts next US president

2007-12-02 Thread William Allen Simpson

James A. Donald wrote:

This attack does not require the certifier to be
compromised.


You are referring to a different page (that I did not reference).
Nevertheless, both attacks require the certifier to be compromised!



 The attack was to generate a multitude of predictions
for the US election, each of which has the same MD5
hash.  If the certifier certifies any one of these
predictions, the recipient can use the certificate for
any one of these predictions.


That's a mighty big "if" -- as in infinite improbability.  Therefore, a
parlor trick, not cryptography.

There are no circumstances in which any reputable certifier will ever
certify any of the multitude containing a hidden pdf image, especially
where generated by another party.

The attack requires the certifier to be compromised, either to certify
documents that the certifier did not generate, or to include the chosen
text (hidden image) in its documents in exactly the correct location.

While there are plenty of chosen text attacks in cryptography, this one
is highly impractical.  The image is hidden.  It will not appear, and thus
would not be accidentally copied by somebody (cut-and-paste).

The parlor trick demonstrates a weakness of the pdf format, not MD5.



This attack renders MD5 entirely worthless for any use
other than as an error check like CRC - and CRC does it
better and faster.


To be as weak as CRC, the strength would be 2**8.  I've seen no papers
that reduce MD5 complexity to 2**8.

Please present your proofs and actual vulnerabilities, including specific
examples of actual PPP CHAP compromised traffic -- and for extra credit,
actual compromise of netbsd and/or openbsd software distribution.


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: PlayStation 3 predicts next US president

2007-12-02 Thread William Allen Simpson

Weger, B.M.M. de wrote:

The parlor trick demonstrates a weakness of the pdf format, not MD5.


I disagree. We could just as easy have put the collision blocks
in visible images.


Parlor trick.


... We could just as easy have used MS Word
documents, or any document format in which there is some way
of putting a few random blocks somewhere nicely.


Parlor trick.


... We say so on
the website. We did show this hiding of collisions for other data
formats, such as X.509 certificates


More interesting.  Where on your web site?  I've long abhorred the
X.509 format, and was a supporter of a cleaner alternative.


... and for Win32 executables.


Parlor trick.

So far, all the things you mention require the certifier to be suborned.



Our real work is chosen-prefix collisions combined with
multi-collisions. This is crypto, it has not been done before,


Certainly it was done before!  We talked about it more than a decade ago.
We knew that what was computationally infeasible would become feasible.

Every protocol I've designed or formally reviewed is protected against the
chosen prefix attack.  (To qualify: where I had final say.  I've reviewed
badly designed protocols, such as IKE/ISAKMP.  And I've been overruled by
committee from time to time...)

What *would* be crypto is the quantification of where MDx currently falls
on the computational spectrum.


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Linus: Security is people wanking around with their opinions

2007-10-02 Thread William Allen Simpson

I often say, "Rub a pair of cryptographers together, and you'll
get three opinions.  Ask three, you'll get six opinions."  :-)

However, he's talking about security, which often isn't quantifiable!

And don't get me ranting about provable security...  I had a small
disagreement with somebody at Google the other week, as he complained
that variable moduli ruined the security proof (attempts) for SSH.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Security Implications of Using the Data Encryption Standard (DES)

2006-12-28 Thread William Allen Simpson

Leichter, Jerry wrote:

| note that there have been (at least) two countermeasures to DES brute-force
| attacks ...  one is 3DES ... and the other ... mandated for some ATM networks,
| has been DUKPT. while DUKPT doesn't change the difficulty of brute-force
| attack on single key ... it creates a derived unique key per transaction and
| bounds the life-time use of that key to relatively small window (typically
| significantly less than what even existing brute-force attacks would take).
| The attractiveness of doing such a brute-force attack is further limited
| because the typical transaction value is much less than the cost of typical
| brute-force attack
Bounds on brute-force attacks against DESX - DES with pre- and post-whitening
- were proved a number of years ago.  They can pretty easily move DES out
of the range of reasonable brute force attacks, especially if you change
the key reasonably often (but you can safely do thousands of blocks with
one key).

One can apply the same results to 3DES.  Curiously, as far as I know there
are to this day no stronger results on the strength of 3DES!

I find it interesting that no one seems to have actually made use of these
results in fielded systems.  Today, we can do 3DES at acceptable speeds in
most contexts - and one could argue that it gives better protection against
unknown attacks.  But it hasn't been so long since 3DES was really too
slow to be practical in many places, and straight DES was used instead,
despite the vulnerability to brute force.  DESX costs you two XOR's - very
cheap for what it buys you.


The IETF/IESG refused to publish the ESP DES-XEX3-CBC Transform submitted
as draft-ietf-ipsec-ciph-desx-00 (1997) and draft-simpson-desx-01 and
draft-simpson-desx-02 (1998).

Of course, they also refused to publish draft-simpson-des-as-00 (1998) and
draft-simpson-des-as-01 (1999) that deprecated DES -- despite strong
votes of support at SAAG and PPP meetings.

There was an Appeal of IESG inaction, decisions of 13 Oct 1999 and 16 Feb 
1999.
http://www1.ietf.org/mail-archive/web/ietf/current/msg11160.html

The NSA and Cisco folks that were involved in IKE/ISAKMP advocated DES,
refusing to assign code points for DESX.  Gosh, I wonder why...
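
For reference, a minimal sketch of the DESX whitening construction mentioned
above, assuming the pycryptodome DES primitive (keys and block are 8-byte
strings):

    from Crypto.Cipher import DES    # pycryptodome

    def xor8(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def desx_encrypt_block(k, k1, k2, block):
        # DESX: pre-whiten, single DES, post-whiten -- the "two XORs"
        return xor8(k2, DES.new(k, DES.MODE_ECB).encrypt(xor8(block, k1)))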

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


IKE resource exhaustion at 2 to 10 packets per second

2006-07-27 Thread William Allen Simpson

http://www.nta-monitor.com/posts/2006/07/cisco-concentrator-dos.html

The vulnerability allows an attacker to exhaust the IKE resources on a
remote VPN concentrator by starting new IKE sessions faster than the
concentrator expires them from its queue. By doing this, the attacker
fills up the concentrator's queue, which prevents it from handling valid
IKE requests.

The exploit involves sending IKE Phase-1 packets containing an
acceptable transform. It is not necessary to have valid credentials in
order to exploit this vulnerability, as the problem occurs before the
authentication stage. The vulnerability affects both Main Mode and
Aggressive Mode, and both normal IKE over UDP and Cisco proprietary
TCP-encapsulated IKE.

In order to exploit the vulnerability, the attacker needs to send IKE
packets at a rate which exceeds the Concentrator's IKE session expiry
rate. Tests show that the target concentrator starts to be affected at a
rate of 2 packets per second, and becomes unusable at 10 packets per
second. As a minimal Main Mode packet with a single transform is 112
bytes long, 10 packets per second corresponds to a data rate of slightly
less than 9,000 bits per second.

...

The vulnerability was first discovered on 4th July 2005, and was reported
to Cisco's security team (PSIRT) the same day. Cisco responded on 9th
August 2005, but no further progress has been made, over a year after
finding the flaw.



Gosh and golly gee, how could this vulnerability slip past them without
anybody noticing?

... other than the person posting an internet-draft that the IESG refused
to publish as an RFC, and which was instead published in ;login: in December 1999.

... that attack threat was mentioned in the design principles of Photuris
circa 1995, which the IESG also refused to publish until after the
NSA-originated and approved IKE/ISAKMP protocol.

It's particularly amusing that Photuris was overwhelmingly approved in a
straw poll conducted by John Gilmore at the 36th IETF in Montreal, 1996,
but Cisco issued a press release that they had chosen the NSA-designed
protocol instead.  Protocol adoption by press release, such a good choice.

They just had the 66th IETF in Montreal a week ago.  Full circle.

Anybody ready to order Photuris from your vendors?




-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: NSA knows who you've called.

2006-05-12 Thread William Allen Simpson

Perry E. Metzger wrote:

http://www.usatoday.com/news/washington/2006-05-10-nsa_x.htm


Legal analysis from Center for Democracy and Technology at:

http://www.cdt.org/publications/policyposts/2006/8

--
William Allen Simpson
Key fingerprint =  17 40 5E 67 15 6F 31 26  DD 0D B9 9B 6A 15 2C 32

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: what's wrong with HMAC?

2006-05-02 Thread William Allen Simpson

Hal Finney wrote:

Travis H. writes:

Ross Anderson once said cryptically,

HMAC has a long story attached to it - the triumph of the
theory community over common sense

He wouldn't expand on that any more... does anyone have an idea of
what he is referring to?


I might speculate, based on what you write here, that he believed that
the simpler, ad hoc constructions often used in the days preceding
HMAC were good enough in practice, and that the theoretical proofs of
security for HMAC were given too much weight.  The original HMAC paper
is at http://www-cse.ucsd.edu/~mihir/papers/kmd5.pdf and the authors
show in section 6 various attacks on ad hoc constructions, but some of
them are admittedly impractical.


Actually, that paper really describes version-2 (or even version-N) of
HMAC, as the original design paper had some serious flaws.

And the other constructions were not so much /ad hoc/ (they had been
proposed by various established security folks with varying amounts of
accompanying math) as *incompletely analyzed*.  A part of the problem is
that independent analysis wasn't forthcoming until long after
implementation.  The problem wasn't considered enough of a hot topic at
the time.

Another part of the problem was that the publication lag of RFCs was (is)
so ridiculously long.  The envelope method published in RFC 1828 was a
variant of the original developed as part of the IPv6 design circa 1993:
  key, fill, datagram, key, fill

but had been replaced circa 1995 by IP-MAC (in Photuris):
  key, fill, datagram, fill, key, fill

yet was not officially published (due to politics) for MD5 until:
* RFC 2522, Photuris: Session-Key Management Protocol, March 1999.

and SHA1 even later (took so long it was published as Historic):
* RFC 2841, IP Authentication using Keyed SHA1 with Interleaved Padding
  (IP-MAC), November 2000.

Filling (padding to the natural block boundary of the algorithm) was/is
accomplished by the usual M-D strengthening technique.
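
Ignoring the exact fill rules in the RFCs, a simplified sketch of the
envelope shape (not the precise IP-MAC wire computation; the fill helper
below is a stand-in):

    import hashlib

    BLOCK = 64   # MD5/SHA-1 compression block size in bytes

    def md_fill(length):
        # Stand-in for MD-strengthening fill: 0x80, zeros, and the 64-bit
        # bit length, padding out to a block boundary.
        pad = b"\x80" + b"\x00" * ((BLOCK - 9 - length) % BLOCK)
        return pad + (length * 8).to_bytes(8, "big")

    def envelope_mac(key, datagram, hashmod=hashlib.md5):
        # Envelope shape: key, fill, datagram, fill, key (the trailing fill
        # is supplied by the hash's own finalization).
        msg = key + md_fill(len(key)) + datagram + md_fill(len(datagram)) + key
        return hashmod(msg).digest()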

I had a preliminary paper showing that the nested N-MAC/H-MAC design was
actually *weaker* than envelope style IP-MAC, but at the request of some
colleagues saved it for a book they were putting together.  Sadly, that
book was never published.

The basic problem is that the nested method truncates the internal
chaining variables, while the envelope method preserves them and
truncates only upon final output.

Of course, AFAICT, the trailing key makes the various recent attacks
on MD5 and SHA1 entirely inapplicable.
--
William Allen Simpson
Key fingerprint =  17 40 5E 67 15 6F 31 26  DD 0D B9 9B 6A 15 2C 32


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Linux RNG paper

2006-04-10 Thread William Allen Simpson

I had a bit of time waiting for a file to download, and just read the paper
that's been sitting on my desktop.  The analysis of the weakness is new,
but sadly many of the problems were already known, and several previously
discussed on this list!

The forward secrecy problem was identified circa 1995 by Phil Karn, who
therefore saved the changed state after generating each random key --
something similar to the paper's suggestion.

The lack of jitter in millisecond event time was also identified by Karn,
and he developed i386 code to determine microseconds from processor timing.
Sorry, I cannot remember whether it only worked on the 386 and above, or also
the 186/286 we were using in cell phones at the time.  But I certainly used
it in a number of routers over the years...

We also noticed the event jitter was more important for unpredictability
than the actual event values, and all my code just added the value to the
microsecond time.  The code was fast enough to handle very rapid interrupt
time events by leaving complex functions for later.  This assumes a
cryptographically strong output function will sufficiently hash the bits,
so that calculating and saving the jitter itself is a waste of effort.

We also always used any network checksum that came across the transom,
including packets, IP, UDP, and TCP.  Yes, it is externally visible, but
the microsecond time is not, and adding them makes the actual pool values
less predictable (although within a constrained range).

Also, rather than deciding the pool was full of entropy, we just kept
XOR'ing the new values with the old, as a circular buffer (again similar
to the paper's suggestion).
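
A toy sketch of that mixing (purely illustrative, not the actual driver
code):

    import time

    POOL_SIZE = 64
    pool = bytearray(POOL_SIZE)
    pos = 0

    def stir(event_value):
        # Add the event value to a microsecond timestamp and XOR the result
        # into a circular pool, as described above.
        global pos
        sample = (event_value + time.monotonic_ns() // 1000) & 0xFFFFFFFF
        for b in sample.to_bytes(4, "little"):
            pool[pos] ^= b
            pos = (pos + 1) % POOL_SIZE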

Finally, a lot of this was discussed in public, and both Karn's and my
code variants were publicly available.  I don't have my old email
backups online, but I'm sure it was discussed at places such as the
tcp-group and ipsec circa 1995.

After the first Yarrow draft, it was discussed on the old linux-ipsec list
circa 1999 April 22, and on this list circa 1999 August 17.

After much discussion, Theodore Y. Ts'o wrote
([EMAIL PROTECTED]):
Date: Sun, 15 Aug 1999 10:00:01 -0400
From: William Allen Simpson [EMAIL PROTECTED]
Catching up, and after talking with John Kelsey and Sandy Harris at
SAC'99, it seems clear that there is some consensus on these lists that
the semantics of /dev/urandom need improvement, and that some principles
of Yarrow should be incorporated.  I think that most posters can be
satisfied by making the functionality of /dev/random and /dev/urandom
more orthogonal.

 Bill, you're not the IETF working group chairman on /dev/random, and
 /dev/random isn't a working group subject to consensus.  I'm the author,
 with the sole responsibility to make decisions about what's best for the
 device driver.  Of course, if someone else wants to make an alternative
 /dev/random driver, they're free to use it in their system.  They can
 even petition Linus Torvalds to replace theirs with mine, although I
 doubt they'd get very far.

Unfortunately, the fact that Linux remains vulnerable to the iterative
guessing attack was really due to Ted's intransigence, and some personal
relationship that he enjoys with Linus.

Thank you for the independent analysis once again bringing this topic to
everybody's attention.  Hard to believe that another 7 years have passed.

--
William Allen Simpson
Key fingerprint =  17 40 5E 67 15 6F 31 26  DD 0D B9 9B 6A 15 2C 32

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: ISAKMP flaws?

2005-11-19 Thread William Allen Simpson

Florian Weimer wrote:

Even back then, the integer encoding was considered to be a mistake.

| I concur completely. I once got so fed up with this habit that I
| tromped around the office singing, Every bit is sacred / Every bit
| is great / When a bit is wasted / Phil gets quite irate.
| 

Ah yes, Phil really _is_ like that, but then he was often working with
2400 bps satellites and ARRL links.  It did bring a smile. :-)

But as another point, Phil was against having length fields in most
cases.  The transform parameters started as a list of single bytes, but
the working group (a misnomer) insisted on length fields.  I remember
Phil slumping down in his seat.  I convinced him we could treat them
all as 2 byte constants, since the length didn't actually vary.  ;-)



| Consider this to be one of the prime things to correct. Personally,
| I think that numbers should never (well, hardly ever) be smaller
| than 32 bits.

(Jon Callas, 1997-08-08)


Ah yes, a couple of years after Photuris.  And wasn't Jon the _author_
of the PGP variable length integer specification?  Hoist by his own petard?

I did use all 32-bit length fields in RADIUS.  Different environment.

And finally, the reason for the extra specification of an extension
beyond 32-bit lengths was provided by an obscure fellow by the name of
Rivest.  He argued (insisted vehemently) that security specifications
should take all possible future-proofing precautions, even though we
currently don't see the need, so that the specification is _never_
ambiguous.  (I'm paraphrasing, he was far more loquacious.)



Variable-length integers within other fields, for example.  You can't
avoid this phenomenon in its entirety, of course, without sacrificing
some of the advantages of a binary encoding.


There aren't any.  I could, and did.

Have you actually read all (or any) of the specifications?



I like ISAKMP as much as the next guy, but somehow I doubt that
simpler protocols necessarily lead to more robust software.  Sure,
less effort is needed to implement them, but writing robust code still
comes at an extra cost. *sigh*


It's a sad day when reliability greater than provided by M$ or Netscape
is considered extra cost.

I've always believed robust security merits the same attention to detail
that is needed in a device driver.  And I came of age in programming
communication device drivers when there was no guarantee that the
backplane would successfully carry the interrupt saying a byte had been
transferred, so you had a software timer to initiate a task to separately
query the hardware for lost characters or overruns.  And another hardware
(watchdog) timer interrupt, just in case the software got stuck.

IBM 1800.  Alpha Micro.  HP 21MX.  IBM PC PIC.  Zilog.  Embedded process
control.  Electronically and physically noisy factory floor environments.

But it helps a lot when the specification is written hand-in-hand with
the code, so that every opportunity is taken to simplify the code.

So, where is the community to replace ISAKMP with something more robust?

Provos' Photuris code could be running on all the BSDs in a few months.
Maybe sooner, were payment involved.
--
William Allen Simpson
Key fingerprint =  17 40 5E 67 15 6F 31 26  DD 0D B9 9B 6A 15 2C 32

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: ISAKMP flaws?

2005-11-18 Thread William Allen Simpson

Florian Weimer wrote:

Photuris uses a baroque variable-length integer encoding similar to
that of OpenPGP, a clear warning sign. 8-/ 


On the contrary:

 + a VERY SIMPLE variable-length integer encoding, where every number
   has EXACTLY ONE possible representation (unlike ASN.1, which even the
   spell-checker wants to replace with "asinine").

 + similar to that of OpenPGP, the most common Open Source security
   software of the era, where the code could be easily reused (as it
   was in the initial implementation).



The protocol also contains
nested containers which may specify conflicting lengths.  This is one
common source of parser bugs.


On the contrary, where are the internal nested containers in the protocol?

However, like most things that cross the INTER-net, the packets are
encapsulated in UDP, IP, and some media frame, all of which may have
their own length.  That's why there are copious implementation notes,
saying for example:

   When processing datagrams containing variable size values, the length
   must be checked against the overall datagram length.  An invalid size
   (too long or short) that causes a poorly coded receiver to abort
   could be used as a denial of service attack.

I remember some observers complaining about the 17 warnings concerning
comparing the variable length to the UDP length, saying it cluttered
the specification.

I remember some implementers cheering about the 17 warnings concerning
comparing the variable length to the UDP length, saying it helped
clarify the specification as they wrote the code.

I defy you to find an INTER-net protocol without RTP/TCP/UDP, IP, and
media framing...

At the time, I only had 17 years of protocol implementation experience.
Another decade later, it still seems (to me) one of my better efforts.

Again, the ISAKMP flaws were foreseeable and avoidable.  And Photuris
was written before the existence of ISAKMP.

--
William Allen Simpson
Key fingerprint =  17 40 5E 67 15 6F 31 26  DD 0D B9 9B 6A 15 2C 32

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: ISAKMP flaws?

2005-11-17 Thread William Allen Simpson

Paul Hoffman wrote:
 At 2:29 PM -0500 11/15/05, Steven M. Bellovin wrote:
 I mostly agree with you, with one caveat: the complexity of a spec can
 lead to buggier implementations.

 Well, then we fully agree with each other. Look at the message formats
 used in the protocols they have attacked successfully so far.

 Humorously, security folks seem to have ignored this when designing our
 protocols.


Later, Peter Gutmann wrote:

In this particular case if the problem is so trivial and easily avoided, why
does almost every implementation (according to the security advisory) get it
wrong?


Quoting draft-simpson-danger-isakmp-01.txt, published (after being
blocked by the IETF for years) as:
  http://www.usenix.org/publications/login/1999-12/features/harmful.html

  A great many of the problematic specifications are due to the ISAKMP
  framework.  This is not surprising, as the early drafts used ASN.1,
  and were fairly clearly ISO inspired.  The observations of another
  ISO implementor (and security analyst) appear applicable:

The specification was so general, and left so many choices, that it
was necessary to hold implementor workshops to agree on what
subsets to build and what choices to make.  The specification
wasn't a specification of a protocol.  Instead,  it was a framework
in which a protocol could be designed and implemented.  [Folklore-00]

  [Folklore-00]  Perlman, R., Folklore of Protocol Design,
  draft-iab-perlman-folklore-00.txt, Work In Progress, January 1998.

Quoting Photuris: Design Criteria, LNCS, Springer-Verlag, 1999:

  The hallmark of successful Internet protocols is that they are
  relatively simple.  This aids in analysis of the protocol design,
  improves implementation interoperability, and reduces operational
  considerations.

Compare with Photuris [RFC-2522], where undergraduate (Keromytis) and
graduate (Spatscheck, Provos) students independently were able to
complete interoperable implementations (in their spare time) in a
month or so

So, no, some security folks didn't ignore this ;-)
--
William Allen Simpson
Key fingerprint =  17 40 5E 67 15 6F 31 26  DD 0D B9 9B 6A 15 2C 32

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


European country forbids its citizens from smiling for passport photos

2005-09-17 Thread William Allen Simpson

Do you really need to click on this link to know which one it is?

http://cbs5.com/watercooler/watercooler_story_258152613.html

I guess we should give neutral facial expressions for the photo, then
smile (or frown) while in the airport

Sounds like the technology (still) isn't ready for prime time.

(seen at http://isthatlegal.org)
--
William Allen Simpson
Key fingerprint =  17 40 5E 67 15 6F 31 26  DD 0D B9 9B 6A 15 2C 32

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: entropy depletion

2005-01-26 Thread William Allen Simpson
Ian G wrote:
The *requirement* is that the generator not leak
information.
This requirement applies equally well to an entropy
collector as to a PRNG.
Now here we disagree.  It was long my understanding
that the reason the entropy device (/dev/random)
could be used for both output and input, and blocked
awaiting more entropy collection, was the desire to
be able to quantify the result. 

Otherwise, there's no need to block.
For an entropy collector there are a number of ways
of meeting the requirement.
1.  Constrain access to the device and audit all
users of the device.
2.  set the contract in the read() call such that
the bits returned may be internally entangled, but
must not be entangled with any other read().  This
can trivially be met by locking the device for
single read access, and resetting the pool after
every read.  Slow, but it's what the caller wanted!
Better variants can be experimented on...
Now I don't remember anybody suggesting that before!  Perfect,
except that an attacker knows when to begin watching, and is assured
that anything before s/he began watching was tossed.
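
For concreteness, a minimal sketch of that contract (purely
illustrative; no claim that any real /dev/random works this way):

    import hashlib
    import threading

    class OneShotPool:
        def __init__(self):
            self._pool = b""
            self._lock = threading.Lock()

        def add(self, sample):
            with self._lock:
                self._pool += sample

        def read(self):
            # Single-reader access; the output is a one-way function of
            # the pool, and the pool is reset, so bits handed to this
            # read() are never entangled with any later read().
            with self._lock:
                out = hashlib.sha256(self._pool).digest()
                self._pool = b""
                return out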
In my various key generation designs using MD5, I've always used
MD-strengthening to minimize the entanglement between keys. 

There was MD5 code floating around for many many years that I wrote
with a NULL argument to force the MD-strengthening phase between
uses.  I never liked designs with bits for multiple keys extracted
from the same digest iteration output.
And of course, my IPsec authentication RFCs all did the same.  See
my IP-MAC design at RFC-1852 and RFC-2841.
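
To make the distinction concrete, here is a rough sketch (not the
RFC-1852 construction itself): each key gets its own finalized
(MD-strengthened) digest, rather than slicing several keys out of a
single digest output.  MD5 appears only to match the vintage of this
discussion, not as a recommendation:

    import hashlib

    def derive_keys(seed, count):
        keys = []
        for i in range(count):
            h = hashlib.md5()
            h.update(seed)
            h.update(i.to_bytes(4, "big"))   # per-key counter
            keys.append(h.digest())          # padding + length done per key
        return keys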
We are still left with the notion as Bill suggested
that no entropy collector is truly clean, in that
the bits collected will have some small element of
leakage across the bits.  But I suggest we just
cop that one on the chin, and stick in the random(5)
page the description of how reliable the device
meets the requirement.
(This might be a resend, my net was dropping all
sorts of stuff today and I lost the original.)
That's OK, the writing was clearer the second time around.
--
William Allen Simpson
   Key fingerprint =  17 40 5E 67 15 6F 31 26  DD 0D B9 9B 6A 15 2C 32
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: entropy depletion

2005-01-09 Thread William Allen Simpson
Ian G wrote:
(4A) Programs must be audited to ensure that they do not use
/dev/random improperly.
(4B) Accesses to /dev/random should be logged.
I'm confused by this aggressive containment of the
entropy/random device.  I'm assuming here that
/dev/random is the entropy device (better renamed
as /dev/entropy) and Urandom is the real good PRNG
which doesn't block post-good-state.
Yes, that's my assumption (and practice for many years).
If I take out 1000 bits from the *entropy* device, what
difference does it make to the state?  It has no state,
other than a collection of unused entropy bits, which
aren't really state, because there is no relationship
from one bit to any other bit.  By definition.  They get
depleted, and more gets collected, which by definition
are unrelated.
If we could actually get such devices, that would be good. 

In the real world, /dev/random is an emulated entropy device.  It hopes
to pick up bits and pieces of entropy and mashes them together.  In
common implementations, it fakes a guess of the current level of
entropy accumulated, and blocks when depleted. 

If there really were no relation to the previous output -- that is, a
_perfect_ lack of information about the underlying mechanism, such as
the argument that Hawking radiation conveys no information out of
black holes -- then it would never need to block, and there would
never have been a need for /dev/urandom!
(Much smarter people than I have been arguing about the information
theoretic principles of entropy in areas of physics and mathematics
for a very long time.) 

All I know is that it's really hard to get non-externally-observable
sources of entropy in embedded systems such as routers, my long-time
area of endeavor.  I'm happy to add in externally observable sources
such as communications checksums and timing, as long as they can be
mixed in unpredictable ways with the internal sources, to produce the
emulated entropy device.
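
A hedged sketch of that mixing (the source names are hypothetical):

    import hashlib

    class Pool:
        def __init__(self):
            self._state = b"\x00" * 32

        def mix(self, *samples):
            # Observable inputs (checksums, packet timing) only enter
            # the pool hashed together with the prior internal state,
            # so no single source reveals what was credited.
            h = hashlib.sha256(self._state)
            for s in samples:
                h.update(s)
            self._state = h.digest()

    # e.g. pool.mix(packet_checksum, interarrival_time, internal_counter)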
Because it blocks, it is a critical resource, and should be logged.
After all, a malicious user might be grabbing all the entropy as a
denial of service attack.
Also, a malicious user might be monitoring the resource, looking for
cases where the output isn't actually very random.  In my experience,
rather a lot of supposed sources of entropy aren't very good.

Why then restrict it to non-communications usages?
Because we are starting from the postulate that observation of the
output could (however remotely) give away information about the
underlying state of the entropy generator(s).
What does it matter if an SSH daemon leaks bits used
in its *own* key generation if those bits can never be
used for any other purpose?
I was thinking about cookies and magic numbers, generally transmitted
verbatim.  However, since we have a ready source of non-blocking keying
material in /dev/urandom, it seems to be better to use that instead of
the blocking critical resource
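
A trivial sketch of that practice:

    import os

    def new_cookie(length=16):
        # Non-blocking keying material from the kernel PRNG
        # (/dev/urandom), suitable for cookies and magic numbers
        # that are transmitted verbatim anyway.
        return os.urandom(length)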
--
William Allen Simpson
   Key fingerprint =  17 40 5E 67 15 6F 31 26  DD 0D B9 9B 6A 15 2C 32
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


US Court says no privacy in wiretap law

2004-07-01 Thread William Allen Simpson
Switches, routers, and any intermediate computers are fair game for 
warrantless wiretaps.

That is, at any time (the phrase seconds or mili-seconds [sic]) that 
the transmission is not actually on a wire.

Most important, read the very nicely written dissent.  The dissenting 
judge used the correct terms, referenced RFCs, and in general knew what 
he was talking about -- unlike the 2:1 majority!

http://www.ca1.uscourts.gov/pdf.opinions/03-1383-01A.pdf
  ... Under Councilman's narrow interpretation of the Act, the 
  Government would no longer need to obtain a court-authorized wiretap 
  order to conduct such surveillance. This would effectuate a dramatic 
  change in Justice Department policy and mark a significant reduction 
  in the public's right to privacy. 

Such a change would not, however, be limited to the interception 
  of e-mails. Under Councilman's approach, the government would be free 
  to intercept all wire and electronic communications that are in 
  temporary electronic storage without having to comply with the Wiretap 
  Act's procedural protections. That means that the Government could 
  install taps at telephone company switching stations to monitor phone 
  conversations that are temporarily stored in electronic routers 
  during transmission. 
  [page 51-52]

As this is a US Court of Appeals, it sets precedent that other courts 
will use, and directly applies to all ISPs in the NE US.
-- 
William Allen Simpson
Key fingerprint =  17 40 5E 67 15 6F 31 26  DD 0D B9 9B 6A 15 2C 32

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Feds admit error in hacking conviction

2003-10-17 Thread William Allen Simpson
http://news.com.com/2100-7348_3-5092697.html?tag=st_lh

Federal prosecutors asked a San Francisco appeals court this week to 
reverse a computer-crime conviction that punished a California man for 
notifying a company's customers of a flaw in the company's e-mail service. 

Filed on Tuesday in San Francisco's Ninth District Court of Appeals, the 
unusual request conceded that federal prosecutors in Los Angeles erred in 
bringing a criminal case against, and obtaining the conviction of, 
30-year-old Bret McDanel. The one-time system administrator has already 
served his 16-month sentence and is currently on supervised release, 
during which time his access to computers is curtailed. 
...

If the court agrees to overturn the conviction, it will remove a precedent 
that could have squelched the research of many security experts. The 
original conviction by U.S. District Judge Lourdes G. Baird determined 
that, by revealing a flaw in a system's security, a researcher could be 
accused of harming the system, a violation of computer crime laws.
...

Thom Mrozek, a spokesman for the U.S. attorney's office for the Central 
District of California said that prosecutors rarely ask for a reversal. 
"It's pretty damn rare," he said. "I have never seen it happen." 
...
-- 
William Allen Simpson
Key fingerprint =  17 40 5E 67 15 6F 31 26  DD 0D B9 9B 6A 15 2C 32

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Reliance on Microsoft called risk to U.S. security

2003-09-28 Thread William Allen Simpson
Jeroen C.van Gelderen wrote:
 
 On Saturday, Sep 27, 2003, at 15:48 US/Eastern,
 [EMAIL PROTECTED] wrote:
 
  You have not met my users!
 
 Indeed, but I'm here to learn :)
... 
 something is wrong. Why would she click YES?
... 
 Because I'm an optimist I believe that Alice will read the dialog and
 err on the side of caution. Maybe that isn't realistic. ...
 
 I agree that such composition must be intuitive or we cannot expect it
 to work. I think that CapDesk is a nice publicly available prototype of
 a workable capability desktop. It would be very interesting to see your
 assessment on whether a CapDesk approach would be workable for your
 users. And if it isn't, why not. I hope you can lend your experience.
 
OK, I'll lend mine.  With my ISP hat on, the vast majority of support 
calls have to do with users ignoring the content of M$ dialog boxes,  
hitting YES or OK, then calling when things don't work.  Admittedly, 
the text in those dialog boxes isn't particularly useful.  But this 
costs us a lot of good old hard cash.

Or with my personal hat on, my 15-year-old niece had an infected machine.  
Actually a multiply infected machine.  Took me several hours to clean up.  
And then I watched her check her yahoo mail, and click yes on the very 
next Norton/McAfee dialog box, reinfecting her Comcast connected machine 
before my very eyes. 

"Why?" I asked.  I just spent a lot of time fixing your machine, and 
explained what had gone wrong.  She says, "That message came from my 
best friend at school."

Of course it didn't.  But it probably came from another friend with 
them both in the address book.  And social engineering is a lot more 
powerful than any amount of training, no matter how very recent!

The answer to a technical problem is _not_ depending on user caution!
-- 
William Allen Simpson
Key fingerprint =  17 40 5E 67 15 6F 31 26  DD 0D B9 9B 6A 15 2C 32

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Attacking networks using DHCP, DNS - probably kills DNSSEC

2003-06-30 Thread William Allen Simpson
Steven M. Bellovin wrote:
 
 In message [EMAIL PROTECTED], Simon Josefsson writes:
 Of course, everything fails if you ALSO get your DNSSEC root key from
 the DHCP server, but in this case you shouldn't expect to be secure.
 I wouldn't be surprised if some people suggest pushing the DNSSEC root
 key via DHCP though, because alas, getting the right key into the
 laptop in the first place is a difficult problem.
 
 
 I can pretty much guarantee that the IETF will never standardize that,
 except possibly in conjunction with authenticated dhcp.
 
Would this be the DHCP working group that, on at least 2 occasions 
when I was there, insisted that secure DHCP wouldn't require a secret, 
since DHCP isn't supposed to require configuration?

And all I was proposing at the time was username, challenge, MD5-hash
response (very CHAP-like).  They can configure ARP addresses for 
security, but having both the user and administrator configure a per
host secret was apparently out of the question.
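
For reference, the CHAP-like exchange is tiny.  A sketch along the
lines of RFC-1994 (the per-host secret is configured out of band, which
was the sticking point):

    import hashlib
    import hmac
    import os

    def make_challenge(length=16):
        return os.urandom(length)

    def chap_response(identifier, secret, challenge):
        # One-octet identifier, then the shared secret, then the
        # challenge; the secret itself never crosses the wire.
        return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

    def verify(identifier, secret, challenge, response):
        expected = chap_response(identifier, secret, challenge)
        return hmac.compare_digest(expected, response)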
-- 
William Allen Simpson
Key fingerprint =  17 40 5E 67 15 6F 31 26  DD 0D B9 9B 6A 15 2C 32

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]