Crypto expert: Microsoft flaw is serious

2005-01-27 Thread R.A. Hettinga


Techworld.com -  

27 January 2005
Crypto expert: Microsoft flaw is serious
Microsoft should sort flaw and abandon RC4 in favour of better ciphers,
says PGP creator.


By John E. Dunn, Techworld

Cryptography expert Phil Zimmermann has said he believes the flaw
discovered in Microsoft's Word and Excel encryption is serious and warrants
immediate attention.

"I think this is a serious flaw - it is highly exploitable. It is not a
theoretical attack," said Zimmermann, referring to a  flaw  in Microsoft's
use of RC4 document encryption unearthed recently by a researcher in
Singapore.

 "The lay user ought to be entitled to assume that the encryption produced
by Microsoft is adequate. [Š] If Microsoft wants to earn the respect of the
cryptographic community and the public it must rise to the occasion by
producing competent security."

Microsoft has been dismissive of the seriousness of the flaw, which relates
to the way it has implemented the RC4 encryption stream cipher. As
explained by Hongjun Wu of the Institute for Infocomm Research, it would
allow anyone able to obtain two or more versions of the same document,
encrypted under the same password, to recover their contents, because the
identical RC4 keystream is reused for each version.

"Stream ciphers have to be used most carefully. Any failure to do this will
result in a disastrous loss of security," Zimmermann said. "Even with a
properly chosen initialisation vector, you have to run it for a while
before the quality of the stream cipher is good enough to use." Contrary to
Microsoft's claims that the issue was a "very low threat", he countered
that gaining access to a document would not present problems for a
determined hacker. "There are tools one can use to cryptanalyse messages in
this way."

Even if the flaw were fixed, in his view a more fundamental problem would
remain: Microsoft's use of RC4, licensed from RSA Security.

"Why does Microsoft continue to use RC4 in this day and age? It has other
security flaws that have been published in other papers," adding that "RC4
is a proprietary cipher and has not stood up well to peer review. They
should just stop using RC4. It would be better to switch to a block cipher."

When contacted, Microsoft was unable to commit to a timescale for
correcting the flaw but issued the following statement by way of a
spokesperson: "Microsoft is still investigating this report of a possible
vulnerability in Microsoft Office. When that investigation is complete, we
will take the appropriate actions to protect customers. This may include
providing a security update through our monthly release process."

Zimmermann, meanwhile, emphasised the need for responsible disclosure of
such problems. "The best way is to quietly disclose the problem to the
vendor and then allow the vendor 30 days to fix the problem. Then go
public," he said.

Phil Zimmermann is best-known as the creator of Pretty Good Privacy (PGP),
a desktop encryption program that was powerful enough that the US
authorities attempted to have its distribution stopped and Zimmermann
imprisoned for writing it. The case was abandoned in 1996. PGP was bought out
by Network Associates, though an independent company, PGP Corporation, has
since been spun out to develop its core technology.

-- 
-
R. A. Hettinga 
The Internet Bearer Underwriting Corporation 
44 Farquhar Street, Boston, MA 02131 USA
"... however it may deserve respect for its usefulness and antiquity,
[predicting the end of the world] has not been found agreeable to
experience." -- Edward Gibbon, 'Decline and Fall of the Roman Empire'



Re: entropy depletion

2005-01-27 Thread John Kelsey
>From: William Allen Simpson <[EMAIL PROTECTED]>
>Sent: Jan 11, 2005 1:48 PM
>To: cryptography@metzdowd.com
>Subject: Re: entropy depletion

>Ben Laurie wrote:
>> Surely observation of /dev/urandom's output also gives away information?
>>
>ummm, no, not by definition.

>/dev/random
> blocks on insufficient estimate of stored entropy
>  useful for indirect measurement of system characteristics
>  (assumes no PRNG)

>/dev/urandom
>  blocks only when insufficient entropy for initialization of state
>  computationally infeasible to determine underlying state
>  (assumes robust PRNG)

So, the big issue here is that we're counting on a cryptographic algorithm
both to provide full-entropy outputs and to mask the different outputs from
one another.  There's no guarantee that it can do either.  That is, even if another 
160 bits of entropy have been put into the pool, there's no guarantee that 
there will be no relationship between the next 80 bit output and the last one.  
That depends on your beliefs about SHA1, and about unproven properties of it. 
(It's been a long time since I've looked at the algorithm used by /dev/random, 
but I think there are some narrow pipe issues there which might limit the total 
entropy that can affect a sequence of outputs from a sequence of inputs.)  

>William Allen Simpson

--John Kelsey



Re: entropy depletion

2005-01-27 Thread John Kelsey
>From: "Steven M. Bellovin" <[EMAIL PROTECTED]>
>Sent: Jan 11, 2005 10:58 AM
>To: cryptography@metzdowd.com
>Subject: Re: entropy depletion 

>Let me raise a different issue: a PRNG might be better *in practice* 
>because of higher assurance that it's actually working as designed at 
>any given time.

This is a good point.  In the ANSI X9.82 work we've been doing (working on a 
standard for random number generation for cryptography), we kind-of make a 
continuum:

PRNGs seeded once --> PRNGs with live entropy sources --> full entropy PRNGs 

The idea here is that you can use a PRNG algorithm in a mode where it's seeded 
once at the factory and runs forever, or where it has access to an entropy 
source but has to produce output bits faster than the entropy source can, or 
where it produces outputs that include as many bits of entropy as bits of 
output.  Any good PRNG algorithm can be run in all three of these modes, with a 
bit of thought.  (We have our own terminology for all this in X9.82; we call a 
PRNG a "DRBG" and a random bit generator producing full entropy an "NRBG".)  

We also distinguish among full-entropy RNGs that include a strong PRNG and 
those that are pure hardware based.  When you're running the PRNG in a 
full-entropy mode (we give constructions for this) you get a guaranteed 
fallback to a secure PRNG even if your entropy source fails.  If you're using a 
pure hardware-based RNG and the hardware fails, you're out of luck.  

>To me, the interesting question about, say, Yarrow is not how well it 
>mixes in entropy, but how well it performs when there's essentially no 
>new entropy added.  Clearly, we need something to seed a PRNG, but what 
>are the guarantees we have against what sorts of threats if there are 
>never any new true-random inputs?  

If there's really no entropy ever entered, then no PRNG algorithm can help you. 
 If we ever get to an unguessable state, then Yarrow should (barring some 
clever cryptanalysis) stay in a secure state for as long as we need to use it.  
The tricky bits seem to happen in the middle--when the entropy trickles in at a 
slower rate than expected.  That's what Yarrow's two pool reseeding strategy is 
for, and what Niels Ferguson's Fortuna design does in a pretty-close-to-optimal 
way.  I think these strategies are interesting, but as I've worked on X9.82, I 
have become a lot more concerned with getting the PRNG to a secure starting 
point than with recovering later.  Recovering is important, too, but a lot of 
real-world systems use their first PRNG state to generate their high-value 
signing key, or the session key used to communicate their high-value secrets to 
some server, or whatever.

>   --Prof. Steven M. Bellovin, http://www.cs.columbia.edu/~smb

--John Kelsey



Re: Simson Garfinkel analyses Skype - Open Society Institute

2005-01-27 Thread Ian G
Joseph Ashwood wrote:
[Good analysis!  Snipped...]
Working against them. The biggest thing working against them is that a 
growing number of teenagers are using Skype (a significant portion of 
Gunderson High School in San Jose, CA actually uses Skype during 
class, and has been busted by me for it). This poses a substantial 
risk of common hacking occurring, and it is unclear to me whether Skype 
has prepared for it. As the general populace begins to use Skype more, 
the security question becomes more important (witness the attacks on 
Windows that go on every day).

I would say that the threat of a bunch of teenagers
cracking Skype would not be read as much of a threat
in my book.  What are they going to do, listen to each
other's teenagerish calls?  If they break it, well and good,
as it's one community you can expect to share the break
quickly, which will give Skype the requisite kick up the
backside.
iang
--
News and views on what matters in finance+crypto:
   http://financialcryptography.com/


Re: Driver's license scandals raise national security worries

2005-01-27 Thread Russell Nelson
R.A. Hettinga writes:
 > Similar scams have occurred around the country:
 > 
 > _ In New Jersey, nine state motor vehicle employees pleaded guilty to a
 > scheme that involved payoffs for bogus licenses.
 > 
 > _ In Illinois, a federal investigation into the trading of bribes for
 > driver's licenses led to dozens of convictions and the indictment of former
 > Gov. George Ryan on racketeering and other charges.
 > 
 > _ In Virginia, more than 200 people are losing their licenses because of
 > suspected fraud by a former Department of Motor Vehicles worker who
 > allegedly sold licenses for as much as $2,500 each.

This is why we need a national identification card.

It's also why we don't need a national identification card.

The same evidence leads to two different conclusions depending on what
you had already concluded was true.  Reminds me of listening to Alan
Greenspan.  :-)

-- 
--My blog is at angry-economist.russnelson.com  | Freedom means allowing
Crynwr sells support for free software  | PGPok | people to do things the
521 Pleasant Valley Rd. | +1 315-323-1241 cell  | majority thinks are
Potsdam, NY 13676-3213  | +1 212-202-2318 VOIP  | stupid, e.g. take drugs.



Re: entropy depletion

2005-01-27 Thread Daniel Carosone
On Tue, Jan 11, 2005 at 03:48:32PM -0500, William Allen Simpson wrote:
> >2.  set the contract in the read() call such that
> >the bits returned may be internally entangled, but
> >must not be entangled with any other read().  This
> >can trivially be met by locking the device for
> >single read access, and resetting the pool after
> >every read.  Slow, but it's what the caller wanted!
> >Better variants can be experimented on...
>
> Now I don't remember anybody suggesting that before!  Perfect,
> except that an attacker knows when to begin watching, and is assured
> that anything before s/he began watching was tossed.

The point is interesting and well made, but I'm not sure it was
intended as a concrete implementation proposal.  Still, as long as
it's floating by..

Rather than locking out other readers from the device (potential
denial-of-service, worse than can be done now by drawing down the
entropy estimator?), consider an implementation that made random a
cloning device.  Each open would create its own distinct instance,
with unique state history.

One might initialise the clone with a zero (fully decoupled, but
potentially weak as noted above) state, or with a copy of the base
system device state at the time of cloning, but with a zero'd
estimator. From there, you allow it to accumulate events until the
estimator is ready again.  Perhaps you clone with a copy of the state,
and allow an ioctl to zero the private copy.. perhaps you only allow a
clone open to complete once the base generator has been clocked a few
times with new events since the last clone.. many other ideas.  Most
implementations allow data to be written into the random device, too,
so a process with a cloner open could add some of its own 'entropy' to
the mix for its private pool, if it wanted additional seeding
(usually, such writes are not counted by the estimator).

The base system device accumulates events while no cloners are
open. You'd need some mechanism to distribute these events amongst
accumulating clones such that they'd remain decoupled.

It's an interesting thought diversion, at least - but it seems to have
as many potential ways to hurt as to help, especially if you're short
of good entropy events in the first place.

Like smb, frankly I'm more interested in first establishing good
behaviour when there's little or no good-quality input available, for
whatever reason.

--
Dan.

