RE: Fermat's primality test vs. Miller-Rabin

2005-11-16 Thread Anton Stiglic

The general consensus is that for 500-bit numbers one needs only 6 MR
tests for 2^{-80} error probability [1]:

... 

 and thus a single test gives ~2^{-13}.

If you just took the exponent 80 and divided it by 6 to get ~13, I don't
think that is the right reasoning.  Look at table 4.3 of the Handbook of
Applied Cryptography: for t = 1 (one iteration) and a 500-bit candidate,
we have probability P(X | Y_1) <= 2^{-56}, which is better than what you
concluded.  (X represents the event that the candidate n is composite,
Y_t the event that Miller-Rabin(n, t) declares n to be prime.)

The results in tables 4.3 and 4.4 of HAC are for randomly (uniformly)
chosen candidates, and I think you also need to do basic sieving first
(I don't remember if that is necessary, but I think it is).  Under these
conditions the strong pseudoprime test in fact does much better than a
1/4 error probability per iteration (the value of P(Y_t | X) is very
low).  This result is due to Damgard, Landrock and Pomerance, based on
earlier work of Erdos and Pomerance.
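
For concreteness, here is a minimal sketch in Python of Miller-Rabin
with basic sieving.  This is my illustration, not code from HAC or the
papers cited above; the function name and the small-prime bound are
arbitrary choices.

import random

# Small primes for the basic sieving step (the bound is illustrative).
SMALL_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]

def miller_rabin(n, t):
    # Returns True if n passes t iterations (n is probably prime).
    if n < 2:
        return False
    for p in SMALL_PRIMES:           # cheap trial division first
        if n % p == 0:
            return n == p
    d, s = n - 1, 0                  # write n - 1 = 2^s * d with d odd
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(t):               # t independent random bases
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False             # witness found: n is composite
    return True                      # worst-case error <= (1/4)^t; far
                                     # lower for random candidates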

--Anton




Re: the effects of a spy

2005-11-16 Thread Nicholas Bohm
Perry E. Metzger wrote:
 Steven M. Bellovin [EMAIL PROTECTED] writes:
 
Bruce Schneier's newsletter Cryptogram has the following fascinating 
link: http://www.fas.org/irp/eprint/heath.pdf
It's the story of the effects of a single spy who betrayed keys and
encryptor designs.

[...]

 One intriguing question that I was left with after reading the whole
 thing was not mentioned in the document at all. One portion of the
 NSA's role is to break other people's codes. However, we also have to
 assume that equipment would fall into the wrong people's hands at
 intervals, as happened with the Pueblo incident. If properly designed,
 the compromise of such equipment won't reveal communications, but
 there is no way to prevent it from revealing methods, which could then
 be exploited by an opponent to secure their own communications.
 
 Does the tension between securing one's own communications and
 breaking an opponent's communications sometimes drive the use of COMSEC
 gear that may be too close to the edge for comfort, for fear of
 revealing too much about more secure methods? If so, does the public
 revelation of Suite B mean that the NSA has decided it prefers keeping
 communications secure to breaking opposition communications?

Of historical interest on this question there is useful material in
Between Silk and Cyanide by Leo Marks.

Marks was responsible for ciphers used during WWII by SOE for
communications with agents in German-occupied Europe.  He describes an
episode when he was visited by people from Bletchley Park who were
concerned that he was equipping agents with ciphers that (he deduced)
were too strong for Bletchley Park to attack if they should fall into
German hands and come into use by them.

It is understandable, particularly during the Battle of the Atlantic,
that UK priorities should have been to maintain the availability of
breaks into enemy traffic even at the risk of hazarding communications
with agents.  (If Britain had failed in the Atlantic the war in the west
would have been over.  If SOE failed, there were no short-term
consequences of similar seriousness.)

The preservation of secrecy about those breaks for nearly thirty years
after the end of the war suggests that those priorities may have become
ossified, which may in turn account for excessive governmental anxieties
over the spread of strong cryptography.  Any change in these priorities
would be of great interest.

Nicholas Bohm
-- 
Salkyns, Great Canfield, Takeley,
Bishop's Stortford CM22 6SX, UK

Phone   01279 871272(+44 1279 871272)
Fax  020 7788 2198   (+44 20 7788 2198)
Mobile  07715 419728(+44 7715 419728)

PGP public key ID: 0x899DD7FF.  Fingerprint:
5248 1320 B42E 84FC 1E8B  A9E6 0912 AE66 899D D7FF



Re: the effects of a spy

2005-11-16 Thread lrk
On Tue, Nov 15, 2005 at 06:31:30PM -0500, Perry E. Metzger wrote:
 
 Steven M. Bellovin [EMAIL PROTECTED] writes:
  Bruce Schneier's newsletter Cryptogram has the following fascinating 
  link: http://www.fas.org/irp/eprint/heath.pdf
  It's the story of the effects of a single spy who betrayed keys and
  encryptor designs.
 
 Very interesting indeed. I was unaware that the military had such
 astonishingly bad key management practices. One wonders if things have
 actually improved.
 
Probably not. I'm an outsider listening in, but what I can hear seems to
say they are no better at key management, or at crypto gear which does
not get in the way of fast, reliable tactical communications.


 One thing one hopes has changed is that one hopes that it is no longer
 necessary for everyone to share the same keying material among so many
 different endpoints. Public key cryptography and key negotiation could
 (in theory) make it unnecessary to store shared secrets for long
 periods of time before use, where they are rendered vulnerable to
 espionage. One hopes that, over the last thirty years, this or
 something analogous has been implemented.

The term broadcast has a special meaning in the radio world. It is by
definition one-way. Thus the fleet broadcast was sent to all the ships,
and each picked out its own messages. Key negotiation probably was never
practical on those circuits.

The broadcast became available via satellite sometime in the sixties. It 
was 75 baud teletype. It is still there today.


 One intriguing question that I was left with after reading the whole
 thing was not mentioned in the document at all. One portion of the
 NSA's role is to break other people's codes. However, we also have to
 assume that equipment would fall into the wrong people's hands at
 intervals, as happened with the Pueblo incident. If properly designed,
 the compromise of such equipment won't reveal communications, but
 there is no way to prevent it from revealing methods, which could then
 be exploited by an opponent to secure their own communications.

I doubt the top-level equipment could fall into the wrong people's hands
as it is probably not in the field. The tactical systems don't need to be
as good since the information is not useful for very long.

With any luck, the EP-3 that landed in China did not give up as much info.
The CD-ROMs for loading the computers become unreadable after a few seconds
in the microwave oven. :)

 
 Does the tension between securing one's own communications and
 breaking an opponent's communications sometimes drive the use of COMSEC
 gear that may be too close to the edge for comfort, for fear of
 revealing too much about more secure methods? If so, does the public
 revelation of Suite B mean that the NSA has decided it prefers keeping
 communications secure to breaking opposition communications?

There is probably some level where this is considered, but there is
little indication that the military is not about as far behind the real
world as it has always been.

We also can hope the intel function has shifted from breaking diplomatic
and military communications to sifting out the gems from the pebbles in
the landslide of general telecomm.

And there is the problem of brainpower. The military and NSA probably have
less now than during real wars.

Note that by current standards, Alan Turing could not get a US security
clearance.



LRK



Re: the effects of a spy

2005-11-16 Thread leichter_jerrold
On Tue, 15 Nov 2005, Perry E. Metzger wrote:
| Does the tension between securing one's own communications and
| breaking an opponent's communications sometimes drive the use of COMSEC
| gear that may be too close to the edge for comfort, for fear of
| revealing too much about more secure methods? If so, does the public
| revelation of Suite B mean that the NSA has decided it prefers keeping
| communications secure to breaking opposition communications?
Remember Clipper?  It had an NSA-designed 80-bit encryption algorithm.  One
interesting fact about it was that it appeared to be very aggressively
designed.  Most published algorithms will, for example, use (say) 5 rounds
beyond the point where differential cryptanalysis stops giving you an
advantage.  Clipper, on the other hand, falls to differential cryptanalysis
if you use even one round less than the specification calls for.

Why the NSA would design something so close to the edge has always been
a bit of a mystery (well, to me anyway).  One interpretation is that NSA
simply has a deeper understanding than outsiders of where the limits
really are.  What to us looks like aggressive design is, to them,
reasonable and even conservative.

Or maybe ... the reasoning Perry mentions above applies here.  Any time
you field a system, there is a possibility that your opponents will get
hold of it.  In the case of Clipper, where the algorithm was intended to
be published, there's no possibility about it.  So why make it any
stronger than you have to?

Note that it still bespeaks a great deal of confidence in your
understanding of the design to skate *that* close to the edge.  One hopes
that confidence is actually justified for cryptosystems:  it turned out
that on the key escrow side of the protocol design, NSA actually fell
over the edge, and there was a simple attack (Matt Blaze's work, as I
recall).

-- Jerry




[Clips] Sony DRM infection removal vulnerability uncovered

2005-11-16 Thread R. A. Hettinga

--- begin forwarded text


 Delivered-To: [EMAIL PROTECTED]
 Date: Wed, 16 Nov 2005 12:55:50 -0500
 To: Philodox Clips List [EMAIL PROTECTED]
 From: R. A. Hettinga [EMAIL PROTECTED]
 Subject: [Clips] Sony DRM infection removal vulnerability uncovered
 Reply-To: [EMAIL PROTECTED]
 Sender: [EMAIL PROTECTED]

 http://www.theinquirer.net/print.aspx?article=27714&print=1


 The Inquirer

 Sony DRM infection removal vulnerability uncovered

 Tool is worse than original infection

 By:  Charlie Demerjian  Tuesday 15 November 2005, 20:45

 SONY PULLS OFF ANOTHER blatant stupidity in the 'cure is worse than the
 disease' category. No, not the DRM infection itself, not the security
 compromising removal agreement, but the removal tool itself. Yes, this one
 appears to put you in MORE danger than the original rootkit. Silly Sony, no
 cookie.

 According to Freedom to Tinker, the web-based installer is a worse
 vulnerability than the original rootkit. More on the story here; FTT goes
 into detail. It seems the 'cure' from Sony involves downloading an ActiveX
 control called CodeSupport. This is a signed control that lets just about
 anyone download, install and execute arbitrary code on your machine.

 See a problem? See a big problem? To make matters even funnier, the
 uninstaller, supposedly anyway, leaves this control on your machine. So
 the Sony uninstaller is not a total uninstaller; it leaves a hole you can
 drive a truck through on your system, silently of course.

 The more disturbing part is that it appears the control is signed. I
 wonder who at MS approved this, and how this blatant security hole got
 through the barest minimum of QC. Moral: if you bought Sony products, you
 are screwed. If it causes you problems, you are screwed more. If you
 uninstall, you are screwed yet harder. If you uninstall it yourself, you
 are a criminal under the DMCA. If you use an antivirus program to uninstall
 it, you spent money to fix Sony's problems, and you are still a criminal.
 That's what you get for buying music. µ





 --
 -
 R. A. Hettinga mailto: [EMAIL PROTECTED]
 The Internet Bearer Underwriting Corporation http://www.ibuc.com/
 44 Farquhar Street, Boston, MA 02131 USA
 ... however it may deserve respect for its usefulness and antiquity,
 [predicting the end of the world] has not been found agreeable to
 experience. -- Edward Gibbon, 'Decline and Fall of the Roman Empire'
 ___
 Clips mailing list
 [EMAIL PROTECTED]
 http://www.philodox.com/mailman/listinfo/clips

--- end forwarded text


-- 
-
R. A. Hettinga mailto: [EMAIL PROTECTED]
The Internet Bearer Underwriting Corporation http://www.ibuc.com/
44 Farquhar Street, Boston, MA 02131 USA
... however it may deserve respect for its usefulness and antiquity,
[predicting the end of the world] has not been found agreeable to
experience. -- Edward Gibbon, 'Decline and Fall of the Roman Empire'



[Bugtraq] Schneier's PasswordSafe password validation flaw

2005-11-16 Thread Kerry Thompson
Posted on Bugtraq a few hours ago:

Subject:   Schneier's PasswordSafe password validation flaw
From:   info_at_elcomsoft.com
Date:   Thu, November 17, 2005 1:27

Title : Schneier's PasswordSafe password validation flaw
Date  : November 16, 2005
Product   : PasswordSafe 1.x, 2.x
Discovered by : ElcomSoft Co.Ltd.


Overview
==

PasswordSafe is a program originally written by security expert
Bruce Schneier (http://www.schneier.com) that allows one to store
users' passwords in a single file (called a safe) which is
encrypted and protected by the user's master password (called the
Safe Combination) using the Blowfish encryption algorithm. As noted
on the PasswordSafe web page, the program's security has been thoroughly
verified by Counterpane Labs under the supervision of Bruce Schneier,
author of Applied Cryptography and creator of the Blowfish algorithm.

As noted in the Password Safe FAQ, there is no back door in
PasswordSafe to recover your Safe Combination, but there is a
password-guessing program that some people have used successfully.
The program works by going through a list of possible passwords
and checking each one.

However, there is a design flaw in PasswordSafe that allows an
attacker to perform Safe Combination validation several times faster
than the author intended, which makes brute-force and dictionary
attacks much more effective.

Details
==

As described in the PasswordSafe documentation, the PasswordSafe database
has the following format:

RND|H(RND)|SALT|IP|Name1|Password1|Notes1|...|NameN|PasswordN|NotesN

where

RND    : 8-byte (64-bit) random value
H(RND) : hash value which depends on the password, used along
         with RND to check password (Safe Combination) validity
SALT   : 20-byte random value used in key derivation
IP     : 8-byte (64-bit) initialization vector used in the
         encryption/decryption process
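
For concreteness, the fixed header fields might be read like this in
Python (a sketch, not code from the advisory; the function name is mine,
and the 20-byte size assumed for H(RND) matches a SHA-1 output):

def read_header(f):
    # Field order follows the format string above.
    rnd   = f.read(8)    # RND: 8-byte random value
    h_rnd = f.read(20)   # H(RND): assumed SHA-1-sized (20 bytes)
    salt  = f.read(20)   # SALT: 20-byte key-derivation salt
    ip    = f.read(8)    # IP: 8-byte initialization vector
    return rnd, h_rnd, salt, ip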

PasswordSafe verifies password validity in the following way
('|' denotes concatenation):

bf_key = sha1 (RND | { 0x00, 0x00 } | PASSWORD);     /* Blowfish key */
bf_block = RND;
for (i = 0; i < 1000; i++)                           /* key stretching */
  bf_block = blowfish_encrypt (bf_block, bf_key);
finalhash = sha1_mod (bf_block | { 0x00, 0x00 });

Then 'finalhash' is compared to 'H(RND)'; if they're equal, the
password is considered correct.

In the pseudocode above, sha1_mod() denotes the usual SHA-1
computation but with a zeroed initial state (this seems to be an
implementation error).

The above key derivation function (KDF) uses the so-called
key-stretching method to withstand password-guessing attacks.
This method was introduced in 1997 by Schneier, Kelsey, Hall
and Wagner in the paper Secure Applications of Low-Entropy Keys.

However, PasswordSafe contains a design flaw which allows an
attacker to verify password validity without computing the
(relatively slow) KDF.

All records in the PasswordSafe database are encrypted with the
Blowfish algorithm in CBC (Cipher Block Chaining) mode.
According to the documentation, the first block contains the
length (in bytes) of the encrypted data, stored as a 32-bit (4-byte)
unsigned integer; the fifth byte holds the type value for the current
record (in PasswordSafe 1.x, it is always zero); and the three
remaining bytes are zeros.

The encryption key is derived from the user's password simply by
computing sha1(PASSWORD | SALT). Note that this is much
simpler and faster than the KDF described above.

To check a password for validity, the attacker can simply
calculate this encryption key, decrypt the first encrypted block,
and check whether the three most significant bytes are all zero.
The probability of this occurring for a random (wrong) password
is about 2^(-24). Only if they are zero does the attacker check
the candidate password with the full KDF. Since the full KDF will
be called rarely (roughly once per 16 million passwords), this
protection against password-guessing attacks becomes absolutely
useless.

With PasswordSafe 2.x, a slightly more effective attack is
possible. The first record of a PasswordSafe 2.x database
always has a fixed length and type (i.e. the full plaintext block
is known), and this allows passwords to be checked with a
false-match probability of only 2^(-64).
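
To make the speedup concrete, here is a minimal sketch in Python of the
fast rejection test against a 1.x-style database, as described above.
This is my illustration, not ElcomSoft's code: it assumes the
PyCryptodome library for Blowfish, and the function name, argument
layout and exact byte positions checked are assumptions.

import hashlib
from Crypto.Cipher import Blowfish   # PyCryptodome

def survives_fast_check(password, salt, iv, first_ct_block):
    # Cheap record key: sha1(PASSWORD | SALT), no 1000-round stretching.
    key = hashlib.sha1(password + salt).digest()
    # Decrypt only the first 8-byte CBC block of the first record.
    block = Blowfish.new(key, Blowfish.MODE_CBC, iv=iv).decrypt(first_ct_block)
    # The format guarantees three zero bytes in this block; for a wrong
    # password they are all zero with probability about 2^(-24).
    return block[5:8] == b"\x00\x00\x00"

Only candidate passwords for which survives_fast_check() returns True
need the full 1000-iteration KDF, so roughly one wrong password in 16
million incurs the expensive check.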

Impact
==

PasswordSafe is used to store sensitive data, and so the presence
of such flaws may help attacker to disclose user's logins,
passwords and PINs by implementing fast and effective brute-force
and dictionaery attacks.

Solution/workaround
==

No known solution is available at the time of publishing this
advisory.

Users should use strong passwords or passphrases. We recommend
using random alphanumeric passwords no shorter than 8 characters.

References
==

Bruce Schneier - Password Safe
http://www.schneier.com/passsafe.html

Password Safe FAQ
http://www.schneier.com/passsafe-faq.html

SourceForge.net: Project Info - Password Safe

timing attack countermeasures (nonrandom but unpredictable delays)

2005-11-16 Thread David Wagner
Travis writes:
The naive countermeasure to timing attacks is to add a random delay,
but of course that can be averaged out by repeating the computation. 
I have never heard anyone propose a delay that is based on the input,
and maybe some per-machine secret, so that it is unpredictable but
constant.  Of course this wouldn't solve averaging across different
inputs from some subset, but it would prevent averaging on the same
value.

Sadly, I don't think this works.

In many cases, the observed time depends both on the input and on some
other random noise.  In such cases, averaging attacks that use the same
input over and over again will continue to work, despite the use of
a pseudorandom input-dependent delay.  For instance, think of a timing
attack on AES, where the time to compute the map X |--> AES(K,X) depends
only on K and X, but where the measured time depends on the computation
time (which might be a deterministic function of K and X) plus the
network latency (which is random).  Indeed, in this example even the
computation time might not be a deterministic function of K and X: it
might depend on the state of the cache, which might have some random
component.

And there are many settings where you can average across many slightly
different inputs.

So I don't think this kind of approach is likely to go anywhere, except
possibly in a few specialized settings.

In many cases, a better defense than random delays is to make the total
time constant and independent of the inputs.  Then averaging is pointless.
It's not always possible, but when it is, it's worth considering.
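
As a toy illustration of that last point (my sketch, nothing from the
posts above): run the secret-dependent work, then pad out to a fixed
deadline chosen above the worst-case running time.  The deadline value
and function name here are arbitrary assumptions.

import time

DEADLINE = 0.010   # seconds; must exceed the worst-case compute time

def fixed_time_call(fn, *args):
    # Do the secret-dependent work, then sleep away the remaining
    # budget so the externally observed time is (roughly) constant.
    start = time.monotonic()
    result = fn(*args)
    remaining = DEADLINE - (time.monotonic() - start)
    if remaining > 0:
        time.sleep(remaining)
    return result

Note that timer granularity and microarchitectural effects (e.g. cache
state) can still leak a little, so this only removes the coarse
dependence of observed time on the inputs.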
