The general consensus is that for 500-bit numbers one needs only 6 MR
tests for 2^{-80} error probability [1]:
...
and thus a single test gives ~2^{-13}.
If you just took the exponent 80 and divided it by 6 to get ~13, I don't
think that is the right reasoning. Look at table 4.3 of the
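[A sketch of the test under discussion may help here. The worst-case bound is
that a composite survives one Miller-Rabin round with probability at most 1/4,
so 6 rounds only guarantee 4^-6 = 2^-12; the 2^-80 figure for random 500-bit
candidates rests on much stronger average-case estimates, which is why simply
dividing 80 by 6 is not the right reasoning. A minimal Python sketch of the
iterated test, with an assumed default of t=6 rounds:]

```python
import random

def miller_rabin_round(n, a):
    # One Miller-Rabin round with base a; returns False only if a
    # proves n composite (a is then called a "witness").
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(a, d, n)          # a^d mod n
    if x == 1 or x == n - 1:
        return True
    for _ in range(s - 1):
        x = pow(x, 2, n)      # successive squarings a^(d*2^i) mod n
        if x == n - 1:
            return True
    return False

def is_probable_prime(n, t=6):
    # t independent rounds with random bases; a composite passes all t
    # with probability at most 4^-t (worst case), far less on average.
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    return all(miller_rabin_round(n, random.randrange(2, n - 1))
               for _ in range(t))
```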
Perry E. Metzger wrote:
Steven M. Bellovin [EMAIL PROTECTED] writes:
Bruce Schneier's newsletter Cryptogram has the following fascinating
link: http://www.fas.org/irp/eprint/heath.pdf
It's the story of the effects of a single spy who betrayed keys and
encryptor designs.
[...]
One intriguing
On Tue, Nov 15, 2005 at 06:31:30PM -0500, Perry E. Metzger wrote:
Steven M. Bellovin [EMAIL PROTECTED] writes:
Bruce Schneier's newsletter Cryptogram has the following fascinating
link: http://www.fas.org/irp/eprint/heath.pdf
It's the story of the effects of a single spy who betrayed keys
On Tue, 15 Nov 2005, Perry E. Metzger wrote:
| Does the tension between securing one's own communications and
| breaking an opponent's communications sometimes drive the use of COMSEC
| gear that may be too close to the edge for comfort, for fear of
| revealing too much about more secure methods?
Posted on Bugtraq a few hours ago:
Subject: Schneier's PasswordSafe password validation flaw
From: info_at_elcomsoft.com
Date: Thu, November 17, 2005 1:27
Title : Schneier's PasswordSafe password validation flaw
Date : November 16, 2005
Product : PasswordSafe 1.x,
Travis writes:
The naive countermeasure to timing attacks is to add a random delay,
but of course that can be averaged out by repeating the computation.
I have never heard anyone propose a delay that is based on the input,
and maybe some per-machine secret, so that it is unpredictable but
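[The countermeasure Travis describes might be sketched as follows: derive the
delay from a keyed hash of the input under a per-machine secret, so that for a
given input the delay is always the same and repeating the query gains the
attacker nothing, yet without the secret the delay is unpredictable. The names
and parameters below are illustrative, not from any particular implementation:]

```python
import hmac, hashlib, time

# Assumed: a secret generated once per machine, e.g. at install time.
MACHINE_SECRET = b'example per-machine secret'

def keyed_delay(message: bytes, max_delay_s: float = 0.01) -> float:
    # Delay derived deterministically from the input and the per-machine
    # secret: fixed for a given input (so averaging over repeats cannot
    # remove it), but unpredictable without the secret.
    tag = hmac.new(MACHINE_SECRET, message, hashlib.sha256).digest()
    fraction = int.from_bytes(tag[:8], 'big') / 2**64   # uniform in [0, 1)
    return fraction * max_delay_s

def guarded_operation(message: bytes):
    time.sleep(keyed_delay(message))
    # ... perform the actual secret-dependent computation here ...
```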