Re: [Cryptography] prism-proof email in the degenerate case

2013-10-10 Thread lists
 Having a public bulletin board of posted emails, plus a protocol
 for anonymously finding the ones your key can decrypt, seems
 like a pretty decent architecture for prism-proof email.
 The tricky bit of crypto is in making access to the bulletin
 board both efficient and private.

This idea has been around for a while but not built AFAIK.

Re: Crypto dongles to secure online transactions

2009-11-16 Thread lists
Ben Laurie benl writes:

 Anyway, I should mention my own paper on this subject (with Abe
 Singer) from NSPW 2008, Take The Red Pill _and_ The Blue Pill:

In writing on page 2 that you do not need to secure what you
put in an Amazon shopping basket until you come to arrange
payment and delivery, you may be overlooking some things.

Amazon's future recommendations are affected by what has
been put in your basket, even if it is removed later.

A compromised browser could show false prices and availability,
causing you to choose expensive used goods from a crook and
never discover cheaper sources.

The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]

Re: Unattended reboots (was Re: The clouds are not random enough)

2009-08-03 Thread lists
Arshad Noor arshad.noor wrote:

 to the keys, in order for the application to have access to the keys in
 the crypto hardware upon an unattended reboot, the PINs to the hardware
 must be accessible to the application.  If the application has automatic
 access to the PINs, then so does an attacker who manages to gain entry
 to the machine.

 If you (or anyone on this forum) know of technology that allows the
 application to gain access to the crypto-hardware after an unattended
 reboot - but can prevent an attacker from gaining access to those keys
 after compromising a legitimate ID on the machine - I'd welcome hearing
 about it.  TIA.

You could have a device that uses the keys only once for each time
it is powered on, and see that the intended process uses it early
in the boot process to answer the challenge of whatever it's
authenticating to.  This could be simulated in s/w using something
such as BSD securelevel or having a different sudoers file for
one part of the boot process.

Then you're going to want only cold reboots, and if your device doesn't
work when expected you'd wonder whether someone beat you to it.


Re: Decimal encryption

2008-08-27 Thread lists
Philipp G├╝hring wote:

 I am searching for symmetric encryption algorithms for decimal strings.
 Let's say we have various 40-digit decimal numbers:
 As far as I calculated, a decimal has the equivalent of about 3,3219
 bits, so with 40 digits, we have about 132,877 bits.

English readers normally use . as the decimal point - you had me confused
for a few seconds and maybe it wasn't only me.

Regardless of the calculated bit-equivalent you aren't storing these strings
in 132.877 bits - but possibly 40*8 bits, 40*4 bits or in some other way.
 Now I would like to encrypt those numbers in a way that the result is a
 decimal number again (that's one of the basic rules of symmetric
 encryption algorithms as far as I remember).

I don't think that's a feature of the encryption as such.
 Since the 132,877 bits is similar to 128 bit encryption (like eg. AES),
 I would like to use an algorithm with a somewhat comparable strength to AES.
 But the problem is that I have 132,877 bits, not 128 bits. And I can't
 cut it off or enhance it, since the result has to be a 40 digit decimal
 number again.

This sounds like possible confusion over block length and key size.  Then
you get involved in padding and storage of a slightly larger ciphertext.
 Does anyone know an algorithm that has reasonable strength and is able
 to operate on non-binary data? Preferably on any chosen number-base?

It sounds as if you want a stream cipher arrangement that you could make
out of a normal binary stream cipher by:
   read a byte of the keystream
   if it is > 9, reject it and take the next one (aiming for uniform distribution)
   if the value is [0-9], add it to the current plaintext digit mod 10


Re: Looking through a modulo operation

2008-07-23 Thread lists

Matt Ball matt.ball wrote

 Here is a C implementation of __random32:
 typedef unsigned long u32;
 struct rnd_state { u32 s1, s2, s3; };
 static u32 __random32(struct rnd_state *state)
 #define TAUSWORTHE(s,a,b,c,d) ((s & c) << d) ^ (((s << a) ^ s) >> b)
 state->s1 = TAUSWORTHE(state->s1, 13, 19, 4294967294UL, 12);
 state->s2 = TAUSWORTHE(state->s2,  2, 25, 4294967288UL, 4);
 state->s3 = TAUSWORTHE(state->s3,  3, 11, 4294967280UL, 17);
 return (state->s1 ^ state->s2 ^ state->s3);

I see TAUSWORTHE (briefly tested with the above constants) isn't a
permutation of the 32-bit input state and is going to get very dull
when s is 0.


Re: Lack of fraud reporting paths considered harmful.

2008-01-25 Thread lists
Perry wrote:

 His firm routinely discovers attempted credit card fraud. However,
 since there is no way for them to report attempted fraud to the credit
 card network (the protocol literally does not allow for it), all they
 can do is refuse the transaction -- they literally have no mechanism
 to let the issuing bank know that the card number was likely stolen.

A former boss has become Head of Fraud Technology (I asked him who
was Head of Anti-Fraud Technology) and he answers like this.

  I am not really a cards man but I would have said the good
  old telephone, a call to the acquirer, would be the way. The
  acquirer would then pass that on to the issuer. Granted the
  merchant may not know for certain that had happened, but he
  has done his duty at that point.


Re: Death of antivirus software imminent

2008-01-14 Thread lists

From: Alex Alten [EMAIL PROTECTED]

Writing in support of CALEA capability to assist prosecuting botnet
operators etc ...

 Generally any standard encrypted protocols will probably eventually have
 to support some sort of CALEA capability.

So you haven't heard that the UK has closed down the National High-Tech Crime
Unit, and that the current way to report computer crime is at your police
station (good luck with that).  And there's not much sign of anyone else doing much better.
Here's some recent news:

Leaving aside the points others have made about how you can't expect the
cooperation of the crooks you are supposedly aiming for what staggers me
is that after 9 years on this list you still think the government -
any government - is looking out for your interests.

Also why is this thread called Death of antivirus?  What examples can
anyone give me in corporate or mass-market IT of people stopping doing
something merely because it didn't work?


No PAL please, we're British

2007-11-15 Thread lists
According to this BBC story, until fairly recently the British
military refused to have PALs on its nuclear weapons.


Re: Password hashing

2007-10-13 Thread lists
This does not extend the discussion at hand, but it might be useful to
some here who may have to deal with FIPS 140-2.

On 13 Oct 2007 09:32:44 +1000, Damien Miller wrote:
 Some comments:
 * Use of an off-the-shelf algorithm like SHA1 might be nice for tick here
   for FIPS certification, but they render the hashing scheme more
   vulnerable to dictionary attacks assisted by (near-)commodity hardware.
   Contrast with OpenBSD's blowfish scheme, which is deliberately designed
   to not be implementable using off-the-shelf crypto accelerator chips.

Although there are password hashing mechanisms built from FIPS-approved
algorithms (e.g., SHA-1 HMAC), there are no FIPS-approved password
hashing mechanisms specifically defined. Meaning, I think there is some
room to move here.

Now, assuming passwords are a critical security parameter (CSP) for a
module, password hashing built from non-FIPS-approved algorithms
automatically means the generated password hashes are considered to be
CSPs in the clear for FIPS 140-2 purposes (i.e., the password hashes are
just considered to be an obfuscated form of the plaintext password), and so
we have to deal with the requirements revolving around plaintext CSPs
for those password hashes. Inside of the cryptographic boundary of a
module, CSPs can be maintained in plaintext, as they are considered
protected by the security mechanisms of the module, which gives us a
foothold for using such password hashing mechanisms.

However, while the passwords are considered in the clear, the fact they
are undergoing password hashing is not ignored - the authentication
mechanism must still meet the applicable authentication requirements of
FIPS 140-2, so the password hashing must not cause the password-based
authentication to fail to meet those FIPS 140-2 requirements. And, I
think password hashing mechanisms built from non-FIPS-approved
algorithms can still be used to help meet some FIPS 140-2 authentication
requirements - e.g., I can envision bcrypt being configured such that,
given a particular module's hardware, it slows down authentication
attempts sufficiently to satisfy some strength-of-authentication requirement.



Re: Full Disk Encryption solutions selected for US Government use

2007-10-10 Thread lists
On 8 Oct 2007 10:12:58 -0700, Stephan Somogyi wrote:
 At 02:11 +1300 09.10.2007, Peter Gutmann wrote:
 But if you build a FDE product with it you've got to get the entire product
 certified, not just the crypto component.
 I don't believe this to be the case.
 FIPS 140(-2) is about validating cryptographic implementations. It is 
 not about certifying entire products that contain ample functionality 
 well outside the scope of cryptographic evaluation. That's more of a 
 Common Criteria thing.

Yes, but an FDE product built on the OSSL FIPS module would not likely
meet the FIPS 140-2 check box, as there is potentially more FIPS 140-2
relevant functionality in the FDE product beyond what was validated in
the OSSL module, such as, say, the whole key life cycle for the FDE
product. That does not necessarily mean all of the FDE product is FIPS
relevant, so perhaps the FIPS relevant functionality in the FDE product
could be self-contained and validated by itself, or perhaps the whole
FDE product could be validated and the irrelevant functionality just
excluded from the FIPS requirements, etc. (As Gutmann says though,
vendors sometimes successfully employ a bit of hand-waving here, so you
never quite know what will satisfy the FIPS check box.)

 At 14:04 +0100 08.10.2007, Ben Laurie wrote:
 ? OpenSSL has FIPS 140.
 OpenSSL FIPS Object Module 1.1.1 has FIPS 140-2 when running on SUSE 
 9.0 and HPUX 11i, according to
 In the context of a conversation about whether something formally has 
 FIPS validation or not, the details are important.

Yes, the details are important. The OSSL FIPS module was tested on those
platforms, but is vendor affirmed on other platforms, assuming the
module meets the vendor affirmation requirements described in the FIPS
140-2 implementation guidance on a given other platform.



Re: Scare tactic?

2007-09-21 Thread lists

Ivan Krstic

 ... But hey, if the peer is malicious or compromised to begin with,
 it could just as well do DH normally and explicitly send the secret
 to the listener when it's done. Not much to see here.

But it gets more interesting if the endpoints are not completely and
solely controlled by Alice and Bob.  Suppose the computers and communication
link are protected from tampering but that interfering with the power supply
sometimes produces a DH private key of 0.

What about a (covert and deniable) contribution to a project?
Underhanded prime selection appears in the ElGamal-RSA discussion
by Piper and Stephens in ISBN 0-19-853691-7.  


RE: Another Snake Oil Candidate

2007-09-14 Thread lists
On  12 Sep 2007 20:18:22 -0700, Aram Perez wrote:
 I don't know about you, but when I hear terms like (please pardon my

   with military grade AES encryption - Hum, I'll have to ask NIST about that.

AES can be permitted for use in classified environments, and, yes, the DoD
does use AES in certain circumstances.

  The encryption keys used to protect your data are generated
  in hardware by a FIPS 140-2 compliant True Random Number
 As opposed to a FIPS 140-2 compliant False Random Number Generator.

While I don't understand this quibble about standard terminology, I do
note that the IronKey language is somewhat misleading. There are no
FIPS-approved non-deterministic RNGs at this point, as all of the
FIPS-approved RNGs are deterministic (pseudo) RNGs. It
is possible to use a non-deterministic RNG to seed a FIPS-approved PRNG,
but I don't know of anyone in the FIPS 140-2 world that claims doing so
makes the non-deterministic RNG FIPS 140-2 compliant. 

(Also, if random data is utilized during key generation within a FIPS
140-2 module, then a FIPS-approved RNG must be utilized to generate that
data in order to meet FIPS 140-2 requirements. Since all the
FIPS-approved RNGs are PRNGs, a true RNG is not going to meet the FIPS
140-2 requirement here.)

Overall, colorful language and FIPS 140 hand-waving seem like the
marketing norm in the security products that utilize crypto world. I
think the language used by IronKey falls right in line with that, but I
don't get a sense of snake oil. Then again, I don't really care either.



[cryptography] provable security

2007-08-09 Thread Pascal Junod (Mailing Lists)

It is worth reading:



Re: FIPS 140-2, PRNGs, and entropy sources

2007-07-16 Thread lists
On 9 Jul 2007 16:08:33 -0600, Darren Lasko wrote:
 2) Does FIPS 140-2 have any requirements regarding the quality of the
 entropy source that is used for seeding a PRNG?
 Yes.  The requirements imposed by FIPS 140-2
 are in section 4.7.2:
  Compromising the security of the key generation method (e.g., guessing
  the seed value to initialize the deterministic RNG) shall require as
  least as many operations as determining the value of the generated key.
 (which would apply to any RNG output that became a key)

 and in section 4.7.3:
  Compromising the security of the key establishment method (e.g.,
  compromising the security of the algorithm used for key establishment)
  shall require at least as many operations as determining the value of
  the cryptographic key being transported or agreed upon.
 (which would apply to any RNG output that is used in a security relevant
 way in a key establishment scheme)

  For whatever reason, I get asked FIPS 140 questions and this one about
FIPS 140-2 comes up on occasion. It is good someone finally asked in
public and received a public reply. A bit convoluted, and this says
nothing about seeding requirements for a PRNG not used for key
generation/agreement, but it is the logic of FIPS 140-2 with regards to
PRNG seeding.

 Again, good information.  However, it seems pretty nebulous about how
 they expect you to measure the number of operations required to
 compromise the security of the key generation method.  Do you know
 what kind of documentation the labs require?
 SP 800-90, Appendix C.3, states that the min-entropy method shall be
 used for estimating entropy, but this method only uses the
 probabilities assigned to each possible sample value.  I'm guessing
 that measuring ONLY the probabilities associated with each sample is
 insufficient for assessing your entropy source.  For example, if I
 obtain 1 bit per sample and I measure 50% 0's and 50% 1's, I have
 full entropy by that measure, even if my entropy source always
 produces 1010101010101010.
 Is the NIST Statistical Test Suite sufficient for evaluating your
 entropy source, and will the certification labs accept results from
 the STS as an assessment of the entropy source?

 From what I have seen, the labs understand what will pass muster with
NIST/CSE for FIPS 140-2 based on their experience with the many FIPS
140-2 validation efforts performed to this point, so they are a good
gauge of what NIST/CSE will smile upon here, even though there has been
little formal guidance. Most labs are fine with standard techniques for
gathering entropy from a system, such as polling various timings for
things like disk access, plus whitening, such as running the results of
the polling through a FIPS-approved hashing algorithm. Hardware RNGs,
such as a noise source, can be used either as just another source
in the polling, or as the only source. When using a hardware RNG, most
vendors focus on this as the primary source of entropy, and labs will
often require many details about the hardware RNG as a result.

As far as what to provide, well, you need to describe how the PRNG is
seeded, give code pointers to the seeding and any entropy gathering
routines, include details on any hardware RNGs, and construct a general
rationale for why all this adds up to meeting the requirements. The labs
can take it from there and ask for more information as needed, such as
sample output from the entropy gathering routines to examine. If you are
concerned about not meeting the requirements, chatting with a lab or
consultant about what is required is not out of the question - it might
even provide some metric as to how friendly and responsive the team you
are considering working with for your validation will be.

FWIW, up to this point in time, I have rarely seen formal calculations
of entropy by vendors in the rationale for meeting these requirements
(those few times were mostly with vendors that built their own hardware
RNGs), and I have seen statistical tests used by vendors a little bit as
a part of the rationale behind meeting these requirements.



Re: can a random number be subject to a takedown?

2007-05-01 Thread lists

 A lot of sites have been getting DMCA takedowns for the HD-DVD
 processing key that got leaked recently.

 My question to the assembled: are cryptographic keys really subject to
 DMCA subject to takedown requests? I suspect they are not
 copyrightable under the criterion from the phone directory

I'm as far from being a copyright lawyer as most of you.

I suppose that we mean a randomly-generated number.  Then the production
process would not be creative as expected for direct copyright protection,
and you'd be right that it can't be copyrighted.

As far as the DMCA is concerned I think this is a paracopyright issue - the
(alleged) significance of the number in relation to HD-DVD would make it a
circumvention tool and therefore subject to takedowns.  I don't know whether an
alternative legitimate use is a defence, but you might have a job finding such
a thing for a randomly-generated number (as opposed to something more structured
like Netscape engineers are weenies.).


Re: How important is FIPS 140-2 Level 1 cert?

2007-01-02 Thread lists
On 27 Dec 2006 14:10:10 -0500, Thor Lancelot Simon wrote:
 On Tue, Dec 26, 2006 at 05:36:42PM +1300, Peter Gutmann wrote:
 In addition I've heard of evaluations where the generator is required to use 
 a monotonically increasing counter (clock value) as the seed, so you can't just
 use the PRNG as a postprocessor for an entropy polling mechanism.  Then again
 I know of some that have used it as exactly that without any problems.

I have never heard of the seed being required to use a clock value,
and, under FIPS 140-2, using only a clock value to seed a PRNG is not
going to pass the key management requirements.

 This (braindamaged) requirements change was brought in by the creation of
 a Known Answer Test for the cipher-based RNG.  Prior to the addition of
 that test, one could add additional entropy by changing the seed value at
 each iteration of the generator.  But that makes it, of course, impossible
 to get Known Answers that confirm that the generator actually implements
 the standard.  So suddenly the alternate form of the generator -- in my
 opinion much less secure -- which uses a monotonically-increasing counter
 for the seed, was the only permitted form.

Now we are talking about something different, the date/time (DT)
vector of the X9.31 PRNG, which is not the seed of the X9.31 PRNG.

I don't think anything changed with the introduction of a power-up
PRNG KAT or PRNG algorithm testing. Even the NIST-defined PRNGs, which
are based upon the X9.31 PRNG, are open in regard to what an
implementer chooses to use as the DT vector.

Take the power-up PRNG KAT. By definition, this requires the use of
known values, which means, for an X9.31 PRNG, even the DT vector needs
to be set to a known value. Adding in entropy violates this
requirement, but, after the KAT is performed (and passed), the PRNG is
required to be seeded with real data before assuming its normal
operational state.

Take algorithm testing. This testing too requires the use of known
values, in this case provided by a testing lab, which are run through
an implementation to produce results that can then be verified by a
testing lab. For an X9.31 PRNG, this testing requires access to a
parameter (DT) of the X9.31 PRNG that may not normally be accessible
outside of the PRNG. It is just fine to have a test mode to provide
this access, and then the counter woes can be made to go away as part
of this test mode. (Note: Algorithm testing is once off testing
performed by the vendor and not normally deployed in a product.)

 I have yet to hear of anyone who has found a test lab that will certify
 a generator implementation that uses the mono counter for the KAT suite
 but a random seed in normal operation.  For good reason, labs are usually
 very leery of algorithm implementations that come with a special test

It seems to me that an implementation of an X9.31 PRNG without a test
mode makes no sense, for the reasons cited above. This mode would be
used in a self-testing state, or during algorithm testing, but not in
normal operational state.

As an example of a module with public source code, OpenSSL made it
through FIPS validation doing this. Their X9.31 PRNG normally uses
clock data for DT, but, in test mode, DT can be set from the outside
instead. This test mode is utilized for the power-up KAT as well as
algorithm testing. I can imagine this is not a unique occurrence.



Re: How important is FIPS 140-2 Level 1 cert?

2006-12-27 Thread lists
On 22 Dec 2006 11:43:58 -0500, Perry E. Metzger wrote:
 [I was asked to forward this anonymously. --Perry]
 From: [Name Withheld]
 Subject: Re: How important is FIPS 140-2 Level 1 cert?
 Paul Hoffman [EMAIL PROTECTED] wrote:
 At 11:25 AM -0500 12/21/06, Saqib Ali wrote:
 If two products have exactly same feature set, but one is FIPS 140-2
 Level 1 certified but cost twice. Would you go for it, considering the
 Level 1 is the lowest.
 Assuming that the two products use Internet protocols (as compared to
 proprietary protocols): no. Probably the only thing that could
 differentiate the two is if the cheaper one has a crappy random number
 generator, the more expensive one will have a good one.
 Actually you can't even guarantee that because the FIPS 140 requirements
 for the ANSI X9.17/X9.31 PRNG include a pile of oddball things that made
 sense for the original X9.17 use (where it was assumed the only source
 of entropy was a DES3 key embedded in secure hardware) but are severe
 restrictions on current implementations. As a result a FIPS 140-
 certified key generator will be worse than a well-designed non-FIPS-140
 one because the FIPS requirements prevent you from doing several things
 that would improve the functioning like injecting extra entropy into the
 generator besides the DES3 key. In addition since no two eval labs can
 agree on exactly what is and isn't OK here it's pretty much a crap-shoot
 as to what you can get through. I've heard stories from different vendors
 of Lab B disallowing something that had already been certified by Lab A
 in a previous pass through the FIPS process.

These statements are not entirely correct for FIPS 140-2 under current 
interpretations (to my understanding, at least).

For example,

ANSI X9.31 is not the only FIPS-approved PRNG. There are a few of them [1], 
although most of them are closely related.

You can reseed a FIPS-approved PRNG all you want, which means you could even 
effectively reduce a FIPS-approved PRNG to a whitener if you
desired. (There are some caveats here.) IIRC, yarrow takes this sort of 
approach. Also, as noted elsewhere, some of the PRNGs have explicit
mechanisms for you to feed in more entropy.

For an X9.31 PRNG, (re)seeding can include (re)keying the 2-key TDES and often 
does as implementers try to cram as much gathered entropy
into the PRNG as possible.

Also, lab interpretations of requirements can vary a bit in some of the more 
ambiguous areas, especially on the things like the stretches
made for validating software, or with novel implementations; however, overall, 
they are quite similar. I think the bigger lab variations
revolve around things like the staff on hand, which can affect, for example,
how quickly a lab can understand a product, how effectively a
lab can interpret that understanding against the requirements, and how much 
bandwidth the lab has to just get the job done.

 In terms of its value, particularly for level 1, what it'll give you is
 (1) protection from egregiously bad implementations (which a quick
 source code check will do as well) and (2) the ability to sell to US
 federal agencies. Beyond that I concur that 10 minutes of interop
 testing with the standardised protocol of your choice (e.g. TLS, S/MIME,
 IPsec) will give you more than FIPS 140 will since a run of TLS tests
 much more of the crypto than FIPS 140 does.

As to the original direction of this thread, I agree with these value adds. 
Point two is the main reason anyone pursues FIPS 140-2 at this
point, although this might change with the internationalization efforts. As to 
point one, it is worth noting that many vendors have not had
anyone review their crypto up until going through a FIPS 140-2 validation. And, 
FIPS 140-2 looks at the overall picture more than just
particular protocols, by including requirements in areas like the whole of key 
management, and authentication and authorization (noting this
applies at level 2). So, even at level 1, this can mean a fair amount of real 
problems being discovered and resolved as a result of the
process. (Of course, if you require stronger levels of third party review of a 
product, then higher levels of FIPS 140-2 validation or
perhaps other programs like NSA certifications are more what you should be 
looking at.) And, to add a third value, FIPS 140-2 validation can
function much like having letters like CISSP following one's name.

With regards to protocol interop, as part of the FIPS 140-2 process currently, 
algorithm testing is involved. Perhaps protocol testing
requirements should be included in the next revision of the standard being 
worked on at the moment [2]. I do agree this would help to
strengthen the FIPS 140 process.

(Reading this overall thread, it seems FIPS 140 is still considered some sort 
of voodoo. The FIPS 140-2 requirements are public [3], and
even the ambiguous areas have standard interpretations at this point.)



Re: classical crypto programmatic aids

2006-06-29 Thread lists


 Does anyone here know of any computer-based aids for breaking
 classical cryptosystems?  I'm thinking in particular of the ones in
 Body of Secrets, which are so short that I really hope they're
 monoalphabetic substitutions.  But I'm interested in these sorts of
 programs more generally.  I could use paper, but it'd be nice if a
 computer could keep track of what I've tried and otherwise ruled out.
 I am aware of the crypt breaker's workbench, but that's specific to
 classic Unix crypt(3).  What else is there?

In the 1990s Remo Pini (Pini Computer Trading, Switzerland) was distributing
a crypto CD with such things on it.  At the time his address was [EMAIL PROTECTED]
and google suggests some more recent addresses.
 Incidentally, if anyone's interested, on my web page I have an article
 on how I used classical techniques to recover files encrypted with CFS

I thought that was interesting and it's living in my magazine pile.

GCHQ issue a puzzle occasionally (looks like twice a year)
and I tackled the December 2004 one
like this
using a program of Paul Leyland's off the Pini CD.


Re: Pseudonymity for tor: nym-0.1 (fwd)

2005-10-07 Thread lists
From: Bill Frantz [EMAIL PROTECTED]

 system, for example, recognition of the number on an image. In fact,

 This solution is subject to a rather interesting attack, which to my
 knowledge has not yet been named, although it is occasionally used

Stealing Cycles from Humans is the name I know for it.
I'm unsure, but this may be the first use.


Re: European country forbids its citizens from smiling for passport photos

2005-09-17 Thread lists

From: William Allen Simpson [EMAIL PROTECTED]
 Do you really need to click on this link to know which one it is?

Which one it is depends what the meaning of one is.

Announced in multiple news sources last year:

This page mentions international requirements.


Re: The cost of online anonymity

2005-09-11 Thread lists

From: R.A. Hettinga [EMAIL PROTECTED]

   Digital evidence expert at the London School of Economics, Peter Sommer
  says: A few years ago I was very much in favour of libertarian computing.
   What changed my mind was the experience of acting in the English courts
  as a computer expert and examining large numbers of computers from really
  nasty people, who were using precisely the same sort of technology in order
  to conceal their activities.

Assuming someone has come under suspicion in some other way and that they
continue to use a computer to view illegal material wouldn't the likes of
TEMPEST, hidden cameras and tampering with the suspect's software provide
all the computer-based evidence necessary ?

Combine that with a raid that finds only one person in the house at the time
and what more do you need ?  I think it should be possible to debunk the idea
of lawlessness expressed in the article.

There is also this mail from (I think the same) Mr Sommer
that mentions wider goals, but even these may be tackled to some extent
by observations like those above, especially (in the absence of Trusted
Computing) an amended version of the Freenet s/w that produces concealed logs.

I suppose some estimate of the number of really nasty people, of Freenet users
and the cost of investigating this way would be good to have.

According to this article
there's an attempt to speed up Operation Ore (and I think all will agree it
needs it).

   Peter Sommer says: "Ian [Clarke] is placing a powerful tool in the hands
  of other people. He's like an armaments manufacturer."

Should we see all encryption software, digital cameras, CD burners etc. as
virtual armaments?  And if not, where should the line be drawn?


Re: encrypted tapes

2005-06-09 Thread lists

From: Perry E. Metzger [EMAIL PROTECTED]

 It is worse than that. At least one large accounting company sends new
 recruits to a boot camp where they learn how to conduct security
 audits by rote. They then send these brand new 23 year old security
 auditors out to conduct security audits, with minimal supervision
 from a partner or two. The audits are inevitably of the lowest
 possible quality -- they run automated security scanners no better

The worst security audit point I have ever seen came from KPMG.  It said
that logging on as a particular non-root unix account got root access,
based on the "WARNING! YOU ARE SUPERUSER" message seen at login.  What
they had never done was check something like sum /etc/shadow to see
whether the read was permitted or denied, nor run id or similar checks.
This user's home directory was absent, so he ended up using / and
/.profile (where the warning was in an echo statement) and got the
message on the screen.  So where they should have written "misleading
warning in some circumstances" they wrote "root access immediately
available to common users".
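For what it's worth, the missing checks amount to a couple of lines of
shell.  A minimal sketch, assuming a typical unix where /etc/shadow is
readable only by root (the messages are mine, not from any audit tool):

```shell
# Test actual privilege instead of trusting the login banner.
uid=$(id -u)
echo "effective uid: $uid"      # 0 only for genuine root

# A denied read of a root-only file contradicts any
# "YOU ARE SUPERUSER" banner printed by a stray .profile.
if sum /etc/shadow >/dev/null 2>&1; then
    echo "/etc/shadow readable: root-equivalent access"
else
    echo "/etc/shadow denied: not root, whatever the banner says"
fi
```

Either branch of the if settles the question in a way no login banner can.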

I'm planning to teach a class of 5 existing internal auditors
next month on some security s/w and I am going to include:
   - focussing on the more important stuff
 (a long-running problem where I work)
   - you must prove it before you can report it
   - you must be able to state what is wrong with the observed state;
 usually expressed as the policy point(s) violated
 (just appearing in scanner output is not enough)
   - you should have some idea of one reasonable way to fix it


Re: encrypted tapes (was Re: Papers about Algorithm hiding ?)

2005-06-09 Thread lists

From: Charles M. Hannum [EMAIL PROTECTED]

 I can name at least one obvious case where sensitive data -- namely credit 
 card numbers -- is in fact something you want to search on: credit card 
 billing companies like CCbill and iBill.  Without the ability to search by 
 CC#, customers are pretty screwed.

Is there a good reason for not searching by the hash of a CC#?
I think the author is planning further work on this site and
would be happy to receive constructive comments.
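One caveat worth hedging: a plain unkeyed hash of a card number is
brute-forceable offline, since the space of valid CC#s (fixed prefixes,
Luhn check digit) is small.  A keyed hash still gives an exact-match
search token without that weakness.  A hypothetical sketch in shell,
assuming openssl is available - the key is invented, and 4111111111111111
is the standard Visa test number:

```shell
# Search on a keyed hash (HMAC) of the CC#, never the raw number.
CC_HASH_KEY='example-secret-key'   # invented; keep it outside the database
cc='4111111111111111'              # standard test card number

# The database row would hold (token, encrypted record); a lookup
# recomputes the token from the queried CC# and searches on it.
token=$(printf '%s' "$cc" | openssl dgst -sha256 -hmac "$CC_HASH_KEY" |
        awk '{print $NF}')
echo "token: $token"
```

Without the key, an attacker who dumps the table cannot enumerate card
numbers offline - which is exactly the attack a bare hash would permit.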


Re: Is finding security holes a good idea?

2004-06-13 Thread lists
From: Eric Rescorla [EMAIL PROTECTED]

Is finding security holes a good idea?

In section 1 there's a crucial phrase not properly followed up:
significant opportunity cost since these researchers
 could be doing other security work instead of finding
What other security work is being used for comparison?
- finding and fixing non-program flaws (such as in configuration)
  I do a lot of this - and I'm not about to run out of it.
  I know that _finding_ the flaws is easy.  Even finding
  many of them systematically is easy.  Fixing them
  often gets stuck on the problem of "that's Fred's piece
  of work and he doesn't feel like doing it".
- fixing long-known and neglected bugs (are there many ?)
- accelerating patch uptake
- technical work on tolerant architectures/languages etc
- advocacy work on tolerant architectures/languages etc
  (Where's Howard Aiken when you need him?)
- forensics
- other ?

Footnote 1 mentions an indirect effect of vulnerability research.
Another one would be programmer education - but reporting yet
another bug of a common type seems to have low value.  People
do need to be aware that (their!) software can be faulty and in
roughly what ways.

In 3.4, if proactive WHD is not worth the effort because the bugs get
discovered anyway when they are widely exploited, what does this say
about finding vulnerabilities through their use in the wild?  Is this
more costly but better aimed at the bugs that matter?  Are there
cost-effective ways to do this reactive discovery?  What tools would
simplify it?

As mentioned in 8.4 the estimates of total cost bypass the incentive
to the participant.  A vendor/maintainer running his own code
is a special case because he might upgrade all his installations
before announcing a fix.

Don't worry about people stealing your ideas. If your ideas are any good,
you'll have to ram them down people's throats. --Howard Aiken


Re: Reliance on Microsoft called risk to U.S. security

2003-10-02 Thread lists

 Heh. You looked at my mail headers, didn't you?  Yes, I use pine -
 primarily *because* of that property.  It treats all incoming messages
 as text rather than live code.

BUGTRAQ in the last 3 years lists over 80 mails on pine - including a
recent reference to this:
which also appears in the candidates on
(MITRE seem to take an unreasonably long time converting candidates
into definite problems, unless I'm misunderstanding their website.)

 [HTML mail] can cause your machine, specifically, to make network
 connections to get graphics, style sheets, etc, and will not display

That could include web bugs for spammers.  I agree it's ridiculous to
read mail in a browser but a conventional MUA has risks too.

I write all mail to disk and view it with my favourite text editor.
This is convenient with practice.  Now I only want MUAs for sending
mail (it's worth it to get the correct references in my reply headers).

I use this script on one of my accounts where I accept HTML mail
(reluctantly, from a hotmail user).
The HTML conversion is done by lynx (confined by SubDomain).
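The script itself isn't shown, but a recipe in this spirit might look
like the procmail fragment below.  The path and the exact lynx flags are
my assumptions, not the author's actual setup; lynx -dump -stdin
-force_html reads the message on stdin and emits plain text, so nothing
is ever rendered live:

```
# Hypothetical procmailrc fragment: HTML mail is only ever seen
# after conversion to plain text by lynx.
:0
* ^Content-Type:.*text/html
| lynx -dump -stdin -force_html >> $HOME/mail/html-as-text.txt
```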

This practice can result in running mimencode -u and metamail -w
on a few things.  It's not that common for a non-text message to get
past my procmail rules and have me choose to read it.

This is all pretty simple but certainly not mass-market.  I must order a
"told you so" rubber stamp for when my monocultural acquaintances get hacked.
