Re: Full Disk Encryption solutions selected for US Government use

2007-10-10 Thread lists
On 8 Oct 2007 10:12:58 -0700, Stephan Somogyi wrote:
 At 02:11 +1300 09.10.2007, Peter Gutmann wrote:
 
 But if you build an FDE product with it you've got to get the entire product
 certified, not just the crypto component.
 
 I don't believe this to be the case.
 
 FIPS 140(-2) is about validating cryptographic implementations. It is 
 not about certifying entire products that contain ample functionality 
 well outside the scope of cryptographic evaluation. That's more of a 
 Common Criteria thing.

Yes, but an FDE product built on the OSSL FIPS module alone would not likely
satisfy the FIPS 140-2 check box, as there is potentially more FIPS 140-2
relevant functionality in the FDE product beyond what was validated in
the OSSL module, such as, say, the whole key life cycle for the FDE
product. That does not necessarily mean all of the FDE product is FIPS
relevant, so perhaps the FIPS relevant functionality in the FDE product
could be self-contained and validated by itself, or perhaps the whole
FDE product could be validated and the irrelevant functionality just
excluded from the FIPS requirements, etc. (As Gutmann says though,
vendors sometimes successfully employ a bit of hand-waving here, so you
never quite know what will satisfy the FIPS check box.)

 At 14:04 +0100 08.10.2007, Ben Laurie wrote:
 
 ? OpenSSL has FIPS 140.
 
 OpenSSL FIPS Object Module 1.1.1 has FIPS 140-2 when running on SUSE 
 9.0 and HPUX 11i, according to
 
 http://csrc.nist.gov/groups/STM/cmvp/documents/140-1/1401val2007.htm#733
 
 In the context of a conversation about whether something formally has 
 FIPS validation or not, the details are important.

Yes, the details are important. The OSSL FIPS module was tested on those
platforms, but it can be vendor-affirmed on other platforms, provided that
on a given platform the module meets the vendor-affirmation requirements
described in the FIPS 140-2 implementation guidance.

-Andrew



Re: Full Disk Encryption solutions selected for US Government use

2007-10-10 Thread Leichter, Jerry
| A slightly off-topic question:  if we accept that current processes
| (FIPS-140, CC, etc) are inadequate indicators of quality for OSS
| products, is there something that can be done about it?  Is there a
| reasonable criterion / process that can be built that is more suitable?
Well, if you believe a talk by Brian Snow of the NSA - see
http://www.acsac.org/2005/papers/Snow.pdf - our whole process has to
change to get assurance, from the beginning of the design all the
way through the final product.

I suspect he's right - but I'm also pretty sure that the processes
involved will always be too expensive for most uses.  They'll even be
too expensive for the cases where you'd think they best apply - e.g.,
in protecting large financial transactions.  An analysis of the costs
vs. the risks will usually end up with the decision to spend less and
spread the risks around, whether through insurance or higher rates
or other means.

We keep being told that inspection after the fact will give us more
secure systems.  It never seems to work.  You'd think that the
experience of, say, the US auto industry - which was taught by the
Japanese that you have to build quality into your entire process, not
inspect *out* lack of quality at the end - would give us some hint
that after-the-fact inspection is not the way to go.

Given all that ... a FIPS 140-2 certification is actually a pretty
reasonable evaluation.  It can be, because it's trying to deal with
a problem that can be constrained to a workable size.  You know what's
supposed to go in; you know what's supposed to come out.  (This
still works better for hardware than for software, though.)  Where
FIPS 140-2 breaks down is that ultimately all it can tell you is
that some constrained piece of the system works.  But it tells you
nothing, and *can* tell you nothing, about whether that piece is
being used in a proper, secure way.  (Again, this is somewhat easier
with hardware, because the system boundaries are much more sharply
defined - and because of the inflexibility of hardware, they are also
much smaller.)  Beyond this is Common Criteria, which can easily be
more about paperwork than anything real.

Until someone comes up with a new way to approach the problem, my
guess is that we'll see more stuff moved into hardware, with limited
security definitions above the hardware that we can have some faith
in - but as little of real value to be said above that as there is
today.
-- Jerry



Re: Full Disk Encryption solutions selected for US Government use

2007-10-10 Thread james hughes


On Oct 8, 2007, at 4:27 AM, Steven M. Bellovin wrote:


On Mon, 18 Jun 2007 22:57:36 -0700
Ali, Saqib [EMAIL PROTECTED] wrote:


US Government has selected 9 security vendors that will provide drive
and file level encryption software.

See:
http://security-basics.blogspot.com/2007/06/fde-fde-solutions-selected-for-us.html
OR
http://tinyurl.com/2xffax



Out of curiosity, are any open source FDE products being evaluated?

--Steve Bellovin, http://www.cs.columbia.edu/~smb


Out of curiosity, why was Vista (BitLocker) not mentioned?



RE: Trillian Secure IM

2007-10-10 Thread Alex Pankratov
 

 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of Leichter, Jerry
 Sent: Monday, October 08, 2007 11:48 AM
 To: Alex Pankratov
 Cc: cryptography@metzdowd.com
 Subject: RE: Trillian Secure IM
 
 |  But, opportunistic cryptography is even more fun.  It is 
 |  very encouraging to see projects implement cryptography in 
 |  limited forms.  A system that uses a primitive form of 
 |  encryption is many orders of magnitude more secure than a 
 |  system that implements none.
 | 
 | Primitive form - maybe, weak form - absolutely not. It 
 | is actually worse than having no security at all, because 
 | it tends to create an _illusion_ of protection. 

 This is an old argument.  I used to make it myself.  I even used
 to believe it.  Unfortunately, it misses the essential truth:  
 The choice is rarely between really strong cryptography and weak 
 cryptography; it's between weak cryptography and no cryptography 
 at all. What this argument assumes is that people really *want* 
 cryptography; that if you give them nothing, they'll keep on asking 
 for it; but if you give them something weak, they'll stop asking 
 and things will end there.  But in point of fact hardly anyone 
 knows enough to actually want cryptography. Those who know enough 
 will insist on the strong variety whether or not the weak is 
 available; while the rest will just continue with whatever they 
 have.

Well, I view it from a slightly different perspective. 

Even the most ignorant person knows the difference 
between privacy and the lack thereof, cryptography 
or not. Therefore, if he is told that A offers 
privacy, he may assume that the level of protection 
is adequate ... simply because if it weren't, it 
wouldn't be offered. Needless to say, that sort of 
assumption is dangerous in the case of weak crypto.

When there's a choice between no and weak protection, I am 
of course in favour of the latter *if* it is clearly labeled as 
weak.

 | Which is by the way exactly the case with SecureIM. How 
 | hard is it to brute-force 128-bit DH? My guesstimate
 | is it's on the order of minutes or even seconds, depending
 | on CPU resources.

 It's much better to analyze this in terms of the cost to 
 the attacker and the defender.

Yup, I am familiar with the methodology. My point was that
128-bit DH is breakable even with the resources available to
the people in those forum threads.
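
(As a rough sanity check on that guesstimate: 128-bit DH falls not to
brute force but to index-calculus / NFS discrete logs, whose cost is
roughly L_p[1/3, 1.923]. A back-of-envelope sketch in C, ignoring the
o(1) term and all implementation constants, so treat the outputs as
order-of-magnitude only; compile with -lm:)

    #include <math.h>
    #include <stdio.h>

    /* ln(cost) of an NFS discrete log mod an n-bit prime p:
     * c * (ln p)^(1/3) * (ln ln p)^(2/3), with c ~ 1.923 for GNFS. */
    static double log2_dlp_cost(double bits)
    {
        double lnp = bits * log(2.0);
        return 1.923 * cbrt(lnp) * pow(log(lnp), 2.0 / 3.0) / log(2.0);
    }

    int main(void)
    {
        printf("128-bit modulus:  ~2^%.0f operations\n", log2_dlp_cost(128.0));
        printf("1024-bit modulus: ~2^%.0f operations\n", log2_dlp_cost(1024.0));
        return 0;   /* prints roughly 2^34 and 2^87 */
    }

At, say, 10^8 to 10^9 simple operations per second, ~2^34 operations is
seconds to minutes on one CPU, so the guesstimate above is about right;
the same formula gives ~2^87 for a 1024-bit modulus.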

Alex



Re: kernel-level key management subsystem

2007-10-10 Thread Peter Gutmann
[EMAIL PROTECTED] writes:
On Mon, May 21, 2007 at 01:44:23PM +1200, Peter Gutmann wrote:
 Ignoring special-purpose hardware, does anyone have thoughts on what the
 requirements for a kernel-level key management subsystem should be?

 Yes, but first you'd have to tell me what you're trying to do.

Protect keys in kernel land rather than userland.

It allows for things like, e.g.:
1) marking memory unpageable (avoiding swap hazard)
2) relocating the data to different physical pages to prevent
   burn-in
3) secure wiping

OK, those are all pretty trivial in terms of having an identified problem to
solve.
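
(For concreteness, items 1 and 3 come out to roughly the following in
userland C on a POSIX system; a kernel-resident service would do the
moral equivalent internally.  A minimal sketch, not a hardened
implementation, and the key length and fill value are placeholders:)

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>

    #define KEYLEN 32                 /* placeholder key size */

    /* Volatile writes keep the compiler from optimizing the wipe away. */
    static void secure_wipe(void *p, size_t n)
    {
        volatile unsigned char *v = p;
        while (n--)
            *v++ = 0;
    }

    int main(void)
    {
        unsigned char *key = malloc(KEYLEN);

        if (key == NULL || mlock(key, KEYLEN) != 0) {  /* (1) keep off swap */
            perror("mlock");
            return 1;
        }
        memset(key, 0xAB, KEYLEN);    /* stand-in for real key material */
        /* ... use the key ... */
        secure_wipe(key, KEYLEN);     /* (3) wipe before releasing the memory */
        munlock(key, KEYLEN);
        free(key);
        return 0;
    }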

4) providing a common system for storing and protecting them
   rather than doing it in each individual application
5) allowing for them to be shared securely among processes (like
   ssh-agent and gpg-agent)
6) provide protection against userland snooping
   programs (gdb anyone?)
etc.

Right, and that's what I wanted a definition for.  95% of what you're
asking for is defining the problem, and that's what I was after.  For example,
how do you want access to the keys controlled?  ACLs?  Who sets the ACLs?  Who
can manage them?  How are permissions managed?  What's the UI for this?  Under
what conditions is sharing allowed?  If sharing is allowed, how do you handle
the fact that different apps (with different levels of security) could have
access to the same keys?  Do you derive keys from a master key?  Do you
migrate portions of the app functionality into the kernel to mitigate the
problems with untrusted apps?  How is key backup handled?  What about

[Another 5 pages of questions]

Once you've got a clear statement of exactly what you want to do (which in its
most abstract form is solve an arbitrarily complex key management problem),
implementation is almost trivial in comparison.

Peter.



Fixing the current process

2007-10-10 Thread Paul Hoffman

At 10:55 PM +0200 10/8/07, Ian G wrote:
A slightly off-topic question:  if we accept that current processes 
(FIPS-140, CC, etc) are inadequate indicators of quality for OSS 
products, is there something that can be done about it?


Highly doubtful. The institutional inertia at DoD/NIST is too great. 
It has been suggested numerous times by numerous concerned parties 
for at least a decade.


Is there a reasonable criterion / process that can be built that is 
more suitable?


Yes. That part is easy, and some people in the system admit that designing 
a much better system is quite tractable, as long as it is done in a 
vacuum. However, bureaucracy abhors a vacuum.


My feeling is that the only way that we can overturn the silliness of 
FIPS-140 / CC is for a major defense ally to implement a sane system. 
Five years later, and with a lot of vendor push, it could become a 
third process and the other two could wither over the ensuing 
decades. If we're lucky.



--Paul Hoffman, Director
--VPN Consortium



Re: 307 digit number factored

2007-10-10 Thread travis+ml-cryptography
On Mon, May 21, 2007 at 04:32:10PM -0400, Victor Duchovni wrote:
 On Mon, May 21, 2007 at 02:44:28PM -0400, Perry E. Metzger wrote:
  My take: clearly, 1024 bits is no longer sufficient for RSA use for
  high value applications, though this has been on the horizon for some
  time. Presumably, it would be a good idea to use longer keys for all
  applications, including low value ones, provided that the slowdown
  isn't prohibitive. As always, I think the right rule is encrypt until
  it hurts, then back off until it stops hurting...
 
 When do the Certicom patents expire? I really don't see ever longer RSA
 keys as the answer, and the patents are I think holding back adoption...

They already expired.

There are some EC primitives in the latest OpenSSL.

But why assume short ECC keys are stronger than long RSA?

AFAIK, the only advantage of ECC is that the keys are shorter.
The disadvantage is that it isn't as well studied.

Although every time I read up on ECC, I understand it, and then within
a few days I don't remember anything about it.  I think they teflon
coated those ideas somehow, because they don't stick.

 With EECDH one can use ECDH handshakes signed with RSA keys, but that
 does not really address any looming demise of 1024 bit RSA.

Why can't they do something like El-Gamal?

I'm not comfortable with RSA somehow.  It seems fundamentally more
complicated to me than DLP, and it's hard to get right - look at how
many things there are in the PKCS for it.
-- 
URL:http://www.subspacefield.org/~travis/ Eff the ineffable!
For a good time on my UBE blacklist, email [EMAIL PROTECTED]




Re: kernel-level key management subsystem

2007-10-10 Thread travis+ml-cryptography
On Tue, Oct 09, 2007 at 06:08:44PM +1300, Peter Gutmann wrote:
 how do you want access to the keys controlled?  ACLs?  Who sets the ACLs?  Who
 can manage them?  How are permissions managed?  What's the UI for this?  Under
 what conditions is sharing allowed?  If sharing is allowed, how do you handle
 the fact that different apps (with different levels of security) could have
 access to the same keys?  Do you derive keys from a master key?  Do you
 migrate portions of the app functionality into the kernel to mitigate the
 problems with untrusted apps?  How is key backup handled?  What about
 
 [Another 5 pages of questions]

Good stuff.

I was hoping perhaps to stimulate a discussion on just these sorts of issues.

There's a bit of interrelated stuff here; you can start with requirements,
postulate some mechanisms, think about implications of their implementation,
which leads to refining requirements.  It's sure to be a learning experience.

Maybe this isn't the best place to do that, but it seems to me that this group
would be one of the best for ironing out the details, and would have a vested
interest in any such management interface not sucking.

Ideally I'd like to be able to develop something for, say, Linux, and possibly
integrate it with your open-source co-processor stuff.

 Once you've got a clear statement of exactly what you want to do (which in its
 most abstract form is solve an arbitrarily complex key management problem),
 implementation is almost trivial in comparison.

Sure.

Maybe that's a good question: what are the idioms in key management?

Is there any similar work already that I could read up on?

Where can I read up on current HSM functionality, offerings, features, etc.?
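
(One piece of similar work worth reading up on: Linux has had an
in-kernel key retention service since 2.6.10 (add_key(2),
request_key(2), keyctl(2), with userland access via libkeyutils), which
already covers several of the requirements listed earlier in the
thread.  A minimal sketch; the key type "user" is real, the description
"demo:key" and the payload are arbitrary placeholders.  Compile with
-lkeyutils:)

    #include <stdio.h>
    #include <keyutils.h>   /* userland wrappers; link with -lkeyutils */

    int main(void)
    {
        const char secret[] = "example key material";   /* placeholder */
        char buf[64];
        long n;

        /* Store a key of type "user" in the session keyring. */
        key_serial_t key = add_key("user", "demo:key", secret,
                                   sizeof secret, KEY_SPEC_SESSION_KEYRING);
        if (key == -1) {
            perror("add_key");
            return 1;
        }

        /* Read it back; a real consumer would look it up with request_key(). */
        n = keyctl_read(key, buf, sizeof buf);
        if (n == -1) {
            perror("keyctl_read");
            return 1;
        }
        printf("retrieved %ld key bytes\n", n);

        keyctl_revoke(key);   /* make the key unusable once we're done */
        return 0;
    }

request_key(2) with the /sbin/request-key upcall also gives a policy
hook for fetching keys on demand, which bears on several of Peter's
questions about who controls access.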

  Computers are useless; they can only give answers.
   -- Pablo Picasso
-- 
URL:http://www.subspacefield.org/~travis/ Eff the ineffable!
For a good time on my UBE blacklist, email [EMAIL PROTECTED]




Re: Trillian Secure IM

2007-10-10 Thread ji


Why bother with all this? There is OTP for gaim, and it works just fine 
(not to mention it comes from a definitely clueful source).


/ji



Re: Trillian Secure IM

2007-10-10 Thread ji

[EMAIL PROTECTED] wrote:


Why bother with all this? There is OTP for gaim, and it works just fine 
(not to mention it comes from a definitely clueful source).


/ji



I meant, of course, OTR (off-the-record).  And to think that I was using 
it in another window as I was typing this!


Thanks to Scott G. Kelly for pointing this out.

/ji



Re: 307 digit number factored

2007-10-10 Thread Nate Lawson
[EMAIL PROTECTED] wrote:
 On Mon, May 21, 2007 at 04:32:10PM -0400, Victor Duchovni wrote:
 On Mon, May 21, 2007 at 02:44:28PM -0400, Perry E. Metzger wrote:
 My take: clearly, 1024 bits is no longer sufficient for RSA use for
 high value applications, though this has been on the horizon for some
 time. Presumably, it would be a good idea to use longer keys for all
 applications, including low value ones, provided that the slowdown
 isn't prohibitive. As always, I think the right rule is encrypt until
 it hurts, then back off until it stops hurting...
 When do the Certicom patents expire? I really don't see ever longer RSA
 keys as the answer, and the patents are I think holding back adoption...
 
 They already expired.

Not true (counterexample: ECMQV).

 There are some EC primitives in the latest OpenSSL.

Because various standard forms of EC were never covered by patents.
This has been rehashed many times, for example:
http://www.xml-dev.com/pipermail/fde/2007-July/000450.html

 But why assume short ECC keys are stronger than long RSA?
 
 AFAIK, the only advantage of ECC is that the keys are shorter.
 The disadvantage is that it isn't as well studied.

Again, this is well covered.  The reason is the fundamental difference
in the performance of the best-known attacks (GNFS vs. Pollard's rho).
http://www.vaf.sk/download/keysize.pdf
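
(To put numbers on that: the figures below are of the kind NIST SP
800-57 publishes; exact equivalences depend on the cost model, so treat
them as rough guidance.)

    symmetric strength | RSA/DH modulus | EC key size
    -------------------+----------------+------------
         80 bits       |   1024 bits    |  160 bits
        112 bits       |   2048 bits    |  224 bits
        128 bits       |   3072 bits    |  256 bits

Pollard's rho against an n-bit curve group costs about 2^(n/2) group
operations, so an EC key only needs to be twice the target security
level, while GNFS is subexponential in the modulus size, so RSA/DH
moduli have to grow much faster.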

Also, EC public operations are typically faster than private, although
not on the order of the difference between RSA public and private ops.

 Although every time I read up on ECC, I understand it, and then within
 a few days I don't remember anything about it.  I think they teflon
 coated those ideas somehow, because they don't stick.
 
 With EECDH one can use ECDH handshakes signed with RSA keys, but that
 does not really address any looming demise of 1024 bit RSA.
 
 Why can't they do something like El-Gamal?
 
 I'm not comfortable with RSA somehow.  It seems fundamentally more
 complicated to me than DLP, and it's hard to get right - look at how
 many things there are in the PKCS for it.

The RSA or EC primitives are *not* usable cryptographic schemes by
themselves, thus it isn't fair to compare them this way (RSA+PKCS#1 !=
EC point multiplication).

ECDSA, for example, is intentionally constrained to be signing-only and
the hash signed is a fixed size.  It's more fair to compare RSA+PKCS#1
with EC+DSA/DH.  In that sense, I think the complexity of implementation
is similar.
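
(To make "similar complexity" concrete: a hedged sketch against the
0.9.8-era OpenSSL API, with error checking omitted and the key sizes,
1024-bit RSA and P-256, picked purely for the demo.  Compile with
-lcrypto.  Signing one SHA-1 digest takes essentially the same
caller-side code either way:)

    #include <stdio.h>
    #include <openssl/rsa.h>
    #include <openssl/ec.h>
    #include <openssl/ecdsa.h>
    #include <openssl/objects.h>
    #include <openssl/sha.h>

    int main(void)
    {
        unsigned char dgst[SHA_DIGEST_LENGTH];
        SHA1((const unsigned char *)"message", 7, dgst);

        /* RSA + PKCS#1 v1.5 over a SHA-1 digest */
        RSA *rsa = RSA_generate_key(1024, 65537, NULL, NULL);
        unsigned char rsig[128];      /* one modulus' worth */
        unsigned int rlen;
        RSA_sign(NID_sha1, dgst, sizeof dgst, rsig, &rlen, rsa);

        /* ECDSA over P-256 on the same digest */
        EC_KEY *ec = EC_KEY_new_by_curve_name(NID_X9_62_prime256v1);
        EC_KEY_generate_key(ec);
        unsigned char esig[80];       /* >= ECDSA_size(ec) for P-256 */
        unsigned int elen;
        ECDSA_sign(0, dgst, sizeof dgst, esig, &elen, ec);

        printf("RSA sig: %u bytes, ECDSA sig: %u bytes\n", rlen, elen);
        RSA_free(rsa);
        EC_KEY_free(ec);
        return 0;
    }

The padding and encoding machinery travis is uneasy about lives below
both calls; neither primitive is usable without it.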

I'm not saying that one of these schemes is better than the other.  They
each have their own tradeoffs.  I just object to your methodology of
claiming RSA is fundamentally more problematic than EC.

-- 
Nate

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]