Re: questions about RNGs and FIPS 140

2010-08-26 Thread Steven Bellovin

On Aug 25, 2010, at 4:37 PM, travis+ml-cryptogra...@subspacefield.org wrote:

 
 3) Is determinism a good idea?
 See Debian OpenSSL fiasco.  I have heard Nevada gaming commission
 regulations require non-determinism for obvious reasons.

It's worth noting that the issue of determinism vs. non-determinism is by no 
means clearcut.  You yourself state that FIPS 140-2 requires deterministic 
PRNGs; I think one can rest assured that the NSA had a lot of input into that 
spec.  The Clipper chip programming facility used a PRNG to set the unit key -- 
and for good reasons, not bad ones.

--Steve Bellovin, http://www.cs.columbia.edu/~smb







Re: towards https everywhere and strict transport security

2010-08-26 Thread Nicolas Williams
On Thu, Aug 26, 2010 at 12:40:04PM +1000, James A. Donald wrote:
 On 2010-08-25 11:04 PM, Richard Salz wrote:
 Also, note that HSTS is presently specific to HTTP. One could imagine
 expressing a more generic STS policy for an entire site
 
 A really knowledgeable net-head told me the other day that the problem
 with SSL/TLS is that it has too many round-trips.  In fact, the RTT costs
 are now more prohibitive than the crypto costs.  I was quite surprised to
 hear this; he was stunned to find it out.

It'd help amortize the cost of round-trips if we used HTTP/1.1
pipelining more.  Similarly, we could amortize the cost of public-key
crypto by making more use of TLS session resumption, including session
resumption without server-side state [RFC4507].
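
(A minimal client-side illustration of resumption, in Python -- the host
name and request are placeholders, and this is a sketch, not a deployment
recipe; the second connection offers the first one's session so the server
can skip the full public-key handshake if it agrees:)

    import socket, ssl

    ctx = ssl.create_default_context()

    def fetch(host, session=None):
        # One TLS connection; returns the negotiated session so a later
        # connection can offer it for an abbreviated handshake.
        with socket.create_connection((host, 443)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host,
                                 session=session) as tls:
                tls.sendall(b"HEAD / HTTP/1.1\r\nHost: " + host.encode() +
                            b"\r\nConnection: close\r\n\r\n")
                tls.recv(4096)
                return tls.session, tls.session_reused

    sess, _ = fetch("example.org")           # full handshake
    _, reused = fetch("example.org", sess)   # abbreviated handshake if the server agrees
    print("resumed:", reused)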

And if only end-to-end IPsec with connection latching [RFC5660] had been
deployed years ago we could further amortize crypto context setup.

We need solutions, but abandoning security isn't really a good solution.

 This is inherent in the layering approach - inherent in our current
 crypto architecture.

The second part is a correct description of the current state of
affairs.  I don't buy the first part (see below).

 To avoid inordinate round trips, crypto has to be compiled into the
 application, has to be a source code library and application level
 protocol, rather than layers.

Authentication and key exchange are generally going to require 1.5 round
trips at least, which is to say, really, 2.

Yes, Kerberos AP exchanges happen in 1 round trip, but at the cost of
requiring a persistent replay cache (and there are the non-trivial
TGS exchanges as well).  Replay caches historically have killed
performance, though they don't have to[0], but still, there's the need
for either a persistent replay cache backing store or a trade-off w.r.t.
startup time and clients with slow clocks[0], and even then you need to
worry about large (>1s) clock adjustments.

So, really, as a rule of thumb, budget 2 round trips for all crypto
setup.  That leaves us with amortization and piggy-backing as ways to
make up for that hefty up-front cost.

 Every time you layer one communication protocol on top of another,
 you get another round trip.
 
 When you layer application protocol on ssl on tcp on ip, you get
 round trips to set up tcp, and *then* round trips to set up ssl,
 *then* round trips to set up the application protocol.

See draft-williams-tls-app-sasl-opt-04.txt [1], a variant of false
start, which alleviates the latter.  See also draft-bmoeller-tls-
falsestart-00.txt [2].

Back to layering...

If abstractions are leaky, maybe we should consider purposeful
abstraction leaking/piercing.

There's no reason that we couldn't piggy-back one layer's initial message
(and in some cases more) on a lower layer's connection setup message
exchange -- provided much care is taken in doing so.

That's what PROT_READY in the GSS-API is for, and it's one use for GSS-API
channel binding (see SASL/GS2 [RFC5801] for one example).  It's what TLS
false start proposals are about...  draft-williams-tls-app-sasl-opt-04
gets up to a 1.5 round-trip optimization for applications over TLS.

We could apply the same principle to TCP... (Shades of the old, failed?
transaction TCP [RFC1644] proposal from the mid `90s, I know.  Shades
also of TCP-AO and other more recent proposals perhaps as well.)

But there is a gotcha: the upper layer must be aware of the early
message send/delivery semantics.  For example, early messages may not
have been protected by the lower layer, with protection not confirmed
till the lower layer succeeds, which means... for example, that the
upper layer must not commit much in the way of resources until the lower
layer completes (e.g., so as to avoid DoS attacks).

I'm not saying that piercing layers is to be done cavalierly.  Rather,
that we should consider this approach, carefully.  I don't really see
better solutions (amortization won't always help).

Nico

[0] Turns out that there is a way to optimize replay caches greatly, so
that an fsync(2) is not needed on every transaction, or even most.

This is an optimization that turned out to be quite simple to
implement (with much commentary), but took a long time to think
through.  Writing a test program and then using it to test the
implementation's correctness was the lion's share of the
implementation work.
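
One plausible shape for such an optimization -- a rough sketch of the
general idea only, and not a description of what the linked code below
actually does; the skew and batching values are invented -- is to batch
the fsync()s and lean on the permissible clock skew at startup:

    import os, time

    MAX_SKEW = 300      # assumed maximum permissible clock skew, in seconds
    SYNC_EVERY = 2.0    # assumed batching interval for fsync(2), in seconds

    class SketchReplayCache:
        # Illustrative only: entries are appended and fsync'd in batches.
        def __init__(self, path):
            self.f = open(path, "a+b")
            self.f.seek(0)
            self.seen = {line.strip() for line in self.f}
            # If the cache already existed we may be recovering from a crash
            # with entries accepted but never synced, so refuse anything that
            # could have been accepted before the restart (the startup-time
            # trade-off mentioned above).
            self.not_before = time.time() + MAX_SKEW if self.seen else 0.0
            self.last_sync = 0.0

        def check_and_store(self, tag: bytes, auth_time: float) -> bool:
            if auth_time < self.not_before or tag in self.seen:
                return False                 # treat as a possible replay
            self.seen.add(tag)
            self.f.write(tag + b"\n")
            if time.time() - self.last_sync >= SYNC_EVERY:
                self.f.flush()
                os.fsync(self.f.fileno())
                self.last_sync = time.time()
            return True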

You can see it here:


http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/lib/gss_mechs/mech_krb5/krb5/rcache/rc_file.c

Diffs:


http://src.opensolaris.org/source/diff/onnv/onnv-gate/usr/src/lib/gss_mechs/mech_krb5/krb5/rcache/rc_file.c?r2=%252Fonnv%252Fonnv-gate%252Fusr%252Fsrc%252Flib%252Fgss_mechs%252Fmech_krb5%252Fkrb5%252Frcache%252Frc_file.c%4012192%3Ab9153e7686cfr1=%252Fonnv%252Fonnv-gate%252Fusr%252Fsrc%252Flib%252Fgss_mechs%252Fmech_krb5%252Fkrb5%252Frcache%252Frc_file.c%407934%3A6aeeafc994de

RFE (though IIRC the description is wrong/out of date):


Re: questions about RNGs and FIPS 140

2010-08-26 Thread Jerry Leichter
On Aug 25, 2010, at 4:37 PM, travis+ml-cryptogra...@subspacefield.org  
wrote:


I also wanted to double-check these answers before I included them:

1) Is Linux /dev/{u,}random FIPS 140 certified?
No, because FIPS 140-2 does not allow TRNGs (what they call
non-deterministic).  I couldn't tell if FIPS 140-1 allowed it, but FIPS
140-2 supersedes FIPS 140-1.  I assume they don't allow non-determinism
because it makes the system harder to test/certify, not because it's
less secure.

No one has figured out a way to certify, or even really describe in a
way that could be certified, a non-deterministic generator.



3) Is determinism a good idea?
See Debian OpenSSL fiasco.  I have heard Nevada gaming commission
regulations require non-determinism for obvious reasons.


FIPS doesn't tell you how to *seed* your deterministic generator.  In
effect, a FIPS-compliant generator has the property that if you start
it with an unpredictable seed, it will produce unpredictable values.
Debian's problem was that it violated the "if" condition.  The
determinism of the algorithm that produced subsequent values wasn't
relevant.
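
To make the "if" concrete, here is a toy sketch in Python -- not the
approved FIPS construction, purely illustrative: the deterministic part
is identical in both cases, and only the quality of the seed differs.

    import hashlib, hmac, os

    class ToyDRBG:
        # Deterministic: the same seed always yields the same output stream.
        def __init__(self, seed: bytes):
            self.key = hashlib.sha256(seed).digest()
            self.ctr = 0
        def generate(self, n: int) -> bytes:
            out = b""
            while len(out) < n:
                out += hmac.new(self.key, self.ctr.to_bytes(8, "big"),
                                hashlib.sha256).digest()
                self.ctr += 1
            return out[:n]

    good = ToyDRBG(os.urandom(32))   # unpredictable seed -> unpredictable output
    # Debian-style failure: the effective seed space collapsed to roughly the
    # process ID, so an attacker can simply enumerate all possible streams.
    bad = ToyDRBG((12345).to_bytes(2, "big"))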



4) What about VMs?
Rolling back a deterministic RNG on those systems gives the same
values unless/until you re-seed with something new to this iteration.


I'm not sure what you mean by "rolling back."  Yes, if you restart any
deterministic RNG with a previously-used internal state, it will
generate the same stream it did before.  This is true whether you are
in a VM or not.


RNG's in VM's are a big problem because the unpredictable values  
used in the non-deterministic parts of the algorithms - whether you  
use them just for seeding or during updating as well - are often much  
more predictable in a VM than a real machine.  (For example, disk  
timings on real hardware have some real entropy, but in a VM with an  
emulated disk, that's open to question.)


We had a long discussion on this list a couple of weeks back which
came to the conclusion that a hidden, instance-specific state, saved
across reboots and combined with (fairly minimal) entropy at boot time,
was probably a very good way to go.
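
In code, that conclusion looks roughly like the classic seed-file
approach -- a sketch only, with an invented file location:

    import hashlib, os

    SEED_FILE = "/var/lib/example/rng-seed"     # hypothetical location

    def initial_state() -> bytes:
        try:
            saved = open(SEED_FILE, "rb").read()   # hidden, instance-specific state
        except FileNotFoundError:
            saved = b""
        fresh = os.urandom(32)                     # (fairly minimal) boot-time entropy
        state = hashlib.sha256(b"state|" + saved + b"|" + fresh).digest()
        # Persist a *derived* value immediately, so a crash or VM rollback
        # never restarts the generator from an already-used state.
        with open(SEED_FILE, "wb") as f:
            f.write(hashlib.sha256(b"next|" + state).digest())
            f.flush()
            os.fsync(f.fileno())
        return state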

-- Jerry



Re: towards https everywhere and strict transport security (was: Has there been a change in US banking regulations recently?)

2010-08-26 Thread dan

  
  as previously mentioned, somewhere back behind everything else ... there
  is strong financial motivation in the sale of the SSL domain name digital
  certificates.
  

While I am *not* arguing that point, per se, if having a
better solution would require, or would have required, no
more investment than the accumulated profits in the sale
of SSL domain name certs, we could have solved this by now.

--dan



Transport-level encryption with Tcpcrypt

2010-08-26 Thread Sean McGrath
From http://lwn.net/Articles/400913/

Transport-level encryption with Tcpcrypt
By Jake Edge
August 25, 2010

It has been said that the US National Security Agency (NSA) blocked the
implementation of encryption in the TCP/IP protocol for the original
ARPANET, because it wanted to be able to listen in on the traffic that
crossed that early precursor to the internet. Since that time, we have
been relegated to always sending clear-text packets via TCP/IP. Higher
level application protocols (i.e. ssh, HTTPS, etc.) have enabled
encryption for some traffic, but the vast majority of internet
communication is still in the clear. The Tcpcrypt project is an attempt
to change that, transparently, so that two conforming nodes can encrypt
all of the data portion of any packets they exchange.

snip

http://tcpcrypt.org/

-- 
Sean McGrath
s...@manybits.net



Re: towards https everywhere and strict transport security (was: Has there been a change in US banking regulations recently?)

2010-08-26 Thread Ian G

On 25/08/10 11:04 PM, Richard Salz wrote:

A really knowledgeable net-head told me the other day that the problem
with SSL/TLS is that it has too many round-trips.  In fact, the RTT costs
are now more prohibitive than the crypto costs.  I was quite surprised to
hear this; he was stunned to find it out.


Yes, it is inherent in the design assumptions of the early 1990s.  At 
the time, the idea was to secure HTTP, which was (is) a request-response 
protocol layered over TCP.  Now, some of the design features that the 
designers settled on were:


+ ignore HTTP and secure TCP
+ make SSL look just like TCP
+ third-party authority authentication
+ no client-side caching of certs

And those features they delivered reasonably well.

However, if they had dug a bit deeper at the time (unlikely, really 
unlikely) they would have discovered that the core HTTP protocol is 
request-response, which means it is two packets, one for request and one 
for response.


Layering HTTP over TCP was a simplification, because just about everyone 
does that, and still does it for whatever reason.  However it was a 
simplification that ultimately caused a lot more cost than they 
realised, because it led to further layering, and further unreliability.


The original assumptions can be challenged.  If one goes to pure 
request-response, then the whole lot can be done over datagrams (UDP). 
Once that is done properly, the protocol can move to a 4-packet startup, 
then a cached 2-packet mode.  The improvement in reliability is a gift.


This is possible, but you have to think outside the box, discard the 
obsession with layering and the mindtrap of reliable TCP.  I've done it, 
so I know it's possible.  Fast, and reliable, too.  Lynn as well, it 
seems.  James points out the architectural secret: security has to 
be baked into the app; any security below the app is unreliable.





Look at the tlsnextprotoneg IETF draft, the Google involvement in SPDY,


SPDY only takes the low-hanging fruit, IIRC.  Very cautious, very 
conservative, hardly seems worth the effort to me.



and perhaps this message as a jumping-off point for both:
http://web.archiveorange.com/archive/v/c2Jaqz6aELyC8Ec4SrLY

I was happy to see that the interest is in piggy-backing, not in changing
SSL/TLS.



If you're content with slow, stick with TLS :)  Fast starts with a clean 
sheet of paper.  It is of course a complete rewrite, but IMHO the work 
effort is less than working with layered mistakes of the past.




iang



Re: Is determinism a good idea? WAS: questions about RNGs and FIPS 140

2010-08-26 Thread Thierry Moreau

travis+ml-cryptogra...@subspacefield.org wrote:

Hey all,


I also wanted to double-check these answers before I included them:


3) Is determinism a good idea?
See Debian OpenSSL fiasco.  I have heard Nevada gaming commission
regulations require non-determinism for obvious reasons.


Do those sound right?



I guess the more productive question is "Since determinism requires a 
PRNG algorithm of some sort, which PRNG properties are needed in a given 
usage context?"


In all cases, the PRNG relies on a true random source for seeding.


You refer to IT security clients (SSL fiasco), IT security servers 
(virtualization), and lottery/gaming systems. In IT security nowadays, 
large PRNG periods and crypto-strength PRNG algorithms are the norm. As I 
understand the state of the art in the lottery/gaming industry (incl. 
standards), it is an accepted practice to use a short-period (by IT 
security standards) PRNG combined with a form of continuous entropy 
collection: background exercise of the PRNG.


I think the SSL fiasco root cause analysis would remind us of criteria 
that are nowadays well addressed in the IT security sector (assuming 
minimal peer review of the design and implementation).



In a security analysis, you watch for data leaks, either in the source 
of truly unpredictable events, or the present/past PRNG state for the 
deterministic components of your design. If you already need data leak 
protection for private or secret keys, your system design may already 
have the required protections for the PRNG state (except that the PRNG 
state is both long-term -- as a long-term private key or long-term 
symmetric authentication key -- and updated in the normal system 
operations -- as session keys).



So, there is no simple answer. I guess every design facing actual 
operational demands relies on some determinism, because a sudden surge in 
secret random data usage is hard to fulfill otherwise.



Forgive me for bringing up PUDEC (Practical Use of Dice for Entropy 
Collection), which mates well with a server system design using PRNG 
determinism after installation (or periodic operator-assisted 
maintenance). This project is still active. See 
http://www.connotech.com/doc_pudec_descr.html . You may see this as a 
bias in my opinions, but I don't see any benefits in misrepresenting 
relevant facts and analyses.



Regards,


--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691



Re: questions about RNGs and FIPS 140

2010-08-26 Thread travis+ml-cryptography
On Thu, Aug 26, 2010 at 06:25:55AM -0400, Jerry Leichter wrote:
 [F]IPS doesn't tell you how to *seed* your deterministic generator.  In  
 effect, a FIPS-compliant generator has the property that if you start it 
 with an unpredictable seed, it will produce unpredictable values.   

That brings up an interesting question... if you have a source of
unpredictable values in the first place, why use a CSPRNG? ;-)

Actually, I know I'm being snarky; I'm aware that they're handy for
stretching your random bits, if you don't have enough for the task.
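
(A sketch of what I mean by stretching, using SHAKE-256 as an
extendable-output function -- illustrative only, not a recommendation
of any particular construction:)

    import hashlib, os

    def stretch(seed: bytes, nbytes: int) -> bytes:
        # Expand a short unpredictable seed into an arbitrary amount of
        # keystream-like output; same seed, same output.
        return hashlib.shake_256(seed).digest(nbytes)

    block = stretch(os.urandom(32), 1 << 20)   # 1 MiB from a 32-byte seed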

I suppose some people feel they're also handy for whitening them, so
that if they're not entirely random, the structure isn't completely
obvious from the output alone, but I think that's probably a separate
property that needs to be evaluated independent of the others.

Last I checked Linux /dev/{u,}random uses SHA-1 hash over the pool,
which suggests they had this in mind.  However, it also makes using it
very slow for wiping disks or any other high-bandwidth tasks, at least
when compared to something like Yarrow.

I heard from a colleague that /dev/urandom exists on Android, but
/dev/random does not.  Our best guess is that it's the same as the
standard Linux /dev/urandom, but we're not really sure.  Presumably
they dumped /dev/random because there just weren't enough sources of
unpredictability on that platform.  I'd like to hear from anyone who
knows details.

Also, please do check out the links about RNGs on the aforementioned
page.  Seth Hardy's /dev/erandom looks very interesting, and has
languished in relative obscurity for nearly a decade.

I'll take the rest of my comments to this list:
http://lists.bitrot.info/mailman/listinfo/rng
-- 
It asked me for my race, so I wrote in human. -- The Beastie Boys
My emails do not have attachments; it's a digital signature that your mail
program doesn't understand. | http://www.subspacefield.org/~travis/ 
If you are a spammer, please email j...@subspacefield.org to get blacklisted.




Re: questions about RNGs and FIPS 140

2010-08-26 Thread Alexander Klimov
On Wed, 25 Aug 2010 travis+ml-cryptogra...@subspacefield.org wrote:
 No, because FIPS 140-2 does not allow TRNGs (what they call 
 non-deterministic).
 I couldn't tell if FIPS 140-1 allowed it, but FIPS 140-2 supersedes FIPS 
 140-1.
 I assume they don't allow non-determinism because it makes the system harder
 to test/certify, not because it's less secure.

I guess you misinterpret it. Nowhere does 140-2 disallow
TRNGs.  It says that nondeterministic RNGs should be used
*only* for IVs or to seed deterministic RNGs:

http://csrc.nist.gov/publications/fips/fips140-2/fips1402.pdf:

  Until such time as an Approved nondeterministic RNG standard
  exists, nondeterministic RNGs approved for use in classified
  applications may be used for key generation or to seed
  Approved deterministic RNGs used in key generation.
  Commercially available nondeterministic RNGs may be used for
  the purpose of generating seeds for Approved deterministic
  RNGs.  Nondeterministic RNGs shall comply with all applicable
  RNG requirements of this standard.

  An Approved RNG shall be used for the generation of
  cryptographic keys used by an Approved security function.  The
  output from a non-Approved RNG may be used 1) as input (e.g.,
  seed, and seed key) to an Approved deterministic RNG or 2) to
  generate initialization vectors (IVs) for Approved security
  function(s).  The seed and seed key shall not have the same
  value.

-- 
Regards,
ASK



Re: towards https everywhere and strict transport security

2010-08-26 Thread Florian Weimer
* James A. Donald:

 Every time you layer one communication protocol on top of another, you
 get another round trip.

In this generality, this is not true at all.  You're confusing
handshakes with protocol layering.  You can do the latter without the
former.  For example, DNS uses UDP without introducing additional
round trips because there is no explicit handshake.  Lack of handshake
generally makes error recovery quite complex once there are multiple
protocol versions you need to support, but handshaking is *not* a
consequence of layering.

-- 
Florian Weimer  fwei...@bfk.de
BFK edv-consulting GmbH   http://www.bfk.de/
Kriegsstraße 100  tel: +49-721-96201-1
D-76133 Karlsruhe fax: +49-721-96201-99



Re: questions about RNGs and FIPS 140

2010-08-26 Thread Perry E. Metzger
On Thu, 26 Aug 2010 08:14:26 -0700
travis+ml-cryptogra...@subspacefield.org wrote:
 On Thu, Aug 26, 2010 at 06:25:55AM -0400, Jerry Leichter wrote:
  [F]IPS doesn't tell you how to *seed* your deterministic
  generator.  In effect, a FIPS-compliant generator has the
  property that if you start it with an unpredictable seed, it will
  produce unpredictable values.

 That brings up an interesting question... if you have a source of
 unpredictable values in the first place, why use a CSPRNG? ;-)

The rationale is clear, but I'll explain it again.

Say you are deploying a small security device into the field.

It is trivial to validate that an AES or SHA256 implementation on the
device is working correctly and to generate a seed in the factory to
place on the device to give it an operational lifetime of good-enough
random numbers.

It is difficult to validate that a hardware RNG is working
correctly. How do you know the bits being put out aren't skewed
somehow by a manufacturing defect? How do you know that damage in the
field won't cause the RNG to become less random?

It is therefore both cheaper and far safer to use a deterministic
algorithm on the field deployable unit coupled with a high quality
seed from a source used only at the factory that you can spend time,
effort and money validating properly.
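
A sketch of what that might look like (Python as a stand-in; the
known-answer value is the published SHA-256 test vector for "abc", and
the HMAC-based expansion is merely illustrative, not any particular
approved DRBG):

    import hashlib, hmac

    # Known-answer test: prove the primitive on this unit behaves exactly
    # as specified before trusting it to stretch the factory seed.
    assert hashlib.sha256(b"abc").hexdigest() == (
        "ba7816bf8f01cfea414140de5dae2223"
        "b00361a396177a9cb410ff61f20015ad")

    def unit_random(factory_seed: bytes, n: int, counter: int = 0) -> bytes:
        # Deterministic expansion of the per-unit factory seed; easy to
        # test, and unpredictable to anyone who doesn't hold the seed.
        out = b""
        i = 0
        while len(out) < n:
            out += hmac.new(factory_seed, b"%d:%d" % (counter, i),
                            hashlib.sha256).digest()
            i += 1
        return out[:n]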

This same principle applies to things like virtual machines where it
is difficult to know that your hardware is giving you what you expect
but trivial to install a known-good seed at VM creation time.

I would have thought by now that this principle was widely understood.


Perry
-- 
Perry E. Metzger  pe...@piermont.com



Re: towards https everywhere and strict transport security (was: Has there been a change in US banking regulations recently?)

2010-08-26 Thread Anne Lynn Wheeler

On 08/26/2010 06:38 AM, d...@geer.org wrote:

While I am *not* arguing that point, per se, if having a
better solution would require, or would have required, no
more investment than the accumulated profits in the sale
of SSL domain name certs, we could have solved this by now.


the sale of SSL domain name certs had a profit motivation pretty much
unrelated to the overall costs to the infrastructure ... and so there was
an extremely strong champion.

simply enhancing DNS and doing real-time trusted public key distribution
thru a trusted domain name infrastructure ... was all cost with no champion
with strong profit motivation.

--
virtualization experience starting Jan1968, online at home since Mar1970



Re: towards https everywhere and strict transport security (was: Has there been a change in US banking regulations recently?)

2010-08-26 Thread Paul Wouters

On Thu, 26 Aug 2010, d...@geer.org wrote:


 as previously mentioned, somewhere back behind everything else ... there
 is strong financial motivation in the sale of the SSL domain name digital
 certificates.


While I am *not* arguing that point, per se, if having a
better solution would require, or would have required, no
more investment than the accumulated profits in the sale
of SSL domain name certs, we could have solved this by now.


Currently, the IETF keyassure WG is working on specifying how to use DNS(SEC)
to put the certs in the DNS and avoid the entire CA authentication step.

It seems to be deciding on certs (not raw keys/hashes) to simplify and re-use
the existing TLS-based implementations (e.g. HTTPS).

Paul



Re: towards https everywhere and strict transport security

2010-08-26 Thread Anne Lynn Wheeler

On 08/25/2010 10:40 PM, James A. Donald wrote:

This is inherent in the layering approach - inherent in our current crypto 
architecture.


one of the things I ran into at the (ISO chartered) ANSI X3S3.3 (responsible
for standards related to OSI level3 & level4) meetings with regard to
standardization of HSP (high speed protocol) ... was that ISO had a policy
that it wouldn't do standardization on things that violated the OSI model.

HSP violated the OSI model (and was turned down by X3S3.3) because it:

1) went directly from the level 4/5 interface to the MAC interface (bypassing
the OSI level 3/4 interface)

2) supported internetworking ... which doesn't exist in the OSI model ...
it would sit in a non-existing layer between level3 & level4

3) went directly to the MAC interface ... which doesn't exist in the OSI model ...
something that sits approx. in the middle of layer3 (above the link layer
and includes some amount of the network layer).

In the IETF meetings at the time of the original SSL/TLS ... my view was that
ipsec wasn't gaining traction because it required replacing parts of the
tcp/ip kernel stack (upgrading all the kernels in the world was much more
expensive then than it is now). That year two things side-stepped the
ipsec upfront kernel stack problem:

* SSL ... which could be deployed as part of the application w/o
requiring changes to existing infrastructure

* VPN ... introduced in the gateway session at the fall94 IETF meeting. This
was implemented in gateway routers w/o requiring any changes to existing
endpoints. My perception was that it upset the ipsec people until they started
referring to VPN as "lightweight ipsec" (but that opened things for
ipsec to be called "heavyweight ipsec"). There was a problem with two
classes of router/gateway vendors ... those with processors that
could handle the crypto load and those with processors that
couldn't handle the crypto load. One of the vendors that couldn't
handle the crypto load went into standards-stalling mode and also,
a month after the IETF meeting, announced a VPN product that
involved adding hardware link encryptors (which would then have
required dedicated links between the two locations, as opposed
to tunneling thru the internet).



I would contend that various reasons why we are where we are
... include solutions that have champions with profit motivation
as well as things like ease of introduction ... and issues with
being able to have incremental deployments with minimum disruption
to existing facilities (like browser application based solution
w/o requiring any changes to established DNS operation).

On the other hand ... when we were brought in to consult with
the small client/server startup that wanted to do payment
transactions (and had also invented SSL) ... I could mandate
multiple A-record support (basically alternative path mechanism)
for the webserver to payment gateway TCP/SSL connections. However,
it took another year to get their browser to support multiple-A
record (even when supplying them with example code from TAHOE
4.3 distribution) ... they started out telling me that multiple-A
record technique was too advanced.

An early example requirement was one of the first large
adopters/deployments for e-commerce server, advertized on national
sunday football and was expecting big e-commerce business during
sunday afternoon halftime. Their e-commerce webserver had
redundant links to two different ISPs ... however one
of the ISPs had habit of taking equipment down during the
day on sunday for maintenance (w/o multiple-A record support,
there was large probability that significant percentage of
browsers wouldn't be able to connect to the server on
some sunday halftime).

--
virtualization experience starting Jan1968, online at home since Mar1970



Re: questions about RNGs and FIPS 140

2010-08-26 Thread Eric Murray
On Thu, Aug 26, 2010 at 12:13:06PM -0400, Perry E. Metzger wrote:
 It is difficult to validate that a hardware RNG is working
 correctly. How do you know the bits being put out aren't skewed
 somehow by a manufacturing defect? How do you know that damage in the
 field won't cause the RNG to become less random?

FIPS 140-1 did allow non-deterministic HW RNGs.  If you used one
then you had to run a boot-time self-test which, while not even close to an
exhaustive RNG test, would hopefully detect a HW RNG that had failed.
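
(For reference, the simplest of those power-up checks was the monobit
test over a 20,000-bit sample; a Python sketch, with the acceptance
interval as I recall it from FIPS 140-1:)

    import os

    def monobit_ok(sample: bytes) -> bool:
        # FIPS 140-1 style power-up "monobit" check on a 20,000-bit sample:
        # the count of one bits must fall in an interval around 10,000.
        assert len(sample) == 2500            # 20,000 bits
        ones = sum(bin(b).count("1") for b in sample)
        return 9654 < ones < 10346

    print(monobit_ok(os.urandom(2500)))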


Eric



Re: questions about RNGs and FIPS 140

2010-08-26 Thread dj


 3) Is determinism a good idea?
 See Debian OpenSSL fiasco.  I have heard Nevada gaming commission
 regulations require non-determinism for obvious reasons.


The Nevada rules don't convincingly demand non-determinism. They do say
things that probably unintentionally exclude non-determinism.

4. The random number generator and random selection process must be
impervious to influences from outside the device, including, but not
limited to, electro-magnetic interference, electro-static interference,
and radio frequency interference. A gaming device must use
appropriate communication protocols to protect the random number generator
and random selection process from influence by associated equipment which
is conducting data communications with the gaming device.
(Adopted: 9/89. Amended: 11/05; 11/17/05.)


An impossible requirement for a TRNG based on physical processes. This
requirement pretty much demands determinism and in practice is untestable.

Some definitions..

23. “Randomness” is the observed unpredictability and absence of pattern
in a set of elements or events that have definite probabilities of
occurrence.

 and

20. “Random Number Generator” is a hardware, software, or combination
hardware and software device for generating number values that exhibit
characteristics of randomness.

Definitions that both a TRNG and a PRNG can meet. They don't get down to
the nitty-gritty of what the observer might know, like the internal state
of a PRNG, that would impact whether the data has 'observed
unpredictability'.

14.040 Minimum standards for gaming devices..
[]
2. Must use a random selection process to determine the game outcome of
each play of a game. The random selection process must meet 95 percent
confidence limits using a standard chi-squared test for goodness of fit.
(a) Each possible permutation or combination of game elements which
produce winning or losing game outcomes must be available for random
selection at the initiation of each play.
(b) For gaming devices that are representative of live gambling games, the
mathematical probability of a symbol or other element appearing in a game
outcome must be equal to the mathematical probability of that symbol or
element occurring in the live gambling game. For other gaming devices, the
mathematical probability of a symbol appearing in a position in any game
outcome must be constant. (c) The selection process must not produce
detectable patterns of game elements or detectable dependency upon any
previous game outcome, the amount wagered, or upon the style or method of
play.


Again, a PRNG would meet these requirements. The only specific test
proposed is the Chi-square GOF.
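
For what it's worth, the test itself is tiny; here is a sketch of a
chi-squared goodness-of-fit check over 16 equally likely bins (the
sample size and binning are arbitrary choices of mine):

    import os
    from collections import Counter

    def chi_square_uniform(sample: bytes) -> float:
        # Pearson chi-squared statistic against a uniform expectation over
        # 16 bins (the high nibble of each byte).
        counts = Counter(b >> 4 for b in sample)
        expected = len(sample) / 16
        return sum((counts.get(i, 0) - expected) ** 2 / expected
                   for i in range(16))

    stat = chi_square_uniform(os.urandom(100_000))
    # The 95th-percentile chi-squared critical value for 15 degrees of
    # freedom is about 25.0.
    print("chi-square = %.2f -> %s" % (stat, "pass" if stat < 25.0 else "fail"))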




Re: questions about RNGs and FIPS 140

2010-08-26 Thread Thierry Moreau

Nicolas Williams wrote:

On Thu, Aug 26, 2010 at 06:25:55AM -0400, Jerry Leichter wrote:

On Aug 25, 2010, at 4:37 PM,
travis+ml-cryptogra...@subspacefield.org wrote:

I also wanted to double-check these answers before I included them:

1) Is Linux /dev/{u,}random FIPS 140 certified?
No, because FIPS 140-2 does not allow TRNGs (what they call
non-deterministic).  I couldn't tell if FIPS 140-1 allowed it, but
FIPS 140-2 supersedes FIPS 140-1.  I assume they don't allow
non-determinism because it makes the system harder to test/certify,
not because it's less secure.

No one has figured out a way to certify, or even really describe in
a way that could be certified, a non-deterministic generator.


Would it be possible to combine a FIPS 140-2 PRNG with a TRNG such that
testing and certification could be feasible?

I'm thinking of a system where a deterministic (seeded) RNG and
non-deterministic RNG are used to generate a seed for a deterministic
RNG, which is then used for the remainder of the system's operation until
next boot or next re-seed.  That is, the seed for the run-time PRNG
would be a safe combination (say, XOR) of the outputs of a FIPS 140-2
PRNG and non-certifiable TNG.

factory_prng = new PRNG(factory_seed, sequence_number, datetime);
trng = new TRNG(device_path);
runtime_prng = new PRNG(factory_prng.gen(seed_size) ^ trng.gen(seed_size), 0, 0);

One could then test and certify the deterministic RNG and show that the
non-deterministic RNG cannot destroy the security of the system (thus
the non-deterministic RNG would not require testing, much less
certification).

To me it seems obvious that the TRNG in the above scheme cannot
negatively affect the security of the system (given a sufficiently large
seed anyways).

Nico


Such implementations may be *certified* but this mode of CSPRNG seeding 
is unlikely to get *NIST approved*. Cryptographic systems are 
*certified* with by-the-seat-of-the-pants CSPRNG seeding strategies (I 
guess) since crypto systems *are* being certified.


The tough part is to describe something with some hope of acquiring the 
*NIST approved* status at some point. The above proposal merely shifts 
the difficulty to the TRNG. Practical Use of Dice for Entropy Collection 
is unique because the unpredictable process (shuffling dice) has clear 
and convincing statistical properties.


- Thierry Moreau



Overclocking TLS/SSL (was: towards https everywhere and strict transport security)

2010-08-26 Thread =JeffH

Peter Gutmann pgut...@cs.auckland.ac.nz asked..

 Has anyone published any figures for this, CPU speed vs. network latency vs.
 delay for crypto and the network?

there's this (by Adam Langley)..

Overclocking SSL
http://www.imperialviolet.org/2010/06/25/overclocking-ssl.html

..but it doesn't appear to have (yet) the experimental results you're curious 
about.


=JeffH



Re: towards https everywhere and strict transport security (was: Has there been a change in US banking regulations recently?)

2010-08-26 Thread Chris Palmer
Richard Salz writes:

 A really knowledgeable net-head told me the other day that the problem
 with SSL/TLS is that it has too many round-trips.  In fact, the RTT costs
 are now more prohibitive than the crypto costs.  I was quite surprised to
 hear this; he was stunned to find it out.

Cryptographic operations are measured in cycles (i.e. nanoseconds now);
network operations are measured in milliseconds. That should not be a
stunning surprise.

What is neither stunning nor surprising, but continually sad, is that web
developers don't measure anything. Predictably, web app performance is
unnecessarily terrible.

I once asked some developers why they couldn't use HTTPS. "Performance!" was
the cry.

"Ok," I said. "What is your performance target, and by how much does HTTPS
make you miss it? Maybe we can optimize something so you can afford HTTPS
again."

"As fast as possible!!!" was the response.

When I pointed out that their app sent AJAX requests and responses that were
tens or even hundreds of KB every couple seconds, and that as a result their
app was barely usable outside their LAN, I was met with blank stares.

Did they use HTTP persistent connections, TLS session resumption, text
content compression, maximize HTTP caching, ...? I think you can guess. :)

Efforts like SPDY are the natural progression of organizations like Google
*WHO HAVE ALREADY OPTIMIZED EVERYTHING ELSE*. Until you've optimized the
content and application layers, worrying about the transport layers makes no
sense. A bloated app will still be slow when transported over SPDY.

Developers are already under the dangerous misapprehension that TLS is too
expensive. When they hear security experts and cryptographers mistakenly
agree, the idea will stick in their minds forever; we will have failed.

The problem comes from insufficiently broad understanding: the sysadmins
fiddle their kernel tuning knobs, the security people don't understand how
applications work, and the developers call malloc 5,000 times and perform
2,500 I/O ops just to print "Hello, World". The resulting software is
unsafe, slow, and too expensive.



Re: questions about RNGs and FIPS 140

2010-08-26 Thread Eric Murray
On Thu, Aug 26, 2010 at 11:21:35AM -0500, Nicolas Williams wrote:
 Would it be possible to combine a FIPS 140-2 PRNG with a TRNG such that
 testing and certification could be feasible?

Yes.  (assuming you mean FIPS certification).
Use the TRNG to seed the approved PRNG implementation.


 I'm thinking of a system where a deterministic (seeded) RNG and
 non-deterministic RNG are used to generate a seed for a deterministic
 RNG, which is then used for the remainder of the system's operation until
 next boot or next re-seed.  That is, the seed for the run-time PRNG
 would be a safe combination (say, XOR) of the outputs of a FIPS 140-2
 PRNG and non-certifiable TNG.

That won't pass FIPS.  It's reasonable from a security standpoint,
(although I would use a hash instead of an XOR), but it's not FIPS 140
certifiable.
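
Something like the following sketch -- the "approved PRNG" output here is
just a stand-in HMAC expansion of a placeholder factory seed, not an
actual approved implementation:

    import hashlib, hmac, os

    factory_seed = bytes(32)                  # placeholder per-unit factory seed
    prng_out = hmac.new(factory_seed, b"block-0", hashlib.sha256).digest()
    trng_out = os.urandom(32)                 # stand-in for a hardware TRNG read

    # Hash the concatenation rather than XORing: neither source can cancel
    # the other, and any structure in the TRNG output gets dispersed before
    # the value is used to seed the run-time generator.
    runtime_seed = hashlib.sha256(prng_out + trng_out).digest()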

Since FIPS can't reasonably test the TRNG output, it can't
be part of the output.  FIPS 140 is about guaranteeing a certain 
level of security, not maximizing security.

Eric
