Hi Brad,

I am glad that you came across all the threads on this topic. I agree with your evaluation of my comments from an earlier thread. I have gone through the security provider infrastructure multiple times since then and I can see where my comments were incorrect. The confusion mostly stemmed from mixing up SecureRandom.generateSeed() with SecureRandom.nextBytes() and their underlying implementations in different providers.

Now to answer inline...

On 12/17/2014 02:36 AM, Bradford Wetmore wrote:
Various comments for this thread from June/July/November/December.

Some of the comments I'm responding to may already be better understood than when they were originally written.

Peter wrote in response to a suggestion to use /dev/random:

Although the approach would cause some more classes to load, no
arbitrary providers should be initialized.

I think this is what you get when you set the
"java.util.secureRandomSeed" system property to "true". TLR uses
java.security.SecureRandom.getSeed(8) in this case.

For the "no arbitrary provider" part, that may not be quite correct. getSeed() creates/pulls from the default SecureRandom impl (i.e. new SecureRandom().generateSeed()), so it pulls in the Security Provider mechanism to determine the most preferred implementation, which could initialize additional higher-priority providers until an instance of SecureRandom is found. For example, ucrypto on Solaris doesn't have a SecureRandom impl, so it would then fall back to PKCS11.

As has been pointed out, the various Oracle SecureRandom implementations and their preference order are a twisty maze of passages, somewhat but not exactly alike. (With apologies to the "Colossal Cave.") The default preference order is:

Solaris (sparc/sparcv9/x86/x64)
    "PKCS11" - "SunPKCS11"
    "NativePRNG" - "Sun"
    "SHA1PRNG" - "Sun"
    "NativePRNGBlocking" - "Sun"
    "NativePRNGNonBlocking" - "Sun"

Linux (x86/x64)/MacOS
    "NativePRNG" - "Sun"
    "SHA1PRNG" - "Sun"
    "NativePRNGBlocking" - "Sun"
    "NativePRNGNonBlocking" - "Sun"

Windows (x86/x64)
    "SHA1PRNG" - "Sun"
    "Windows-PRNG" - "SunMSCAPI"

Here are a few impl details for the seeding calls.

PKCS11:
-------
generateSeed() routes to engineNextBytes(), which goes to the underlying PKCS11.

NativePRNG:  (Unix-only)
-----------
generateSeed() by default routes to /dev/random, unless the System Entropy Gathering Device (EGD) (set via a Security/System property) routes to something else. (FYI: nextBytes() uses /dev/urandom.)
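
(For reference, the EGD can be redirected either on the command line or via the Security property, e.g.:

    java -Djava.security.egd=file:/dev/urandom ...
    # or in the java.security configuration file:
    securerandom.source=file:/dev/urandom

Both only influence where seed material comes from, as noted further below.)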

NativePRNG$BLOCKING:  (Unix-only)
--------------------
generateSeed() always routes to /dev/random. (FYI: nextBytes() uses /dev/random.)

NativePRNG$NONBLOCKING:  (Unix-only)
-----------------------
generateSeed() always routes to /dev/urandom. (FYI: nextBytes() uses /dev/urandom.)


SHA1PRNG:
---------
generateSeed() depends on the value of the EGD:

default EGD is:  "/dev/random"

    Note: if string "/dev/urandom" is set, urandom is used instead.

    Unix:  generateSeed() routes to /dev/random
           (NativeSeedGenerator: pure java)

    Win:   generateSeed() routes to CryptGenRandom
           (NativeSeedGenerator + libjava native)

non-default (neither "/dev/random" nor "/dev/urandom"):  a URL is specified
    UNIX/Win:  routes to that URL

If both above fail:
    falls back to ThreadedSeedGenerator (Pure Java)


Windows-PRNG
------------
generateSeed() routes to mscapi.PRNG/CryptGenRandom
    Note uses libmscapi, not libjava.

That's right. Different defaults on different platforms and the possibility to configure preferred custom providers make the choice of implementation behind the SecureRandom.getSeed() static method (which uses the 1st SecureRandom provider in the providers list) quite diverse. The problem I see here is the different default behaviour depending on the platform. A user can choose which SecureRandom algorithm the application code uses by explicitly requesting it (with SecureRandom.getInstance(algorithm)), but she can't choose the algorithm when she decides to use SecureRandom for the initial seeding of TLR/SplittableRandom. By default on Unix you get a /dev/random kind of implementation for generateSeed(), which is blocking on Linux. On Linux one would probably want to use NativePRNGNonBlocking, which uses /dev/urandom for generateSeed().

One way to solve this is to extend the meaning of the java.util.secureRandomSeed system property - besides "true", which would choose the 1st provider, one could specify an algorithm name. For example, on Unix one would choose java.util.secureRandomSeed=NativePRNGNonBlocking to get a /dev/urandom based initial seed for TLR/SplittableRandom.
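
On the TLR/SplittableRandom side, the interpretation could be something along these lines (purely a sketch of the proposed semantics, nothing of this is implemented):

    String v = System.getProperty("java.util.secureRandomSeed");
    byte[] seedBytes = null;
    if (Boolean.parseBoolean(v)) {
        seedBytes = java.security.SecureRandom.getSeed(8);     // 1st provider, as today
    } else if (v != null) {
        try {
            // treat any other non-null value as an algorithm name
            seedBytes = java.security.SecureRandom.getInstance(v).generateSeed(8);
        } catch (java.security.NoSuchAlgorithmException e) {
            // fall back to the current nanoTime/currentTimeMillis seeding
        }
    }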

There's also the possibility to hard-code an explicit lookup for particular algorithms and use the highest-preference one that is available, with a fall-back to the 1st (default) provider. For the initial seeding of TLR/SplittableRandom, security is not important, but initialization latency is, so the preference order for choosing a SecureRandom algorithm is different for TLR/SplittableRandom seeding than for general application needs.
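
Roughly (an illustrative sketch only, not what webrev.03 does):

    java.security.SecureRandom sr;
    try {
        // prefer a non-blocking, cheap-to-initialize source
        sr = java.security.SecureRandom.getInstance("NativePRNGNonBlocking");
    } catch (java.security.NoSuchAlgorithmException e) {
        sr = new java.security.SecureRandom();  // 1st (default) provider as fall-back
    }
    byte[] seed = sr.generateSeed(8);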

Considering SecureRandom as an option for seeding TLR/SplittableRandom is a consequence of having no other good alternatives in the JDK for the initial seeding of non-secure PRNGs. The current mechanism, which only uses System.nanoTime() and System.currentTimeMillis(), might not be good enough in certain situations (like spawning lots of VMs at the same time). We need some more entropy.
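
For context, the current non-secure default boils down to something like this (a simplified sketch; mix64 here is a MurmurHash3-style finalizer similar to what TLR/SplittableRandom use internally):

    static long mix64(long z) {
        z = (z ^ (z >>> 33)) * 0xff51afd7ed558ccdL;
        z = (z ^ (z >>> 33)) * 0xc4ceb9fe1a85ec53L;
        return z ^ (z >>> 33);
    }

    long initialSeed = mix64(System.currentTimeMillis()) ^ mix64(System.nanoTime());

When many VMs start at nearly the same time, these two inputs carry little distinct entropy between them, which is exactly the scenario above.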



Peter wrote:
The most problematic one is the default on Windows platform (the
platform that does not have the "/dev/urandom" special file and would
be used as a fall-back by your proposal) -
sun.security.provider.SecureRandom. This one seeds itself by
constructing an instance of itself with the result returned from
SeedGenerator.getSystemEntropy() method. This method, among other
things, uses networking code to gather system entropy:

SeedGenerator.getSystemEntropy() (which includes the network interfaces) is only called when you need to seed the SHA1PRNG internal seeder in order to generate nextBytes(). generateSeed() doesn't trigger that internal seeder initialization.

That's right. My mistake in reading the code.


So as you can see above, SecureRandom.generateSeed(int) is really variable in what you might end up with, and how much cruft comes with it. For sure, the provider mechanism will be dragged in which is fairly substantial.


Peter wrote:
------------
So by default yes, plain NativePRNG (the default on
UNIX-es) is using /dev/urandom for nextBytes(), but this can be
changed by defining java.security.egd or securerandom.source system
property.

EGD really only affects where seed bytes are obtained from; IIRC, nextBytes() is not generally affected by this value. It does tweak which implementation is most preferred within the Sun provider, but the majority of the effect is on the NativePRNG/SHA1PRNG choice for generateSeed().

True.


The original suggestion back in June:

http://mail.openjdk.java.net/pipermail/core-libs-dev/2014-June/027389.html
http://cr.openjdk.java.net/~plevart/jdk9-dev/TLR_SR_SeedGenerator/webrev.01/

for directly calling into NativeSeedGenerator makes more sense if you want to avoid duplicating existing code and creating new native libraries as in the current proposal (webrev.03). Your data shows that this approach pulls in a much smaller subset of classes than using the full SecureRandom.getInstance().generateSeed() API. I've gone through the threads a couple of times now; somehow I've missed the rationale for why you're moving away from this (.01) for webrev.03.

There are several reasons:

- I got the impression that hacking on and publicly exposing the package-private SeedGenerator API is not a desirable approach from the viewpoint of further maintainability and inter-dependencies, especially now that modules are coming to JDK9.

- As said, the preference for TLR/SplittableRandom initial seeding is non-blocking behaviour and small initialization latency, not security. The default SeedGenerator is blocking on Linux, so I had to expose a special method just to return a temporary non-blocking instance which is not used by sun.security.provider internally, just by TLR/SplittableRandom.

- As sun.security.provider and TLR/SplittableRandom are using different types of SeedGenerator, I thought: why not make a separate internal API just for TLR/SplittableRandom use.

- There is a desire to access this functionality also from external user code (for example from stand-alone builds of the java.util.concurrent utilities). This could be provided if this API is moved to a globally exported package (see below).

A separate API also allowed me to use the ADVAPI32!RtlGenRandom function instead of Crypto API's CryptGenRandom on Windows, which further reduces initialization latency and footprint (I think this could be used for NativeSeedGenerator too, as other functions of the SUN provider don't use the Crypto API).


To the actual proposal:

http://cr.openjdk.java.net/~plevart/jdk9-dev/SystemRandom/webrev.03/

Overall, I'm ok with what's proposed. This is more straightforward to parse/understand than trying to adjust NativeSeedGenerator to create/call directly (e.g. UNIX: new NativeSeedGenerator("/dev/urandom") or Windows: new NativeSeedGenerator()). But I'd still like to understand why you moved away from this.

One concern is that you're duplicating native libraries in java.base, and it would be the third JDK library overall with this type of call. There's one in libjava (for java.base/WinCAPISeedGenerator for sun.security.provider.NativeSeedGenerator) and sunmscapi (for jdk.crypto.mscapi/SunMSCAPI/sun.security.mscapi). Would it work to tweak the WinCAPISeedGenerator so you don't have to create a new dll for java.base?

The SystemRandom JNI bindings for Windows are located in:

    java.base/windows/native/libjava/SystemRandomImpl_md.c

...so as I understand it, they are also part of libjava. No new DLL here. Is this going to change with modules? Is libjava going to split? In any case, the bindings could be included in some existing DLL in the module where they are deployed (most probably java.base).


What are the fallbacks for SystemRandomImpl if /dev/urandom or the rtlGenRandomFN/CryptGenRandom aren't available? Is that something you'll bake into TLR or will you do it here?

I think it's better to leave it to the consumers (TLR/SplittableRandom), as they know what's good enough for them. The API allows an arbitrary number of bytes to be generated, and I don't have an easy means of generating more than 8 "random" bytes just from System.nanoTime() and System.currentTimeMillis(), short of using SecureRandom as a fall-back.

There is also the problem of how to make this functionality accessible to different consumers located in different packages (java.util, java.util.concurrent) and somehow usable for external access as well. There is a desire to use this also from stand-alone builds of the java.util.concurrent utilities. That's why my initial approach for SystemRandom used a public API in java.util.

The approach used with sun.misc.Unsafe is probably not going to work for user code in JDK9 with modules, as sun.misc will not be globally exported. Are any non-J2SE packages going to be globally exported? I see jdk and jdk.net are already mentioned as such globally exported packages in modules.xml...


Having TLR seed the other clients is ok with me; the APIs make it clear that this isn't a strongly secure mechanism.

(Also, at some point we might reconsider our cowardice about not
improving the internal java.util.Random algorithm. j.u.Random is
much more commonly used, and does not fare well on quality tests.
On the other hand, the more that users instead choose to use
SR or TLR, the better.)

The main problem is code (not just JDK test code) that hardwires
expected Random.next* output under given seeds. Which might be
enough reason to leave it alone.
Do any CCC members have an opinion?

I'm *NOT* a CCC member (IANACCCM?). However, the current javadocs are very specific on several points. The big one for me: "If two instances of Random are created with the same seed...<deleted>...they will generate and return identical sequences of numbers". It doesn't specify whether these two instances are in the same VM or across VMs/vendors, but the wording "Java implementations must use all the algorithms shown here for the class Random, for the sake of absolute portability of Java code" makes me think it's the latter. That is, you should not change the algorithm. That's my $0.02.

The following is just one last thing to keep in mind. If you call generateSeed() on Linux (e.g. in the current code for TLR/SplittableRandom, java.util.secureRandomSeed leads to getSeed()), you could block. We still receive "hang" reports because apps/libraries insist on SHA1PRNG, which uses 20 bytes of /dev/random to seed the seeder. We especially see this on systems that simultaneously start multiple VMs and quickly drain the /dev/random pool. Another 8 bytes for TLR/SplittableRandom could have further impact.

Martin wrote:

https://bugs.openjdk.java.net/browse/JDK-8047769

If you've been following this bug, I've figured out why the NativePRNG$* classes are initializing and thus opening /dev/random and /dev/urandom. This definitely needs some adjustment.

Something like the following could be used in NativePRNG and URLSeedGenerator:

http://cr.openjdk.java.net/~plevart/misc/FileInputStreamPool/FileInputStreamPool.java
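
The gist of it is roughly the following (a stripped-down sketch; the linked class additionally uses weak references and wraps the streams so that a caller's close() doesn't close the shared descriptor):

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileNotFoundException;
    import java.io.UncheckedIOException;
    import java.util.concurrent.ConcurrentHashMap;

    final class FileInputStreamPoolSketch {
        // one shared open stream per special file, so NativePRNG and
        // URLSeedGenerator don't each open /dev/random and /dev/urandom again
        private static final ConcurrentHashMap<String, FileInputStream> POOL =
            new ConcurrentHashMap<>();

        static FileInputStream getInputStream(File file) {
            return POOL.computeIfAbsent(file.getPath(), path -> {
                try {
                    return new FileInputStream(path);
                } catch (FileNotFoundException e) {
                    throw new UncheckedIOException(e);
                }
            });
        }
    }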


Regards, Peter


Brad
