> By caching default constructors used in
> `java.security.Provider::newInstanceUtil` in a `ClassValue`, we can reduce
> the overhead of allocating instances in a variety of places, e.g.,
> `MessageDigest::getInstance`, without compromising thread-safety or security.
>
> On the provided microbenchmark `MessageDigest.getInstance(digesterName)`
> improves substantially for any `digesterName` - around -90ns/op and -120B/op:
>
> Benchmark                                                       (digesterName)  Mode  Cnt    Score    Error  Units
> GetMessageDigest.getInstance                                                md5  avgt   30  293.929 ± 11.294  ns/op
> GetMessageDigest.getInstance:·gc.alloc.rate.norm                            md5  avgt   30  424.028 ±  0.003   B/op
> GetMessageDigest.getInstance                                              SHA-1  avgt   30  322.928 ± 16.503  ns/op
> GetMessageDigest.getInstance:·gc.alloc.rate.norm                          SHA-1  avgt   30  688.039 ±  0.003   B/op
> GetMessageDigest.getInstance                                            SHA-256  avgt   30  338.140 ± 13.902  ns/op
> GetMessageDigest.getInstance:·gc.alloc.rate.norm                        SHA-256  avgt   30  640.037 ±  0.002   B/op
> GetMessageDigest.getInstanceWithProvider                                    md5  avgt   30  312.066 ± 12.805  ns/op
> GetMessageDigest.getInstanceWithProvider:·gc.alloc.rate.norm                md5  avgt   30  424.029 ±  0.003   B/op
> GetMessageDigest.getInstanceWithProvider                                  SHA-1  avgt   30  345.777 ± 16.669  ns/op
> GetMessageDigest.getInstanceWithProvider:·gc.alloc.rate.norm              SHA-1  avgt   30  688.040 ±  0.003   B/op
> GetMessageDigest.getInstanceWithProvider                                SHA-256  avgt   30  371.134 ± 18.485  ns/op
> GetMessageDigest.getInstanceWithProvider:·gc.alloc.rate.norm            SHA-256  avgt   30  640.039 ±  0.004   B/op
>
> Patch:
>
> Benchmark                                                       (digesterName)  Mode  Cnt    Score    Error  Units
> GetMessageDigest.getInstance                                                md5  avgt   30  210.629 ±  6.598  ns/op
> GetMessageDigest.getInstance:·gc.alloc.rate.norm                            md5  avgt   30  304.021 ±  0.002   B/op
> GetMessageDigest.getInstance                                              SHA-1  avgt   30  229.161 ±  8.158  ns/op
> GetMessageDigest.getInstance:·gc.alloc.rate.norm                          SHA-1  avgt   30  568.030 ±  0.002   B/op
> GetMessageDigest.getInstance                                            SHA-256  avgt   30  260.013 ± 15.032  ns/op
> GetMessageDigest.getInstance:·gc.alloc.rate.norm                        SHA-256  avgt   30  520.030 ±  0.002   B/op
> GetMessageDigest.getInstanceWithProvider                                    md5  avgt   30  231.928 ± 10.455  ns/op
> GetMessageDigest.getInstanceWithProvider:·gc.alloc.rate.norm                md5  avgt   30  304.020 ±  0.002   B/op
> GetMessageDigest.getInstanceWithProvider                                  SHA-1  avgt   30  247.178 ± 11.209  ns/op
> GetMessageDigest.getInstanceWithProvider:·gc.alloc.rate.norm              SHA-1  avgt   30  568.029 ±  0.002   B/op
> GetMessageDigest.getInstanceWithProvider                                SHA-256  avgt   30  265.625 ± 10.465  ns/op
> GetMessageDigest.getInstanceWithProvider:·gc.alloc.rate.norm            SHA-256  avgt   30  520.030 ±  0.003   B/op
>
> See:
> https://cl4es.github.io/2021/01/04/Investigating-MD5-Overheads.html#reflection-overheads
> for context.
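For readers unfamiliar with the pattern referenced in the quoted description, below is a minimal sketch of caching default constructors in a `ClassValue`. The class and method names (`ConstructorCache`, `newDefaultInstance`) are illustrative only and are not the identifiers used in the actual patch.

```java
import java.lang.reflect.Constructor;
import java.lang.reflect.InvocationTargetException;

// Illustrative sketch only; not the code from the patch.
final class ConstructorCache {

    // ClassValue lazily computes and caches one value per Class, with
    // thread-safe publication handled by the JDK.
    private static final ClassValue<Constructor<?>> DEFAULT_CTORS =
            new ClassValue<>() {
                @Override
                protected Constructor<?> computeValue(Class<?> type) {
                    try {
                        return type.getConstructor();
                    } catch (NoSuchMethodException e) {
                        throw new IllegalArgumentException(
                                type + " has no public no-arg constructor", e);
                    }
                }
            };

    // Repeated calls for the same class skip the reflective constructor
    // lookup and only pay for Constructor::newInstance.
    static Object newDefaultInstance(Class<?> type)
            throws InstantiationException, IllegalAccessException,
                   InvocationTargetException {
        return DEFAULT_CTORS.get(type).newInstance();
    }

    private ConstructorCache() {}
}
```

The appeal of `ClassValue` here is that it gives per-class, lazily computed, thread-safe caching without any explicit locking in the caller.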
Claes Redestad has updated the pull request with a new target base due to a merge or a rebase. The incremental webrev excludes the unrelated changes brought in by the merge/rebase. The pull request contains 10 additional commits since the last revision:

 - Remove constant factor in hashCode
 - Merge branch 'master' into provider_concache
 - Address review comments from @valeriep
 - Add cloneInstance baseline micro
 - Cache constructor in the Provider.Service instead of a ClassValue. Fix inefficient synchronization in ProviderConfig. Store EngineDescriptor in Service instead of looking it up every time.
 - NSPE
 - Add provider-based micro, reduce digesters checked by default
 - Add GetMessageDigests micro
 - Missing static
 - 8259065: java.security.Provider should cache default constructors

-------------

Changes:
 - all: https://git.openjdk.java.net/jdk/pull/1933/files
 - new: https://git.openjdk.java.net/jdk/pull/1933/files/11b004c8..5a4dbe43

Webrevs:
 - full: https://webrevs.openjdk.java.net/?repo=jdk&pr=1933&range=04
 - incr: https://webrevs.openjdk.java.net/?repo=jdk&pr=1933&range=03-04

  Stats: 1087 lines in 190 files changed: 584 ins; 201 del; 302 mod
  Patch: https://git.openjdk.java.net/jdk/pull/1933.diff
  Fetch: git fetch https://git.openjdk.java.net/jdk pull/1933/head:pull/1933

PR: https://git.openjdk.java.net/jdk/pull/1933
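The commit list above mentions moving the cache from a `ClassValue` into `Provider.Service` itself. A minimal sketch of that per-service caching idea follows; `CachedService` and its fields are hypothetical simplifications and not the actual `Provider.Service` code, which also handles constructor parameters, engine descriptions and security checks.

```java
import java.lang.reflect.Constructor;

// Hypothetical, simplified stand-in for a service entry that caches its
// resolved default constructor.
final class CachedService {
    private final String className;
    // volatile so a constructor resolved by one thread is safely visible to
    // others; the race is benign because every thread computes the same value.
    private volatile Constructor<?> defaultCtor;

    CachedService(String className) {
        this.className = className;
    }

    Object newInstance() throws ReflectiveOperationException {
        Constructor<?> ctor = defaultCtor;
        if (ctor == null) {
            // First caller resolves the implementation class and its public
            // no-arg constructor; later callers reuse the cached Constructor.
            ctor = Class.forName(className).getConstructor();
            defaultCtor = ctor;
        }
        return ctor.newInstance();
    }
}
```

Compared with a `ClassValue`, a plain field on the service object avoids an extra per-class map lookup when the service instance is already in hand.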
