markrmiller commented on a change in pull request #214:
URL: https://github.com/apache/solr/pull/214#discussion_r672482454
##########
File path: solr/test-framework/src/java/org/apache/solr/util/RandomizeSSL.java
##########
@@ -104,10 +105,10 @@ public SSLRandomizer(double ssl, double clientAuth, String debug) {
   public SSLTestConfig createSSLTestConfig() {
     // even if we know SSL is disabled, always consume the same amount of randomness
     // that way all other test behavior should be consistent even if a user adds/removes @SuppressSSL
-
-    final boolean useSSL = TestUtil.nextInt(LuceneTestCase.random(), 0, 999) <
+    Random random = new Random();
Review comment:
Because we use randoms differently in the benchmark code and don't want to be tied to the randomized-testing randoms. This change was not intended to stick here, though; it was left over from early workarounds.

I don't really want the random enforcement / support from the test framework, for a couple of reasons, but this is simply something that was not removed. The problem is that if you use the mini cluster and Jetty with jetty.testMode=true and do not launch things via the Carrot2 randomized runner, it will *sometimes* detect that a thread has no randomized test context and fail you, even though we are not using the randomized runner or JUnit at all. Currently, I work around needing this workaround by not using the jetty.testMode sys prop path and adding another sys prop hack for now, because starting the mini cluster is still a bit too tied into the test framework and its Carrot2 random requirements.

Java 7 and up has essentially made Random obsolete, so some separation is needed regardless, given that we don't use the Carrot2 JUnit runners for benchmarks. It is also preferable, and faster, to avoid Random entirely in favor of ThreadLocalRandom and SplittableRandom, so I try to use those in the benchmarks.
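As a standalone illustration of the SplittableRandom point above (a sketch, not code from this PR): a SplittableRandom can be seeded once for reproducibility and split per worker, avoiding both java.util.Random's shared-state contention and any test-framework context requirement on the thread.

```java
import java.util.SplittableRandom;

public class SplitRandomSketch {
    public static void main(String[] args) {
        // Seed once so a run is reproducible without any randomized-testing
        // context attached to the thread. The constant here is arbitrary;
        // in practice the seed might come from a sys prop.
        long seed = 42L;
        SplittableRandom root = new SplittableRandom(seed);

        // split() hands each worker an independent, contention-free generator.
        SplittableRandom worker = root.split();
        System.out.println(worker.nextInt(1000));

        // The same seed always reproduces the same sequence.
        System.out.println(new SplittableRandom(seed).nextLong()
            == new SplittableRandom(seed).nextLong()); // true
    }
}
```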
##########
File path: solr/test-framework/src/java/org/apache/solr/cloud/SolrCloudTestCase.java
##########
@@ -124,8 +124,18 @@ public Builder(int nodeCount, Path baseDir) {
       // By default the MiniSolrCloudCluster being built will randomly (seed based) decide which collection API strategy
       // to use (distributed or Overseer based) and which cluster update strategy to use (distributed if collection API
       // is distributed, but Overseer based or distributed randomly chosen if Collection API is Overseer based)
-      this.useDistributedCollectionConfigSetExecution = LuceneTestCase.random().nextInt(2) == 0;
-      this.useDistributedClusterStateUpdate = useDistributedCollectionConfigSetExecution || LuceneTestCase.random().nextInt(2) == 0;
+
+      Boolean skipDistRandomSetup = Boolean.getBoolean("solr.tests.skipDistributedConfigAndClusterStateRandomSetup");
Review comment:
This is essentially a get-it-working hack at the moment. Ideally, it would be simpler to use MiniSolrCloudCluster without having to worry about this SolrCloudTestCase / Carrot2 random tie-in. You really want a consistent experience when using the class itself; it's the tests running things that should enable the randomization.
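For context on the sys prop gate in the hunk above (a standalone sketch, not PR code): Boolean.getBoolean reads a system property and returns false when it is unset, so the randomized setup stays the default unless a plain (non-test-framework) caller opts out explicitly.

```java
public class SysPropGateSketch {
    public static void main(String[] args) {
        String prop = "solr.tests.skipDistributedConfigAndClusterStateRandomSetup";

        // Unset: Boolean.getBoolean returns false, so the randomized
        // collection API / cluster state setup would run by default.
        System.out.println(Boolean.getBoolean(prop)); // false

        // A non-test caller (or a -D flag on the command line) can opt out.
        System.setProperty(prop, "true");
        System.out.println(Boolean.getBoolean(prop)); // true
    }
}
```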
##########
File path: solr/test-framework/build.gradle
##########
@@ -19,10 +19,131 @@ apply plugin: 'java-library'
description = 'Solr Test Framework'
+sourceSets {
+ // Note that just declaring this sourceset creates two configurations.
+ jmh {
+ java.srcDirs = ['src/jmh']
+ }
+}
+
+compileJmhJava {
+ doFirst {
+ options.compilerArgs.remove("-Werror")
+ options.compilerArgs.remove("-proc:none")
+ }
+}
+
+forbiddenApisJmh {
+ bundledSignatures += [
+ 'jdk-unsafe',
+ 'jdk-deprecated',
+ 'jdk-non-portable',
+ ]
+
+ suppressAnnotations += [
+ "**.SuppressForbidden"
+ ]
+}
+
+
+task jmh(type: JavaExec) {
+ dependsOn("jmhClasses")
+ group = "benchmark"
+ main = "org.openjdk.jmh.Main"
+ classpath = sourceSets.jmh.compileClasspath + sourceSets.jmh.runtimeClasspath
+
+ standardOutput(System.out)
+ errorOutput(System.err)
+
+ def include = project.properties.get('include');
+ def exclude = project.properties.get('exclude');
+ def format = project.properties.get('format', 'json');
+ def profilers = project.properties.get('profilers');
+ def jvmArgs = project.properties.get('jvmArgs')
+ def verify = project.properties.get('verify');
+
+ def resultFile = file("build/reports/jmh/result.${format}")
+
+ if (include) {
+ args include
+ }
+ if (exclude) {
+ args '-e', exclude
+ }
+ if (verify != null) {
+    // execute benchmarks with the minimum amount of execution (only to check if they are working)
+    println ">> Running in verify mode"
+ args '-f', 1
+ args '-wi', 1
+ args '-i', 1
+ }
+ args '-foe', 'true' //fail-on-error
+ args '-v', 'NORMAL' //verbosity [SILENT, NORMAL, EXTRA]
+ if (profilers) {
+ profilers.split(',').each {
+ args '-prof', it
+ }
+ }
+
+ args '-jvmArgsPrepend', '-Xms4g'
+ args '-jvmArgsPrepend', '-Djmh.separateClassLoader=true'
+ args '-jvmArgsPrepend', '-Dlog4j2.is.webapp=false'
+ args '-jvmArgsPrepend', '-Dlog4j2.garbagefreeThreadContextMap=true'
+ args '-jvmArgsPrepend', '-Dlog4j2.enableDirectEncoders=true'
+ args '-jvmArgsPrepend', '-Dlog4j2.enable.threadlocals=true'
+// args '-jvmArgsPrepend', '-XX:ConcGCThreads=2'
+// args '-jvmArgsPrepend', '-XX:ParallelGCThreads=3'
+// args '-jvmArgsPrepend', '-XX:+UseG1GC'
+ args '-jvmArgsPrepend', '-Djetty.insecurerandom=1'
+ args '-jvmArgsPrepend', '-Djava.security.egd=file:/dev/./urandom'
+ args '-jvmArgsPrepend', '-XX:-UseBiasedLocking'
+ args '-jvmArgsPrepend', '-XX:+PerfDisableSharedMem'
+ args '-jvmArgsPrepend', '-XX:+ParallelRefProcEnabled'
+// args '-jvmArgsPrepend', '-XX:MaxGCPauseMillis=250'
+ args '-jvmArgsPrepend', '-Dsolr.log.dir='
+
+ if (jvmArgs) {
+ for (jvmArg in jvmArgs.split(' ')) {
+ args '-jvmArgsPrepend', jvmArg
+ }
+ }
+
+ args '-rf', format
+ args '-rff', resultFile
+
+ doFirst {
+ // println "\nClasspath:" + jmh.classpath.toList()
+ println "\nExecuting JMH with: $args \n"
+
+ args '-jvmArgsPrepend', '-Djava.class.path='+ toPath(getClasspath().files)
+ resultFile.parentFile.mkdirs()
+ }
+
+ doLast {
+ // jvmArgs "java.class.path", toPath(jmh.classpath)
+ }
+
+}
+
+
+private String toPath(Set<File> classpathUnderTest) {
Review comment:
I still need to work out whether this is still needed. It was needed because, when running via Gradle and using JMH's fork option, the classpath was not propagated. I have since simplified the integration (realizing I was jumping through hoops because our build was putting in -proc:none for all Java compile tasks), so I have to double check whether this is still necessary.
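As a usage sketch for the task above (the property names follow the build file; the module path and exact invocation are assumptions, not documented PR usage):

```shell
# Run benchmarks matching a pattern with a profiler attached;
# results land in build/reports/jmh/result.json by default.
./gradlew -p solr/test-framework jmh -Pinclude=DocMaker -Pprofilers=gc

# Smoke-check all benchmarks with one fork / one warmup / one iteration.
./gradlew -p solr/test-framework jmh -Pverify=true
```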
##########
File path: solr/core/src/java/org/apache/solr/client/solrj/embedded/JettySolrRunner.java
##########
@@ -313,22 +313,23 @@ private void init(int port) {
         if (config.onlyHttp1) {
           connector = new ServerConnector(server, new HttpConnectionFactory(configuration));
         } else {
-          connector = new ServerConnector(server, new HttpConnectionFactory(configuration),
-              new HTTP2CServerConnectionFactory(configuration));
+          connector = new ServerConnector(server, new HttpConnectionFactory(configuration), new HTTP2CServerConnectionFactory(configuration));
         }
       }
       connector.setReuseAddress(true);
       connector.setPort(port);
       connector.setHost("127.0.0.1");
       connector.setIdleTimeout(THREAD_POOL_MAX_IDLE_TIME_MS);
-      connector.setStopTimeout(0);
+
       server.setConnectors(new Connector[] {connector});
       server.setSessionIdManager(new DefaultSessionIdManager(server, new Random()));
     } else {
       HttpConfiguration configuration = new HttpConfiguration();
-      ServerConnector connector = new ServerConnector(server, new HttpConnectionFactory(configuration));
+      ServerConnector connector = new ServerConnector(server, new HttpConnectionFactory(configuration), new HTTP2CServerConnectionFactory(configuration));
Review comment:
Because it currently does not work with HTTP/2, though I have spun these fixes out into SOLR-15547.
##########
File path: solr/test-framework/src/jmh/org/apache/solr/bench/DocMakerRamGen.java
##########
@@ -0,0 +1,269 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.bench;
+
+import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.commons.lang3.Validate;
+import org.apache.lucene.util.TestUtil;
+import org.apache.solr.common.SolrInputDocument;
+
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.Map;
+import java.util.Objects;
+import java.util.Queue;
+import java.util.Random;
+import java.util.SplittableRandom;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicInteger;
+
+public class DocMakerRamGen {
+
+  private final static Map<String,Queue<SolrInputDocument>> CACHE = new ConcurrentHashMap<>();
+
+ private Queue<SolrInputDocument> docs = new ConcurrentLinkedQueue<>();
+
+ private final Map<String, FieldDef> fields = new HashMap<>();
+
+ private static final AtomicInteger ID = new AtomicInteger();
+ private final boolean cacheResults;
+
+ private ExecutorService executorService;
+
+ private SplittableRandom threadRandom;
+
+ public DocMakerRamGen() {
+ this(true);
+ }
+
+ public DocMakerRamGen(boolean cacheResults) {
+ this.cacheResults = cacheResults;
+
+ Long seed = Long.getLong("threadLocalRandomSeed");
+ if (seed == null) {
+      System.setProperty("threadLocalRandomSeed", Long.toString(new Random().nextLong()));
+ }
+
+ threadRandom = new SplittableRandom(Long.getLong("threadLocalRandomSeed"));
+ }
+
+ public void preGenerateDocs(int numDocs) throws InterruptedException {
+    executorService = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors() + 1);
+ if (cacheResults) {
+ docs = CACHE.compute(Integer.toString(hashCode()), (key, value) -> {
+ if (value == null) {
+ for (int i = 0; i < numDocs; i++) {
Review comment:
There is likely a fair bit still to do on cleaning up / finalizing this doc maker. I'm still pulling a bit from elsewhere into it, and then I'll do some cleanup in the next update.
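The constructor shown in the hunk above pins a process-wide seed through a system property; the pattern is roughly this (a standalone sketch, with the property name taken from the code):

```java
import java.util.Random;
import java.util.SplittableRandom;

public class SeedBootstrapSketch {
    public static void main(String[] args) {
        // First caller publishes a random seed; later callers reuse it, so every
        // generator in the process derives from one reproducible value that can
        // also be pinned externally with -DthreadLocalRandomSeed=<long>.
        if (Long.getLong("threadLocalRandomSeed") == null) {
            System.setProperty("threadLocalRandomSeed", Long.toString(new Random().nextLong()));
        }
        long seed = Long.getLong("threadLocalRandomSeed");

        // Two generators built from the published seed agree.
        SplittableRandom r1 = new SplittableRandom(seed);
        SplittableRandom r2 = new SplittableRandom(seed);
        System.out.println(r1.nextLong() == r2.nextLong()); // true
    }
}
```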
##########
File path: solr/test-framework/src/jmh/org/apache/solr/bench/DocMakerRamGen.java
##########
@@ -0,0 +1,269 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.bench;
+
+import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.commons.lang3.Validate;
+import org.apache.lucene.util.TestUtil;
+import org.apache.solr.common.SolrInputDocument;
+
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.Map;
+import java.util.Objects;
+import java.util.Queue;
+import java.util.Random;
+import java.util.SplittableRandom;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicInteger;
+
+public class DocMakerRamGen {
+
+  private final static Map<String,Queue<SolrInputDocument>> CACHE = new ConcurrentHashMap<>();
+
+ private Queue<SolrInputDocument> docs = new ConcurrentLinkedQueue<>();
+
+ private final Map<String, FieldDef> fields = new HashMap<>();
+
+ private static final AtomicInteger ID = new AtomicInteger();
+ private final boolean cacheResults;
+
+ private ExecutorService executorService;
+
+ private SplittableRandom threadRandom;
+
+ public DocMakerRamGen() {
+ this(true);
+ }
+
+ public DocMakerRamGen(boolean cacheResults) {
+ this.cacheResults = cacheResults;
+
+ Long seed = Long.getLong("threadLocalRandomSeed");
+ if (seed == null) {
+      System.setProperty("threadLocalRandomSeed", Long.toString(new Random().nextLong()));
+ }
+
+ threadRandom = new SplittableRandom(Long.getLong("threadLocalRandomSeed"));
+ }
+
+ public void preGenerateDocs(int numDocs) throws InterruptedException {
+    executorService = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors() + 1);
+ if (cacheResults) {
+ docs = CACHE.compute(Integer.toString(hashCode()), (key, value) -> {
+ if (value == null) {
+ for (int i = 0; i < numDocs; i++) {
+ executorService.submit(() -> {
+ SolrInputDocument doc = getDocument();
+ docs.add(doc);
+ });
+ }
+ return docs;
+ }
+ for (int i = value.size(); i < numDocs; i++) {
+ executorService.submit(() -> {
+ SolrInputDocument doc = getDocument();
+ value.add(doc);
+ });
+ }
+ return value;
+ });
+ } else {
+ for (int i = 0; i < numDocs; i++) {
+ executorService.submit(() -> {
+ SolrInputDocument doc = getDocument();
+ docs.add(doc);
+ });
+ }
+ }
+
+ executorService.shutdown();
+ boolean result = executorService.awaitTermination(10, TimeUnit.MINUTES);
+ if (!result) {
+ throw new RuntimeException("Timeout waiting for doc adds to finish");
+ }
+ }
+
+ public Iterator<SolrInputDocument> getGeneratedDocsIterator() {
+ return docs.iterator();
+ }
+
+ public SolrInputDocument getDocument() {
+ SolrInputDocument doc = new SolrInputDocument();
+
+ for (Map.Entry<String,FieldDef> entry : fields.entrySet()) {
+ doc.addField(entry.getKey(), getValue(entry.getValue()));
+ }
+
+ return doc;
+ }
+
+ public void addField(String name, FieldDef.FieldDefBuilder builder) {
+ fields.put(name, builder.build());
+ }
+
+  private Object getValue(FieldDef value) {
+    switch (value.getContent()) {
+      case UNIQUE_INT:
+        return ID.incrementAndGet();
+      case INTEGER:
+        if (value.getMaxCardinality() > 0) {
+          long start = value.getCardinalityStart();
+          long seed = nextLong(start, start + value.getMaxCardinality(), threadRandom);
+          SplittableRandom random = new SplittableRandom(seed);
+          return nextInt(0, Integer.MAX_VALUE, random);
+        }
+
+        return ThreadLocalRandom.current().nextInt(Integer.MAX_VALUE);
+      case ALPHEBETIC:
+        if (value.getNumTokens() > 1) {
+          StringBuilder sb = new StringBuilder(value.getNumTokens() * (Math.max(value.getLength(),value.getMaxLength()) + 1));
+          for (int i = 0; i < value.getNumTokens(); i++) {
+            if (i > 0) {
+              sb.append(' ');
+            }
+            sb.append(getAlphabeticString(value));
+          }
+          return sb.toString();
+        }
+        return getAlphabeticString(value);
+      case UNICODE:
+        if (value.getNumTokens() > 1) {
+          StringBuilder sb = new StringBuilder(value.getNumTokens() * (Math.max(value.getLength(),value.getMaxLength()) + 1));
+          for (int i = 0; i < value.getNumTokens(); i++) {
+            if (i > 0) {
+              sb.append(' ');
+            }
+            sb.append(getUnicodeString(value));
+          }
+          return sb.toString();
+        }
+        return getUnicodeString(value);
+      default:
+        throw new UnsupportedOperationException("Unsupported content type type=" + value.getContent());
+    }
+
+  }
+
+  private String getUnicodeString(FieldDef value) {
+    if (value.getMaxCardinality() > 0) {
+      long start = value.getCardinalityStart();
+      long seed = nextLong(start, start + value.getMaxCardinality(), threadRandom);
+      SplittableRandom random = new SplittableRandom(seed);
+      if (value.getLength() > -1) {
+        return TestUtil.randomRealisticUnicodeString(new Random(seed), value.getLength(), value.getLength());
+      } else {
+        return TestUtil.randomRealisticUnicodeString(new Random(seed), 1, value.getMaxLength());
+      }
+    }
+
+    if (value.getLength() > -1) {
+      return TestUtil.randomRealisticUnicodeString(ThreadLocalRandom.current(), value.getLength(), value.getLength());
+    } else {
+      return TestUtil.randomRealisticUnicodeString(ThreadLocalRandom.current(), 1, value.getMaxLength());
+    }
+  }
+
+  private String getAlphabeticString(FieldDef value) {
+    if (value.getMaxCardinality() > 0) {
+      long start = value.getCardinalityStart();
+      long seed = nextLong(start, start + value.getMaxCardinality(), threadRandom);
+      SplittableRandom random = new SplittableRandom(seed);
+      if (value.getLength() > -1) {
+        return RandomStringUtils.random(nextInt(value.getLength(), value.getLength(), random), 0, 0, true, false, null, new Random(seed));
+      } else {
+        return RandomStringUtils.random(nextInt(1, value.getMaxLength(), random), 0, 0, true, false, null, new Random(seed));
+      }
+    }
+
+    SplittableRandom threadRandom = new SplittableRandom(Long.getLong("threadLocalRandomSeed", ThreadLocalRandom.current().nextLong()));
+    if (value.getLength() > -1) {
+      return RandomStringUtils.random(nextInt(value.getLength(), value.getLength(), threadRandom), 0, 0, true, false, null, ThreadLocalRandom.current());
+    } else {
+      return RandomStringUtils.random(nextInt(1, value.getMaxLength(), threadRandom), 0, 0, true, false, null, ThreadLocalRandom.current());
+    }
+  }
+
+ public enum Content {
+ UNICODE, ALPHEBETIC, INTEGER, UNIQUE_INT
+ }
+
+ @Override
+ public boolean equals(Object o) {
+ if (this == o) return true;
+ if (o == null || getClass() != o.getClass()) return false;
+ DocMakerRamGen that = (DocMakerRamGen) o;
+ return fields.equals(that.fields);
+ }
+
+ @Override
+ public int hashCode() {
+ return Objects.hash(fields);
+ }
+
+  public static int nextInt(final int startInclusive, final int endExclusive, SplittableRandom random) {
+    Validate.isTrue(endExclusive >= startInclusive,
+        "Start value must be smaller or equal to end value.");
+    Validate.isTrue(startInclusive >= 0, "Both range values must be non-negative.");
+
+    if (startInclusive == endExclusive) {
+      return startInclusive;
+    }
+
+    return startInclusive + random.nextInt(endExclusive - startInclusive);
+  }
+
+  public static long nextLong(final long startInclusive, final long endExclusive, SplittableRandom random) {
+    Validate.isTrue(endExclusive >= startInclusive,
+        "Start value must be smaller or equal to end value.");
+    Validate.isTrue(startInclusive >= 0, "Both range values must be non-negative.");
+
+    if (startInclusive == endExclusive) {
+      return startInclusive;
+    }
+
+    return startInclusive + random.nextLong(endExclusive - startInclusive);
+  }
+
+ public static void main(String[] args) {
Review comment:
Still in draft form. This should end up as a unit test for the JMH code, but I have not set that ability up in Gradle quite yet.
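One non-obvious technique in getValue / getAlphabeticString above is the cardinality bound: draw a seed from a window of maxCardinality possible values, then derive the field value deterministically from that seed, so repeated draws can only ever produce maxCardinality distinct values. A standalone sketch of that idea (constants here are illustrative):

```java
import java.util.HashSet;
import java.util.Set;
import java.util.SplittableRandom;

public class CardinalitySketch {
    public static void main(String[] args) {
        SplittableRandom threadRandom = new SplittableRandom(42L);
        int maxCardinality = 8;
        long start = 1000L;

        Set<Integer> distinct = new HashSet<>();
        for (int i = 0; i < 10_000; i++) {
            // Draw a seed from a window of maxCardinality possible seeds...
            long seed = start + threadRandom.nextLong(maxCardinality);
            // ...then derive the value deterministically from that seed, so at
            // most maxCardinality distinct values can ever be produced.
            distinct.add(new SplittableRandom(seed).nextInt(Integer.MAX_VALUE));
        }
        System.out.println(distinct.size() <= maxCardinality); // true
    }
}
```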
##########
File path: solr/test-framework/src/jmh/org/apache/solr/bench/FieldDef.java
##########
@@ -0,0 +1,128 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.bench;
+
+import java.util.Objects;
+import java.util.concurrent.ThreadLocalRandom;
+
+public class FieldDef {
Review comment:
Everything still needs javadocs and a package-level overview file. I don't want to keep updating them though, so those will come once I feel the rest won't get change pushback.
##########
File path: solr/test-framework/src/jmh/org/apache/solr/bench/FieldDef.java
##########
@@ -0,0 +1,128 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.bench;
+
+import java.util.Objects;
+import java.util.concurrent.ThreadLocalRandom;
+
+public class FieldDef {
+ private DocMakerRamGen.Content content;
Review comment:
It probably should be. I wasn't sure whether I might do something a little more solid than hashing to prevent regeneration per iteration on a run.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]