[GitHub] lucene-solr pull request #435: SOLR-12591: add tests to ensure ParseDateFiel...
Github user dsmiley commented on a diff in the pull request: https://github.com/apache/lucene-solr/pull/435#discussion_r209442153

--- Diff: solr/core/src/java/org/apache/solr/update/processor/ParseDateFieldUpdateProcessorFactory.java ---
@@ -120,6 +121,16 @@ public UpdateRequestProcessor getInstance(SolrQueryRequest req,
   protected Object mutateValue(Object srcVal) {
     if (srcVal instanceof CharSequence) {
       String srcStringVal = srcVal.toString();
+      // trim single quotes around date if present
--- End diff --

Oh yeah. It's unfortunate that such ugly data comes in like that... but it's at least cheap and safe to do this, so I guess it's okay.

---
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
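The quote-trimming under discussion is easy to sketch outside Solr (a hypothetical helper; the real logic lives inside `mutateValue` and may differ in detail):

```java
import java.time.Instant;

public class QuoteTrimSketch {
    // Strip one matching pair of surrounding single quotes, if present,
    // before handing the value to the date parser; anything else passes
    // through unchanged. Cheap, and safe for well-formed input.
    static String trimSingleQuotes(String s) {
        if (s.length() > 1 && s.charAt(0) == '\'' && s.charAt(s.length() - 1) == '\'') {
            return s.substring(1, s.length() - 1);
        }
        return s;
    }

    public static void main(String[] args) {
        System.out.println(trimSingleQuotes("'2008-11-13T04:35:51Z'")); // 2008-11-13T04:35:51Z
        // The unquoted form is untouched and still parses:
        System.out.println(Instant.parse(trimSingleQuotes("2008-11-13T04:35:51Z")));
    }
}
```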
Github user dsmiley commented on a diff in the pull request: https://github.com/apache/lucene-solr/pull/435#discussion_r209442225

--- Diff: solr/core/src/java/org/apache/solr/update/processor/ParseDateFieldUpdateProcessorFactory.java ---
@@ -159,8 +170,9 @@ public void init(NamedList args) {
   Collection formatsParam = args.removeConfigArgs(FORMATS_PARAM);
   if (null != formatsParam) {
     for (String value : formatsParam) {
-      DateTimeFormatter formatter = new DateTimeFormatterBuilder().parseCaseInsensitive()
-          .appendPattern(value).toFormatter(locale).withZone(defaultTimeZone);
+      DateTimeFormatter formatter = new DateTimeFormatterBuilder().parseLenient().parseCaseInsensitive()
--- End diff --

I see you're hard-coding leniency rather than making it an option. That's fine, I think. I'm not sure why someone would expressly want strict parsing (i.e. want an error) for an application of parsing like the one we have here.
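For reference, the builder chain in the diff behaves like this minimal standalone sketch (the pattern and input are illustrative, not taken from the test config):

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeFormatterBuilder;
import java.util.Locale;

public class LenientFormatterSketch {
    // Same builder chain as the patch: lenient + case-insensitive parsing,
    // a configured locale, and a default zone applied to zone-less input.
    static Instant parseLoose(String input) {
        DateTimeFormatter formatter = new DateTimeFormatterBuilder()
                .parseLenient()
                .parseCaseInsensitive()
                .appendPattern("uuuu-MM-dd'T'HH:mm:ss'Z'")
                .toFormatter(Locale.ROOT)
                .withZone(ZoneOffset.UTC);
        return formatter.parse(input, Instant::from);
    }

    public static void main(String[] args) {
        // Lowercase 't' and 'z' still match the quoted literals because of
        // parseCaseInsensitive(); the default case-sensitive formatter
        // would reject this input.
        System.out.println(parseLoose("2008-11-13t04:35:51z")); // 2008-11-13T04:35:51Z
    }
}
```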
Github user dsmiley commented on a diff in the pull request: https://github.com/apache/lucene-solr/pull/435#discussion_r209442197

--- Diff: solr/core/src/test-files/solr/collection1/conf/solrconfig-parsing-update-processor-chains.xml ---
@@ -109,6 +109,21 @@
--- End diff --

Soon there will be no such "date extraction util", so I suggest another name. Hmmm. Perhaps "parse-date-patterns-from-extract-contrib". Yeah, it's long, but no big deal.
Github user dsmiley commented on a diff in the pull request: https://github.com/apache/lucene-solr/pull/435#discussion_r209442210

--- Diff: solr/core/src/test-files/solr/collection1/conf/solrconfig-parsing-update-processor-chains.xml ---
@@ -109,6 +109,21 @@
+ UTC
+ en
+ -MM-dd['T'[HH:mm:ss['.'SSS]['Z'
--- End diff --

After working with you on the previous issue, I now know that `'Z'` isn't appropriate if the defaultTimeZone is configurable: as a quoted literal it is ignored, and the defaultTimeZone takes effect instead, which might not be UTC. We know it's UTC right now.
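The literal-`'Z'` pitfall described above can be demonstrated with plain java.time (a sketch; the pattern is illustrative, not the exact one from the config):

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

public class ZoneLiteralSketch {
    // Parse with a pattern whose trailing 'Z' is a quoted literal: it matches
    // the character but contributes no offset, so the formatter's default
    // zone decides which instant we get.
    static Instant parseWithDefaultZone(String input, String zoneId) {
        return DateTimeFormatter.ofPattern("uuuu-MM-dd'T'HH:mm:ss'Z'", Locale.ROOT)
                .withZone(ZoneId.of(zoneId))
                .parse(input, Instant::from);
    }

    public static void main(String[] args) {
        String input = "2005-10-07T13:14:15Z";
        // With defaultTimeZone=UTC the trailing Z happens to be harmless:
        System.out.println(parseWithDefaultZone(input, "UTC"));          // 2005-10-07T13:14:15Z
        // With any other default zone the same string shifts silently
        // (Paris was on CEST, UTC+2, on that date):
        System.out.println(parseWithDefaultZone(input, "Europe/Paris")); // 2005-10-07T11:14:15Z
    }
}
```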
Github user dsmiley commented on a diff in the pull request: https://github.com/apache/lucene-solr/pull/435#discussion_r209442336

--- Diff: solr/core/src/test/org/apache/solr/update/processor/ParsingFieldUpdateProcessorsTest.java ---
@@ -896,13 +899,125 @@ public void testCascadingParsers() throws Exception {
     assertTrue(mixedDates.isEmpty());
   }

-  private Date parse(DateTimeFormatter dateTimeFormatter, String dateString) {
+  // Tests that mimic TestExtractionDateUtil
+  public void testISO8601() throws IOException {
+    // dates with atypical years
+    // This test tries to mimic TestExtractionDateUtil#testISO8601
+
+    String[] dateStrings = {
+        "0001-01-01T01:01:01Z", "+12021-12-01T03:03:03Z",
+        "-04-04T04:04:04Z", "-0005-05-05T05:05:05Z",
+        "-2021-12-01T04:04:04Z", "-12021-12-01T02:02:02Z"
+    };
+
+    int id = 1;
+
+    // ensure the strings are parsed
+    for (String notInFormatDateString : dateStrings) {
+      IndexSchema schema = h.getCore().getLatestSchema();
+      assertNotNull(schema.getFieldOrNull("date_dt")); // should match "*_dt" dynamic field
+      SolrInputDocument d = processAdd("parse-date-extraction-util-formats",
+          doc(f("id", id), f("date_dt", notInFormatDateString)));
+      assertNotNull(d);
+      assertTrue("Date string: " + notInFormatDateString + " was not parsed as a date",
+          d.getFieldValue("date_dt") instanceof Date);
+      assertEquals(notInFormatDateString, ((Date) d.getField("date_dt").getFirstValue()).toInstant().toString());
+      assertU(commit());
+      assertQ(req("id:" + id), "//date[@name='date_dt'][.='" + notInFormatDateString + "']");
+      ++id;
+    }
+
+    // even indices are input date strings, odd indices are the expected results
+    String[] lenientDateStrings = {
+        "10995-12-31T23:59:59.990Z", "+10995-12-31T23:59:59.990Z",
+        "995-1-2T3:4:5Z", "0995-01-02T03:04:05Z",
+        "2021-01-01t03:04:05", "2021-01-01T03:04:05Z",
+        "2021-12-01 04:04:04", "2021-12-01T04:04:04Z"
+    };
+
+    // ensure strings that should be parsed using the lenient resolver are properly parsed
+    for (int i = 0; i < lenientDateStrings.length; ++i) {
+      String lenientDateString = lenientDateStrings[i];
+      String expectedString = lenientDateStrings[++i];
+      IndexSchema schema = h.getCore().getLatestSchema();
+      assertNotNull(schema.getFieldOrNull("date_dt")); // should match "*_dt" dynamic field
+      SolrInputDocument d = processAdd("parse-date-extraction-util-formats",
+          doc(f("id", id), f("date_dt", lenientDateString)));
+      assertNotNull(d);
+      assertTrue("Date string: " + lenientDateString + " was not parsed as a date",
+          d.getFieldValue("date_dt") instanceof Date);
+      assertEquals(expectedString, ((Date) d.getField("date_dt").getFirstValue()).toInstant().toString());
+      ++id;
+    }
+  }
+
+  // this test has had problems when the JDK timezone is America/Metlakatla
+  public void testAKSTZone() throws IOException {
+    final String inputString = "Thu Nov 13 04:35:51 AKST 2008";
+
+    final long expectTs = 1226583351000L;
+    assertEquals(expectTs,
+        DateTimeFormatter.ofPattern("EEE MMM d HH:mm:ss z yyyy", Locale.ENGLISH)
+            .withZone(ZoneId.of("UTC")).parse(inputString, Instant::from).toEpochMilli());
+
+    assertParsedDate(inputString, Date.from(Instant.ofEpochMilli(expectTs)), "parse-date-extraction-util-formats");
+  }
+
+  public void testNoTime() throws IOException {
+    Instant instant = instant(2005, 10, 7, 0, 0, 0);
+    String inputString = "2005-10-07";
+    assertParsedDate(inputString, Date.from(instant), "parse-date-extraction-util-formats");
+  }
+
+  public void testRfc1123() throws IOException {
+    assertParsedDate("Fri, 07 Oct 2005 13:14:15 GMT", Date.from(inst20051007131415()), "parse-date-extraction-util-formats");
+  }
+
+  public void testRfc1036() throws IOException {
+    assertParsedDate("Friday, 07-Oct-05 13:14:15 GMT", Date.from(inst20051007131415()), "parse-date-extraction-util-formats");
+  }
+
+  @AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/SOLR-12593")
--- End diff --

This annotation should not be present. It was not present in `SOLR-12561.patch`. It is present today in TestExtractionDateUtil for the string "Thu Nov 13 04:35:51 AKST 2008", and you already have a test for that -- testAKSTZone. Since the URP is not afflicted with the bug that caused me to add that AwaitsFix, we don't need the annotation here in this test class at all. If you want to try to see if the bug is gone, you need to set this system property like so:
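Separately, the RFC 1123 expectation that testRfc1123 in the diff above encodes can be cross-checked standalone with java.time's built-in formatter (a sketch; `inst20051007131415()` in the test is assumed to return 2005-10-07T13:14:15Z):

```java
import java.time.Instant;
import java.time.format.DateTimeFormatter;

public class Rfc1123Sketch {
    // Parse an RFC 1123 date with the JDK's built-in formatter; the GMT
    // zone in the input fixes the instant, so no default zone is needed.
    static Instant parseRfc1123(String input) {
        return DateTimeFormatter.RFC_1123_DATE_TIME.parse(input, Instant::from);
    }

    public static void main(String[] args) {
        System.out.println(parseRfc1123("Fri, 07 Oct 2005 13:14:15 GMT")); // 2005-10-07T13:14:15Z
    }
}
```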
[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 288 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/288/

2 tests failed.

FAILED: org.apache.lucene.index.TestDemoParallelLeafReader.testRandom
Error Message: this IndexWriter is closed
Stack Trace:
org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed
    at __randomizedtesting.SeedInfo.seed([6DF97D118B0491D0:1FB5581E3A6427A3]:0)
    at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:671)
    at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:685)
    at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1598)
    at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1220)
    at org.apache.lucene.index.TestDemoParallelLeafReader.testRandom(TestDemoParallelLeafReader.java:1239)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.lucene.store.AlreadyClosedException: this IndexReader is closed
    at org.apache.lucene.index.IndexReader.decRef(IndexReader.java:236)
    at org.apache.lucene.index.TestDemoParallelLeafReader$ReindexingReader$ReindexingMergePolicy$ReindexingOneMerge.mergeFinished(TestDemoParallelLeafReader.java:544)
    at org.apache.lucene.index.IndexWriter.closeMergeReaders(IndexWriter.java:4333)
[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-11-ea+25) - Build # 22652 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22652/
Java: 64bit/jdk-11-ea+25 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

24 tests failed.

FAILED: junit.framework.TestSuite.org.apache.solr.cloud.DocValuesNotIndexedTest
Error Message: Collection not found: dv_coll
Stack Trace:
org.apache.solr.common.SolrException: Collection not found: dv_coll
    at __randomizedtesting.SeedInfo.seed([ECA7329545CDB8CE]:0)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:853)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
    at org.apache.solr.cloud.DocValuesNotIndexedTest.createCluster(DocValuesNotIndexedTest.java:155)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
    at java.base/java.lang.Thread.run(Thread.java:834)

FAILED: junit.framework.TestSuite.org.apache.solr.cloud.DocValuesNotIndexedTest
Error Message: Collection not found: dv_coll
Stack Trace:
org.apache.solr.common.SolrException: Collection not found: dv_coll
    at __randomizedtesting.SeedInfo.seed([ECA7329545CDB8CE]:0)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:853)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
    at org.apache.solr.cloud.DocValuesNotIndexedTest.createCluster(DocValuesNotIndexedTest.java:155)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_172) - Build # 2537 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2537/
Java: 64bit/jdk1.8.0_172 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.

FAILED: org.apache.solr.cloud.AddReplicaTest.test
Error Message: core_node6:{"core":"addreplicatest_coll_shard1_replica_n5","base_url":"http://127.0.0.1:39343/solr","node_name":"127.0.0.1:39343_solr","state":"active","type":"NRT","force_set_state":"false"}
Stack Trace:
java.lang.AssertionError: core_node6:{"core":"addreplicatest_coll_shard1_replica_n5","base_url":"http://127.0.0.1:39343/solr","node_name":"127.0.0.1:39343_solr","state":"active","type":"NRT","force_set_state":"false"}
    at __randomizedtesting.SeedInfo.seed([600A05C720F9336C:E85E3A1D8E055E94]:0)
    at org.junit.Assert.fail(Assert.java:93)
    at org.junit.Assert.assertTrue(Assert.java:43)
    at org.apache.solr.cloud.AddReplicaTest.test(AddReplicaTest.java:85)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
    at java.lang.Thread.run(Thread.java:748)

Build Log:
[JENKINS] Lucene-Solr-Tests-master - Build # 2689 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2689/

1 tests failed.

FAILED: org.apache.solr.cloud.DeleteReplicaTest.deleteReplicaFromClusterState
Error Message: Collection not found: deleteFromClusterState_false
Stack Trace:
org.apache.solr.common.SolrException: Collection not found: deleteFromClusterState_false
    at __randomizedtesting.SeedInfo.seed([1E64DCAC33E9D6C2:F0FD77C10CD7AD75]:0)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:853)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
    at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:173)
    at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:138)
    at org.apache.solr.cloud.DeleteReplicaTest.deleteReplicaFromClusterState(DeleteReplicaTest.java:184)
    at org.apache.solr.cloud.DeleteReplicaTest.deleteReplicaFromClusterState(DeleteReplicaTest.java:175)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 767 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/767/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.

FAILED: org.apache.solr.update.MaxSizeAutoCommitTest.deleteTest
Error Message: Tlog size exceeds the max size bound. Tlog path: /export/home/jenkins/workspace/Lucene-Solr-7.x-Solaris/solr/build/solr-core/test/J1/temp/solr.update.MaxSizeAutoCommitTest_F7221A9F99F991CC-001/init-core-data-001/tlog/tlog.001, tlog size: 1302
Stack Trace:
java.lang.AssertionError: Tlog size exceeds the max size bound. Tlog path: /export/home/jenkins/workspace/Lucene-Solr-7.x-Solaris/solr/build/solr-core/test/J1/temp/solr.update.MaxSizeAutoCommitTest_F7221A9F99F991CC-001/init-core-data-001/tlog/tlog.001, tlog size: 1302
    at __randomizedtesting.SeedInfo.seed([F7221A9F99F991CC:E76CFF60E257A83D]:0)
    at org.junit.Assert.fail(Assert.java:93)
    at org.junit.Assert.assertTrue(Assert.java:43)
    at org.apache.solr.update.MaxSizeAutoCommitTest.getTlogFileSizes(MaxSizeAutoCommitTest.java:382)
    at org.apache.solr.update.MaxSizeAutoCommitTest.deleteTest(MaxSizeAutoCommitTest.java:202)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-10.0.1) - Build # 2536 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2536/
Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.

FAILED: org.apache.solr.cloud.api.collections.CollectionsAPIDistributedZkTest.testCollectionsAPI
Error Message: Error from server at http://127.0.0.1:36605/solr/awhollynewcollection_0_shard2_replica_n6: ClusterState says we are the leader (http://127.0.0.1:36605/solr/awhollynewcollection_0_shard2_replica_n6), but locally we don't think so. Request came from null
Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at http://127.0.0.1:36605/solr/awhollynewcollection_0_shard2_replica_n6: ClusterState says we are the leader (http://127.0.0.1:36605/solr/awhollynewcollection_0_shard2_replica_n6), but locally we don't think so. Request came from null
    at __randomizedtesting.SeedInfo.seed([97ECC742B916BBB3:DF99B3F6BF259426]:0)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1015)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
    at org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233)
    at org.apache.solr.cloud.api.collections.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:464)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:564)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 2013 - Failure!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/2013/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC No tests ran. Build Log: [...truncated 2718 lines...] ERROR: command execution failed. Archiving artifacts ERROR: Solaris VBOX is offline; cannot locate ANT 1.8.2 Setting ANT_1_8_2_HOME= Agent went offline during the build ERROR: Connection was broken: java.io.EOFException at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2679) at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3154) at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:862) at java.io.ObjectInputStream.<init>(ObjectInputStream.java:358) at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:48) at hudson.remoting.Command.readFrom(Command.java:140) at hudson.remoting.Command.readFrom(Command.java:126) at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:36) at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:63) Caused: java.io.IOException: Unexpected termination of the channel at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:77) Build step 'Archive the artifacts' marked build as failure ERROR: Step ‘Scan for compiler warnings’ failed: no workspace for Lucene-Solr-master-Solaris #2013 ERROR: Step ‘Publish JUnit test result report’ failed: no workspace for Lucene-Solr-master-Solaris #2013 Email was triggered for: Failure - Any Sending email for trigger: Failure - Any ERROR: Solaris VBOX is offline; cannot locate ANT 1.8.2 Setting ANT_1_8_2_HOME= ERROR: Solaris VBOX is offline; cannot locate ANT 1.8.2 Setting ANT_1_8_2_HOME= ERROR: Solaris VBOX is offline; cannot locate ANT 1.8.2 Setting ANT_1_8_2_HOME= ERROR: Solaris VBOX is offline; cannot locate ANT 1.8.2 Setting ANT_1_8_2_HOME= - To unsubscribe, e-mail: 
dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk1.8.0) - Build # 788 - Failure!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/788/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC All tests passed Build Log: [...truncated 13216 lines...] [junit4] JVM J1: stdout was not empty, see: /Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/solr/build/solr-core/test/temp/junit4-J1-20180811_181928_0621947000102548167644.sysout [junit4] >>> JVM J1 emitted unexpected output (verbatim) [junit4] # [junit4] # A fatal error has been detected by the Java Runtime Environment: [junit4] # [junit4] # SIGFPE (0x8) at pc=0x7fff90d49143, pid=27900, tid=0x4703 [junit4] # [junit4] # JRE version: Java(TM) SE Runtime Environment (8.0_172-b11) (build 1.8.0_172-b11) [junit4] # Java VM: Java HotSpot(TM) 64-Bit Server VM (25.172-b11 mixed mode bsd-amd64 ) [junit4] # Problematic frame: [junit4] # C [libsystem_kernel.dylib+0x11143] __commpage_gettimeofday+0x43 [junit4] # [junit4] # Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again [junit4] # [junit4] # An error report file with more information is saved as: [junit4] # /Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/solr/build/solr-core/test/J1/hs_err_pid27900.log [junit4] # [junit4] # If you would like to submit a bug report, please visit: [junit4] # http://bugreport.java.com/bugreport/crash.jsp [junit4] # [junit4] <<< JVM J1: EOF [...truncated 1795 lines...] 
[junit4] ERROR: JVM J1 ended with an exception, command line: /Library/Java/JavaVirtualMachines/jdk1.8.0_172.jdk/Contents/Home/jre/bin/java -XX:-UseCompressedOops -XX:+UseParallelGC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/heapdumps -ea -esa -Dtests.prefix=tests -Dtests.seed=3E7AFD5FD7BBB961 -Xmx512M -Dtests.iters= -Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random -Dtests.postingsformat=random -Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random -Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=7.5.0 -Dtests.cleanthreads=perClass -Djava.util.logging.config.file=/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/tools/junit4/logging.properties -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false -Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=1 -DtempDir=./temp -Djava.io.tmpdir=./temp -Djunit4.tempDir=/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/solr/build/solr-core/test/temp -Dcommon.dir=/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene -Dclover.db.dir=/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/build/clover/db -Djava.security.policy=/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/tools/junit4/solr-tests.policy -Dtests.LUCENE_VERSION=7.5.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 -Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory -Djava.awt.headless=true -Djdk.map.althashing.threshold=0 -Dtests.src.home=/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX -Djava.security.egd=file:/dev/./urandom -Djunit4.childvm.cwd=/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/solr/build/solr-core/test/J1 -Djunit4.childvm.id=1 -Djunit4.childvm.count=2 -Dtests.leaveTemporary=false -Dtests.filterstacks=true -Dtests.disableHdfs=true -Dtests.badapples=false -Djava.security.manager=org.apache.lucene.util.TestSecurityManager -Dfile.encoding=ISO-8859-1 -classpath
[jira] [Commented] (SOLR-9804) Rule-Based Authorization Plugin does not secure access for update operations
[ https://issues.apache.org/jira/browse/SOLR-9804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16577290#comment-16577290 ] Jan Høydahl commented on SOLR-9804: --- So is this due to collection:null, or due to the zk version tag? And should we close this as invalid, or should we change something? > Rule-Based Authorization Plugin does not secure access for update operations > > > Key: SOLR-9804 > URL: https://issues.apache.org/jira/browse/SOLR-9804 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: security >Affects Versions: 6.3 > Environment: Linux: > # uname -a > Linux hostname 3.10.0-327.36.3.el7.x86_64 #1 SMP Mon Oct 24 16:09:20 UTC 2016 > x86_64 x86_64 x86_64 GNU/Linux > /solr -version > 6.3.0 >Reporter: Sleem >Priority: Major > Labels: authorization, security, update > > It looks like the /update path is not filtered by the Rule-Based > Authorization Plugin. Even if you set permission using the path permission > "/update" or the pre-defined permission "update". 
Below is the security.json > {code:JavaScript} > { > "authentication":{ > "class":"solr.BasicAuthPlugin", > "blockUnknown":true, > "credentials":{ > "admin":"JrcQ8Lh/xKmucz9CaGVXwTpXxGSUZOt32i6W2f4tIfY= > PuAJx8DjI0Ozy2gQXteG5KfRAbOmXuRFZVjHbrIIzVk=", > "update":"tFdQLTQd9qXAStQek5xQQPlVcmXgjI/w4+9rjAZyqTU= > by0LXUAdNAtcJW+DuycI2zc4NyDjCiexOgMaqEFIklU=", > "solr":"GglOeZytbUBCKW8QT1H7kVs0eHc0x8+iNmpz7x8DKMI= > 5JR1Ul8QehmP3nb2U6Bc/N1qwrQljLfiKPTxm35FikA="}}, > "authorization":{ > "class":"solr.RuleBasedAuthorizationPlugin", > "user-role":{ > "admin":["admin_role"], > "update":["update_role"], > "solr":["read_role"]}, > "permissions":[ > { > "collection":null, > "name":"security-edit", > "role":["admin_role"], > "index":1}, > { > "collection":null, > "name":"schema-edit", > "role":["admin_role"], > "index":2}, > { > "collection":null, > "name":"config-edit", > "role":["admin_role"], > "index":3}, > { > "collection":null, > "name":"core-admin-edit", > "role":["admin_role"], > "index":4}, > { > "collection":null, > "name":"collection-admin-edit", > "role":["admin_role"], > "index":5}, > { > "collection":null, > "name":"security-read", > "role":["admin_role"], > "index":6}, > { > "collection":null, > "name":"schema-read", > "role":[ > "admin_role", > "update_role"], > "index":7}, > { > "collection":null, > "name":"core-admin-read", > "role":[ > "admin_role", > "update_role"], > "index":8}, > { > "collection":null, > "name":"config-read", > "role":[ > "admin_role", > "update_role"], > "index":9}, > { > "collection":null, > "name":"collection-admin-read", > "role":[ > "admin_role", > "update_role"], > "index":10}, > { > "collection":null, > "name":"update", > "role":[ > "admin_role", > "update_role"], > "index":11}, > { > "collection":null, > "name":"read", > "role":[ > "admin_role", > "update_role", > "read_role"], > "index":12}, > { > "collection":null, > "name":"all", > "role":["admin_role"], > "index":13}, > { > "collection":null, > "path":"/*", > 
"role":["admin_role"], > "index":14}], > "":{"v":138}}} > {code} > I have tested update using SolrJ and by hitting /update in the browser > using the solr user (who has no rights to update). Both updates succeeded. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
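One possible answer to Jan's question, offered as a hypothesis rather than a confirmed diagnosis: in the Rule-Based Authorization Plugin, a permission whose `collection` property is `null` matches only requests that target no collection at all (node-level admin APIs), so every rule in the security.json above would silently skip collection-scoped traffic such as /update. Under that reading, the update rule should simply omit `collection`, which then defaults to all collections — a sketch:

```json
{
  "name": "update",
  "role": ["admin_role", "update_role"],
  "index": 11
}
```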
[JENKINS] Lucene-Solr-Tests-master - Build # 2687 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2687/ 1 tests failed. FAILED: org.apache.lucene.index.TestIndexFileDeleter.testExcInDecRef Error Message: ReaderPool is already closed Stack Trace: org.apache.lucene.store.AlreadyClosedException: ReaderPool is already closed at __randomizedtesting.SeedInfo.seed([F40306904D91D085:1D9E71A23B583778]:0) at org.apache.lucene.index.ReaderPool.get(ReaderPool.java:367) at org.apache.lucene.index.IndexWriter.writeReaderPool(IndexWriter.java:3319) at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:518) at org.apache.lucene.index.RandomIndexWriter.getReader(RandomIndexWriter.java:398) at org.apache.lucene.index.RandomIndexWriter.getReader(RandomIndexWriter.java:332) at org.apache.lucene.index.TestIndexFileDeleter.testExcInDecRef(TestIndexFileDeleter.java:465) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) Build Log: [...truncated 1313 lines...] [junit4] Suite: org.apache.lucene.index.TestIndexFileDeleter [junit4] 2> NOTE: reproduce with: ant test -Dtestcase=TestIndexFileDeleter -Dtests.method=testExcInDecRef -Dtests.seed=F40306904D91D085 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=fr-LU -Dtests.timezone=AGT -Dtests.asserts=true
[jira] [Commented] (SOLR-12634) Add gaussfit Stream Evaluator
[ https://issues.apache.org/jira/browse/SOLR-12634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16577272#comment-16577272 ] ASF subversion and git services commented on SOLR-12634: Commit 6759ba7290304de397f7bcb7ad353feadcebacbb in lucene-solr's branch refs/heads/branch_7x from [~joel.bernstein] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6759ba7 ] SOLR-12634: Add gaussfit Stream Evaluator > Add gaussfit Stream Evaluator > - > > Key: SOLR-12634 > URL: https://issues.apache.org/jira/browse/SOLR-12634 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Reporter: Joel Bernstein >Assignee: Joel Bernstein >Priority: Major > Fix For: 7.5 > > Attachments: SOLR-12634.patch > > > The gaussFit Stream Evaluator fits a gaussian curve to a data set.
[jira] [Commented] (SOLR-12634) Add gaussfit Stream Evaluator
[ https://issues.apache.org/jira/browse/SOLR-12634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16577264#comment-16577264 ] ASF subversion and git services commented on SOLR-12634: Commit 17eb8cd14d27d2680fe7c4b3871f3eb883542d34 in lucene-solr's branch refs/heads/master from [~joel.bernstein] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=17eb8cd ] SOLR-12634: Add gaussfit Stream Evaluator
[jira] [Created] (SOLR-12659) HttpSolrClient.createMethod does not handle both stream and large data
Eirik Lygre created SOLR-12659: -- Summary: HttpSolrClient.createMethod does not handle both stream and large data Key: SOLR-12659 URL: https://issues.apache.org/jira/browse/SOLR-12659 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Components: SolrJ Affects Versions: 7.4, 6.6.2 Reporter: Eirik Lygre When using a ContentStreamUpdateRequest with stream data (through addContentStream()), all other parameters are passed on the URL, causing the server to fail with "URI is too large". The code below provokes the error using SolrJ 7.4, but the problem was first seen on Solr 6.6.2. The problem is in HttpSolrClient.createMethod(), where the presence of stream data leads to all other fields being put on the URL.
h2. Example code
{code:java}
String stringValue = StringUtils.repeat('X', 16*1024);
SolrClient solr = new HttpSolrClient.Builder(BASE_URL).build();
ContentStreamUpdateRequest updateRequest = new ContentStreamUpdateRequest("/update/extract");
updateRequest.setParam("literal.id", "UriTooLargeTest-simpleTest");
updateRequest.setParam("literal.field", stringValue);
updateRequest.addContentStream(new ContentStreamBase.StringStream(stringValue));
updateRequest.process(solr);
{code}
h2. The client sees the following error:
{code:java}
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://server/solr/core: Expected mime type application/octet-stream but got text/html. Bad Message 414 reason: URI Too Long
{code}
h2. Error fragment from HttpSolrClient.createMethod
{code}
if (contentWriter != null) {
  String fullQueryUrl = url + wparams.toQueryString();
  HttpEntityEnclosingRequestBase postOrPut = SolrRequest.METHOD.POST == request.getMethod() ?
      new HttpPost(fullQueryUrl) : new HttpPut(fullQueryUrl);
{code}
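As a rough, self-contained illustration (not SolrJ code) of why the request fails: the parameter value alone is 16 KB, while servlet containers such as Jetty typically cap the request line at roughly 8 KB by default. The sketch below rebuilds a query string like the one createMethod() appends to the URL; the parameter names are taken from the example above, and the 8 KB figure is an assumption about the server's default, not something measured here.

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class UriTooLongSketch {

    // Rough reconstruction of the query string that ends up on the request line
    // when a content stream forces all other parameters onto the URL.
    static String queryString(int valueLen) {
        StringBuilder value = new StringBuilder(valueLen);
        for (int i = 0; i < valueLen; i++) {
            value.append('X');
        }
        try {
            return "?literal.id=UriTooLargeTest-simpleTest"
                + "&literal.field=" + URLEncoder.encode(value.toString(), "UTF-8");
        } catch (UnsupportedEncodingException e) { // UTF-8 is always available
            throw new AssertionError(e);
        }
    }

    public static void main(String[] args) {
        // A 16 KB parameter value alone dwarfs a typical ~8 KB request-line limit,
        // which is consistent with the server answering "414 URI Too Long".
        System.out.println(queryString(16 * 1024).length() > 8192); // prints "true"
    }
}
```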
[jira] [Updated] (SOLR-12634) Add gaussfit Stream Evaluator
[ https://issues.apache.org/jira/browse/SOLR-12634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-12634: -- Fix Version/s: 7.5
[jira] [Updated] (SOLR-12634) Add gaussfit Stream Evaluator
[ https://issues.apache.org/jira/browse/SOLR-12634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-12634: -- Component/s: streaming expressions
[jira] [Commented] (SOLR-12634) Add gaussfit Stream Evaluator
[ https://issues.apache.org/jira/browse/SOLR-12634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16577254#comment-16577254 ] Joel Bernstein commented on SOLR-12634: --- Patch with basic implementation and tests.
[jira] [Updated] (SOLR-12634) Add gaussfit Stream Evaluator
[ https://issues.apache.org/jira/browse/SOLR-12634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-12634: -- Attachment: SOLR-12634.patch
[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk1.8.0_172) - Build # 737 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/737/ Java: 64bit/jdk1.8.0_172 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 5 tests failed. FAILED: org.apache.solr.handler.TestReplicationHandler.doTestStressReplication Error Message: found:2[index.20180811101320691, index.20180811101322630, index.properties, replication.properties, snapshot_metadata] Stack Trace: java.lang.AssertionError: found:2[index.20180811101320691, index.20180811101322630, index.properties, replication.properties, snapshot_metadata] at __randomizedtesting.SeedInfo.seed([1FFE7AF1C60FD66A:C4557A37C327BFD9]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:969) at org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:940) at org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:916) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (SOLR-12591) Ensure ParseDateFieldUpdateProcessorFactory can be used instead of ExtractionDateUtil
[ https://issues.apache.org/jira/browse/SOLR-12591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16577227#comment-16577227 ] Bar Rotstein commented on SOLR-12591: - Hey [~dsmiley], I just filed a PR. Would be lovely if you could take a look at it. Thanks in advance. > Ensure ParseDateFieldUpdateProcessorFactory can be used instead of > ExtractionDateUtil > - > > Key: SOLR-12591 > URL: https://issues.apache.org/jira/browse/SOLR-12591 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: David Smiley >Assignee: David Smiley >Priority: Minor > Fix For: master (8.0) > > Time Spent: 20m > Remaining Estimate: 0h > > ParseDateFieldUpdateProcessorFactory should ideally be able to handle the > cases that ExtractionDateUtil does in the "extraction" contrib module. Tests > should be added, ported from patches in SOLR-12561 that enhance > TestExtractionDateUtil to similarly ensure the URP is tested. I think in > this issue, I should switch out Joda time for java.time as well (though leave > the complete removal for SOLR-12586) if any changes are actually necessary > – they probably will be. > Once this issue is complete, it should be appropriate to gut date time > parsing out of the "extraction" contrib module – a separate issue.
[GitHub] lucene-solr pull request #435: SOLR-12591: add tests to ensure ParseDateFiel...
Github user barrotsteindev commented on a diff in the pull request:

    https://github.com/apache/lucene-solr/pull/435#discussion_r209430852

--- Diff: solr/core/src/test/org/apache/solr/update/processor/ParsingFieldUpdateProcessorsTest.java ---

@@ -896,13 +899,125 @@ public void testCascadingParsers() throws Exception {
     assertTrue(mixedDates.isEmpty());
   }

-  private Date parse(DateTimeFormatter dateTimeFormatter, String dateString) {
+  // Tests that mimic TestExtractionDateUtil
+  public void testISO8601() throws IOException {
+    // dates with atypical years
+    // This test tries to mimic TestExtractionDateUtil#testISO8601
+
+    String[] dateStrings = {
+        "0001-01-01T01:01:01Z", "+12021-12-01T03:03:03Z",
+        "-04-04T04:04:04Z", "-0005-05-05T05:05:05Z",
+        "-2021-12-01T04:04:04Z", "-12021-12-01T02:02:02Z"
+    };
+
+    int id = 1;
+
+    // ensure strings are parsed
+    for (String notInFormatDateString : dateStrings) {
+      IndexSchema schema = h.getCore().getLatestSchema();
+      assertNotNull(schema.getFieldOrNull("date_dt")); // should match "*_dt" dynamic field
+      SolrInputDocument d = processAdd("parse-date-extraction-util-formats", doc(f("id", id), f("date_dt", notInFormatDateString)));
+      assertNotNull(d);
+      assertTrue("Date string: " + notInFormatDateString + " was not parsed as a date", d.getFieldValue("date_dt") instanceof Date);
+      assertEquals(notInFormatDateString, ((Date) d.getField("date_dt").getFirstValue()).toInstant().toString());
+      assertU(commit());
+      assertQ(req("id:" + id), "//date[@name='date_dt'][.='" + notInFormatDateString + "']");
+      ++id;
+    }
+
+    // odd values are date strings, even values are expected strings
+    String[] lenientDateStrings = {
+        "10995-12-31T23:59:59.990Z", "+10995-12-31T23:59:59.990Z",
+        "995-1-2T3:4:5Z", "0995-01-02T03:04:05Z",
+        "2021-01-01t03:04:05", "2021-01-01T03:04:05Z",
+        "2021-12-01 04:04:04", "2021-12-01T04:04:04Z"
+    };
+
+    // ensure strings that should be parsed using a lenient resolver are properly parsed
+    for (int i = 0; i < lenientDateStrings.length; ++i) {
+      String lenientDateString = lenientDateStrings[i];
+      String expectedString = lenientDateStrings[++i];
+      IndexSchema schema = h.getCore().getLatestSchema();
+      assertNotNull(schema.getFieldOrNull("date_dt")); // should match "*_dt" dynamic field
+      SolrInputDocument d = processAdd("parse-date-extraction-util-formats", doc(f("id", id), f("date_dt", lenientDateString)));
+      assertNotNull(d);
+      assertTrue("Date string: " + lenientDateString + " was not parsed as a date",
+          d.getFieldValue("date_dt") instanceof Date);
+      assertEquals(expectedString, ((Date) d.getField("date_dt").getFirstValue()).toInstant().toString());
+      ++id;
+    }
+  }
+
+  // this test has had problems when the JDK timezone is America/Metlakatla
+  public void testAKSTZone() throws IOException {
+    final String inputString = "Thu Nov 13 04:35:51 AKST 2008";
+
+    final long expectTs = 1226583351000L;
+    assertEquals(expectTs,
+        DateTimeFormatter.ofPattern("EEE MMM d HH:mm:ss z yyyy", Locale.ENGLISH)
+            .withZone(ZoneId.of("UTC")).parse(inputString, Instant::from).toEpochMilli());
+
+    assertParsedDate(inputString, Date.from(Instant.ofEpochMilli(expectTs)), "parse-date-extraction-util-formats");
+  }
+
+  public void testNoTime() throws IOException {
+    Instant instant = instant(2005, 10, 7, 0, 0, 0);
+    String inputString = "2005-10-07";
+    assertParsedDate(inputString, Date.from(instant), "parse-date-extraction-util-formats");
+  }
+
+  public void testRfc1123() throws IOException {
+    assertParsedDate("Fri, 07 Oct 2005 13:14:15 GMT", Date.from(inst20051007131415()), "parse-date-extraction-util-formats");
+  }
+
+  public void testRfc1036() throws IOException {
+    assertParsedDate("Friday, 07-Oct-05 13:14:15 GMT", Date.from(inst20051007131415()), "parse-date-extraction-util-formats");
+  }
+
+  @AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/SOLR-12593")
--- End diff --

This seems to pass as is, although I am still hesitant, perhaps I missed something? Would be nice if another set of eyes looked at this one.

---

- To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
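Several of the lenient cases in the test above (e.g. "995-1-2T3:4:5Z" being normalized to "0995-01-02T03:04:05Z", or a lowercase "t" separator being accepted) hinge on java.time behavior: with parseLenient(), numeric fields accept fewer digits than the pattern's width demands, and parseCaseInsensitive() lets literals match regardless of case. A minimal, Solr-independent sketch of just that behavior (the class name and the exact pattern here are illustrative assumptions, not taken from the patch):

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeFormatterBuilder;
import java.util.Locale;

public class LenientIsoParse {
    public static void main(String[] args) {
        // parseLenient(): "995", "1", "3" are accepted where the pattern asks
        // for fixed-width fields; parseCaseInsensitive(): lowercase 't' matches
        // the literal 'T'.
        DateTimeFormatter f = new DateTimeFormatterBuilder()
                .parseLenient()
                .parseCaseInsensitive()
                .appendPattern("yyyy-MM-dd'T'HH:mm:ss")
                .toFormatter(Locale.ROOT)
                .withZone(ZoneOffset.UTC); // lets the parse resolve to an Instant

        Instant parsed = f.parse("995-1-2t3:4:5", Instant::from);
        System.out.println(parsed); // prints 0995-01-02T03:04:05Z
    }
}
```

This mirrors the parseLenient().parseCaseInsensitive() builder chain discussed for ParseDateFieldUpdateProcessorFactory, but against a hand-picked pattern rather than the factory's configured formats.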
[GitHub] lucene-solr pull request #435: SOLR-12591: add tests to ensure ParseDateFiel...
GitHub user barrotsteindev opened a pull request:

    https://github.com/apache/lucene-solr/pull/435

    SOLR-12591: add tests to ensure ParseDateFieldUpdateProcessor compatibility with ExtractionDateUtil

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/barrotsteindev/lucene-solr SOLR-12591

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/lucene-solr/pull/435.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #435

commit 04c60c0580096a02efd651adce5946b8a8561acc
Author: Bar Rotstein
Date: 2018-08-11T16:10:46Z

    SOLR-12591: add tests to ensure ParseDateFieldUpdateProcessor compatibility with ExtractionDateUtil

---
[JENKINS] Lucene-Solr-Tests-master - Build # 2685 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2685/

2 tests failed.

FAILED: org.apache.solr.cloud.DeleteReplicaTest.deleteReplicaFromClusterState

Error Message:
Collection not found: deleteFromClusterState_false

Stack Trace:
org.apache.solr.common.SolrException: Collection not found: deleteFromClusterState_false
	at __randomizedtesting.SeedInfo.seed([F4D7C45EB1F7CEC3:1A4E6F338EC9B574]:0)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:853)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
	at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:173)
	at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:138)
	at org.apache.solr.cloud.DeleteReplicaTest.deleteReplicaFromClusterState(DeleteReplicaTest.java:184)
	at org.apache.solr.cloud.DeleteReplicaTest.deleteReplicaFromClusterState(DeleteReplicaTest.java:175)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
	at
[jira] [Updated] (LUCENE-8231) Nori, a Korean analyzer based on mecab-ko-dic
[ https://issues.apache.org/jira/browse/LUCENE-8231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler updated LUCENE-8231: -- Component/s: modules/analysis > Nori, a Korean analyzer based on mecab-ko-dic > - > > Key: LUCENE-8231 > URL: https://issues.apache.org/jira/browse/LUCENE-8231 > Project: Lucene - Core > Issue Type: New Feature > Components: modules/analysis >Reporter: Jim Ferenczi >Priority: Major > Fix For: 7.4, master (8.0) > > Attachments: LUCENE-8231-remap-hangul.patch, LUCENE-8231.patch, > LUCENE-8231.patch, LUCENE-8231.patch, LUCENE-8231.patch, LUCENE-8231.patch, > LUCENE-8231.patch, LUCENE-8231.patch, LUCENE-8231.patch, LUCENE-8231.patch, > LUCENE-8231.patch, LUCENE-8231.patch > > > There is a dictionary similar to IPADIC but for Korean called mecab-ko-dic: > It is available under an Apache license here: > https://bitbucket.org/eunjeon/mecab-ko-dic > This dictionary was built with MeCab, it defines a format for the features > adapted for the Korean language. > Since the Kuromoji tokenizer uses the same format for the morphological > analysis (left cost + right cost + word cost) I tried to adapt the module to > handle Korean with the mecab-ko-dic. I've started with a POC that copies the > Kuromoji module and adapts it for the mecab-ko-dic. > I used the same classes to build and read the dictionary but I had to make > some modifications to handle the differences with the IPADIC and Japanese. > The resulting binary dictionary takes 28MB on disk, it's bigger than the > IPADIC but mainly because the source is bigger and there are a lot of > compound and inflect terms that define a group of terms and the segmentation > that can be applied. > I attached the patch that contains this new Korean module called -godori- > nori. It is an adaptation of the Kuromoji module so currently > the two modules don't share any code. I wanted to validate the approach first > and check the relevancy of the results. 
I don't speak Korean so I used the > relevancy > tests that was added for another Korean tokenizer > (https://issues.apache.org/jira/browse/LUCENE-4956) and tested the output > against mecab-ko which is the official fork of mecab to use the mecab-ko-dic. > I had to simplify the JapaneseTokenizer, my version removes the nBest output > and the decomposition of too long tokens. I also > modified the handling of whitespaces since they are important in Korean. > Whitespaces that appear before a term are attached to that term and this > information is used to compute a penalty based on the Part of Speech of the > token. The penalty cost is a feature added to mecab-ko to handle > morphemes that should not appear after a morpheme and is described in the > mecab-ko page: > https://bitbucket.org/eunjeon/mecab-ko > Ignoring whitespaces is also more inlined with the official MeCab library > which attach the whitespaces to the term that follows. > I also added a decompounder filter that expand the compounds and inflects > defined in the dictionary and a part of speech filter similar to the Japanese > that removes the morpheme that are not useful for relevance (suffix, prefix, > interjection, ...). These filters don't play well with the tokenizer if it > can > output multiple paths (nBest output for instance) so for simplicity I removed > this ability and the Korean tokenizer only outputs the best path. > I compared the result with mecab-ko to confirm that the analyzer is working > and ran the relevancy test that is defined in HantecRel.java included > in the patch (written by Robert for another Korean analyzer). Here are the > results: > ||Analyzer||Index Time||Index Size||MAP(CLASSIC)||MAP(BM25)||MAP(GL2)|| > |Standard|35s|131MB|.007|.1044|.1053| > |CJK|36s|164MB|.1418|.1924|.1916| > |Korean|212s|90MB|.1628|.2094|.2078| > I find the results very promising so I plan to continue to work on this > project. 
I started to extract the part of the code that could be shared with > the > Kuromoji module but I wanted to share the status and this POC first to > confirm that this approach is viable. The advantages of using the same model > than > the Japanese analyzer are multiple: we don't have a Korean analyzer at the > moment ;), the resulting dictionary is small compared to other libraries that > use the mecab-ko-dic (the FST takes only 5.4MB) and the Tokenizer prunes the > lattice on the fly to select the best path efficiently. > The dictionary can be built directly from the godori module with the > following command: > ant regenerate (you need to create the resource directory (mkdir > lucene/analysis/godori/src/resources/org/apache/lucene/analysis/ko/dict) > first since the dictionary is not included in the patch). > I've also added some minimal tests in the
[jira] [Updated] (SOLR-12255) ref guide updates for new "nori" analysis module for Korean
[ https://issues.apache.org/jira/browse/SOLR-12255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler updated SOLR-12255: - Affects Version/s: 7.4 > ref guide updates for new "nori" analyis module for Korean > -- > > Key: SOLR-12255 > URL: https://issues.apache.org/jira/browse/SOLR-12255 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: Build, documentation >Affects Versions: 7.4 >Reporter: Hoss Man >Priority: Major > > LUCENE-8231 added a new "nori" analysis module -- similar for the "Kuromoji" > module but for Korean text. > We should update the ref guide to mention how to use the new module & > configure the new Factories. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12658) Extend support for more than 4 field in 'partitionKeys' in ParallelStream after SOLR-11598
[ https://issues.apache.org/jira/browse/SOLR-12658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16577210#comment-16577210 ] Amrit Sarkar commented on SOLR-12658: - Next step is to perform benchmarking to make sure there are no performance implications. Will share the results soon. > Extend support for more than 4 field in 'partitionKeys' in ParallelStream > after SOLR-11598 > -- > > Key: SOLR-12658 > URL: https://issues.apache.org/jira/browse/SOLR-12658 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Reporter: Amrit Sarkar >Priority: Minor > Attachments: SOLR-12658.patch > > > SOLR-11598 extended the capabilities for Export handler to have more than 4 > fields for sorting. > As streaming expressions leverages Export handler, ParallelStream allowed > maximum 4 fields in "{color:blue}partitionKeys{color}" and silently ignored > rest of the fields if more than 4 are specified. > HashQParserPlugin:CompositeHash: 347 > {code} > private static class CompositeHash implements HashKey { > private HashKey key1; > private HashKey key2; > private HashKey key3; > private HashKey key4; > public CompositeHash(HashKey[] hashKeys) { > key1 = hashKeys[0]; > key2 = hashKeys[1]; > key3 = (hashKeys.length > 2) ? hashKeys[2] : new ZeroHash(); > key4 = (hashKeys.length > 3) ? 
hashKeys[3] : new ZeroHash(); > } > public void setNextReader(LeafReaderContext context) throws IOException { > key1.setNextReader(context); > key2.setNextReader(context); > key3.setNextReader(context); > key4.setNextReader(context); > } > public long hashCode(int doc) throws IOException { > return > key1.hashCode(doc)+key2.hashCode(doc)+key3.hashCode(doc)+key4.hashCode(doc); > } > } > {code} > To make sure we have documents distributed across workers when executing > streaming expression parallely, all the fields specified in 'partitionKeys' > should be considered in calculating to which worker particular document > should go for further processing. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
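The limitation is visible in the quoted snippet: CompositeHash has exactly four HashKey slots, so any partitionKeys beyond the fourth never influence the hash. A toy, Solr-independent sketch of the obvious generalization — sum the hash of every key, however many are given. The simplified HashKey interface below is illustrative only; the real one in HashQParserPlugin also carries setNextReader(LeafReaderContext), and the actual patch may differ:

```java
import java.io.IOException;

public class CompositeHashSketch {
    // Simplified stand-in for HashQParserPlugin.HashKey (illustrative;
    // the real interface also declares setNextReader).
    interface HashKey {
        long hashCode(int doc) throws IOException;
    }

    // Sums the hash of every key, however many were configured, instead of
    // hard-coding key1..key4 the way the current CompositeHash does.
    static class CompositeHash implements HashKey {
        private final HashKey[] keys;

        CompositeHash(HashKey[] keys) {
            this.keys = keys;
        }

        @Override
        public long hashCode(int doc) throws IOException {
            long sum = 0;
            for (HashKey key : keys) {
                sum += key.hashCode(doc);
            }
            return sum;
        }
    }

    public static void main(String[] args) throws IOException {
        // Six constant keys; with the 4-slot version the last two would be ignored.
        HashKey[] keys = new HashKey[6];
        for (int i = 0; i < keys.length; i++) {
            final long v = i + 1;
            keys[i] = doc -> v;
        }
        System.out.println(new CompositeHash(keys).hashCode(0)); // 1+2+3+4+5+6 = 21
    }
}
```

With the loop in place, every field listed in partitionKeys contributes to the per-document hash, so documents spread across workers based on all keys rather than silently only the first four.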
[jira] [Updated] (SOLR-12658) Extend support for more than 4 field in 'partitionKeys' in ParallelStream after SOLR-11598
[ https://issues.apache.org/jira/browse/SOLR-12658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-12658: Attachment: SOLR-12658.patch > Extend support for more than 4 field in 'partitionKeys' in ParallelStream > after SOLR-11598 > -- > > Key: SOLR-12658 > URL: https://issues.apache.org/jira/browse/SOLR-12658 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Reporter: Amrit Sarkar >Priority: Minor > Attachments: SOLR-12658.patch > > > SOLR-11598 extended the capabilities for Export handler to have more than 4 > fields for sorting. > As streaming expressions leverages Export handler, ParallelStream allowed > maximum 4 fields in "{color:blue}partitionKeys{color}" and silently ignored > rest of the fields if more than 4 are specified. > HashQParserPlugin:CompositeHash: 347 > {code} > private static class CompositeHash implements HashKey { > private HashKey key1; > private HashKey key2; > private HashKey key3; > private HashKey key4; > public CompositeHash(HashKey[] hashKeys) { > key1 = hashKeys[0]; > key2 = hashKeys[1]; > key3 = (hashKeys.length > 2) ? hashKeys[2] : new ZeroHash(); > key4 = (hashKeys.length > 3) ? hashKeys[3] : new ZeroHash(); > } > public void setNextReader(LeafReaderContext context) throws IOException { > key1.setNextReader(context); > key2.setNextReader(context); > key3.setNextReader(context); > key4.setNextReader(context); > } > public long hashCode(int doc) throws IOException { > return > key1.hashCode(doc)+key2.hashCode(doc)+key3.hashCode(doc)+key4.hashCode(doc); > } > } > {code} > To make sure we have documents distributed across workers when executing > streaming expression parallely, all the fields specified in 'partitionKeys' > should be considered in calculating to which worker particular document > should go for further processing. 
[jira] [Commented] (SOLR-12658) Extend support for more than 4 field in 'partitionKeys' in ParallelStream after SOLR-11598
[ https://issues.apache.org/jira/browse/SOLR-12658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16577209#comment-16577209 ] Amrit Sarkar commented on SOLR-12658: - Attached patch which considers all field values specified under 'partitionKeys' for calculating {{CompositeHash}}, responsible for calculating worker a document belongs for processing. > Extend support for more than 4 field in 'partitionKeys' in ParallelStream > after SOLR-11598 > -- > > Key: SOLR-12658 > URL: https://issues.apache.org/jira/browse/SOLR-12658 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Reporter: Amrit Sarkar >Priority: Minor > > SOLR-11598 extended the capabilities for Export handler to have more than 4 > fields for sorting. > As streaming expressions leverages Export handler, ParallelStream allowed > maximum 4 fields in "{color:blue}partitionKeys{color}" and silently ignored > rest of the fields if more than 4 are specified. > HashQParserPlugin:CompositeHash: 347 > {code} > private static class CompositeHash implements HashKey { > private HashKey key1; > private HashKey key2; > private HashKey key3; > private HashKey key4; > public CompositeHash(HashKey[] hashKeys) { > key1 = hashKeys[0]; > key2 = hashKeys[1]; > key3 = (hashKeys.length > 2) ? hashKeys[2] : new ZeroHash(); > key4 = (hashKeys.length > 3) ? 
hashKeys[3] : new ZeroHash(); > } > public void setNextReader(LeafReaderContext context) throws IOException { > key1.setNextReader(context); > key2.setNextReader(context); > key3.setNextReader(context); > key4.setNextReader(context); > } > public long hashCode(int doc) throws IOException { > return > key1.hashCode(doc)+key2.hashCode(doc)+key3.hashCode(doc)+key4.hashCode(doc); > } > } > {code} > To make sure we have documents distributed across workers when executing > streaming expression parallely, all the fields specified in 'partitionKeys' > should be considered in calculating to which worker particular document > should go for further processing. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-12658) Extend support for more than 4 field in 'partitionKeys' in ParallelStream after SOLR-11598
[ https://issues.apache.org/jira/browse/SOLR-12658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-12658: Description: SOLR-11598 extended the capabilities for Export handler to have more than 4 fields for sorting. As streaming expressions leverages Export handler, ParallelStream allowed maximum 4 fields in "{color:blue}partitionKeys{color}" and silently ignored rest of the fields if more than 4 are specified. HashQParserPlugin:CompositeHash: 347 {code} private static class CompositeHash implements HashKey { private HashKey key1; private HashKey key2; private HashKey key3; private HashKey key4; public CompositeHash(HashKey[] hashKeys) { key1 = hashKeys[0]; key2 = hashKeys[1]; key3 = (hashKeys.length > 2) ? hashKeys[2] : new ZeroHash(); key4 = (hashKeys.length > 3) ? hashKeys[3] : new ZeroHash(); } public void setNextReader(LeafReaderContext context) throws IOException { key1.setNextReader(context); key2.setNextReader(context); key3.setNextReader(context); key4.setNextReader(context); } public long hashCode(int doc) throws IOException { return key1.hashCode(doc)+key2.hashCode(doc)+key3.hashCode(doc)+key4.hashCode(doc); } } {code} To make sure we have documents distributed across workers when executing streaming expression parallely, all the fields specified in 'partitionKeys' should be considered in calculating to which worker particular document should go for further processing. was: SOLR-11598 extended the capabilities for Export handler to have more than 4 fields for sorting. As streaming expressions leverages Export handler, ParallelStream allowed maximum 4 fields in "{color:blue}partitionKeys{color}" and silently ignored rest of the fields if more than 4 are specified. 
HashQParserPlugin:CompositeHash: 347 {code} private static class CompositeHash implements HashKey { private HashKey key1; private HashKey key2; private HashKey key3; private HashKey key4; public CompositeHash(HashKey[] hashKeys) { key1 = hashKeys[0]; key2 = hashKeys[1]; key3 = (hashKeys.length > 2) ? hashKeys[2] : new ZeroHash(); key4 = (hashKeys.length > 3) ? hashKeys[3] : new ZeroHash(); } public void setNextReader(LeafReaderContext context) throws IOException { key1.setNextReader(context); key2.setNextReader(context); key3.setNextReader(context); key4.setNextReader(context); } public long hashCode(int doc) throws IOException { return key1.hashCode(doc)+key2.hashCode(doc)+key3.hashCode(doc)+key4.hashCode(doc); } } {code} To make sure we have documents distributed across workers when executing streaming expression parallely, all the fields specified in 'partitionKeys' should be considered in calculating which worker particular document should go for further processing. > Extend support for more than 4 field in 'partitionKeys' in ParallelStream > after SOLR-11598 > -- > > Key: SOLR-12658 > URL: https://issues.apache.org/jira/browse/SOLR-12658 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Reporter: Amrit Sarkar >Priority: Minor > > SOLR-11598 extended the capabilities for Export handler to have more than 4 > fields for sorting. > As streaming expressions leverages Export handler, ParallelStream allowed > maximum 4 fields in "{color:blue}partitionKeys{color}" and silently ignored > rest of the fields if more than 4 are specified. > HashQParserPlugin:CompositeHash: 347 > {code} > private static class CompositeHash implements HashKey { > private HashKey key1; > private HashKey key2; > private HashKey key3; > private HashKey key4; > public CompositeHash(HashKey[] hashKeys) { > key1 = hashKeys[0]; > key2 = hashKeys[1]; > key3 = (hashKeys.length > 2) ? 
hashKeys[2] : new ZeroHash(); > key4 = (hashKeys.length > 3) ? hashKeys[3] : new ZeroHash(); > } > public void setNextReader(LeafReaderContext context) throws IOException { > key1.setNextReader(context); > key2.setNextReader(context); > key3.setNextReader(context); > key4.setNextReader(context); > } > public long hashCode(int doc) throws IOException { > return > key1.hashCode(doc)+key2.hashCode(doc)+key3.hashCode(doc)+key4.hashCode(doc); > } > } > {code} > To make sure we have documents distributed across workers when executing > streaming expression parallely, all the fields specified in 'partitionKeys' > should be considered in calculating to which worker particular document > should go for
[jira] [Updated] (SOLR-12658) Extend support for more than 4 field in 'partitionKeys' in ParallelStream after SOLR-11598
[ https://issues.apache.org/jira/browse/SOLR-12658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-12658: Summary: Extend support for more than 4 field in 'partitionKeys' in ParallelStream after SOLR-11598 (was: Extend support for more than 4 "partitionKeys" in ParallelStream after SOLR-11598) > Extend support for more than 4 field in 'partitionKeys' in ParallelStream > after SOLR-11598 > -- > > Key: SOLR-12658 > URL: https://issues.apache.org/jira/browse/SOLR-12658 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Reporter: Amrit Sarkar >Priority: Minor > > SOLR-11598 extended the capabilities for Export handler to have more than 4 > fields for sorting. > As streaming expressions leverages Export handler, ParallelStream allowed > maximum 4 fields in "{color:blue}partitionKeys{color}" and silently ignored > rest of the fields if more than 4 are specified. > HashQParserPlugin:CompositeHash: 347 > {code} > private static class CompositeHash implements HashKey { > private HashKey key1; > private HashKey key2; > private HashKey key3; > private HashKey key4; > public CompositeHash(HashKey[] hashKeys) { > key1 = hashKeys[0]; > key2 = hashKeys[1]; > key3 = (hashKeys.length > 2) ? hashKeys[2] : new ZeroHash(); > key4 = (hashKeys.length > 3) ? 
hashKeys[3] : new ZeroHash(); > } > public void setNextReader(LeafReaderContext context) throws IOException { > key1.setNextReader(context); > key2.setNextReader(context); > key3.setNextReader(context); > key4.setNextReader(context); > } > public long hashCode(int doc) throws IOException { > return > key1.hashCode(doc)+key2.hashCode(doc)+key3.hashCode(doc)+key4.hashCode(doc); > } > } > {code} > To make sure we have documents distributed across workers when executing > streaming expression parallely, all the fields specified in 'partitionKeys' > should be considered in calculating which worker particular document should > go for further processing. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-12658) Extend support for more than 4 "partitionKeys" in ParallelStream after SOLR-11598
Amrit Sarkar created SOLR-12658: --- Summary: Extend support for more than 4 "partitionKeys" in ParallelStream after SOLR-11598 Key: SOLR-12658 URL: https://issues.apache.org/jira/browse/SOLR-12658 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Components: streaming expressions Reporter: Amrit Sarkar SOLR-11598 extended the capabilities for Export handler to have more than 4 fields for sorting. As streaming expressions leverages Export handler, ParallelStream allowed maximum 4 fields in "{color:blue}partitionKeys{color}" and silently ignored rest of the fields if more than 4 are specified. HashQParserPlugin:CompositeHash: 347 {code} private static class CompositeHash implements HashKey { private HashKey key1; private HashKey key2; private HashKey key3; private HashKey key4; public CompositeHash(HashKey[] hashKeys) { key1 = hashKeys[0]; key2 = hashKeys[1]; key3 = (hashKeys.length > 2) ? hashKeys[2] : new ZeroHash(); key4 = (hashKeys.length > 3) ? hashKeys[3] : new ZeroHash(); } public void setNextReader(LeafReaderContext context) throws IOException { key1.setNextReader(context); key2.setNextReader(context); key3.setNextReader(context); key4.setNextReader(context); } public long hashCode(int doc) throws IOException { return key1.hashCode(doc)+key2.hashCode(doc)+key3.hashCode(doc)+key4.hashCode(doc); } } {code} To make sure we have documents distributed across workers when executing streaming expression parallely, all the fields specified in 'partitionKeys' should be considered in calculating which worker particular document should go for further processing. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-10) - Build # 2534 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2534/
Java: 64bit/jdk-10 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.

FAILED: org.apache.solr.cloud.TestWithCollection.testDeleteWithCollection

Error Message:
Error from server at https://127.0.0.1:34135/solr: Could not find collection : testDeleteWithCollection_abc

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at https://127.0.0.1:34135/solr: Could not find collection : testDeleteWithCollection_abc
	at __randomizedtesting.SeedInfo.seed([DB3D593C1FFC92E1:A0E4AED14666405F]:0)
	at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
	at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
	at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
	at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
	at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
	at org.apache.solr.cloud.TestWithCollection.testDeleteWithCollection(TestWithCollection.java:197)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:564)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at
[jira] [Commented] (SOLR-12591) Ensure ParseDateFieldUpdateProcessorFactory can be used instead of ExtractionDateUtil
[ https://issues.apache.org/jira/browse/SOLR-12591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16577206#comment-16577206 ] Bar Rotstein commented on SOLR-12591: - Alright, it seems like the way forward for now is to put the Date into the document instead of the String. One thing that should be documented is the different date String format that will be returned due to this change, though this probably belongs in [SOLR-12593|https://issues.apache.org/jira/browse/SOLR-12593]. > Ensure ParseDateFieldUpdateProcessorFactory can be used instead of > ExtractionDateUtil > - > > Key: SOLR-12591 > URL: https://issues.apache.org/jira/browse/SOLR-12591 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: David Smiley >Assignee: David Smiley >Priority: Minor > Fix For: master (8.0) > > > ParseDateFieldUpdateProcessorFactory should ideally be able to handle the > cases that ExtractionDateUtil does in the "extraction" contrib module. Tests > should be added, ported from patches in SOLR-12561 that enhance > TestExtractionDateUtil to similarly ensure the URP is tested. I think in > this issue, I should switch out Joda time for java.time as well (though leave > the complete removal for SOLR-12586) if any changes are actually necessary > – they probably will be. > Once this issue is complete, it should be appropriate to gut date time > parsing out of the "extraction" contrib module – a separate issue. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12591) Ensure ParseDateFieldUpdateProcessorFactory can be used instead of ExtractionDateUtil
[ https://issues.apache.org/jira/browse/SOLR-12591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16577197#comment-16577197 ] David Smiley commented on SOLR-12591: - {{org.apache.solr.handler.extraction.SolrContentHandler#transformValue}} is placing the ISO instant format _String_ back onto the document, not an Instant object. You're correct the URP puts a Date object on the doc; that's what it _should_ do (avoids further parsing). It will ultimately find its way into {{DatePointField.createField}} and be used directly. Again, after SOLR-12593, the SolrContentHandler will be doing no date processing whatsoever. There will be only one place where a configurable list of date/time patterns is processed -- this URP. Someone using SolrContentHandler will be expected to use the URP. In some future issue that I don't think has been filed, it would be nice to use Instant basically everywhere and avoid Date. > Ensure ParseDateFieldUpdateProcessorFactory can be used instead of > ExtractionDateUtil > - > > Key: SOLR-12591 > URL: https://issues.apache.org/jira/browse/SOLR-12591 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: David Smiley >Assignee: David Smiley >Priority: Minor > Fix For: master (8.0) > > > ParseDateFieldUpdateProcessorFactory should ideally be able to handle the > cases that ExtractionDateUtil does in the "extraction" contrib module. Tests > should be added, ported from patches in SOLR-12561 that enhance > TestExtractionDateUtil to similarly ensure the URP is tested. I think in > this issue, I should switch out Joda time for java.time as well (though leave > the complete removal for SOLR-12586) if any changes are actually necessary > – they probably will be. > Once this issue is complete, it should be appropriate to gut date time > parsing out of the "extraction" contrib module – a separate issue. 
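The formatter change discussed in this thread (lenient, case-insensitive parsing with a default zone, resolving straight to an Instant rather than round-tripping through Strings) can be sketched directly with java.time. This is a minimal standalone illustration of the idiom, not the Solr code itself; the class name, pattern, and input value are made up for the example:

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeFormatterBuilder;
import java.util.Locale;

public class LenientDateParse {

    // Build a formatter along the lines of the patch under review:
    // lenient and case-insensitive, with a default zone so patterns
    // that carry no zone/offset can still resolve to an Instant.
    static Instant parseUtc(String pattern, String value) {
        DateTimeFormatter formatter = new DateTimeFormatterBuilder()
            .parseLenient()
            .parseCaseInsensitive()
            .appendPattern(pattern)
            .toFormatter(Locale.ROOT)
            .withZone(ZoneOffset.UTC);
        // Instant::from works here because withZone() supplies the
        // missing zone during resolution.
        return formatter.parse(value, Instant::from);
    }

    public static void main(String[] args) {
        // Case-insensitivity lets oddly cased day/month names through.
        Instant instant = parseUtc("EEE MMM dd HH:mm:ss yyyy",
                                   "THU aug 09 07:15:30 2018");
        System.out.println(instant); // 2018-08-09T07:15:30Z
    }
}
```

Parsing to an Instant up front, as the URP does with Date, means downstream consumers such as {{DatePointField.createField}} need no further string handling.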
[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_172) - Build # 7469 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7469/ Java: 64bit/jdk1.8.0_172 -XX:-UseCompressedOops -XX:+UseG1GC 82 tests failed. FAILED: org.apache.solr.handler.TestReplicationHandler.doTestStressReplication Error Message: found:2[index.20180811181532797, index.20180811181534437, index.properties, replication.properties, snapshot_metadata] Stack Trace: java.lang.AssertionError: found:2[index.20180811181532797, index.20180811181534437, index.properties, replication.properties, snapshot_metadata] at __randomizedtesting.SeedInfo.seed([F7731712432A04EB:2CD817D446026D58]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:969) at org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:940) at org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:916) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 287 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/287/ 3 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.prometheus.collector.SolrCollectorTest Error Message: Error from server at https://127.0.0.1:41854/solr: KeeperErrorCode = Session expired for /configs/collection1_config Stack Trace: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at https://127.0.0.1:41854/solr: KeeperErrorCode = Session expired for /configs/collection1_config at __randomizedtesting.SeedInfo.seed([838659071DDEFA6F]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211) at org.apache.solr.prometheus.exporter.SolrExporterTestBase.setupCluster(SolrExporterTestBase.java:48) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: org.apache.solr.cloud.LIROnShardRestartTest.testAllReplicasInLIR Error Message: No live SolrServers available to handle 
this request Stack Trace: org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request at __randomizedtesting.SeedInfo.seed([A9712752BF5CD443:F3E91D94C1DCB3A4]:0) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:460) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at
[JENKINS] Lucene-Solr-repro - Build # 1196 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-repro/1196/ [...truncated 28 lines...] [repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1610/consoleText [repro] Revision: cdc0959afcc3d64cbc0f42951964979680b080a0 [repro] Ant options: -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt [repro] Repro line: ant test -Dtestcase=TestLatLonLineShapeQueries -Dtests.method=testRandomBig -Dtests.seed=D36D4582A546D584 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.locale=sr-RS -Dtests.timezone=America/Port_of_Spain -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [repro] git rev-parse --abbrev-ref HEAD [repro] git rev-parse HEAD [repro] Initial local git branch/revision: 928b92caa0bcbff2288b5bf2ab602ec04ff88a78 [repro] git fetch [repro] git checkout cdc0959afcc3d64cbc0f42951964979680b080a0 [...truncated 2 lines...] [repro] git merge --ff-only [...truncated 1 lines...] [repro] ant clean [...truncated 6 lines...] [repro] Test suites by module: [repro]lucene/sandbox [repro] TestLatLonLineShapeQueries [repro] ant compile-test [...truncated 175 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.TestLatLonLineShapeQueries" -Dtests.showOutput=onerror -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.seed=D36D4582A546D584 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.locale=sr-RS -Dtests.timezone=America/Port_of_Spain -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [...truncated 307 lines...] 
[repro] Setting last failure code to 256 [repro] Failures: [repro] 5/5 failed: org.apache.lucene.document.TestLatLonLineShapeQueries [repro] Re-testing 100% failures at the tip of master [repro] git fetch [...truncated 2 lines...] [repro] Setting last failure code to 32768 [repro] Traceback (most recent call last): [...truncated 3 lines...] raise RuntimeError('ERROR: "git fetch" failed. See above.') RuntimeError: ERROR: "git fetch" failed. See above. [repro] git checkout 928b92caa0bcbff2288b5bf2ab602ec04ff88a78 Previous HEAD position was cdc0959... Ref Guide: small typos; i.e. and e.g. cleanups HEAD is now at 928b92c... SOLR-12655: Add Korean morphological analyzer ("nori") to default distribution. This also adds examples for configuration in Solr's schema Build step 'Execute shell' marked build as failure Archiving artifacts Recording test results Email was triggered for: Failure - Any Sending email for trigger: Failure - Any - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.4) - Build # 22648 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22648/ Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 3 tests failed. FAILED: org.apache.solr.cloud.TestWithCollection.testDeleteWithCollection Error Message: Error from server at https://127.0.0.1:37503/solr: Could not find collection : testDeleteWithCollection_abc Stack Trace: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at https://127.0.0.1:37503/solr: Could not find collection : testDeleteWithCollection_abc at __randomizedtesting.SeedInfo.seed([8105576F0DA2E7BD:FADCA08254383503]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211) at org.apache.solr.cloud.TestWithCollection.testDeleteWithCollection(TestWithCollection.java:197) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-11-ea+25) - Build # 2533 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2533/ Java: 64bit/jdk-11-ea+25 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 21 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.DocValuesNotIndexedTest Error Message: Collection not found: dv_coll Stack Trace: org.apache.solr.common.SolrException: Collection not found: dv_coll at __randomizedtesting.SeedInfo.seed([1223A9FC6D1F4989]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:853) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.cloud.DocValuesNotIndexedTest.createCluster(DocValuesNotIndexedTest.java:155) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:834) FAILED: junit.framework.TestSuite.org.apache.solr.cloud.DocValuesNotIndexedTest Error Message: Collection not found: dv_coll Stack Trace: org.apache.solr.common.SolrException: Collection not found: dv_coll at __randomizedtesting.SeedInfo.seed([1223A9FC6D1F4989]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:853) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.cloud.DocValuesNotIndexedTest.createCluster(DocValuesNotIndexedTest.java:155) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at
[jira] [Commented] (SOLR-12255) ref guide updates for new "nori" analysis module for Korean
[ https://issues.apache.org/jira/browse/SOLR-12255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16577166#comment-16577166 ] Uwe Schindler commented on SOLR-12255: -- OK, the documentation in Lucene was updated. Example text fragments were also added to Solr's schema, and the JAR file is now shipped with Solr. Only the ref guide updates are missing. I am not familiar with asciidoc, so somebody with experience should do this. Text fragments can be collected from the schema files and Lucene's documentation. > ref guide updates for new "nori" analysis module for Korean > -- > > Key: SOLR-12255 > URL: https://issues.apache.org/jira/browse/SOLR-12255 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: Build, documentation >Reporter: Hoss Man >Priority: Major > > LUCENE-8231 added a new "nori" analysis module -- similar to the "Kuromoji" > module but for Korean text. > We should update the ref guide to mention how to use the new module & > configure the new Factories.
[JENKINS] Lucene-Solr-repro - Build # 1195 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-repro/1195/ [...truncated 28 lines...] [repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/122/consoleText [repro] Revision: cdc0959afcc3d64cbc0f42951964979680b080a0 [repro] Repro line: ant test -Dtestcase=CdcrBidirectionalTest -Dtests.method=testBiDir -Dtests.seed=9B5CEDAD8EB4292D -Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=be -Dtests.timezone=Pacific/Wallis -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [repro] git rev-parse --abbrev-ref HEAD [repro] git rev-parse HEAD [repro] Initial local git branch/revision: 928b92caa0bcbff2288b5bf2ab602ec04ff88a78 [repro] git fetch [repro] git checkout cdc0959afcc3d64cbc0f42951964979680b080a0 [...truncated 2 lines...] [repro] git merge --ff-only [...truncated 1 lines...] [repro] ant clean [...truncated 6 lines...] [repro] Test suites by module: [repro]solr/core [repro] CdcrBidirectionalTest [repro] ant compile-test [...truncated 3317 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.CdcrBidirectionalTest" -Dtests.showOutput=onerror -Dtests.seed=9B5CEDAD8EB4292D -Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=be -Dtests.timezone=Pacific/Wallis -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [...truncated 3660 lines...] [repro] Setting last failure code to 256 [repro] Failures: [repro] 1/5 failed: org.apache.solr.cloud.cdcr.CdcrBidirectionalTest [repro] git checkout 928b92caa0bcbff2288b5bf2ab602ec04ff88a78 [...truncated 2 lines...] [repro] Exiting with code 256 [...truncated 6 lines...] - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-12655) Add Korean analyzer JAR file (NORI) and schema.xml example to Solr
[ https://issues.apache.org/jira/browse/SOLR-12655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler resolved SOLR-12655. -- Resolution: Fixed Fix Version/s: 7.5 master (8.0) > Add Korean analyzer JAR file (NORI) and schema.xml example to Solr > -- > > Key: SOLR-12655 > URL: https://issues.apache.org/jira/browse/SOLR-12655 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) > Components: Build, Schema and Analysis >Affects Versions: 7.4 >Reporter: Uwe Schindler >Assignee: Uwe Schindler >Priority: Major > Fix For: master (8.0), 7.5 > > Attachments: SOLR-12655.patch, screenshot-1.png > > > In Lucene 7.4 we added the NORI analyzer for Korean. In contrast to Kuromoji, > the JAR file is missing in the distribution (the analyzers-kuromoji is part > of main solr distribution). We should also add an updated/new "text_ko" field > in the default schema. > See also SOLR-12255 about the documentation. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12655) Add Korean analyzer JAR file (NORI) and schema.xml example to Solr
[ https://issues.apache.org/jira/browse/SOLR-12655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16577162#comment-16577162 ] ASF subversion and git services commented on SOLR-12655: Commit 489a9157791efe8d26f86fc448bed5992b3d2d5f in lucene-solr's branch refs/heads/branch_7x from [~thetaphi] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=489a915 ] SOLR-12655: Add Korean morphological analyzer ("nori") to default distribution. This also adds examples for configuration in Solr's schema > Add Korean analyzer JAR file (NORI) and schema.xml example to Solr > -- > > Key: SOLR-12655 > URL: https://issues.apache.org/jira/browse/SOLR-12655 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) > Components: Build, Schema and Analysis >Affects Versions: 7.4 >Reporter: Uwe Schindler >Assignee: Uwe Schindler >Priority: Major > Attachments: SOLR-12655.patch, screenshot-1.png > > > In Lucene 7.4 we added the NORI analyzer for Korean. In contrast to Kuromoji, > the JAR file is missing in the distribution (the analyzers-kuromoji is part > of main solr distribution). We should also add an updated/new "text_ko" field > in the default schema. > See also SOLR-12255 about the documentation. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12655) Add Korean analyzer JAR file (NORI) and schema.xml example to Solr
[ https://issues.apache.org/jira/browse/SOLR-12655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16577160#comment-16577160 ] ASF subversion and git services commented on SOLR-12655: Commit 928b92caa0bcbff2288b5bf2ab602ec04ff88a78 in lucene-solr's branch refs/heads/master from [~thetaphi] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=928b92c ] SOLR-12655: Add Korean morphological analyzer ("nori") to default distribution. This also adds examples for configuration in Solr's schema
[jira] [Commented] (SOLR-12655) Add Korean analyzer JAR file (NORI) and schema.xml example to Solr
[ https://issues.apache.org/jira/browse/SOLR-12655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16577157#comment-16577157 ] Uwe Schindler commented on SOLR-12655: -- Here is the patch: [^SOLR-12655.patch]
[jira] [Updated] (SOLR-12655) Add Korean analyzer JAR file (NORI) and schema.xml example to Solr
[ https://issues.apache.org/jira/browse/SOLR-12655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler updated SOLR-12655: - Attachment: SOLR-12655.patch
[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1610 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1610/ 1 tests failed. FAILED: org.apache.lucene.document.TestLatLonLineShapeQueries.testRandomBig Error Message: Java heap space Stack Trace: java.lang.OutOfMemoryError: Java heap space at __randomizedtesting.SeedInfo.seed([D36D4582A546D584:543A380D341FA904]:0) at org.apache.lucene.util.bkd.HeapPointWriter.writePackedValue(HeapPointWriter.java:106) at org.apache.lucene.util.bkd.HeapPointWriter.append(HeapPointWriter.java:127) at org.apache.lucene.util.bkd.PointReader.split(PointReader.java:68) at org.apache.lucene.util.bkd.OfflinePointReader.split(OfflinePointReader.java:169) at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1787) at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1801) at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1801) at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1801) at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1801) at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1801) at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1801) at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1801) at org.apache.lucene.util.bkd.BKDWriter.finish(BKDWriter.java:1004) at org.apache.lucene.codecs.lucene60.Lucene60PointsWriter.writeField(Lucene60PointsWriter.java:130) at org.apache.lucene.codecs.PointsWriter.mergeOneField(PointsWriter.java:62) at org.apache.lucene.codecs.lucene60.Lucene60PointsWriter.merge(Lucene60PointsWriter.java:223) at org.apache.lucene.index.SegmentMerger.mergePoints(SegmentMerger.java:201) at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:161) at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4436) at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4058) at org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40) at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2160) at 
org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:5093) at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1602) at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1218) at org.apache.lucene.document.BaseLatLonShapeTestCase.indexRandomShapes(BaseLatLonShapeTestCase.java:214) at org.apache.lucene.document.BaseLatLonShapeTestCase.verify(BaseLatLonShapeTestCase.java:192) at org.apache.lucene.document.BaseLatLonShapeTestCase.doTestRandom(BaseLatLonShapeTestCase.java:173) at org.apache.lucene.document.BaseLatLonShapeTestCase.testRandomBig(BaseLatLonShapeTestCase.java:149) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) Build Log: [...truncated 10032 lines...] [junit4] Suite: org.apache.lucene.document.TestLatLonLineShapeQueries [junit4] 2> NOTE: download the large Jenkins line-docs file by running 'ant get-jenkins-line-docs' in the lucene directory. 
[junit4] 2> NOTE: reproduce with: ant test -Dtestcase=TestLatLonLineShapeQueries -Dtests.method=testRandomBig -Dtests.seed=D36D4582A546D584 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.locale=sr-RS -Dtests.timezone=America/Port_of_Spain -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [junit4] ERROR232s J2 | TestLatLonLineShapeQueries.testRandomBig <<< [junit4]> Throwable #1: java.lang.OutOfMemoryError: Java heap space [junit4]>at __randomizedtesting.SeedInfo.seed([D36D4582A546D584:543A380D341FA904]:0) [junit4]>at org.apache.lucene.util.bkd.HeapPointWriter.writePackedValue(HeapPointWriter.java:106) [junit4]>at org.apache.lucene.util.bkd.HeapPointWriter.append(HeapPointWriter.java:127) [junit4]>at org.apache.lucene.util.bkd.PointReader.split(PointReader.java:68) [junit4]>at org.apache.lucene.util.bkd.OfflinePointReader.split(OfflinePointReader.java:169) [junit4]>at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1787) [junit4]>at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1801) [junit4]>at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1801) [junit4]>at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1801) [junit4]>at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1801) [junit4]>at
[jira] [Commented] (SOLR-12591) Ensure ParseDateFieldUpdateProcessorFactory can be used instead of ExtractionDateUtil
[ https://issues.apache.org/jira/browse/SOLR-12591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16577145#comment-16577145 ] Bar Rotstein commented on SOLR-12591: - After thinking about this for a bit, it seems to me that having a single point of date extraction, as well as a single string format (Date#toString) for dates in Solr, would be for the better. Although, I must admit, I am fairly new to this. Perhaps someone with more experience could provide an opinion. [~dsmiley], WDYT? > Ensure ParseDateFieldUpdateProcessorFactory can be used instead of > ExtractionDateUtil > - > > Key: SOLR-12591 > URL: https://issues.apache.org/jira/browse/SOLR-12591 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: David Smiley >Assignee: David Smiley >Priority: Minor > Fix For: master (8.0) > > > ParseDateFieldUpdateProcessorFactory should ideally be able to handle the > cases that ExtractionDateUtil does in the "extraction" contrib module. Tests > should be added, ported from patches in SOLR-12561 that enhance > TestExtractionDateUtil to similarly ensure the URP is tested. I think in > this issue I should switch out Joda time for java.time as well (though leave > the complete removal for SOLR-12586) if any changes are actually necessary > – they probably will be. > Once this issue is complete, it should be appropriate to gut date time > parsing out of the "extraction" contrib module – a separate issue.
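The ParseDateFieldUpdateProcessorFactory change under review builds its formatters with parseLenient() and parseCaseInsensitive() (see the diff in the review thread above). A minimal standalone sketch of that construction — the pattern and input string here are hypothetical examples chosen for illustration, not taken from the patch:

```java
import java.time.ZoneOffset;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeFormatterBuilder;
import java.util.Locale;

public class LenientDateParse {
    // Built the same way as in the patch: lenient, case-insensitive, with a
    // default zone applied when the pattern itself carries no zone field.
    static final DateTimeFormatter FMT = new DateTimeFormatterBuilder()
            .parseLenient()
            .parseCaseInsensitive()
            .appendPattern("EEE MMM dd HH:mm:ss yyyy") // hypothetical example pattern
            .toFormatter(Locale.ENGLISH)
            .withZone(ZoneOffset.UTC);

    static ZonedDateTime parse(String s) {
        return ZonedDateTime.parse(s, FMT);
    }

    public static void main(String[] args) {
        // Case-insensitive parsing: "tue" and "aug" match despite the lowercase.
        System.out.println(parse("tue aug 14 09:30:00 2018"));
    }
}
```

With parseCaseInsensitive() the lowercase day and month names still match; a formatter built without it would reject this input, which is why strictness seems hard to want in a parsing URP like this one.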
[jira] [Resolved] (LUCENE-8451) GeoPolygon test failure
[ https://issues.apache.org/jira/browse/LUCENE-8451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Karl Wright resolved LUCENE-8451. - > GeoPolygon test failure > --- > > Key: LUCENE-8451 > URL: https://issues.apache.org/jira/browse/LUCENE-8451 > Project: Lucene - Core > Issue Type: Bug > Components: modules/spatial3d >Reporter: Ignacio Vera >Assignee: Karl Wright >Priority: Major > Fix For: 6.7, master (8.0), 7.5. > > Attachments: LUCENE-8451.patch > > > [junit4] Suite: org.apache.lucene.spatial3d.geom.RandomGeoPolygonTest > [junit4] 2> NOTE: reproduce with: ant test > -Dtestcase=RandomGeoPolygonTest -Dtests.method=testCompareBigPolygons > -Dtests.seed=2C88B3DA273BE2DF -Dtests.multiplier=3 -Dtests.slow=true > -Dtests.locale=en-TC -Dtests.timezone=Europe/Budapest -Dtests.asserts=true > -Dtests.file.encoding=UTF-8 > [junit4] FAILURE 0.01s J0 | RandomGeoPolygonTest.testCompareBigPolygons > \{seed=[2C88B3DA273BE2DF:5742535E2813B1BD]} <<< > [junit4] > Throwable #1: java.lang.AssertionError: Polygon failed to > build with an exception: > [junit4] > [[lat=1.5408708232037775E-28, lon=0.0([X=1.0011188539924791, > Y=0.0, Z=1.5425948326762136E-28])], [lat=-0.42051952071345244, > lon=-0.043956709579662245([X=0.912503274975597, Y=-0.04013649525500056, > Z=-0.40846219882801177])], [lat=0.6967302798374987, > lon=-1.5354076311466454([X=0.027128243251908137, Y=-0.7662593106632875, > Z=0.641541793498374])], [lat=0.6093302043457702, > lon=-1.5374202165648532([X=0.02736481119831758, Y=-0.8195876964154789, > Z=0.5723273145651325])], [lat=1.790840712772793E-12, > lon=4.742872761198669E-13([X=1.0011188539924791, Y=4.748179343323357E-13, > Z=1.792844402054173E-12])], [lat=-1.4523595845716656E-12, > lon=9.592326932761353E-13([X=1.0011188539924791, Y=9.603059346047237E-13, > Z=-1.4539845628913788E-12])], [lat=0.29556330360208455, > lon=1.5414988021120735([X=0.02804645884597515, Y=0.957023986775941, > Z=0.2915213382500179])]] > [junit4] > 
WKT:POLYGON((-2.5185339401969213 -24.093993739745027,0.0 > 8.828539494442529E-27,5.495998489568957E-11 > -8.321407453133E-11,2.7174659198424288E-11 > 1.0260761462208114E-10,88.32137548549387 > 16.934529875343248,-87.97237709688223 39.91970449365747,-88.0876897472551 > 34.91204903885665,-2.5185339401969213 -24.093993739745027)) > [junit4] > java.lang.IllegalArgumentException: Convex polygon has a > side that is more than 180 degrees > [junit4] > at > __randomizedtesting.SeedInfo.seed([2C88B3DA273BE2DF:5742535E2813B1BD]:0) > [junit4] > at > org.apache.lucene.spatial3d.geom.RandomGeoPolygonTest.testComparePolygons(RandomGeoPolygonTest.java:163) > [junit4] > at > org.apache.lucene.spatial3d.geom.RandomGeoPolygonTest.testCompareBigPolygons(RandomGeoPolygonTest.java:98) > [junit4] > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > [junit4] > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > [junit4] > at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > [junit4] > at > java.base/java.lang.reflect.Method.invoke(Method.java:564) > [junit4] > at java.base/java.lang.Thread.run(Thread.java:844) > [junit4] 2> NOTE: test params are: codec=Asserting(Lucene70): {}, > docValues:{}, maxPointsInLeafNode=1403, maxMBSortInHeap=5.306984579448146, > sim=RandomSimilarity(queryNorm=false): {}, locale=en-TC, > timezone=Europe/Budapest > [junit4] 2> NOTE: Linux 4.15.0-29-generic amd64/Oracle Corporation 9.0.4 > (64-bit)/cpus=8,threads=1,free=296447064,total=536870912 > [junit4] 2> NOTE: All tests run in this JVM: [GeoPointTest, > GeoExactCircleTest, TestGeo3DDocValues, RandomGeoPolygonTest] -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8451) GeoPolygon test failure
[ https://issues.apache.org/jira/browse/LUCENE-8451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16577142#comment-16577142 ] ASF subversion and git services commented on LUCENE-8451: - Commit a96ef91ab9c3e0754a6d738759c30c63cc05dc4c in lucene-solr's branch refs/heads/branch_6x from [~daddywri] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a96ef91 ] LUCENE-8451: Interpret IllegalArgumentException result from convex polygon constructor as meaning a tiling failure.
[jira] [Commented] (LUCENE-8451) GeoPolygon test failure
[ https://issues.apache.org/jira/browse/LUCENE-8451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16577143#comment-16577143 ] ASF subversion and git services commented on LUCENE-8451: - Commit c6da1772e881b9fdd4f8d0704c23b8a38489c0b6 in lucene-solr's branch refs/heads/branch_7x from [~daddywri] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c6da177 ] LUCENE-8451: Interpret IllegalArgumentException result from convex polygon constructor as meaning a tiling failure.
[jira] [Commented] (LUCENE-8451) GeoPolygon test failure
[ https://issues.apache.org/jira/browse/LUCENE-8451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16577141#comment-16577141 ] ASF subversion and git services commented on LUCENE-8451: - Commit f2c0005e9d74208af5466e10a45e6c81d5be4770 in lucene-solr's branch refs/heads/master from [~daddywri] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f2c0005 ] LUCENE-8451: Interpret IllegalArgumentException result from convex polygon constructor as meaning a tiling failure.
[jira] [Commented] (LUCENE-8450) Enable TokenFilters to assign offsets when splitting tokens
[ https://issues.apache.org/jira/browse/LUCENE-8450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16577137#comment-16577137 ] Uwe Schindler commented on LUCENE-8450: --- bq. I'm hoping we can avoid situations where the algorithm has to capture/restore a bunch of state: if we end out with that, then things haven't really gotten any better. If some decompounder needs that, I think it should keep the tracking internally. As it is called for each token in order, it can keep track of previous tokens internally (it can just clone attributes when its called and add to linked lists, wtf). No need to add that in the API. > Enable TokenFilters to assign offsets when splitting tokens > --- > > Key: LUCENE-8450 > URL: https://issues.apache.org/jira/browse/LUCENE-8450 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Mike Sokolov >Priority: Major > Attachments: offsets.patch > > > CharFilters and TokenFilters may alter token lengths, meaning that subsequent > filters cannot perform simple arithmetic to calculate the original > ("correct") offset of a character in the interior of the token. A similar > situation exists for Tokenizers, but these can call > CharFilter.correctOffset() to map offsets back to their original location in > the input stream. There is no such API for TokenFilters. > This issue calls for adding an API to support use cases like highlighting the > correct portion of a compound token. For example the german word > "außerstand" (meaning afaict "unable to do something") will be decompounded > and match "stand and "ausser", but as things are today, offsets are always > set using the start and end of the tokens produced by Tokenizer, meaning that > highlighters will match the entire compound. 
> I'm proposing to add this method to `TokenStream`: > {{ public CharOffsetMap getCharOffsetMap();}} > referencing a CharOffsetMap with these methods: > {{ int correctOffset(int currentOff);}} > {{ int uncorrectOffset(int originalOff);}} > > The uncorrectOffset method is a pseudo-inverse of correctOffset, mapping from > original offset forward to the current "offset space".
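The proposal above can be illustrated with a small sketch. CharOffsetMap, correctOffset, and uncorrectOffset are the names from the proposal, not a released Lucene API, and the threshold-based implementation below is a hypothetical map for the single edit "ß" → "ss" from the decompounding example in the issue:

```java
// Sketch of the CharOffsetMap API proposed in LUCENE-8450; the interface
// names come from the proposal text, the implementation is illustrative only.
interface CharOffsetMap {
    int correctOffset(int currentOff);    // current -> original offset space
    int uncorrectOffset(int originalOff); // original -> current offset space
}

public class OffsetMapDemo {
    /** Map for "außerstand" -> "ausserstand": the char at index 2 became two. */
    static final CharOffsetMap SS_MAP = new CharOffsetMap() {
        public int correctOffset(int currentOff) {
            return currentOff <= 3 ? currentOff : currentOff - 1;
        }
        public int uncorrectOffset(int originalOff) {
            return originalOff <= 2 ? originalOff : originalOff + 1;
        }
    };

    public static void main(String[] args) {
        String original = "außerstand";
        // The decompounded token "stand" sits at [6,11) in the expanded text
        // "ausserstand"; mapping those offsets back gives [5,10) in the
        // original, so a highlighter can mark "stand" instead of the compound.
        int start = SS_MAP.correctOffset(6), end = SS_MAP.correctOffset(11);
        System.out.println(original.substring(start, end));
    }
}
```

The identity map would cover filters that never change token lengths, which is presumably why the proposal keeps the mapping per-stream rather than per-filter.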
[jira] [Resolved] (LUCENE-8453) Add example settings to Korean analyzer components' javadocs
[ https://issues.apache.org/jira/browse/LUCENE-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler resolved LUCENE-8453. --- Resolution: Fixed Thanks [~Tomoko Uchida]! > Add example settings to Korean analyzer components' javadocs > > > Key: LUCENE-8453 > URL: https://issues.apache.org/jira/browse/LUCENE-8453 > Project: Lucene - Core > Issue Type: Improvement > Components: general/javadocs >Affects Versions: 7.4 >Reporter: Tomoko Uchida >Assignee: Uwe Schindler >Priority: Minor > Fix For: master (8.0), 7.5 > > > Korean analyzer (nori) javadoc needs example schema settings. > I'll create a patch. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8453) Add example settings to Korean analyzer components' javadocs
[ https://issues.apache.org/jira/browse/LUCENE-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16577135#comment-16577135 ] ASF subversion and git services commented on LUCENE-8453: - Commit d8ecf976124eb519e1f8c66e6749e246976a95d9 in lucene-solr's branch refs/heads/branch_7x from [~thetaphi] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d8ecf97 ] Merge branch 'jira/lucene-8453' of https://github.com/mocobeta/lucene-solr-mirror LUCENE-8453: Add documentation to analysis factories of Korean (Nori) analyzer module This closes #434
[GitHub] lucene-solr pull request #434: Add example schema settings for Korean analyz...
Github user asfgit closed the pull request at: https://github.com/apache/lucene-solr/pull/434 --- - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8453) Add example settings to Korean analyzer components' javadocs
[ https://issues.apache.org/jira/browse/LUCENE-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16577134#comment-16577134 ] ASF subversion and git services commented on LUCENE-8453: - Commit e9addea0871a28517c5202e9d12969719d20c90e in lucene-solr's branch refs/heads/master from [~thetaphi] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e9addea ] Merge branch 'jira/lucene-8453' of https://github.com/mocobeta/lucene-solr-mirror LUCENE-8453: Add documentation to analysis factories of Korean (Nori) analyzer module This closes #434
[jira] [Commented] (LUCENE-8453) Add example settings to Korean analyzer components' javadocs
[ https://issues.apache.org/jira/browse/LUCENE-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16577133#comment-16577133 ] Uwe Schindler commented on LUCENE-8453: --- Another idea: To make the properties of all analyzers easily available for inspection by the APIs in Solr, we may add runtime annotations to those classes, describing the properties. Just an idea.
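A rough sketch of what that idea could look like. Note that the {{@FactoryProps}} annotation below is purely hypothetical (no such API exists in Lucene); only the property names are real, taken from Nori's KoreanTokenizerFactory:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical runtime annotation (not an existing Lucene API) sketching how
// factory properties could be declared on the class for later inspection.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface FactoryProps {
    String[] value();
}

// Example: a factory class declaring its configurable properties.
@FactoryProps({"decompoundMode", "userDictionary", "userDictionaryEncoding"})
class KoreanTokenizerFactorySketch {
}

public class FactoryPropsDemo {
    public static void main(String[] args) {
        // At runtime, Solr could enumerate the declared properties via reflection
        // and expose them through its APIs.
        FactoryProps props =
            KoreanTokenizerFactorySketch.class.getAnnotation(FactoryProps.class);
        System.out.println(String.join(", ", props.value()));
        // prints: decompoundMode, userDictionary, userDictionaryEncoding
    }
}
```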
[jira] [Commented] (LUCENE-8453) Add example settings to Korean analyzer components' javadocs
[ https://issues.apache.org/jira/browse/LUCENE-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16577132#comment-16577132 ] Uwe Schindler commented on LUCENE-8453: --- No problem. I merged already. Just running the documentation linter to verify correctness of the Javadocs.
[jira] [Commented] (LUCENE-8453) Add example settings to Korean analyzer components' javadocs
[ https://issues.apache.org/jira/browse/LUCENE-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16577130#comment-16577130 ] Tomoko Uchida commented on LUCENE-8453: --- Thank you [~thetaphi]! And thanks for your explanation. {quote}There is an issue open already (I think, can't find it now). I agree, the XML snippets should go away. Instead we can add some Javadoc tag for this like @factoryProp name description. This is much better. We should also document the SPI name of each factory. {quote}
[jira] [Updated] (LUCENE-8453) Add example settings to Korean analyzer components' javadocs
[ https://issues.apache.org/jira/browse/LUCENE-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler updated LUCENE-8453: -- Affects Version/s: 7.4
[jira] [Commented] (LUCENE-8453) Add example settings to Korean analyzer components' javadocs
[ https://issues.apache.org/jira/browse/LUCENE-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16577125#comment-16577125 ] Uwe Schindler commented on LUCENE-8453: --- +1. I will merge it soon! bq. Slightly off topic, feel free to ignore, but I think Solr example settings should be removed from TokenizerFactory/TokenFilterFactory/CharFilterFactory documentation. I suppose there may be historical reasons, so I followed the convention, but it is not reasonable to add Solr schema examples here. Not XML schema examples, but parameter descriptions are needed in each Factory's documentation. There is an issue open already (I think, can't find it now). I agree, the XML snippets should go away. Instead we can add some Javadoc tag for this like {{@factoryProp name description}}. This is much better. We should also document the SPI name of each factory.
[jira] [Updated] (LUCENE-8453) Add example settings to Korean analyzer components' javadocs
[ https://issues.apache.org/jira/browse/LUCENE-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler updated LUCENE-8453: -- Fix Version/s: 7.5 master (8.0)
[jira] [Assigned] (LUCENE-8453) Add example settings to Korean analyzer components' javadocs
[ https://issues.apache.org/jira/browse/LUCENE-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler reassigned LUCENE-8453: - Assignee: Uwe Schindler
[jira] [Commented] (SOLR-12526) Metrics History doesn't work with AuthenticationPlugin
[ https://issues.apache.org/jira/browse/SOLR-12526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16577115#comment-16577115 ] Andrzej Bialecki commented on SOLR-12526: -- Patches are welcome - unfortunately I know very little about how these requests are authenticated - I expected that the HTTPClient instance this handler uses is already properly pre-configured to use any necessary auth - this HTTPClient comes from {{coreContainer.getUpdateShardHandler().getDefaultHTTPClient().}} > Metrics History doesn't work with AuthenticationPlugin > -- > > Key: SOLR-12526 > URL: https://issues.apache.org/jira/browse/SOLR-12526 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Authentication, metrics >Affects Versions: 7.4 >Reporter: Michal Hlavac >Priority: Critical > > Since solr 7.4.0 there is Metrics History which uses SOLRJ client to make > http requests to SOLR. But it doesnt work with AuthenticationPlugin. Since > its enabled by default, there are errors in log every time > {{MetricsHistoryHandler}} tries to collect data. > {code:java} > org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error > from server at http://172.20.0.5:8983/solr: Expected mime type > application/octet-stream but got text/html. > > > Error 401 require authentication > > HTTP ERROR 401 > Problem accessing /solr/admin/metrics. 
Reason: > require authentication > > > at > org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:607) > ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - > jpountz - 2018-06-18 16:55:14] > at > org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) > ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - > jpountz - 2018-06-18 16:55:14] > at > org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) > ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - > jpountz - 2018-06-18 16:55:14] > at > org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219) > ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - > jpountz - 2018-06-18 16:55:14] > at > org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider$ClientSnitchCtx.invoke(SolrClientNodeStateProvider.java:292) > ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - > jpountz - 2018-06-18 16:55:1 > 4] > at > org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.fetchMetrics(SolrClientNodeStateProvider.java:150) > [solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - > jpountz - 2018-06-18 16:55:14] > at > org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider$AutoScalingSnitch.getRemoteInfo(SolrClientNodeStateProvider.java:199) > [solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - > jpountz - 2018-06-18 > 16:55:14] > at > org.apache.solr.common.cloud.rule.ImplicitSnitch.getTags(ImplicitSnitch.java:76) > [solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - > jpountz - 2018-06-18 16:55:14] > at > org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.getNodeValues(SolrClientNodeStateProvider.java:111) > [solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - > jpountz - 2018-06-18 16:55:14] > at > 
org.apache.solr.handler.admin.MetricsHistoryHandler.collectGlobalMetrics(MetricsHistoryHandler.java:495) > [solr-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - > jpountz - 2018-06-18 16:55:13] > at > org.apache.solr.handler.admin.MetricsHistoryHandler.collectMetrics(MetricsHistoryHandler.java:368) > [solr-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - > jpountz - 2018-06-18 16:55:13] > at > org.apache.solr.handler.admin.MetricsHistoryHandler.lambda$new$0(MetricsHistoryHandler.java:230) > [solr-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - > jpountz - 2018-06-18 16:55:13] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514) [?:?] > at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) > [?:?] > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305) > [?:?] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135) > [?:?] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) > [?:?] > at java.lang.Thread.run(Thread.java:844) [?:?] > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (SOLR-12591) Ensure ParseDateFieldUpdateProcessorFactory can be used instead of ExtractionDateUtil
[ https://issues.apache.org/jira/browse/SOLR-12591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16577109#comment-16577109 ] Bar Rotstein edited comment on SOLR-12591 at 8/11/18 9:59 AM: -- There is another small detail that I am not sure about. In ExtractionDateUtil the date is added to the document with type Instant, while the URP adds the date as a Date type. The toString() method of each type differs slightly. Is it OK to save the parsed fields as a Date, even though the XML returned by them is different? (since ExtractionDateUtil saves a String representation of Instant?) Perhaps this could be addressed in [SOLR-12593|https://issues.apache.org/jira/browse/SOLR-12593], where we could use the .toInstant() method of the Date object in some way, to mimic the behavior found in SolrContentLoader? {code:java} Instant date = ExtractionDateUtil.parseDate(val, dateFormats); // may throw result = date.toString(); // ISO format {code} > Ensure ParseDateFieldUpdateProcessorFactory can be used instead of > ExtractionDateUtil > - > > Key: SOLR-12591 > URL: https://issues.apache.org/jira/browse/SOLR-12591 > Project: Solr > Issue Type: Improvement > Security Level: Public (Default Security Level. Issues are Public) > Reporter: David Smiley > Assignee: David Smiley > Priority: Minor > Fix For: master (8.0) > > > ParseDateFieldUpdateProcessorFactory should ideally be able to handle the > cases that ExtractionDateUtil does in the "extraction" contrib module. Tests > should be added, ported from patches in SOLR-12561 that enhance > TestExtractionDateUtil to similarly ensure the URP is tested. I think in > this issue, I should switch out Joda time for java.time as well (though leave > the complete removal for SOLR-12586) if any changes are actually necessary > – they probably will be. > Once this issue is complete, it should be appropriate to gut date-time > parsing out of the "extraction" contrib module – a separate issue.
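The toString() difference described above can be made concrete with a minimal sketch using only java.util.Date and java.time.Instant (no Solr code involved):

```java
import java.time.Instant;
import java.util.Date;

public class DateVsInstantToString {
    public static void main(String[] args) {
        Date date = new Date(0L);            // the Unix epoch
        Instant instant = date.toInstant();  // the same point in time

        // Date.toString() renders in the JVM's default time zone and a
        // locale-style layout, e.g. "Thu Jan 01 00:00:00 UTC 1970" --
        // not a stable wire format.
        System.out.println(date);

        // Instant.toString() is always ISO-8601 in UTC:
        System.out.println(instant);         // prints: 1970-01-01T00:00:00Z
    }
}
```

So a document field stored as a Date and rendered via toString() will serialize differently from one stored as an Instant, which is the discrepancy the comment is asking about.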
[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 2012 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/2012/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC 1 tests failed. FAILED: org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdateWithTimeDelay Error Message: java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 127.0.0.1:64296 within 3 ms Stack Trace: org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 127.0.0.1:64296 within 3 ms at __randomizedtesting.SeedInfo.seed([AE8F2F31654D1780:D11198B40C2F3A0A]:0) at org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:184) at org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:121) at org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:116) at org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:103) at org.apache.solr.cloud.AbstractZkTestCase.tryCleanPath(AbstractZkTestCase.java:180) at org.apache.solr.cloud.AbstractZkTestCase.tryCleanSolrZkNode(AbstractZkTestCase.java:176) at org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdate(ZkStateReaderTest.java:74) at org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdateWithTimeDelay(ZkStateReaderTest.java:56) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at
[jira] [Commented] (LUCENE-8453) Add example settings to Korean analyzer components' javadocs
[ https://issues.apache.org/jira/browse/LUCENE-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16577106#comment-16577106 ] Tomoko Uchida commented on LUCENE-8453: --- I think this pull request is almost ready to merge. Could anyone take care of this? I believe documentation for analyzer components is very important & a good starting point for newbies. :)
[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_172) - Build # 2532 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2532/ Java: 32bit/jdk1.8.0_172 -client -XX:+UseG1GC 3 tests failed. FAILED: org.apache.solr.cloud.TestWithCollection.testDeleteWithCollection Error Message: Error from server at https://127.0.0.1:39971/solr: Could not find collection : testDeleteWithCollection_abc Stack Trace: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at https://127.0.0.1:39971/solr: Could not find collection : testDeleteWithCollection_abc at __randomizedtesting.SeedInfo.seed([EA8CE8F95A07E5F:75713962CC3AACE1]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211) at org.apache.solr.cloud.TestWithCollection.testDeleteWithCollection(TestWithCollection.java:197) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (LUCENE-8453) Add example settings to Korean analyzer components' javadocs
[ https://issues.apache.org/jira/browse/LUCENE-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16577100#comment-16577100 ] Tomoko Uchida commented on LUCENE-8453: --- I've tested those three settings with Solr 7.4.0; works for me. (I copied `lucene-analyzers-nori-7.4.0.jar` and the user dictionary file from the Lucene distribution package to the Solr lib directory.)