[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 357 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/357/ All tests passed Build Log: [...truncated 14616 lines...] [junit4] JVM J1: stdout was not empty, see: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/checkout/solr/build/solr-core/test/temp/junit4-J1-20181025_042811_318856692925594169.sysout [junit4] >>> JVM J1 emitted unexpected output (verbatim) [junit4] java.lang.OutOfMemoryError: GC overhead limit exceeded [junit4] Dumping heap to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/checkout/heapdumps/java_pid11209.hprof ... [junit4] Heap dump file created [584141158 bytes in 11.301 secs] [junit4] <<< JVM J1: EOF [junit4] JVM J1: stderr was not empty, see: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/checkout/solr/build/solr-core/test/temp/junit4-J1-20181025_042811_3183568660724304540273.syserr [junit4] >>> JVM J1 emitted unexpected output (verbatim) [junit4] WARN: Unhandled exception in event serialization. -> java.lang.OutOfMemoryError: GC overhead limit exceeded [junit4] at java.nio.CharBuffer.wrap(CharBuffer.java:373) [junit4] at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:265) [junit4] at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:125) [junit4] at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:135) [junit4] at java.io.OutputStreamWriter.write(OutputStreamWriter.java:220) [junit4] at java.io.Writer.write(Writer.java:157) [junit4] at com.carrotsearch.ant.tasks.junit4.gson.stream.JsonWriter.newline(JsonWriter.java:584) [junit4] at com.carrotsearch.ant.tasks.junit4.gson.stream.JsonWriter.beforeName(JsonWriter.java:599) [junit4] at com.carrotsearch.ant.tasks.junit4.gson.stream.JsonWriter.writeDeferredName(JsonWriter.java:397) [junit4] at com.carrotsearch.ant.tasks.junit4.gson.stream.JsonWriter.value(JsonWriter.java:413) [junit4] at com.carrotsearch.ant.tasks.junit4.events.AbstractEvent.writeBinaryProperty(AbstractEvent.java:36) [junit4] at 
com.carrotsearch.ant.tasks.junit4.events.AppendStdErrEvent.serialize(AppendStdErrEvent.java:30) [junit4] at com.carrotsearch.ant.tasks.junit4.events.Serializer$2.run(Serializer.java:129) [junit4] at com.carrotsearch.ant.tasks.junit4.events.Serializer$2.run(Serializer.java:124) [junit4] at java.security.AccessController.doPrivileged(Native Method) [junit4] at com.carrotsearch.ant.tasks.junit4.events.Serializer.flushQueue(Serializer.java:124) [junit4] at com.carrotsearch.ant.tasks.junit4.events.Serializer.serialize(Serializer.java:98) [junit4] at com.carrotsearch.ant.tasks.junit4.slave.SlaveMain$3$2.write(SlaveMain.java:498) [junit4] at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) [junit4] at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) [junit4] at java.io.PrintStream.flush(PrintStream.java:338) [junit4] at java.io.FilterOutputStream.flush(FilterOutputStream.java:140) [junit4] at java.io.PrintStream.write(PrintStream.java:482) [junit4] at org.apache.logging.log4j.core.util.CloseShieldOutputStream.write(CloseShieldOutputStream.java:53) [junit4] at org.apache.logging.log4j.core.appender.OutputStreamManager.writeToDestination(OutputStreamManager.java:263) [junit4] at org.apache.logging.log4j.core.appender.OutputStreamManager.flushBuffer(OutputStreamManager.java:295) [junit4] at org.apache.logging.log4j.core.appender.OutputStreamManager.flush(OutputStreamManager.java:304) [junit4] at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.directEncodeEvent(AbstractOutputStreamAppender.java:179) [junit4] at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.tryAppend(AbstractOutputStreamAppender.java:170) [junit4] at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.append(AbstractOutputStreamAppender.java:161) [junit4] at org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:156) [junit4] at 
org.apache.logging.log4j.core.config.AppenderControl.callAppender0(AppenderControl.java:129) [junit4] <<< JVM J1: EOF [...truncated 1346 lines...] [junit4] ERROR: JVM J1 ended with an exception, command line: /usr/local/asfpackages/java/jdk1.8.0_172/jre/bin/java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/checkout/heapdumps -ea -esa -Dtests.prefix=tests -Dtests.seed=5596D43CE75126CC -Xmx512M -Dtests.iters= -Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random -Dtests.postingsformat=random -Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random
[jira] [Created] (SOLR-12920) Resurrect regular test beasting reports.
Mark Miller created SOLR-12920: -- Summary: Resurrect regular test beasting reports. Key: SOLR-12920 URL: https://issues.apache.org/jira/browse/SOLR-12920 Project: Solr Issue Type: Sub-task Security Level: Public (Default Security Level. Issues are Public) Components: Tests Reporter: Mark Miller Assignee: Mark Miller I built BeastIt as a large distributed system that could work with any project. Let's do this again with something single node scale and very specific to our project. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-9422) JavbinCodec should write Bytesref as byte[]
[ https://issues.apache.org/jira/browse/SOLR-9422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul resolved SOLR-9422. -- Resolution: Won't Fix {{BytesRef}} is {{String}} most of the time > JavbinCodec should write Bytesref as byte[] > --- > > Key: SOLR-9422 > URL: https://issues.apache.org/jira/browse/SOLR-9422 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Noble Paul >Priority: Trivial > > This way we can get rid of the custom ObjectResolver in {{TransactionLog}} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS-EA] Lucene-Solr-master-Windows (64bit/jdk-12-ea+12) - Build # 7583 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7583/ Java: 64bit/jdk-12-ea+12 -XX:-UseCompressedOops -XX:+UseG1GC 27 tests failed. FAILED: org.apache.solr.ltr.store.rest.TestModelManagerPersistence.testWrapperModelPersistence Error Message: Software caused connection abort: recv failed Stack Trace: javax.net.ssl.SSLProtocolException: Software caused connection abort: recv failed at __randomizedtesting.SeedInfo.seed([563ED7FB92B8385B:2354A4252D44411E]:0) at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:126) at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:321) at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:264) at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:259) at java.base/sun.security.ssl.SSLSocketImpl.handleException(SSLSocketImpl.java:1314) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:839) at org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137) at org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:153) at org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:282) at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:138) at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56) at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259) at org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163) at org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:165) at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273) at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125) at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272) at 
org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185) at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:111) at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) at org.apache.solr.util.RestTestHarness.getResponse(RestTestHarness.java:215) at org.apache.solr.util.RestTestHarness.query(RestTestHarness.java:107) at org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:226) at org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:192) at org.apache.solr.ltr.store.rest.TestModelManagerPersistence.doWrapperModelPersistenceChecks(TestModelManagerPersistence.java:202) at org.apache.solr.ltr.store.rest.TestModelManagerPersistence.testWrapperModelPersistence(TestModelManagerPersistence.java:255) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_172) - Build # 23089 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23089/ Java: 64bit/jdk1.8.0_172 -XX:+UseCompressedOops -XX:+UseParallelGC 4 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.search.QueryEqualityTest Error Message: testParserCoverage was run w/o any other method explicitly testing qparser: min_hash Stack Trace: java.lang.AssertionError: testParserCoverage was run w/o any other method explicitly testing qparser: min_hash at __randomizedtesting.SeedInfo.seed([AB4A85D3378889D0]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.search.QueryEqualityTest.afterClassParserCoverageTest(QueryEqualityTest.java:59) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:898) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: junit.framework.TestSuite.org.apache.solr.search.QueryEqualityTest Error Message: testParserCoverage was run w/o any other method explicitly testing qparser: min_hash Stack Trace: java.lang.AssertionError: testParserCoverage was run w/o any other method explicitly testing qparser: min_hash at __randomizedtesting.SeedInfo.seed([AB4A85D3378889D0]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.search.QueryEqualityTest.afterClassParserCoverageTest(QueryEqualityTest.java:59) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:898) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at
[GitHub] lucene-solr pull request #455: SOLR-12638
Github user moshebla commented on a diff in the pull request: https://github.com/apache/lucene-solr/pull/455#discussion_r228032204 --- Diff: solr/core/src/java/org/apache/solr/update/AddUpdateCommand.java --- @@ -262,6 +263,11 @@ private void flattenAnonymous(List unwrappedDocs, SolrInputDo flattenAnonymous(unwrappedDocs, currentDoc, false); } + public String getRouteFieldVal() { --- End diff -- After reading through the code, it seems like _route_ is currently only used in delete commands and queries. This can be seen in DistributedUpdateProcessor#doDeleteByQuery: `ModifiableSolrParams outParams = new ModifiableSolrParams(filterParams(req.getParams())); outParams.set(DISTRIB_UPDATE_PARAM, DistribPhase.TOLEADER.toString()); outParams.set(DISTRIB_FROM, ZkCoreNodeProps.getCoreUrl( zkController.getBaseUrl(), req.getCore().getName())); SolrParams params = req.getParams(); String route = params.get(ShardParams._ROUTE_); Collection slices = coll.getRouter().getSearchSlices(route, params, coll); List leaders = new ArrayList<>(slices.size()); for (Slice slice : slices) { String sliceName = slice.getName(); Replica leader; try { leader = zkController.getZkStateReader().getLeaderRetry(collection, sliceName); } catch (InterruptedException e) { throw new SolrException(ErrorCode.SERVICE_UNAVAILABLE, "Exception finding leader for shard " + sliceName, e); }` Perhaps this should be moved to AddUpdateCommand (perhaps even UpdateCommand), or to a method that is accessible and visible, so we do not have this confusion in the future? --- - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
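The behavior described above — the `_route_` request parameter, when present, taking precedence over the document id for slice selection — can be sketched outside Solr. The following is a hypothetical simplification for illustration only: the `selectSlice` helper and the plain modulo hashing are assumptions, not Solr's actual `CompositeIdRouter` logic.

```java
// Toy sketch of route-parameter precedence; names and hashing are
// illustrative assumptions, not Solr's real routing implementation.
import java.util.Map;

public class RouteSketch {
    static final int NUM_SLICES = 4;

    // Pick a slice index from whichever key routes the request:
    // the _route_ parameter wins if set, otherwise the document id.
    static int selectSlice(String docId, Map<String, String> params) {
        String routeKey = params.getOrDefault("_route_", docId);
        // floorMod keeps the index non-negative even for negative hashCodes
        return Math.floorMod(routeKey.hashCode(), NUM_SLICES);
    }

    public static void main(String[] args) {
        int byId = selectSlice("doc1", Map.of());
        int byRoute = selectSlice("doc1", Map.of("_route_", "tenantA"));
        System.out.println("byId=" + byId + " byRoute=" + byRoute);
    }
}
```

The sketch shows why centralizing this lookup matters: any code path that hashes the raw id without first consulting `_route_` can send a request to the wrong slice.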
[jira] [Commented] (SOLR-11868) CloudSolrClient.setIdField is confusing, it's really the routing field. Should be deprecated.
[ https://issues.apache.org/jira/browse/SOLR-11868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16663225#comment-16663225 ] Erick Erickson commented on SOLR-11868: --- Actually, I've long thought that allowing the uniqueKey to be something besides "id" is more trouble than it's worth, so AFAIC, standardizing would be fine. > CloudSolrClient.setIdField is confusing, it's really the routing field. > Should be deprecated. > - > > Key: SOLR-11868 > URL: https://issues.apache.org/jira/browse/SOLR-11868 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 7.2 >Reporter: Erick Erickson >Assignee: Erick Erickson >Priority: Major > > IIUC idField has nothing to do with the uniqueKey field. It's really > the field used to route documents. Agreed, this is often the "id" > field, but still > In fact, over in UpdateRequest.getRoutes(), it's passed as the "id" > field to router.getTargetSlice() and just works, even though > getTargetSlice is clearly designed to route on a field other than the > uniqueKey if we didn't just pass null as the "route" param. > The confusing bit is that if I have a route field defined for my > collection and want to use CloudSolrClient I have to figure out that I > need to use the setIdField method to use that field for routing. > > We should deprecate setIdField and refactor how this is used (i.e. > getRoutes). Need to beef up tests too I suspect. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] lucene-solr pull request #455: SOLR-12638
Github user dsmiley commented on a diff in the pull request: https://github.com/apache/lucene-solr/pull/455#discussion_r228026284 --- Diff: solr/core/src/java/org/apache/solr/update/AddUpdateCommand.java --- @@ -262,6 +263,11 @@ private void flattenAnonymous(List unwrappedDocs, SolrInputDo flattenAnonymous(unwrappedDocs, currentDoc, false); } + public String getRouteFieldVal() { --- End diff -- I'm skeptical of this method. Its name seems innocent enough looking at the code here. But then also consider that some collections have a "router.field", and this method is named in such a way that one would think it returns that field's value... yet it does not. Some callers put this into a variable named "id" or similar. Given that, I propose you remove it but incorporate the logic into getHashableId, which seems the proper place for it. It _is_ the hashable Id... the hashable ID of a nested doc is its root. But... I do also wonder if we need this at all. Somewhere Solr already has code that looks at \_route\_ and acts on it if present. Perhaps the code path for an atomic update isn't doing this yet but should do it? Then we wouldn't need this change to AddUpdateCommand. --- - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (LUCENE-8544) In SpanNearQuery, add support for inOrder semantics equivalent to that of (Multi)PhraseQuery
[ https://issues.apache.org/jira/browse/LUCENE-8544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662390#comment-16662390 ] Michael Gibney edited comment on LUCENE-8544 at 10/25/18 2:57 AM: -- I can't immediately speak to the possibility of adding this functionality to the existing implementation of {{SpanNearQuery}}, but building on a pending patch for LUCENE-7398, I think it might actually be pretty straightforward. The above-referenced patch makes {{NearSpansOrdered}} aware of indexed {{PositionLengthAttribute}}. In a positionLength-aware context, it wasn't clear to me how to port the {{NearSpansOrdered}} changes to {{NearSpansUnordered}}; there were a number of ways to interpret the task, all of which looked pretty complicated and/or messy and/or difficult-verging-on-impossible to implement in a performant way (and at a higher level, they all seemed a bit semantically weird). But a positionLength-aware implementation of {{(Multi)PhraseQuery}} semantics in the context of the above-referenced patch should be much simpler: given that you have a fixed clause ordering, it just requires supporting negative offsets in the calculation of slop/edit distance. > In SpanNearQuery, add support for inOrder semantics equivalent to that of > (Multi)PhraseQuery > > > Key: LUCENE-8544 > URL: https://issues.apache.org/jira/browse/LUCENE-8544 > Project: Lucene - Core > Issue Type: Improvement > Components: core/search >Reporter: Michael Gibney >Priority: Minor > > As discussed in LUCENE-8531, the semantics of phrase search differ among > {{(Multi)PhraseQuery}}, {{SpanNearQuery (inOrder=true)}}, and {{SpanNearQuery > (inOrder=false)}}: > * {{(Multi)PhraseQuery}}: incorporates the concept of order, and allows > negative offsets in calculating slop/edit distance > * {{SpanNearQuery (inOrder=true)}}: incorporates the concept of order, and > does _not_ allow negative offsets in calculating slop/edit distance > * {{SpanNearQuery (inOrder=false)}}: does not incorporate the concept of > order at all > This issue concerns the possibility of adjusting {{SpanNearQuery}} to be > configurable to support semantics equivalent to that of > {{(Multi)PhraseQuery}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
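The "negative offsets in calculating slop/edit distance" point from the thread above can be made concrete with a toy calculation. This is an illustrative simplification, not Lucene's actual sloppy-phrase matcher: slop is modeled here as the total displacement of each clause's matched position from its expected in-order position, with negative displacements permitted (the PhraseQuery-style semantics the thread describes).

```java
// Toy model of PhraseQuery-style slop: sum of absolute displacements of
// each clause from its expected in-order position. Negative displacements
// (a clause matching *before* its expected spot) are allowed and simply
// contribute their magnitude, which is what lets transposed terms match.
public class SlopSketch {
    // positions[i] = matched position of clause i in the document;
    // clause i is expected at (positions[0] + i).
    static int phraseSlop(int[] positions) {
        int slop = 0;
        for (int i = 1; i < positions.length; i++) {
            int displacement = (positions[i] - positions[0]) - i;
            slop += Math.abs(displacement); // may be negative before abs()
        }
        return slop;
    }

    public static void main(String[] args) {
        System.out.println(phraseSlop(new int[] {0, 1, 2})); // exact phrase: 0
        // Transposed tail ("quick brown fox" matching "quick fox brown"):
        // displacements are +1 and -1, so total slop is 2.
        System.out.println(phraseSlop(new int[] {0, 2, 1}));
    }
}
```

Under inOrder=true semantics the negative displacement would instead be disallowed, so the transposed case would not match at any slop — which is exactly the behavioral gap the issue proposes to make configurable.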
[jira] [Commented] (SOLR-11868) CloudSolrClient.setIdField is confusing, it's really the routing field. Should be deprecated.
[ https://issues.apache.org/jira/browse/SOLR-11868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16663117#comment-16663117 ] David Smiley commented on SOLR-11868: - I wonder if we even need this setting if cluster state has the route field? If it doesn't, it's the uniqueKey. Separately, we could standardize on "id" for uniqueKey and/or have the router field default in cluster state on creation to be what the uniqueKey is. > CloudSolrClient.setIdField is confusing, it's really the routing field. > Should be deprecated. > - > > Key: SOLR-11868 > URL: https://issues.apache.org/jira/browse/SOLR-11868 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 7.2 >Reporter: Erick Erickson >Assignee: Erick Erickson >Priority: Major > > IIUC idField has nothing to do with the uniqueKey field. It's really > the field used to route documents. Agreed, this is often the "id" > field, but still > In fact, over in UpdateRequest.getRoutes(), it's passed as the "id" > field to router.getTargetSlice() and just works, even though > getTargetSlice is clearly designed to route on a field other than the > uniqueKey if we didn't just pass null as the "route" param. > The confusing bit is that if I have a route field defined for my > collection and want to use CloudSolrClient I have to figure out that I > need to use the setIdField method to use that field for routing. > > We should deprecate setIdField and refactor how this is used (i.e. > getRoutes). Need to beef up tests too I suspect. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8509) NGramTokenizer, TrimFilter and WordDelimiterGraphFilter in combination can produce backwards offsets
[ https://issues.apache.org/jira/browse/LUCENE-8509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16663115#comment-16663115 ] Michael Gibney commented on LUCENE-8509: > The trim filter removes the leading space from the second token, leaving > offsets unchanged, so WDGF sees "bb"[1,4]; If I understand correctly what [~dsmiley] is saying, then to put it another way: doesn't this look more like an issue with {{TrimFilter}}? If WDGF sees as input from {{TrimFilter}} "bb"[1,4] (instead of " bb"[1,4] or "bb"[2,4]), then it's handling the input correctly, but the input is wrong. "because tokenization splits offsets and WDGF is playing the role of a tokenizer" -- this behavior is notably different from what {{SynonymGraphFilter}} does (adding externally-specified alternate representations of input tokens). Offsets are really only meaningful with respect to input, and new tokens introduced by WDGF are directly derived from input, while new tokens introduced by {{SynonymGraphFilter}} are not and thus can _only_ inherit offsets of the input token. > NGramTokenizer, TrimFilter and WordDelimiterGraphFilter in combination can > produce backwards offsets > > > Key: LUCENE-8509 > URL: https://issues.apache.org/jira/browse/LUCENE-8509 > Project: Lucene - Core > Issue Type: Task >Reporter: Alan Woodward >Assignee: Alan Woodward >Priority: Major > Attachments: LUCENE-8509.patch > > > Discovered by an elasticsearch user and described here: > https://github.com/elastic/elasticsearch/issues/33710 > The ngram tokenizer produces tokens "a b" and " bb" (note the space at the > beginning of the second token). The WDGF takes the first token and splits it > into two, adjusting the offsets of the second token, so we get "a"[0,1] and > "b"[2,3]. 
The trim filter removes the leading space from the second token, > leaving offsets unchanged, so WDGF sees "bb"[1,4]; because the leading space > has already been stripped, WDGF sees no need to adjust offsets, and emits the > token as-is, resulting in the start offsets of the tokenstream being [0, 2, > 1], and the IndexWriter rejecting it. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
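The failure mode in the report above — a filter shortening a token's text without advancing its start offset — can be sketched with a toy token type. Everything here is hypothetical illustration, not Lucene's TokenStream API (and note the thread is precisely a debate about whether TrimFilter *should* adjust offsets at all; Lucene's TrimFilter leaves them unchanged):

```java
// Toy illustration of the offset bookkeeping discussed above.
// A token covers the half-open range [start, end) of the original input.
public class TrimOffsetSketch {
    static class Token {
        final String text;
        final int start, end;
        Token(String text, int start, int end) {
            this.text = text; this.start = start; this.end = end;
        }
    }

    // An offset-correcting trim: advance start by the number of leading
    // spaces removed, so the token still maps onto the text it contains.
    static Token trim(Token t) {
        int lead = 0;
        while (lead < t.text.length() && t.text.charAt(lead) == ' ') lead++;
        return new Token(t.text.substring(lead), t.start + lead, t.end);
    }

    public static void main(String[] args) {
        // The report's second ngram: " bb" covering input offsets [1,4).
        Token trimmed = trim(new Token(" bb", 1, 4));
        // With the correction the token becomes "bb"[2,4]; without it, WDGF
        // receives "bb"[1,4] and the stream's start offsets go backwards
        // ([0, 2, 1]), which IndexWriter rejects.
        System.out.println(trimmed.text + "[" + trimmed.start + "," + trimmed.end + "]");
    }
}
```

Whether this correction belongs in the trim filter or in WDGF's splitting logic is exactly the question the commenters are weighing.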
[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 875 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/875/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC 1 tests failed. FAILED: org.apache.solr.client.solrj.io.stream.StreamDecoratorTest.testParallelCommitStream Error Message: expected:<5> but was:<0> Stack Trace: java.lang.AssertionError: expected:<5> but was:<0> at __randomizedtesting.SeedInfo.seed([99B65AB16185692D:B95C38B1FDC48461]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at org.junit.Assert.assertEquals(Assert.java:456) at org.apache.solr.client.solrj.io.stream.StreamDecoratorTest.testParallelCommitStream(StreamDecoratorTest.java:3035) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) Build Log: [...truncated 16271 lines...] [junit4] Suite:
[jira] [Created] (SOLR-12919) Introduce a new set of Mock objects that cover key parts of the system and are easy to use.
Mark Miller created SOLR-12919: -- Summary: Introduce a new set of Mock objects that cover key parts of the system and are easy to use. Key: SOLR-12919 URL: https://issues.apache.org/jira/browse/SOLR-12919 Project: Solr Issue Type: Sub-task Security Level: Public (Default Security Level. Issues are Public) Components: Tests Reporter: Mark Miller Assignee: Mark Miller -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-12793) Move TestCloudJSONFacetJoinDomain / TestCloudJSONFacetSKG to the facet package
[ https://issues.apache.org/jira/browse/SOLR-12793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Thacker resolved SOLR-12793. -- Resolution: Fixed Fix Version/s: master (8.0) 7.6 > Move TestCloudJSONFacetJoinDomain / TestCloudJSONFacetSKG to the facet package > -- > > Key: SOLR-12793 > URL: https://issues.apache.org/jira/browse/SOLR-12793 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Varun Thacker >Assignee: Varun Thacker >Priority: Trivial > Fix For: 7.6, master (8.0) > > > This proposal is to move the following two classes to the facet test > package ( org.apache.solr.search.facet ) > - TestCloudJSONFacetJoinDomain > - TestCloudJSONFacetSKG
[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 191 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/191/ 6 tests failed. FAILED: org.apache.solr.cloud.TestStressInPlaceUpdates.stressTest Error Message: Error from server at https://127.0.0.1:35235/bmi/ik: Could not load collection from ZK: collection1 Stack Trace: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at https://127.0.0.1:35235/bmi/ik: Could not load collection from ZK: collection1 at __randomizedtesting.SeedInfo.seed([6A3A97A29BD67DA0:15C480FA503A95A]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.createJettys(AbstractFullDistribZkTestBase.java:392) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:341) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1008) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:985) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.4) - Build # 23088 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23088/ Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseSerialGC 11 tests failed. FAILED: org.apache.solr.cloud.TestTlogReplica.testRecovery Error Message: Can not find doc 7 in https://127.0.0.1:46617/solr Stack Trace: java.lang.AssertionError: Can not find doc 7 in https://127.0.0.1:46617/solr at __randomizedtesting.SeedInfo.seed([49E73E48B4A22477:881747E499F2EED0]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNotNull(Assert.java:526) at org.apache.solr.cloud.TestTlogReplica.checkRTG(TestTlogReplica.java:902) at org.apache.solr.cloud.TestTlogReplica.testRecovery(TestTlogReplica.java:567) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:844) FAILED: org.apache.solr.cloud.TestTlogReplica.testRecovery Error Message: Can not find doc 7 in https://127.0.0.1:32879/solr Stack Trace:
[JENKINS] Lucene-Solr-Tests-master - Build # 2898 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2898/ 2 tests failed. FAILED: org.apache.solr.client.solrj.impl.CloudSolrClientTest.testParallelUpdateQTime Error Message: Error from server at https://127.0.0.1:46383/solr/collection1_shard2_replica_n2: Expected mime type application/octet-stream but got text/html.Error 404 Can not find: /solr/collection1_shard2_replica_n2/update HTTP ERROR 404 Problem accessing /solr/collection1_shard2_replica_n2/update. Reason: Can not find: /solr/collection1_shard2_replica_n2/updatehttp://eclipse.org/jetty;>Powered by Jetty:// 9.4.11.v20180605 Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at https://127.0.0.1:46383/solr/collection1_shard2_replica_n2: Expected mime type application/octet-stream but got text/html. Error 404 Can not find: /solr/collection1_shard2_replica_n2/update HTTP ERROR 404 Problem accessing /solr/collection1_shard2_replica_n2/update. Reason: Can not find: /solr/collection1_shard2_replica_n2/updatehttp://eclipse.org/jetty;>Powered by Jetty:// 9.4.11.v20180605 at __randomizedtesting.SeedInfo.seed([30DBE4D0DC92D9DB:DE039F4192ED0C48]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.client.solrj.impl.CloudSolrClientTest.testParallelUpdateQTime(CloudSolrClientTest.java:146) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (SOLR-12801) Fix the tests, remove BadApples and AwaitsFix annotations, improve env for test development.
[ https://issues.apache.org/jira/browse/SOLR-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16663047#comment-16663047 ] Mark Miller commented on SOLR-12801: We currently have a complete mix of integration and unit tests. People add one or the other based on ease, usually just integration tests for a lot of stuff, maybe a test is even a hybrid of both. I think it would help a lot to separate out unit tests and integration tests - you should be able to run them separately and they should live in different paths. We should encourage developers to write both for new work, and push back for reasons when one or the other is left out. Part of making that a full reality will mean making it much easier to write unit tests, but we could start organizing. > Fix the tests, remove BadApples and AwaitsFix annotations, improve env for > test development. > > > Key: SOLR-12801 > URL: https://issues.apache.org/jira/browse/SOLR-12801 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mark Miller >Assignee: Mark Miller >Priority: Critical > > A single issue to counteract the single issue adding tons of annotations, the > continued addition of new flakey tests, and the continued addition of > flakiness to existing tests. > Lots more to come.
[jira] [Created] (LUCENE-8545) Precommit could check for any tests in the latest commit and do a cursory beasting.
Mark Miller created LUCENE-8545: --- Summary: Precommit could check for any tests in the latest commit and do a cursory beasting. Key: LUCENE-8545 URL: https://issues.apache.org/jira/browse/LUCENE-8545 Project: Lucene - Core Issue Type: Test Reporter: Mark Miller Many tests are not currently beastable with the build beast target because they are not made in a way that they can be called repeatedly with a clean env. We should consider letting precommit lightly beast any test in the last commit - this will ensure tests can be beasted with the build beaster and gives a little protection against some of the really flakey tests that get committed. I have a little prototype that still needs a bit of work.
[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 2122 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/2122/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC 5 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.search.QueryEqualityTest Error Message: testParserCoverage was run w/o any other method explicitly testing qparser: min_hash Stack Trace: java.lang.AssertionError: testParserCoverage was run w/o any other method explicitly testing qparser: min_hash at __randomizedtesting.SeedInfo.seed([5307ADA504CA5A78]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.search.QueryEqualityTest.afterClassParserCoverageTest(QueryEqualityTest.java:59) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:898) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: junit.framework.TestSuite.org.apache.solr.search.QueryEqualityTest Error Message: testParserCoverage was run w/o any other method explicitly testing qparser: min_hash Stack Trace: java.lang.AssertionError: testParserCoverage was run w/o any other method explicitly testing qparser: min_hash at __randomizedtesting.SeedInfo.seed([5307ADA504CA5A78]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.search.QueryEqualityTest.afterClassParserCoverageTest(QueryEqualityTest.java:59) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:898) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at
[jira] [Created] (SOLR-12918) Reproducing test failure in TestHighFrequencyDictionaryFactory
Erick Erickson created SOLR-12918: - Summary: Reproducing test failure in TestHighFrequencyDictionaryFactory Key: SOLR-12918 URL: https://issues.apache.org/jira/browse/SOLR-12918 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Components: spellchecker Affects Versions: master (8.0) Reporter: Erick Erickson ant test -Dtestcase=TestHighFrequencyDictionaryFactory -Dtests.method=testDefault -Dtests.seed=9F7E1447BD84930F -Dtests.slow=true -Dtests.locale= -Dtests.timezone=Europe/Oslo -Dtests.asserts=true -Dtests.file.encoding=UTF-8 I suspect this has been around forever, I found it when backporting to 5x and it still reproduces in master. ERROR 0.00s | TestHighFrequencyDictionaryFactory (suite) <<< [junit4]> Throwable #1: java.util.IllformedLocaleException: Empty subtag [at index 0] [junit4]>at __randomizedtesting.SeedInfo.seed([9F7E1447BD84930F]:0) [junit4]>at java.util.Locale$Builder.setLanguageTag(Locale.java:2426) [junit4]>at org.apache.lucene.util.LuceneTestCase.localeForLanguageTag(LuceneTestCase.java:1606) [junit4]>at java.lang.Thread.run(Thread.java:748) [junit4] Completed [1/1 (1!)] in 2.26s, 0 tests, 1 error <<< FAILURES! [junit4]
[jira] [Commented] (SOLR-12829) Add plist (parallel list) Streaming Expression
[ https://issues.apache.org/jira/browse/SOLR-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662983#comment-16662983 ] Joel Bernstein commented on SOLR-12829: --- Yeah, I typically do that last. I'm about to commit the docs for this as well, and then I'll add the CHANGES entry and close the ticket. > Add plist (parallel list) Streaming Expression > -- > > Key: SOLR-12829 > URL: https://issues.apache.org/jira/browse/SOLR-12829 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein >Priority: Major > Attachments: SOLR-12829.patch, SOLR-12829.patch > > > The *plist* Streaming Expression wraps any number of streaming expressions > and opens them in parallel. The results of each of the streams are then > iterated in the order they appear in the list. Since many streams perform > heavy pushed down operations when opened, like the FacetStream, this will > result in the parallelization of these operations. For example plist could > wrap several facet() expressions and open them each in parallel, which would > cause the facets to be run in parallel, on different replicas in the cluster. > Here is sample syntax: > {code:java} > plist(tuple(facet1=facet(...)), > tuple(facet2=facet(...)), > tuple(facet3=facet(...))) {code}
[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-11) - Build # 2972 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2972/ Java: 64bit/jdk-11 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 30 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.TestCloudRecovery Error Message: Could not find collection:collection1 Stack Trace: java.lang.AssertionError: Could not find collection:collection1 at __randomizedtesting.SeedInfo.seed([824CDEE1DCBB5F93]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNotNull(Assert.java:526) at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155) at org.apache.solr.cloud.TestCloudRecovery.setupCluster(TestCloudRecovery.java:70) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:834) FAILED: junit.framework.TestSuite.org.apache.solr.cloud.TestCloudRecovery Error Message: Could not find collection:collection1 Stack Trace: java.lang.AssertionError: Could not find collection:collection1 at __randomizedtesting.SeedInfo.seed([824CDEE1DCBB5F93]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNotNull(Assert.java:526) at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155) at org.apache.solr.cloud.TestCloudRecovery.setupCluster(TestCloudRecovery.java:70) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at
[jira] [Commented] (LUCENE-8509) NGramTokenizer, TrimFilter and WordDelimiterGraphFilter in combination can produce backwards offsets
[ https://issues.apache.org/jira/browse/LUCENE-8509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662897#comment-16662897 ] David Smiley commented on LUCENE-8509: -- bq. The trim filter removes the leading space from the second token, leaving offsets unchanged That sounds fishy though; shouldn't they be trivially adjusted? I'm skeptical about your proposal RE WDGF being an improvement because tokenization splits offsets and WDGF is playing the role of a tokenizer. Perhaps your proposal could be a new option that perhaps even defaults the way you want it? And we solicit feedback/input saying the ability to toggle may go away. The option's default setting should probably be Version-dependent. > NGramTokenizer, TrimFilter and WordDelimiterGraphFilter in combination can > produce backwards offsets > > > Key: LUCENE-8509 > URL: https://issues.apache.org/jira/browse/LUCENE-8509 > Project: Lucene - Core > Issue Type: Task >Reporter: Alan Woodward >Assignee: Alan Woodward >Priority: Major > Attachments: LUCENE-8509.patch > > > Discovered by an elasticsearch user and described here: > https://github.com/elastic/elasticsearch/issues/33710 > The ngram tokenizer produces tokens "a b" and " bb" (note the space at the > beginning of the second token). The WDGF takes the first token and splits it > into two, adjusting the offsets of the second token, so we get "a"[0,1] and > "b"[2,3]. The trim filter removes the leading space from the second token, > leaving offsets unchanged, so WDGF sees "bb"[1,4]; because the leading space > has already been stripped, WDGF sees no need to adjust offsets, and emits the > token as-is, resulting in the start offsets of the tokenstream being [0, 2, > 1], and the IndexWriter rejecting it.
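The offset bookkeeping described in the issue can be reproduced outside Lucene. The following is a simplified, hypothetical simulation (not the real TrimFilter or WordDelimiterGraphFilter code; the two helper functions are made up for illustration) showing how trimming text while leaving offsets untouched yields the backwards start offsets [0, 2, 1]:

```python
# Token = (text, start_offset, end_offset) relative to the original input.

def trim_filter(tokens):
    """Strips surrounding whitespace from token text but, like the
    behavior described for Lucene's TrimFilter, leaves offsets unchanged."""
    return [(text.strip(), start, end) for (text, start, end) in tokens]

def wdgf_like_split(tokens):
    """Splits each token on internal spaces, the way the issue describes
    WDGF splitting on delimiters, deriving sub-token offsets from the
    token text. A token with no internal delimiter is emitted as-is."""
    out = []
    for text, start, end in tokens:
        if " " not in text:
            out.append((text, start, end))
            continue
        pos = 0
        for part in text.split(" "):
            if part:
                out.append((part, start + pos, start + pos + len(part)))
            pos += len(part) + 1
    return out

# NGramTokenizer output for the problem input: "a b" and " bb"
# (note the leading space in the second token).
ngrams = [("a b", 0, 3), (" bb", 1, 4)]

tokens = wdgf_like_split(trim_filter(ngrams))
starts = [s for (_, s, _) in tokens]
print(tokens)   # [('a', 0, 1), ('b', 2, 3), ('bb', 1, 4)]
print(starts)   # [0, 2, 1] -- start offsets go backwards
```

Because the trimmed token "bb" keeps its original [1, 4] offsets, the stream's start offsets decrease from 2 to 1, which is exactly what IndexWriter rejects.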
[jira] [Commented] (SOLR-12772) Correct dotProduct parameters+syntax documentation
[ https://issues.apache.org/jira/browse/SOLR-12772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662894#comment-16662894 ] Joel Bernstein commented on SOLR-12772: --- Yep, the docs were incorrect. Thanks for fixing! > Correct dotProduct parameters+syntax documentation > -- > > Key: SOLR-12772 > URL: https://issues.apache.org/jira/browse/SOLR-12772 > Project: Solr > Issue Type: Task >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Fix For: 7.6, master (8.0) > > Attachments: SOLR-12772.patch > > > Stumbled across it being documented as {{dotProduct(numericArray)}} when it > should be {{dotProduct(numericArray, numericArray)}} - no? -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
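The corrected two-argument signature matches the usual definition: the sum of pairwise products of two equal-length numeric arrays. A minimal stdlib sketch of the semantics (illustrative only, not the Solr evaluator code):

```java
// Illustration of the dotProduct(numericArray, numericArray) semantics
// documented in SOLR-12772 (not the Solr stream-evaluator implementation).
public class DotProductDemo {
    static double dotProduct(double[] a, double[] b) {
        if (a.length != b.length) throw new IllegalArgumentException("arrays must be the same length");
        double sum = 0;
        for (int i = 0; i < a.length; i++) sum += a[i] * b[i];
        return sum;
    }

    public static void main(String[] args) {
        // dotProduct(array(1,2,3), array(4,5,6)) -> 1*4 + 2*5 + 3*6 = 32
        double r = dotProduct(new double[]{1, 2, 3}, new double[]{4, 5, 6});
        if (r != 32.0) throw new AssertionError();
        System.out.println(r); // 32.0
    }
}
```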
[jira] [Commented] (SOLR-12793) Move TestCloudJSONFacetJoinDomain / TestCloudJSONFacetSKG to the facet package
[ https://issues.apache.org/jira/browse/SOLR-12793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662892#comment-16662892 ] ASF subversion and git services commented on SOLR-12793: Commit dbdc2547fd6303a2a768aea538bebe8130a16f7e in lucene-solr's branch refs/heads/branch_7x from [~varunthacker] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=dbdc254 ] SOLR-12793: Move TestCloudJSONFacetJoinDomain and TestCloudJSONFacetSKG to the facet test package (cherry picked from commit 71988c756b76a96cb96fc8f86183219a4a008389) > Move TestCloudJSONFacetJoinDomain / TestCloudJSONFacetSKG to the facet package > -- > > Key: SOLR-12793 > URL: https://issues.apache.org/jira/browse/SOLR-12793 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Varun Thacker >Assignee: Varun Thacker >Priority: Trivial > > This proposal is to move the following two classes to the facet test > package ( org.apache.solr.search.facet ) > - TestCloudJSONFacetJoinDomain > - TestCloudJSONFacetSKG -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-12895) SurroundQParserPlugin support for UnifiedHighlighter
[ https://issues.apache.org/jira/browse/SOLR-12895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Smiley updated SOLR-12895: Component/s: highlighter > SurroundQParserPlugin support for UnifiedHighlighter > > > Key: SOLR-12895 > URL: https://issues.apache.org/jira/browse/SOLR-12895 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: highlighter >Reporter: David Smiley >Assignee: David Smiley >Priority: Major > Attachments: SOLR-12895.patch, SOLR-12895.patch > > > The "surround" QParser doesn't work with the UnifiedHighlighter -- > LUCENE-8492. However I think we can overcome this by having Solr's QParser > extend getHighlightQuery and rewrite itself. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12875) ArrayIndexOutOfBoundsException when using uniqueBlock(_root_) in JSON Facets
[ https://issues.apache.org/jira/browse/SOLR-12875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662876#comment-16662876 ] Tim Underwood commented on SOLR-12875: -- I added a simple test to the PR that triggers the bug. It looks like it only shows up for DVHASH and not the other methods (at least for my very simple test case). {noformat} [junit4] Completed [1/1 (1!)] in 8.17s, 6 tests, 1 error <<< FAILURES! [junit4] [junit4] [junit4] Tests with failures [seed: 5B903C907B0E693C]: [junit4] - org.apache.solr.search.facet.TestJsonFacets.testUniqueBlock {p0=DVHASH} [junit4] [junit4] [junit4] JVM J0: 0.60 ..11.38 =10.77s [junit4] Execution time total: 11 seconds [junit4] Tests summary: 1 suite, 6 tests, 1 error {noformat} Here is the stack trace from the error: {noformat} [junit4] ERROR 0.04s | TestJsonFacets.testUniqueBlock {p0=DVHASH} <<< [junit4]> Throwable #1: java.lang.ArrayIndexOutOfBoundsException: 2 [junit4]>at __randomizedtesting.SeedInfo.seed([5B903C907B0E693C:7CD3954BB5901FE7]:0) [junit4]>at org.apache.solr.search.facet.UniqueBlockAgg$UniqueBlockSlotAcc.collectOrdToSlot(UniqueBlockAgg.java:40) [junit4]>at org.apache.solr.search.facet.UniqueSinglevaluedSlotAcc.collect(UniqueSinglevaluedSlotAcc.java:85) [junit4]>at org.apache.solr.search.facet.FacetFieldProcessor.collectFirstPhase(FacetFieldProcessor.java:243) [junit4]>at org.apache.solr.search.facet.FacetFieldProcessorByHashDV.collectValFirstPhase(FacetFieldProcessorByHashDV.java:433) [junit4]>at org.apache.solr.search.facet.FacetFieldProcessorByHashDV.access$100(FacetFieldProcessorByHashDV.java:51) [junit4]>at org.apache.solr.search.facet.FacetFieldProcessorByHashDV$4.collect(FacetFieldProcessorByHashDV.java:369) [junit4]>at org.apache.solr.search.DocSetUtil.collectSortedDocSet(DocSetUtil.java:284) [junit4]>at org.apache.solr.search.facet.FacetFieldProcessorByHashDV.collectDocs(FacetFieldProcessorByHashDV.java:346) [junit4]>at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.calcFacets(FacetFieldProcessorByHashDV.java:248) [junit4]>at org.apache.solr.search.facet.FacetFieldProcessorByHashDV.process(FacetFieldProcessorByHashDV.java:215) [junit4]>at org.apache.solr.search.facet.FacetRequest.process(FacetRequest.java:368) {noformat} > ArrayIndexOutOfBoundsException when using uniqueBlock(_root_) in JSON Facets > > > Key: SOLR-12875 > URL: https://issues.apache.org/jira/browse/SOLR-12875 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Facet Module >Affects Versions: 7.5 >Reporter: Tim Underwood >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > I'm seeing java.lang.ArrayIndexOutOfBoundsException exceptions for some > requests when trying to make use of > {noformat} > uniqueBlock(_root_){noformat} > within JSON Facets. > Here are some example Stack Traces: > {noformat} > 2018-10-12 14:08:50.587 ERROR (qtp215078753-3353) [ x:my_core] > o.a.s.s.HttpSolrCall null:java.lang.ArrayIndexOutOfBoundsException: Index 13 > out of bounds for length 8 > at > org.apache.solr.search.facet.UniqueBlockAgg$UniqueBlockSlotAcc.collectOrdToSlot(UniqueBlockAgg.java:40) > at > org.apache.solr.search.facet.UniqueSinglevaluedSlotAcc.collect(UniqueSinglevaluedSlotAcc.java:85) > at > org.apache.solr.search.facet.FacetFieldProcessor.collectFirstPhase(FacetFieldProcessor.java:243) > at > org.apache.solr.search.facet.FacetFieldProcessorByHashDV.collectValFirstPhase(FacetFieldProcessorByHashDV.java:432) > at > org.apache.solr.search.facet.FacetFieldProcessorByHashDV.access$100(FacetFieldProcessorByHashDV.java:50) > at > org.apache.solr.search.facet.FacetFieldProcessorByHashDV$5.collect(FacetFieldProcessorByHashDV.java:395) > at > org.apache.solr.search.DocSetUtil.collectSortedDocSet(DocSetUtil.java:284) > at > org.apache.solr.search.facet.FacetFieldProcessorByHashDV.collectDocs(FacetFieldProcessorByHashDV.java:376) > at > 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.calcFacets(FacetFieldProcessorByHashDV.java:247) > at > org.apache.solr.search.facet.FacetFieldProcessorByHashDV.process(FacetFieldProcessorByHashDV.java:214) > at > org.apache.solr.search.facet.FacetRequest.process(FacetRequest.java:368) > at > org.apache.solr.search.facet.FacetProcessor.processSubs(FacetProcessor.java:472) > at >
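Both traces die in UniqueBlockAgg$UniqueBlockSlotAcc.collectOrdToSlot with an index just beyond the backing array's length ("Index 13 out of bounds for length 8"), which suggests a per-slot array that is not resized when the hash-based (DVHASH) processor allocates more slots than the accumulator was sized for. A hypothetical stdlib-only sketch of that failure class and the usual resize guard -- the names and the actual fix in Solr may differ:

```java
import java.util.Arrays;

// Hypothetical sketch of a per-slot accumulator whose backing array must grow
// when a hash-based facet processor hands it a slot number it never sized for.
// This mimics the failure class in the traces; it is not the Solr code.
public class SlotAccSketch {
    private int[] counts = new int[8]; // initial sizing, like "length 8" in the trace

    void collectOrdToSlotUnsafe(int slot) {
        counts[slot]++; // slot 13 with length 8 -> ArrayIndexOutOfBoundsException
    }

    void collectOrdToSlotSafe(int slot) {
        if (slot >= counts.length) {
            counts = Arrays.copyOf(counts, Math.max(slot + 1, counts.length * 2));
        }
        counts[slot]++;
    }

    int count(int slot) {
        return slot < counts.length ? counts[slot] : 0;
    }

    public static void main(String[] args) {
        SlotAccSketch acc = new SlotAccSketch();
        boolean threw = false;
        try {
            acc.collectOrdToSlotUnsafe(13);
        } catch (ArrayIndexOutOfBoundsException e) {
            threw = true; // mirrors "Index 13 out of bounds for length 8"
        }
        if (!threw) throw new AssertionError("expected AIOOBE");
        acc.collectOrdToSlotSafe(13); // grows the array instead of failing
        System.out.println("safe collect handled slot 13");
    }
}
```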
[jira] [Commented] (SOLR-12801) Fix the tests, remove BadApples and AwaitsFix annotations, improve env for test development.
[ https://issues.apache.org/jira/browse/SOLR-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662860#comment-16662860 ] Varun Thacker commented on SOLR-12801: -- {quote}I want to start having a place for unit tests and integration tests - maybe we can work on both at once? {quote} I didn't quite follow this. Is there anything particular that you wanted me to look into while I'm working on SOLR-12793? Happy to help move some tests to their own packages if that would help here. > Fix the tests, remove BadApples and AwaitsFix annotations, improve env for > test development. > > > Key: SOLR-12801 > URL: https://issues.apache.org/jira/browse/SOLR-12801 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mark Miller >Assignee: Mark Miller >Priority: Critical > > A single issue to counteract the single issue adding tons of annotations, the > continued addition of new flakey tests, and the continued addition of > flakiness to existing tests. > Lots more to come. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-11) - Build # 23087 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23087/ Java: 64bit/jdk-11 -XX:-UseCompressedOops -XX:+UseG1GC 30 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.TestCloudRecovery Error Message: Could not find collection:collection1 Stack Trace: java.lang.AssertionError: Could not find collection:collection1 at __randomizedtesting.SeedInfo.seed([911A07E78CD221C8]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNotNull(Assert.java:526) at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155) at org.apache.solr.cloud.TestCloudRecovery.setupCluster(TestCloudRecovery.java:70) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:834) FAILED: junit.framework.TestSuite.org.apache.solr.cloud.TestCloudRecovery Error Message: Could not find collection:collection1 Stack Trace: java.lang.AssertionError: Could not find collection:collection1 at __randomizedtesting.SeedInfo.seed([911A07E78CD221C8]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNotNull(Assert.java:526) at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155) at org.apache.solr.cloud.TestCloudRecovery.setupCluster(TestCloudRecovery.java:70) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at
[jira] [Updated] (SOLR-12917) Improve CDCR Tests; create factory methods to avoid redundancy
[ https://issues.apache.org/jira/browse/SOLR-12917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-12917: Description: There are multiple methods in *CdcrBidirectionalTest*, introduced in SOLR-11003, that can be used to create tests for other improvements like SOLR-12057 and SOLR-12071, where we need to test multiple combinations of NRT, TLOG and PULL replicas. This JIRA tracks refactoring the test classes to create common framework methods, reduce redundant code, and remove unnecessary thread sleeps. was: There are multiple methods in *_CdcrBidirectionalTest_* introduced SOLR-11003, can be used to create tests for other improvements like SOLR-12057, SOLR-12071 where we need to test multiple combinations of NRT, TLOG and PULL replicas. This JIRA tracks refactoring of test classes to create common framework methods, and improve the redundant code and remove thread sleeps if not necessary. > Improve CDCR Tests; create factory methods to avoid redundancy > -- > > Key: SOLR-12917 > URL: https://issues.apache.org/jira/browse/SOLR-12917 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR, Tests >Affects Versions: master (8.0) >Reporter: Amrit Sarkar >Priority: Minor > > There are multiple methods in *CdcrBidirectionalTest*, introduced in SOLR-11003, > that can be used to create tests for other improvements like SOLR-12057 and > SOLR-12071, where we need to test multiple combinations of NRT, TLOG and PULL > replicas. > This JIRA tracks refactoring the test classes to create common framework > methods, reduce redundant code, and remove unnecessary thread sleeps. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-12917) Improve CDCR Tests; create factory methods to avoid redundancy
Amrit Sarkar created SOLR-12917: --- Summary: Improve CDCR Tests; create factory methods to avoid redundancy Key: SOLR-12917 URL: https://issues.apache.org/jira/browse/SOLR-12917 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Components: CDCR, Tests Affects Versions: master (8.0) Reporter: Amrit Sarkar There are multiple methods in *_CdcrBidirectionalTest_*, introduced in SOLR-11003, that can be used to create tests for other improvements like SOLR-12057 and SOLR-12071, where we need to test multiple combinations of NRT, TLOG and PULL replicas. This JIRA tracks refactoring the test classes to create common framework methods, reduce redundant code, and remove unnecessary thread sleeps. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12793) Move TestCloudJSONFacetJoinDomain / TestCloudJSONFacetSKG to the facet package
[ https://issues.apache.org/jira/browse/SOLR-12793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662846#comment-16662846 ] ASF subversion and git services commented on SOLR-12793: Commit 71988c756b76a96cb96fc8f86183219a4a008389 in lucene-solr's branch refs/heads/master from [~varunthacker] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=71988c7 ] SOLR-12793: Move TestCloudJSONFacetJoinDomain and TestCloudJSONFacetSKG to the facet test package > Move TestCloudJSONFacetJoinDomain / TestCloudJSONFacetSKG to the facet package > -- > > Key: SOLR-12793 > URL: https://issues.apache.org/jira/browse/SOLR-12793 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Varun Thacker >Assignee: Varun Thacker >Priority: Trivial > > This proposal is to move the following two classes to the facet test > package ( org.apache.solr.search.facet ) > - TestCloudJSONFacetJoinDomain > - TestCloudJSONFacetSKG -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: Test Harness behaviour on a package run
Thanks Steve! That worked for me. I'll go ahead and make this the default example in the "test-help" target - [help] ant test "-Dtests.class=org.apache.lucene.package.*" + [help] ant test "-Dtests.class=org.apache.lucene.package.Test*|org.apache.lucene.package.*Test" On Wed, Oct 24, 2018 at 7:20 AM Steve Rowe wrote: > This worked for me: > > ant clean test > "-Dtests.class=org.apache.solr.search.facet.Test*|org.apache.solr.search.facet.*Test" > > -- > Steve > www.lucidworks.com > > > On Oct 24, 2018, at 3:20 AM, Dawid Weiss wrote: > > > > There is no way for the runner to tell which class is a JUnit test > > class. Typically this is done with pattern matching on file names. > > common-build.xml converts this property to a file inclusion pattern > > (see tests.explicitclass) and if you include everything, the runner > > tries to load and inspect a class it knows nothing about... in fact I > > don't know why it's doing it because the runner itself has a "class > > name filter" it applies to classes before it initializes them -- the > > tests.class property should be passed directly to the task > > (instead, an empty string is passed there). Perhaps this was done to > > minimize the number of scanned/loaded files, but it's not the best > > idea. > > > > Dawid > > On Wed, Oct 24, 2018 at 2:39 AM Varun Thacker wrote: > >> > >> I wanted to run all tests within one package so I ran it like this > >> > >> ant clean test "-Dtests.class=org.apache.solr.search.facet.*" > >> > >> The test run fails because the harness is trying to run DebugAgg as > it's a public class while it's not really a test class. > >> > >> [junit4] Tests with failures [seed: EB7B560286FA14D0]: > >> [junit4] - org.apache.solr.search.facet.DebugAgg.initializationError > >> [junit4] - org.apache.solr.search.facet.DebugAgg.initializationError > >> > >> > >> Is there a way to avoid this? 
> > > > - > > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org > > For additional commands, e-mail: dev-h...@lucene.apache.org > > > > > - > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org > For additional commands, e-mail: dev-h...@lucene.apache.org > >
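The pattern Steve suggested works because the runner's inclusion filter is essentially glob matching over fully qualified class names. A small stdlib sketch that mimics (rather than reuses) that filtering shows why `Test*|*Test` keeps test classes but skips a helper like DebugAgg:

```java
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

// Mimics (does not reuse) the kind of class-name filtering discussed in the
// thread: each '*' glob becomes '.*' in a regex, alternatives joined by '|'.
public class ClassNameFilter {
    static List<String> select(String globs, List<String> classNames) {
        String regex = globs.replace(".", "\\.").replace("*", ".*");
        Pattern p = Pattern.compile(regex);
        return classNames.stream()
                .filter(c -> p.matcher(c).matches())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> classes = Arrays.asList(
                "org.apache.solr.search.facet.TestJsonFacets",
                "org.apache.solr.search.facet.DebugAgg"); // helper, not a test
        // The broad package glob matches everything, including DebugAgg:
        List<String> broad = select("org.apache.solr.search.facet.*", classes);
        // The Test*|*Test pattern from the thread excludes the helper:
        List<String> narrow = select(
                "org.apache.solr.search.facet.Test*|org.apache.solr.search.facet.*Test",
                classes);
        if (broad.size() != 2 || narrow.size() != 1) throw new AssertionError();
        System.out.println(narrow); // [org.apache.solr.search.facet.TestJsonFacets]
    }
}
```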
[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1162 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1162/ No tests ran. Build Log: [...truncated 23412 lines...] [asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: invalid part, must have at least one section (e.g., chapter, appendix, etc.) [asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid part, must have at least one section (e.g., chapter, appendix, etc.) [java] Processed 2435 links (1987 relative) to 3184 anchors in 247 files [echo] Validated Links & Anchors via: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/ -dist-changes: [copy] Copying 4 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes package: -unpack-solr-tgz: -ensure-solr-tgz-exists: [mkdir] Created dir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked [untar] Expanding: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-8.0.0.tgz into /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked generate-maven-artifacts: resolve: resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml resolve: resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. 
-ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. 
-ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail:
[jira] [Updated] (SOLR-12911) Include "refine" param (refinement SOLR-9432) to the FacetStream
[ https://issues.apache.org/jira/browse/SOLR-12911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-12911: Attachment: SOLR-12911.patch > Include "refine" param (refinement SOLR-9432) to the FacetStream > - > > Key: SOLR-12911 > URL: https://issues.apache.org/jira/browse/SOLR-12911 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Reporter: Amrit Sarkar >Priority: Major > Attachments: SOLR-12911.patch, SOLR-12911.patch > > > SOLR-9432 introduces refinement in JSON Faceting which makes sure all the > respective buckets are fetched from each shard of the collection. This > ensures authentic mathematical bucket counts. > FacetStream should have refinement as an optional parameter like below, with > default being "false": > {code} > facet(collection1, > q="*:*", > refine="true", > buckets="a_s", > bucketSorts="sum(a_i) desc", > bucketSizeLimit=100, > sum(a_i), > count(*)) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8524) Nori (Korean) analyzer tokenization issues
[ https://issues.apache.org/jira/browse/LUCENE-8524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662833#comment-16662833 ] Trey Jones commented on LUCENE-8524: {quote}A can be discussed but I think it needs a separate issue since this is more a feature than a bug. This is a design choice and I am not sure that splitting is really an issue here. We could add a mode that joins multiple alphabets together, but it's not a major concern since these mixed terms should appear very rarely. {quote} All of the examples except for the _Мoscow_ one are taken from Korean Wikipedia or Wiktionary, so they do occur. Out of a sample of 10,000 random Korean Wikipedia articles (with ~2.4M tokens), 100 Cyrillic and 126 Greek tokens were affected. An additional 2758 ID-like tokens (e.g., _BH115E_) were affected. 96 Phonetic Alphabet tokens were affected. 769 tokens with apostrophes were affected, too; most were possessives with _’s,_ but also included were words like _An'gorso, Na’vi,_ and _O'Donnell._ Out of 2.4M tokens, these are rare, but there are still a lot of them, especially when you scale the 10K sample up 43x to the full 430K articles on Wikipedia. It definitely seems like a bug that a Greek word like _εἰμί_ gets split into three tokens, or _Ба̀лтичко̄_ gets split into four. The Greek seems to be the worst case, since _ἰ_ is in the “Greek Extended” Unicode block while the rest are in the “Greek and Coptic” block, which aren’t really different character sets. *Thanks for fixing B, D, and E!* > Nori (Korean) analyzer tokenization issues > -- > > Key: LUCENE-8524 > URL: https://issues.apache.org/jira/browse/LUCENE-8524 > Project: Lucene - Core > Issue Type: Bug > Components: modules/analysis >Reporter: Trey Jones >Priority: Major > Attachments: LUCENE-8524.patch > > > I opened this originally as an [Elastic > bug|https://github.com/elastic/elasticsearch/issues/34283#issuecomment-426940784], > but was asked to re-file it here. (Sorry for the poor formatting. 
> "pre-formatted" isn't behaving.) > *Elastic version* > { > "name" : "adOS8gy", > "cluster_name" : "elasticsearch", > "cluster_uuid" : "GVS7gpVBQDGwtHl3xnJbLw", > "version" : { > "number" : "6.4.0", > "build_flavor" : "default", > "build_type" : "deb", > "build_hash" : "595516e", > "build_date" : "2018-08-17T23:18:47.308994Z", > "build_snapshot" : false, > "lucene_version" : "7.4.0", > "minimum_wire_compatibility_version" : "5.6.0", > "minimum_index_compatibility_version" : "5.0.0" > }, > "tagline" : "You Know, for Search" > } > *Plugins installed:* [analysis-icu, analysis-nori] > *JVM version:* > openjdk version "1.8.0_181" > OpenJDK Runtime Environment (build 1.8.0_181-8u181-b13-1~deb9u1-b13) > OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode) > *OS version:* > Linux vagrantes6 4.9.0-6-amd64 #1 SMP Debian 4.9.82-1+deb9u3 (2018-03-02) > x86_64 GNU/Linux > *Description of the problem including expected versus actual behavior:* > I've uncovered a number of oddities in tokenization in the Nori analyzer. All > examples are from [Korean Wikipedia|https://ko.wikipedia.org/] or [Korean > Wiktionary|https://ko.wiktionary.org/] (including non-CJK examples). In rough > order of importance: > A. Tokens are split on different character POS types (which seem to not quite > line up with Unicode character blocks), which leads to weird results for > non-CJK tokens: > * εἰμί is tokenized as three tokens: ε/SL(Foreign language) + ἰ/SY(Other > symbol) + μί/SL(Foreign language) > * ka̠k̚t͡ɕ͈a̠k̚ is tokenized as ka/SL(Foreign language) + ̠/SY(Other symbol) > + k/SL(Foreign language) + ̚/SY(Other symbol) + t/SL(Foreign language) + > ͡ɕ͈/SY(Other symbol) + a/SL(Foreign language) + ̠/SY(Other symbol) + > k/SL(Foreign language) + ̚/SY(Other symbol) > * Ба̀лтичко̄ is tokenized as ба/SL(Foreign language) + ̀/SY(Other symbol) + > лтичко/SL(Foreign language) + ̄/SY(Other symbol) > * don't is tokenized as don + t; same for don’t (with a curly apostrophe). 
> * אוֹג׳וּ is tokenized as אוֹג/SY(Other symbol) + וּ/SY(Other symbol) > * Мoscow (with a Cyrillic М and the rest in Latin) is tokenized as м + oscow > While it is still possible to find these words using Nori, there are many > more chances for false positives when the tokens are split up like this. In > particular, individual numbers and combining diacritics are indexed > separately (e.g., in the Cyrillic example above), which can lead to a > performance hit on large corpora like Wiktionary or Wikipedia. > Work around: use a character filter to get rid of combining diacritics before > Nori processes the text. This doesn't solve the Greek, Hebrew, or English > cases, though. > Suggested fix: Characters in related Unicode blocks—like "Greek" and "Greek > Extended", or "Latin" and
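The Greek block split called out in point A can be confirmed with nothing but the JDK: ἰ (with its breathing mark) sits in the Greek Extended Unicode block, while the neighbouring letters sit in Greek and Coptic (which Java's API simply calls GREEK). A short sketch:

```java
import java.lang.Character.UnicodeBlock;

// Shows why εἰμί straddles two Unicode blocks: ἰ (U+1F30, iota with psili)
// is in "Greek Extended", while ε, μ, ί are in "Greek and Coptic".
public class GreekBlocks {
    public static void main(String[] args) {
        String word = "\u03B5\u1F30\u03BC\u03AF"; // εἰμί
        for (int i = 0; i < word.length(); i++) {
            char c = word.charAt(i);
            System.out.printf("U+%04X -> %s%n", (int) c, UnicodeBlock.of(c));
        }
        if (UnicodeBlock.of('\u03B5') != UnicodeBlock.GREEK) throw new AssertionError();
        if (UnicodeBlock.of('\u1F30') != UnicodeBlock.GREEK_EXTENDED) throw new AssertionError();
    }
}
```

A tokenizer that splits on block boundaries will therefore cut this single word into three runs, exactly as reported.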
[jira] [Updated] (SOLR-12795) Introduce 'limit' parameter in FacetStream.
[ https://issues.apache.org/jira/browse/SOLR-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-12795: Attachment: SOLR-12795.patch > Introduce 'limit' parameter in FacetStream. > --- > > Key: SOLR-12795 > URL: https://issues.apache.org/jira/browse/SOLR-12795 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Reporter: Amrit Sarkar >Priority: Major > Attachments: SOLR-12795.patch, SOLR-12795.patch > > > Today FacetStream takes a "bucketSizeLimit" parameter. Here is what the doc > says about this parameter - The number of buckets to include. This value is > applied to each dimension. > Now let's say we create a facet stream with 3 nested facets. For example > "year_i,month_i,day_i" and provide 10 as the bucketSizeLimit. > FacetStream would return 10 results to us for this facet expression while the > total number of unique values is 1000 (10*10*10) > The API should have a separate parameter "limit" which limits the number of > tuples (say 500) while bucketSizeLimit should be used to specify the size of > each bucket in the JSON Facet API. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
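The 10*10*10 in the description is just a per-dimension product. A tiny sketch of the worst-case tuple count that motivates a separate overall limit (a hypothetical helper, not FacetStream API):

```java
// With bucketSizeLimit applied to each of N nested dimensions, the worst-case
// number of emitted tuples is bucketSizeLimit^N -- here 10^3 = 1000 for
// year_i,month_i,day_i -- which is why a separate overall 'limit' is proposed.
public class BucketMath {
    static long worstCaseTuples(int bucketSizeLimit, int dimensions) {
        long total = 1;
        for (int i = 0; i < dimensions; i++) total *= bucketSizeLimit;
        return total;
    }

    public static void main(String[] args) {
        if (worstCaseTuples(10, 3) != 1000) throw new AssertionError();
        System.out.println(worstCaseTuples(10, 3)); // 1000
    }
}
```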
[jira] [Updated] (SOLR-12795) Introduce 'limit' parameter in FacetStream.
[ https://issues.apache.org/jira/browse/SOLR-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-12795: Attachment: (was: SOLR-12795.patch) > Introduce 'limit' parameter in FacetStream. > --- > > Key: SOLR-12795 > URL: https://issues.apache.org/jira/browse/SOLR-12795 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions > Reporter: Amrit Sarkar > Priority: Major > Attachments: SOLR-12795.patch, SOLR-12795.patch > > > Today facetStream takes a "bucketSizeLimit" parameter. Here is what the doc > says about this parameter - The number of buckets to include. This value is > applied to each dimension. > Now let's say we create a facet stream with 3 nested facets. For example > "year_i,month_i,day_i" and provide 10 as the bucketSizeLimit. > FacetStream would return 10 results to us for this facet expression while the > total number of unique values is 1000 (10*10*10). > The API should have a separate parameter "limit" which limits the number of > tuples (say 500) while bucketSizeLimit should be used to specify the size of > each bucket in the JSON Facet API.
[jira] [Updated] (SOLR-12795) Introduce 'limit' parameter in FacetStream.
[ https://issues.apache.org/jira/browse/SOLR-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-12795: Attachment: (was: SOLR-12795.patch) > Introduce 'limit' parameter in FacetStream. > --- > > Key: SOLR-12795 > URL: https://issues.apache.org/jira/browse/SOLR-12795 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions > Reporter: Amrit Sarkar > Priority: Major > Attachments: SOLR-12795.patch, SOLR-12795.patch > > > Today facetStream takes a "bucketSizeLimit" parameter. Here is what the doc > says about this parameter - The number of buckets to include. This value is > applied to each dimension. > Now let's say we create a facet stream with 3 nested facets. For example > "year_i,month_i,day_i" and provide 10 as the bucketSizeLimit. > FacetStream would return 10 results to us for this facet expression while the > total number of unique values is 1000 (10*10*10). > The API should have a separate parameter "limit" which limits the number of > tuples (say 500) while bucketSizeLimit should be used to specify the size of > each bucket in the JSON Facet API.
[jira] [Updated] (SOLR-12795) Introduce 'limit' parameter in FacetStream.
[ https://issues.apache.org/jira/browse/SOLR-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-12795: Attachment: SOLR-12795.patch > Introduce 'limit' parameter in FacetStream. > --- > > Key: SOLR-12795 > URL: https://issues.apache.org/jira/browse/SOLR-12795 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions > Reporter: Amrit Sarkar > Priority: Major > Attachments: SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch > > > Today facetStream takes a "bucketSizeLimit" parameter. Here is what the doc > says about this parameter - The number of buckets to include. This value is > applied to each dimension. > Now let's say we create a facet stream with 3 nested facets. For example > "year_i,month_i,day_i" and provide 10 as the bucketSizeLimit. > FacetStream would return 10 results to us for this facet expression while the > total number of unique values is 1000 (10*10*10). > The API should have a separate parameter "limit" which limits the number of > tuples (say 500) while bucketSizeLimit should be used to specify the size of > each bucket in the JSON Facet API.
[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk-9) - Build # 4891 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4891/ Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseG1GC 6 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.search.QueryEqualityTest Error Message: testParserCoverage was run w/o any other method explicitly testing qparser: min_hash Stack Trace: java.lang.AssertionError: testParserCoverage was run w/o any other method explicitly testing qparser: min_hash at __randomizedtesting.SeedInfo.seed([75436CE6411AB13C]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.search.QueryEqualityTest.afterClassParserCoverageTest(QueryEqualityTest.java:59) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:898) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:844)
[jira] [Commented] (SOLR-12827) Migrate cluster wide defaults syntax in cluster properties to a nested structure
[ https://issues.apache.org/jira/browse/SOLR-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662778#comment-16662778 ] ASF subversion and git services commented on SOLR-12827: Commit 16ee847663d67d1b003228d809dac6d5e30f2682 in lucene-solr's branch refs/heads/master from [~ctargett] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=16ee847 ] SOLR-12827: remove monospace in headings; correct formatting for heading level indicator; remove redundant "Example #" in the example titles > Migrate cluster wide defaults syntax in cluster properties to a nested > structure > > > Key: SOLR-12827 > URL: https://issues.apache.org/jira/browse/SOLR-12827 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud > Reporter: Shalin Shekhar Mangar > Assignee: Shalin Shekhar Mangar > Priority: Major > Fix For: 7.6, master (8.0) > > Attachments: SOLR-12827.patch > > > SOLR-12387 introduced cluster wide defaults for collection properties. > However, the syntax committed is flat and ugly e.g. collectionDefaults > instead of defaults/collection in spite of the latter being proposed by > Andrzej. Anyway, we should change the format now before it sees widespread > use. I propose to fix documentation to describe the new format and > auto-convert cluster properties to new syntax on startup.
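The proposed auto-conversion on startup amounts to relocating the legacy flat key under a nested one. A minimal Python sketch of that migration (function and key names are assumptions for illustration, not Solr's actual implementation):

```python
def convert_cluster_props(props):
    # Move the legacy flat 'collectionDefaults' key under the nested
    # 'defaults'/'collection' structure; leave everything else untouched.
    props = dict(props)  # do not mutate the caller's copy
    legacy = props.pop("collectionDefaults", None)
    if legacy is not None:
        props.setdefault("defaults", {})["collection"] = legacy
    return props
```

An already-converted (or unrelated) properties map passes through unchanged, so the conversion is safe to run unconditionally at startup.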
[jira] [Commented] (SOLR-12827) Migrate cluster wide defaults syntax in cluster properties to a nested structure
[ https://issues.apache.org/jira/browse/SOLR-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662779#comment-16662779 ] ASF subversion and git services commented on SOLR-12827: Commit ac0b058773c1384190d867a4419beffac4eac810 in lucene-solr's branch refs/heads/branch_7x from [~ctargett] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ac0b058 ] SOLR-12827: remove monospace in headings; correct formatting for heading level indicator; remove redundant "Example #" in the example titles > Migrate cluster wide defaults syntax in cluster properties to a nested > structure > > > Key: SOLR-12827 > URL: https://issues.apache.org/jira/browse/SOLR-12827 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud > Reporter: Shalin Shekhar Mangar > Assignee: Shalin Shekhar Mangar > Priority: Major > Fix For: 7.6, master (8.0) > > Attachments: SOLR-12827.patch > > > SOLR-12387 introduced cluster wide defaults for collection properties. > However, the syntax committed is flat and ugly e.g. collectionDefaults > instead of defaults/collection in spite of the latter being proposed by > Andrzej. Anyway, we should change the format now before it sees widespread > use. I propose to fix documentation to describe the new format and > auto-convert cluster properties to new syntax on startup.
[jira] [Created] (SOLR-12916) Fail to parse queries for listeners added via Config API
Konstantin Gribov created SOLR-12916: Summary: Fail to parse queries for listeners added via Config API Key: SOLR-12916 URL: https://issues.apache.org/jira/browse/SOLR-12916 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Components: config-api Affects Versions: 7.5 Reporter: Konstantin Gribov {{//listener/queries}} should be an array of {{NamedList}} but it can't be added via Config API (or configoverlay.json) since things like {{"queries": [ ["q", "*:*", "rows", 1] ]}} are deserialized into {{List<List>}} and not into the {{List<NamedList>}} required in {{QuerySenderListener:51}} (as of Solr 7.5.0).
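The mismatch is between a flat JSON array of alternating names and values and the name/value-pair structure a NamedList represents. A Python sketch of the pairing step the deserializer would need (illustrative only; `to_named_list` is a hypothetical name, not Solr code):

```python
def to_named_list(flat):
    # A JSON array like ["q", "*:*", "rows", 1] deserializes as a plain
    # list; listeners expect name/value pairs, so pair the entries up.
    if len(flat) % 2 != 0:
        raise ValueError("expected alternating name/value entries")
    return list(zip(flat[0::2], flat[1::2]))

print(to_named_list(["q", "*:*", "rows", 1]))  # [('q', '*:*'), ('rows', 1)]
```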
[JENKINS] Lucene-Solr-BadApples-Tests-7.x - Build # 196 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/196/ 2 tests failed. FAILED: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMixedBounds Error Message: did not finish processing in time Stack Trace: java.lang.AssertionError: did not finish processing in time at __randomizedtesting.SeedInfo.seed([DBB568E8ACB6F209:D136D745E10DF953]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMixedBounds(IndexSizeTriggerTest.java:572) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration Error Message: did not finish processing in time Stack Trace: java.lang.AssertionError: did not finish processing in time at __randomizedtesting.SeedInfo.seed([DBB568E8ACB6F209:E23BD1A883493BF7]:0) at
[jira] [Commented] (SOLR-9317) ADDREPLICA command should be more flexible and add 'n' replicas to a collection,shard
[ https://issues.apache.org/jira/browse/SOLR-9317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662746#comment-16662746 ] ASF subversion and git services commented on SOLR-9317: --- Commit 16871e645e6f52afb77ad12b90555942ef1bea68 in lucene-solr's branch refs/heads/branch_7x from [~ctargett] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=16871e6 ] SOLR-9317: cleanup a couple of typos; add some description to addreplica examples > ADDREPLICA command should be more flexible and add 'n' replicas to a > collection,shard > - > > Key: SOLR-9317 > URL: https://issues.apache.org/jira/browse/SOLR-9317 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud > Reporter: Noble Paul > Assignee: Shalin Shekhar Mangar > Priority: Major > Fix For: 7.6, master (8.0) > > Attachments: Policeman.L-S-master-Linux.22903.consoleText.txt.gz, > SOLR-9317.patch, SOLR-9317.patch, SOLR-9317.patch, SOLR-9317.patch > > > It should automatically identify the nodes where these replicas should be > created as well
[jira] [Commented] (SOLR-9317) ADDREPLICA command should be more flexible and add 'n' replicas to a collection,shard
[ https://issues.apache.org/jira/browse/SOLR-9317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662745#comment-16662745 ] ASF subversion and git services commented on SOLR-9317: --- Commit b72f05dac63f88929056627525857f45b303154b in lucene-solr's branch refs/heads/master from [~ctargett] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b72f05d ] SOLR-9317: cleanup a couple of typos; add some description to addreplica examples > ADDREPLICA command should be more flexible and add 'n' replicas to a > collection,shard > - > > Key: SOLR-9317 > URL: https://issues.apache.org/jira/browse/SOLR-9317 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud > Reporter: Noble Paul > Assignee: Shalin Shekhar Mangar > Priority: Major > Fix For: 7.6, master (8.0) > > Attachments: Policeman.L-S-master-Linux.22903.consoleText.txt.gz, > SOLR-9317.patch, SOLR-9317.patch, SOLR-9317.patch, SOLR-9317.patch > > > It should automatically identify the nodes where these replicas should be > created as well
[jira] [Updated] (SOLR-12913) Streaming Expressions / Math Expressions docs for 7.6 release
[ https://issues.apache.org/jira/browse/SOLR-12913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-12913: -- Attachment: SOLR-12913.patch > Streaming Expressions / Math Expressions docs for 7.6 release > - > > Key: SOLR-12913 > URL: https://issues.apache.org/jira/browse/SOLR-12913 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) > Reporter: Joel Bernstein > Priority: Major > Attachments: SOLR-12913.patch, SOLR-12913.patch > > > This ticket will add docs for new Streaming Expressions and Math Expressions > for Solr 7.6
[jira] [Resolved] (SOLR-9425) TestSolrConfigHandlerConcurrent failure: NullPointerException
[ https://issues.apache.org/jira/browse/SOLR-9425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke resolved SOLR-9425. --- Resolution: Fixed Assignee: Christine Poerschke Fix Version/s: master (8.0) 7.6 > TestSolrConfigHandlerConcurrent failure: NullPointerException > - > > Key: SOLR-9425 > URL: https://issues.apache.org/jira/browse/SOLR-9425 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Steve Rowe >Assignee: Christine Poerschke >Priority: Major > Fix For: 7.6, master (8.0) > > Attachments: Lucene-Solr-tests-master.7870.consoleText.txt, > SOLR-9425.patch > > > From my Jenkins - does not reproduce for me - I'll attach the Jenkins log: > {noformat} > Checking out Revision f8536ce72606af6c75cf9137f354da57bb0f3dbc > (refs/remotes/origin/master) > [...] > [junit4] 2> NOTE: reproduce with: ant test > -Dtestcase=TestSolrConfigHandlerConcurrent -Dtests.method=test > -Dtests.seed=7B5B674C5C76D216 -Dtests.slow=true -Dtests.locale=lt > -Dtests.timezone=Antarctica/Troll -Dtests.asserts=true > -Dtests.file.encoding=UTF-8 > [junit4] ERROR 19.9s J6 | TestSolrConfigHandlerConcurrent.test <<< > [junit4]> Throwable #1: java.lang.NullPointerException > [junit4]> at > __randomizedtesting.SeedInfo.seed([7B5B674C5C76D216:F30F5896F28ABFEE]:0) > [junit4]> at > org.apache.solr.handler.TestSolrConfigHandlerConcurrent.test(TestSolrConfigHandlerConcurrent.java:109) > [junit4]> at > org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985) > [junit4]> at > org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960) > [junit4]> at java.lang.Thread.run(Thread.java:745) > {noformat} > Looking through email notifications, this happens fairly regularly on ASF and > Policeman Jenkins in addition to mine - the earliest I found was on February > 12, 
2015.
[jira] [Resolved] (SOLR-12772) Correct dotProduct parameters+syntax documentation
[ https://issues.apache.org/jira/browse/SOLR-12772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke resolved SOLR-12772. Resolution: Fixed Assignee: Christine Poerschke Fix Version/s: master (8.0) 7.6 > Correct dotProduct parameters+syntax documentation > -- > > Key: SOLR-12772 > URL: https://issues.apache.org/jira/browse/SOLR-12772 > Project: Solr > Issue Type: Task > Reporter: Christine Poerschke > Assignee: Christine Poerschke > Priority: Minor > Fix For: 7.6, master (8.0) > > Attachments: SOLR-12772.patch > > > Stumbled across it being documented as {{dotProduct(numericArray)}} when it > should be {{dotProduct(numericArray, numericArray)}} - no?
[jira] [Resolved] (SOLR-12905) reproducible MultiSolrCloudTestCaseTest test failure
[ https://issues.apache.org/jira/browse/SOLR-12905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke resolved SOLR-12905. Resolution: Fixed Fix Version/s: master (8.0) 7.6 > reproducible MultiSolrCloudTestCaseTest test failure > > > Key: SOLR-12905 > URL: https://issues.apache.org/jira/browse/SOLR-12905 > Project: Solr > Issue Type: Test > Reporter: Christine Poerschke > Assignee: Christine Poerschke > Priority: Minor > Fix For: 7.6, master (8.0) > > Attachments: SOLR-12905.patch > > > We've seen a few of these in Jenkins via the dev list > https://lists.apache.org/list.html?dev@lucene.apache.org:lte=1y:%22duplicate%20clusterId%22 > e.g. > {code} > FAILED: > junit.framework.TestSuite.org.apache.solr.cloud.MultiSolrCloudTestCaseTest > Error Message: duplicate clusterId cloud1 Stack Trace: > java.lang.AssertionError: duplicate clusterId cloud1 at > __randomizedtesting.SeedInfo.seed([6DADDAF691F08EF7]:0) at > org.junit.Assert.fail(Assert.java:93) at > org.junit.Assert.assertTrue(Assert.java:43) at > org.junit.Assert.assertFalse(Assert.java:68) at > org.apache.solr.cloud.MultiSolrCloudTestCase.doSetupClusters(MultiSolrCloudTestCase.java:93) > at > org.apache.solr.cloud.MultiSolrCloudTestCaseTest.setupClusters(MultiSolrCloudTestCaseTest.java:53) > ... > {code} > With a bit of digging I was able to reliably reproduce it by using > {{-Dtests.dups=N}} (which normally runs in multiple JVMs in parallel) > together with the {{-Dtests.jvms=1}} constraint so that the tests actually run > sequentially in one JVM, i.e. altogether > {code} > ant test -Dtests.dups=10 -Dtests.jvms=1 -Dtestcase=MultiSolrCloudTestCaseTest > {code} > The fix is simple, i.e. the static {{clusterId2collection}} variable needs to > be cleared in @AfterClass, as someone ([~janhoy] ?) already mentioned > somewhere else, I think.
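The failure mode described here (class-level state surviving across sequential test runs in the same JVM) and the fix (clearing it in @AfterClass) can be sketched in Python; the names below are illustrative, not the actual test class:

```python
class ClusterRegistry:
    # Class-level state shared by every run in the same process, like a
    # static field in a JUnit test class run repeatedly in one JVM.
    cluster_id_to_collection = {}

    @classmethod
    def register(cls, cluster_id, collection):
        # Without a clear() between runs, the second run trips this check:
        # "duplicate clusterId cloud1".
        if cluster_id in cls.cluster_id_to_collection:
            raise AssertionError("duplicate clusterId " + cluster_id)
        cls.cluster_id_to_collection[cluster_id] = collection

    @classmethod
    def clear(cls):
        # The analogue of clearing the static map in @AfterClass.
        cls.cluster_id_to_collection.clear()
```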
[jira] [Commented] (SOLR-12875) ArrayIndexOutOfBoundsException when using uniqueBlock(_root_) in JSON Facets
[ https://issues.apache.org/jira/browse/SOLR-12875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662715#comment-16662715 ] Mikhail Khludnev commented on SOLR-12875: - Ok. Could {{resize}} be necessary for DV hash only? It seems like we need to add a case for {{uniqueBlock()}} in {{TestJsonFacets}} and it can catch while enumerating methods. > ArrayIndexOutOfBoundsException when using uniqueBlock(_root_) in JSON Facets > > > Key: SOLR-12875 > URL: https://issues.apache.org/jira/browse/SOLR-12875 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Facet Module >Affects Versions: 7.5 >Reporter: Tim Underwood >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > I'm seeing java.lang.ArrayIndexOutOfBoundsException exceptions for some > requests when trying to make use of > {noformat} > uniqueBlock(_root_){noformat} > within JSON Facets. > Here are some example Stack Traces: > {noformat} > 2018-10-12 14:08:50.587 ERROR (qtp215078753-3353) [ x:my_core] > o.a.s.s.HttpSolrCall null:java.lang.ArrayIndexOutOfBoundsException: Index 13 > out of bounds for length 8 > at > org.apache.solr.search.facet.UniqueBlockAgg$UniqueBlockSlotAcc.collectOrdToSlot(UniqueBlockAgg.java:40) > at > org.apache.solr.search.facet.UniqueSinglevaluedSlotAcc.collect(UniqueSinglevaluedSlotAcc.java:85) > at > org.apache.solr.search.facet.FacetFieldProcessor.collectFirstPhase(FacetFieldProcessor.java:243) > at > org.apache.solr.search.facet.FacetFieldProcessorByHashDV.collectValFirstPhase(FacetFieldProcessorByHashDV.java:432) > at > org.apache.solr.search.facet.FacetFieldProcessorByHashDV.access$100(FacetFieldProcessorByHashDV.java:50) > at > org.apache.solr.search.facet.FacetFieldProcessorByHashDV$5.collect(FacetFieldProcessorByHashDV.java:395) > at > org.apache.solr.search.DocSetUtil.collectSortedDocSet(DocSetUtil.java:284) > at > 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.collectDocs(FacetFieldProcessorByHashDV.java:376) > at > org.apache.solr.search.facet.FacetFieldProcessorByHashDV.calcFacets(FacetFieldProcessorByHashDV.java:247) > at > org.apache.solr.search.facet.FacetFieldProcessorByHashDV.process(FacetFieldProcessorByHashDV.java:214) > at > org.apache.solr.search.facet.FacetRequest.process(FacetRequest.java:368) > at > org.apache.solr.search.facet.FacetProcessor.processSubs(FacetProcessor.java:472) > at > org.apache.solr.search.facet.FacetProcessor.fillBucket(FacetProcessor.java:429) > at > org.apache.solr.search.facet.FacetQueryProcessor.process(FacetQuery.java:64) > at > org.apache.solr.search.facet.FacetRequest.process(FacetRequest.java:368) > at > org.apache.solr.search.facet.FacetModule.process(FacetModule.java:139) > at > org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199) > {noformat} > > Here is another one at a different location in UniqueBlockAgg: > > {noformat} > 2018-10-12 21:37:57.322 ERROR (qtp215078753-4072) [ x:my_core] > o.a.s.h.RequestHandlerBase java.lang.ArrayIndexOutOfBoundsException: Index 23 > out of bounds for length 16 > at > org.apache.solr.search.facet.UniqueBlockAgg$UniqueBlockSlotAcc.getValue(UniqueBlockAgg.java:59) > at org.apache.solr.search.facet.SlotAcc.setValues(SlotAcc.java:146) > at > org.apache.solr.search.facet.FacetFieldProcessor.fillBucket(FacetFieldProcessor.java:431) > at > org.apache.solr.search.facet.FacetFieldProcessor.findTopSlots(FacetFieldProcessor.java:381) > at > org.apache.solr.search.facet.FacetFieldProcessorByHashDV.calcFacets(FacetFieldProcessorByHashDV.java:249) > at > org.apache.solr.search.facet.FacetFieldProcessorByHashDV.process(FacetFieldProcessorByHashDV.java:214) > at > org.apache.solr.search.facet.FacetRequest.process(FacetRequest.java:368) > at > 
org.apache.solr.search.facet.FacetProcessor.processSubs(FacetProcessor.java:472) > at > org.apache.solr.search.facet.FacetProcessor.fillBucket(FacetProcessor.java:429) > at > org.apache.solr.search.facet.FacetQueryProcessor.process(FacetQuery.java:64) > at > org.apache.solr.search.facet.FacetRequest.process(FacetRequest.java:368) > at > org.apache.solr.search.facet.FacetModule.process(FacetModule.java:139) > at > org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298) > at >
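Both stack traces fail when a slot index (13, 23) exceeds the backing array's length (8, 16), and the preceding comment asks whether a {{resize}} is needed for the DV-hash path. A Python sketch of a slot accumulator that grows its array on demand, as such a resize might work (class and method names are illustrative, not Solr's):

```python
class UniqueBlockSlotAcc:
    """Sketch of a slot accumulator whose backing array grows on demand."""

    def __init__(self, initial_slots=8):
        self.counts = [0] * initial_slots

    def collect_ord_to_slot(self, slot):
        # Grow by doubling instead of raising an index error when the
        # facet processor hands us a slot beyond the current capacity.
        if slot >= len(self.counts):
            new_size = len(self.counts)
            while new_size <= slot:
                new_size *= 2
            self.counts.extend([0] * (new_size - len(self.counts)))
        self.counts[slot] += 1
```

With 8 initial slots, collecting into slot 13 (as in the first trace) doubles the array to 16 rather than failing.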
[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-12-ea+12) - Build # 2971 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2971/ Java: 64bit/jdk-12-ea+12 -XX:+UseCompressedOops -XX:+UseParallelGC 19 tests failed. FAILED: org.apache.solr.client.solrj.impl.CloudSolrClientTest.testParallelUpdateQTime Error Message: Error from server at https://127.0.0.1:3/solr/collection1_shard2_replica_n2: Expected mime type application/octet-stream but got text/html. Error 404 Can not find: /solr/collection1_shard2_replica_n2/update HTTP ERROR 404 Problem accessing /solr/collection1_shard2_replica_n2/update. Reason: Can not find: /solr/collection1_shard2_replica_n2/update. Powered by Jetty // 9.4.11.v20180605 Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at https://127.0.0.1:3/solr/collection1_shard2_replica_n2: Expected mime type application/octet-stream but got text/html. Error 404 Can not find: /solr/collection1_shard2_replica_n2/update HTTP ERROR 404 Problem accessing /solr/collection1_shard2_replica_n2/update.
Reason: Can not find: /solr/collection1_shard2_replica_n2/update (Powered by Jetty 9.4.11.v20180605) at __randomizedtesting.SeedInfo.seed([EDC7146314BECABA:31F6FF25AC11F29]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.client.solrj.impl.CloudSolrClientTest.testParallelUpdateQTime(CloudSolrClientTest.java:146) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at
[jira] [Commented] (SOLR-9425) TestSolrConfigHandlerConcurrent failure: NullPointerException
[ https://issues.apache.org/jira/browse/SOLR-9425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662710#comment-16662710 ] ASF subversion and git services commented on SOLR-9425: --- Commit e6756895c9c95e58afed23ffd03776fa9c0699a7 in lucene-solr's branch refs/heads/branch_7x from [~cpoerschke] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e675689 ] SOLR-9425: fix NullPointerException in TestSolrConfigHandlerConcurrent > TestSolrConfigHandlerConcurrent failure: NullPointerException > - > > Key: SOLR-9425 > URL: https://issues.apache.org/jira/browse/SOLR-9425 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Steve Rowe >Priority: Major > Attachments: Lucene-Solr-tests-master.7870.consoleText.txt, > SOLR-9425.patch > > > From my Jenkins - does not reproduce for me - I'll attach the Jenkins log: > {noformat} > Checking out Revision f8536ce72606af6c75cf9137f354da57bb0f3dbc > (refs/remotes/origin/master) > [...] 
> [junit4] 2> NOTE: reproduce with: ant test > -Dtestcase=TestSolrConfigHandlerConcurrent -Dtests.method=test > -Dtests.seed=7B5B674C5C76D216 -Dtests.slow=true -Dtests.locale=lt > -Dtests.timezone=Antarctica/Troll -Dtests.asserts=true > -Dtests.file.encoding=UTF-8 > [junit4] ERROR 19.9s J6 | TestSolrConfigHandlerConcurrent.test <<< > [junit4]> Throwable #1: java.lang.NullPointerException > [junit4]> at > __randomizedtesting.SeedInfo.seed([7B5B674C5C76D216:F30F5896F28ABFEE]:0) > [junit4]> at > org.apache.solr.handler.TestSolrConfigHandlerConcurrent.test(TestSolrConfigHandlerConcurrent.java:109) > [junit4]> at > org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985) > [junit4]> at > org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960) > [junit4]> at java.lang.Thread.run(Thread.java:745) > {noformat} > Looking through email notifications, this happens fairly regularly on ASF and > Policeman Jenkins in addition to mine - the earliest I found was on February > 12, 2015. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12905) reproducible MultiSolrCloudTestCaseTest test failure
[ https://issues.apache.org/jira/browse/SOLR-12905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662707#comment-16662707 ] ASF subversion and git services commented on SOLR-12905: Commit 27bc9ccfb56e6f631a1f2810d49b5bb96d734f01 in lucene-solr's branch refs/heads/branch_7x from [~cpoerschke] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=27bc9cc ] SOLR-12905: MultiSolrCloudTestCase now clears static clusterId2cluster in @AfterClass > reproducible MultiSolrCloudTestCaseTest test failure > > > Key: SOLR-12905 > URL: https://issues.apache.org/jira/browse/SOLR-12905 > Project: Solr > Issue Type: Test >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Attachments: SOLR-12905.patch > > > We've seen a few of these in Jenkins via the dev list > https://lists.apache.org/list.html?dev@lucene.apache.org:lte=1y:%22duplicate%20clusterId%22 > e.g. > {code} > FAILED: > junit.framework.TestSuite.org.apache.solr.cloud.MultiSolrCloudTestCaseTest > Error Message: duplicate clusterId cloud1 Stack Trace: > java.lang.AssertionError: duplicate clusterId cloud1 at > __randomizedtesting.SeedInfo.seed([6DADDAF691F08EF7]:0) at > org.junit.Assert.fail(Assert.java:93) at > org.junit.Assert.assertTrue(Assert.java:43) at > org.junit.Assert.assertFalse(Assert.java:68) at > org.apache.solr.cloud.MultiSolrCloudTestCase.doSetupClusters(MultiSolrCloudTestCase.java:93) > at > org.apache.solr.cloud.MultiSolrCloudTestCaseTest.setupClusters(MultiSolrCloudTestCaseTest.java:53) > ... > {code} > With a bit of digging I was able to reliably reproduce it by using > {{-Dtests.dups=N}} (which normally runs in multiple JVMs in parallel) > together with the {{-Dtests.jvms=1}} constraint so that the tests actually run > sequentially in one JVM i.e. altogether > {code} > ant test -Dtests.dups=10 -Dtests.jvms=1 -Dtestcase=MultiSolrCloudTestCaseTest > {code} > The fix is simple i.e. 
the static {{clusterId2collection}} variable needs to > be cleared in @AfterClass as someone ([~janhoy] ?) already mentioned > somewhere else, I think. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
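For context, the fix described above is the standard pattern for static test state shared within one JVM: when {{-Dtests.dups}} repeats a suite in the same JVM, anything left in a static field by the previous run is still there. A minimal, self-contained Java sketch of the failure mode and the clear-in-@AfterClass fix (class and method names here are hypothetical stand-ins for MultiSolrCloudTestCase's actual members):

```java
import java.util.HashMap;
import java.util.Map;

public class StaticStateDemo {
    // stands in for the static clusterId2cluster map in MultiSolrCloudTestCase
    static final Map<String, Object> clusterId2cluster = new HashMap<>();

    // analogous to doSetupClusters(): fails on a duplicate clusterId,
    // which is exactly the "duplicate clusterId cloud1" assertion above
    static void setupClusters(String... clusterIds) {
        for (String id : clusterIds) {
            if (clusterId2cluster.containsKey(id)) {
                throw new AssertionError("duplicate clusterId " + id);
            }
            clusterId2cluster.put(id, new Object());
        }
    }

    // the fix: call this from an @AfterClass method so the next suite
    // repetition in the same JVM starts from a clean slate
    static void shutdownClusters() {
        clusterId2cluster.clear();
    }

    public static void main(String[] args) {
        setupClusters("cloud1", "cloud2"); // first "run" of the suite
        shutdownClusters();                // @AfterClass clears static state
        setupClusters("cloud1", "cloud2"); // second run in same JVM succeeds
        System.out.println("ok");
    }
}
```

Without the `shutdownClusters()` call between the two setups, the second `setupClusters` throws, mirroring the Jenkins failure.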
[jira] [Commented] (SOLR-12873) A few solrconfig.xml still use LUCENE_CURRENT instead of LATEST
[ https://issues.apache.org/jira/browse/SOLR-12873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662708#comment-16662708 ] ASF subversion and git services commented on SOLR-12873: Commit d6fde877996c814be733a73e27cab3689fdef34d in lucene-solr's branch refs/heads/branch_7x from [~cpoerschke] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d6fde87 ] SOLR-12873: Replace few remaining occurrences of LUCENE_CURRENT with LATEST for luceneMatchVersion. > A few solrconfig.xml still use LUCENE_CURRENT instead of LATEST > --- > > Key: SOLR-12873 > URL: https://issues.apache.org/jira/browse/SOLR-12873 > Project: Solr > Issue Type: Task >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Attachments: SOLR-12873.patch > > > There are a few config files still referring to {{LUCENE_CURRENT}} instead of > {{LATEST}}. This is to remove them, following on from LUCENE-5901 a while > back. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12772) Correct dotProduct parameters+syntax documentation
[ https://issues.apache.org/jira/browse/SOLR-12772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662711#comment-16662711 ] ASF subversion and git services commented on SOLR-12772: Commit 74c4ea9ba2fee95bc3839f8a1f2a6ac254e439ed in lucene-solr's branch refs/heads/branch_7x from [~cpoerschke] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=74c4ea9 ] SOLR-12772: Correct dotProduct parameters+syntax documentation > Correct dotProduct parameters+syntax documentation > -- > > Key: SOLR-12772 > URL: https://issues.apache.org/jira/browse/SOLR-12772 > Project: Solr > Issue Type: Task >Reporter: Christine Poerschke >Priority: Minor > Attachments: SOLR-12772.patch > > > Stumbled across it being documented as {{dotProduct(numericArray)}} when it > should be {{dotProduct(numericArray, numericArray)}} - no? -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
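For reference, the corrected two-argument form would be used in a Solr math expression roughly like the sketch below (syntax per the 7.x math-expressions documentation; the variable names are illustrative):

```
let(a=array(1, 2, 3),
    b=array(4, 5, 6),
    c=dotProduct(a, b))
```

Assuming the standard definition, `c` evaluates to 1*4 + 2*5 + 3*6 = 32; the previously documented single-argument form `dotProduct(numericArray)` has no meaningful reading.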
[jira] [Commented] (SOLR-12905) reproducible MultiSolrCloudTestCaseTest test failure
[ https://issues.apache.org/jira/browse/SOLR-12905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662689#comment-16662689 ] ASF subversion and git services commented on SOLR-12905: Commit 7fc91deaba25ae91bc9b2c4ae2875fc74c2c19aa in lucene-solr's branch refs/heads/master from [~cpoerschke] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7fc91de ] SOLR-12905: MultiSolrCloudTestCase now clears static clusterId2cluster in @AfterClass > reproducible MultiSolrCloudTestCaseTest test failure > > > Key: SOLR-12905 > URL: https://issues.apache.org/jira/browse/SOLR-12905 > Project: Solr > Issue Type: Test >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Attachments: SOLR-12905.patch > > > We've seen a few of these in Jenkins via the dev list > https://lists.apache.org/list.html?dev@lucene.apache.org:lte=1y:%22duplicate%20clusterId%22 > e.g. > {code} > FAILED: > junit.framework.TestSuite.org.apache.solr.cloud.MultiSolrCloudTestCaseTest > Error Message: duplicate clusterId cloud1 Stack Trace: > java.lang.AssertionError: duplicate clusterId cloud1 at > __randomizedtesting.SeedInfo.seed([6DADDAF691F08EF7]:0) at > org.junit.Assert.fail(Assert.java:93) at > org.junit.Assert.assertTrue(Assert.java:43) at > org.junit.Assert.assertFalse(Assert.java:68) at > org.apache.solr.cloud.MultiSolrCloudTestCase.doSetupClusters(MultiSolrCloudTestCase.java:93) > at > org.apache.solr.cloud.MultiSolrCloudTestCaseTest.setupClusters(MultiSolrCloudTestCaseTest.java:53) > ... > {code} > With a bit of digging I was able to reliably reproduce it by using > {{-Dtests.dups=N}} (which normally runs in multiple JVMs in parallel) > together with the {{-Dtests.jvms=1}} constraint so that the tests actually run > sequentially in one JVM i.e. altogether > {code} > ant test -Dtests.dups=10 -Dtests.jvms=1 -Dtestcase=MultiSolrCloudTestCaseTest > {code} > The fix is simple i.e. 
the static {{clusterId2collection}} variable needs to > be cleared in @AfterClass as someone ([~janhoy] ?) already mentioned > somewhere else, I think. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12873) A few solrconfig.xml still use LUCENE_CURRENT instead of LATEST
[ https://issues.apache.org/jira/browse/SOLR-12873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662690#comment-16662690 ] ASF subversion and git services commented on SOLR-12873: Commit c277674f0ed5ab810432a1ad18c174f75dd4a9be in lucene-solr's branch refs/heads/master from [~cpoerschke] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c277674 ] SOLR-12873: Replace few remaining occurrences of LUCENE_CURRENT with LATEST for luceneMatchVersion. > A few solrconfig.xml still use LUCENE_CURRENT instead of LATEST > --- > > Key: SOLR-12873 > URL: https://issues.apache.org/jira/browse/SOLR-12873 > Project: Solr > Issue Type: Task >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Attachments: SOLR-12873.patch > > > There are a few config files still referring to {{LUCENE_CURRENT}} instead of > {{LATEST}}. This is to remove them, following on from LUCENE-5901 a while > back. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9425) TestSolrConfigHandlerConcurrent failure: NullPointerException
[ https://issues.apache.org/jira/browse/SOLR-9425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662691#comment-16662691 ] ASF subversion and git services commented on SOLR-9425: --- Commit ab14cc9566e15743eb168b36ca63d2b3197ba0a1 in lucene-solr's branch refs/heads/master from [~cpoerschke] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ab14cc9 ] SOLR-9425: fix NullPointerException in TestSolrConfigHandlerConcurrent > TestSolrConfigHandlerConcurrent failure: NullPointerException > - > > Key: SOLR-9425 > URL: https://issues.apache.org/jira/browse/SOLR-9425 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Steve Rowe >Priority: Major > Attachments: Lucene-Solr-tests-master.7870.consoleText.txt, > SOLR-9425.patch > > > From my Jenkins - does not reproduce for me - I'll attach the Jenkins log: > {noformat} > Checking out Revision f8536ce72606af6c75cf9137f354da57bb0f3dbc > (refs/remotes/origin/master) > [...] 
> [junit4] 2> NOTE: reproduce with: ant test > -Dtestcase=TestSolrConfigHandlerConcurrent -Dtests.method=test > -Dtests.seed=7B5B674C5C76D216 -Dtests.slow=true -Dtests.locale=lt > -Dtests.timezone=Antarctica/Troll -Dtests.asserts=true > -Dtests.file.encoding=UTF-8 > [junit4] ERROR 19.9s J6 | TestSolrConfigHandlerConcurrent.test <<< > [junit4]> Throwable #1: java.lang.NullPointerException > [junit4]> at > __randomizedtesting.SeedInfo.seed([7B5B674C5C76D216:F30F5896F28ABFEE]:0) > [junit4]> at > org.apache.solr.handler.TestSolrConfigHandlerConcurrent.test(TestSolrConfigHandlerConcurrent.java:109) > [junit4]> at > org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985) > [junit4]> at > org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960) > [junit4]> at java.lang.Thread.run(Thread.java:745) > {noformat} > Looking through email notifications, this happens fairly regularly on ASF and > Policeman Jenkins in addition to mine - the earliest I found was on February > 12, 2015. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12772) Correct dotProduct parameters+syntax documentation
[ https://issues.apache.org/jira/browse/SOLR-12772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662692#comment-16662692 ] ASF subversion and git services commented on SOLR-12772: Commit e62fe459834a287cccf01f25ec03b8764f7a5af7 in lucene-solr's branch refs/heads/master from [~cpoerschke] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e62fe45 ] SOLR-12772: Correct dotProduct parameters+syntax documentation > Correct dotProduct parameters+syntax documentation > -- > > Key: SOLR-12772 > URL: https://issues.apache.org/jira/browse/SOLR-12772 > Project: Solr > Issue Type: Task >Reporter: Christine Poerschke >Priority: Minor > Attachments: SOLR-12772.patch > > > Stumbled across it being documented as {{dotProduct(numericArray)}} when it > should be {{dotProduct(numericArray, numericArray)}} - no? -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-repro - Build # 1769 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-repro/1769/ [...truncated 28 lines...] [repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1677/consoleText [repro] Revision: e083b1501ebe21abaf95bcd93f89af12142fd1ee [repro] Ant options: -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt [repro] Repro line: ant test -Dtestcase=RestartWhileUpdatingTest -Dtests.method=test -Dtests.seed=6C40E881A770B184 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.locale=ar-BH -Dtests.timezone=Greenwich -Dtests.asserts=true -Dtests.file.encoding=US-ASCII [repro] Repro line: ant test -Dtestcase=RestartWhileUpdatingTest -Dtests.seed=6C40E881A770B184 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.locale=ar-BH -Dtests.timezone=Greenwich -Dtests.asserts=true -Dtests.file.encoding=US-ASCII [repro] git rev-parse --abbrev-ref HEAD [repro] git rev-parse HEAD [repro] Initial local git branch/revision: bd714ca1edc681cd871e6d385bd98a9cb9d7ffe6 [repro] git fetch [repro] git checkout e083b1501ebe21abaf95bcd93f89af12142fd1ee [...truncated 2 lines...] [repro] git merge --ff-only [...truncated 1 lines...] [repro] ant clean [...truncated 6 lines...] [repro] Test suites by module: [repro]solr/core [repro] RestartWhileUpdatingTest [repro] ant compile-test [...truncated 3423 lines...] 
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.RestartWhileUpdatingTest" -Dtests.showOutput=onerror -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.seed=6C40E881A770B184 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.locale=ar-BH -Dtests.timezone=Greenwich -Dtests.asserts=true -Dtests.file.encoding=US-ASCII [...truncated 33296 lines...] [repro] Setting last failure code to 256 [repro] Failures: [repro] 2/5 failed: org.apache.solr.cloud.RestartWhileUpdatingTest [repro] git checkout bd714ca1edc681cd871e6d385bd98a9cb9d7ffe6 [...truncated 2 lines...] [repro] Exiting with code 256 [...truncated 6 lines...] - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12873) A few solrconfig.xml still use LUCENE_CURRENT instead of LATEST
[ https://issues.apache.org/jira/browse/SOLR-12873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662665#comment-16662665 ] Christine Poerschke commented on SOLR-12873: bq. The patch doesn't appear to include any new or modified tests. ... Interesting that it did not consider the {{solrconfig*.xml}} files to be test files. Still, this is an atypical patch I guess. bq. ... please list what manual steps were performed to verify this patch. Manually identified what tests use the files being modified and manually ran (a few of) the tests. > A few solrconfig.xml still use LUCENE_CURRENT instead of LATEST > --- > > Key: SOLR-12873 > URL: https://issues.apache.org/jira/browse/SOLR-12873 > Project: Solr > Issue Type: Task >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Attachments: SOLR-12873.patch > > > There are a few config files still referring to {{LUCENE_CURRENT}} instead of > {{LATEST}}. This is to remove them, following on from LUCENE-5901 a while > back. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
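For context, the change the patch makes in each affected solrconfig.xml is of this shape (a sketch; the surrounding config and exact file paths are omitted):

```xml
<!-- before: LUCENE_CURRENT, deprecated since LUCENE-5901 -->
<luceneMatchVersion>LUCENE_CURRENT</luceneMatchVersion>

<!-- after -->
<luceneMatchVersion>LATEST</luceneMatchVersion>
```

This also explains the "no new or modified tests" remark above: the edit touches only test config files, so verification was to run the tests that load them.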
[jira] [Commented] (SOLR-12057) CDCR does not replicate to Collections with TLOG Replicas
[ https://issues.apache.org/jira/browse/SOLR-12057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662659#comment-16662659 ] Varun Thacker commented on SOLR-12057: -- {quote}I strongly agree with consolidating CdcrBidirectinalTest with the test in this patch, and potentially for Cdcr support for pull replicas fix too. Seeking advice on whether we should do it under this Jira or create new one. {quote} I'll leave it up to you since you're putting in the work. If you think this will derail the main goal of the Jira by all means create a separate Jira to tackle it separately. {quote}Not really, we can remove this safely, from, all tests; 2 sec sleep is for loading the Cdcr components and avoiding potentially few retries. {quote} Again feel free to incorporate that in the next iteration or create a separate jira. {quote}Sure, I will include the parent-child doc; {quote} +1 > CDCR does not replicate to Collections with TLOG Replicas > - > > Key: SOLR-12057 > URL: https://issues.apache.org/jira/browse/SOLR-12057 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR >Affects Versions: 7.2 >Reporter: Webster Homer >Assignee: Varun Thacker >Priority: Major > Attachments: SOLR-12057.patch, SOLR-12057.patch, SOLR-12057.patch, > SOLR-12057.patch, cdcr-fail-with-tlog-pull.patch, > cdcr-fail-with-tlog-pull.patch > > > We created a collection using TLOG replicas in our QA clouds. > We have a locally hosted solrcloud with 2 nodes, all our collections have 2 > shards. We use CDCR to replicate the collections from this environment to 2 > data centers hosted in Google cloud. This seems to work fairly well for our > collections with NRT replicas. However the new TLOG collection has problems. > > The google cloud solrclusters have 4 nodes each (3 separate Zookeepers). 2 > shards per collection with 2 replicas per shard. 
> > We never see data show up in the cloud collections, but we do see tlog files > show up on the cloud servers. I can see that all of the servers have cdcr > started, buffers are disabled. > The cdcr source configuration is: > > "requestHandler":{"/cdcr":{ > "name":"/cdcr", > "class":"solr.CdcrRequestHandler", > "replica":[ > { > > "zkHost":"[xxx-mzk01.sial.com:2181|http://xxx-mzk01.sial.com:2181/],[xxx-mzk02.sial.com:2181|http://xxx-mzk02.sial.com:2181/],[xxx-mzk03.sial.com:2181/solr|http://xxx-mzk03.sial.com:2181/solr];, > "source":"b2b-catalog-material-180124T", > "target":"b2b-catalog-material-180124T"}, > { > > "zkHost":"[-mzk01.sial.com:2181|http://-mzk01.sial.com:2181/],[-mzk02.sial.com:2181|http://-mzk02.sial.com:2181/],[-mzk03.sial.com:2181/solr|http://-mzk03.sial.com:2181/solr];, > "source":"b2b-catalog-material-180124T", > "target":"b2b-catalog-material-180124T"}], > "replicator":{ > "threadPoolSize":4, > "schedule":500, > "batchSize":250}, > "updateLogSynchronizer":\{"schedule":6 > > The target configurations in the 2 clouds are the same: > "requestHandler":{"/cdcr":{ "name":"/cdcr", > "class":"solr.CdcrRequestHandler", "buffer":{"defaultState":"disabled"}}} > > All of our collections have a timestamp field, index_date. In the source > collection all the records have a date of 2/28/2018 but the target > collections have a latest date of 1/26/2018 > > I don't see cdcr errors in the logs, but we use logstash to search them, and > we're still perfecting that. > > We have a number of similar collections that behave correctly. This is the > only collection that is a TLOG collection. It appears that CDCR doesn't > support TLOG collections. > > It looks like the data is getting to the target servers. I see tlog files > with the right timestamps. 
Looking at the timestamps on the documents in the > collection none of the data appears to have been loaded.In the solr.log I see > lots of /cdcr messages action=LASTPROCESSEDVERSION, > action=COLLECTIONCHECKPOINT, and action=SHARDCHECKPOINT > > no errors > > Target collections autoCommit is set to 6 I tried sending a commit > explicitly no difference. cdcr is uploading data, but no new data appears in > the collection. > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-12057) CDCR does not replicate to Collections with TLOG Replicas
[ https://issues.apache.org/jira/browse/SOLR-12057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662648#comment-16662648 ] Amrit Sarkar edited comment on SOLR-12057 at 10/24/18 6:24 PM: --- Thanks Varun for the detailed feedback, The entire test *CdcrBidirectionalTest* has been a copy of the *CdcrBidirectionalTest*, which gets its framework from *CdcrBootstrapTest*, keeping the uniformity in place. All the points mentioned above are essentially framework snippets from *CdcrBootstrapTest*. I strongly agree with consolidating CdcrBidirectinalTest with the test in this patch, and potentially for Cdcr support for pull replicas fix too. Seeking advice on whether we should do it under this Jira or create new one. Other points; bq. After CdcrTestsUtil.cdcrStart(cluster1SolrClient); do we need to sleep for 2 seconds? Not really, we can remove this safely, from, all tests; 2 sec sleep is for loading the Cdcr components and avoiding potentially few retries. bq. I really like how this test checks for all operations to make sure they work correctly. perhaps we could expand it to add a parent-child document and an in-place update as well? Sure, I will include the parent-child doc; though in-place updates are not supported for forwarding in CDCR. I can see how much effort is required for that. Jira: SOLR-12105. was (Author: sarkaramr...@gmail.com): Thanks Varun for the detailed feedback, The entire test {CdcrBidirectionalTest} has been a copy of the {CdcrBidirectionalTest}, which gets its framework from {CdcrBootstrapTest}, keeping the uniformity in place. All the points mentioned above are essentially framework snippets from {CdcrBootstrapTest}. I strongly agree with consolidating CdcrBidirectinalTest with the test in this patch, and potentially for Cdcr support for pull replicas fix too. Seeking advice on whether we should do it under this Jira or create new one. Other points; bq. 
After CdcrTestsUtil.cdcrStart(cluster1SolrClient); do we need to sleep for 2 seconds? Not really, we can remove this safely, from, all tests; 2 sec sleep is for loading the Cdcr components and avoiding potentially few retries. bq. I really like how this test checks for all operations to make sure they work correctly. perhaps we could expand it to add a parent-child document and an in-place update as well? Sure, I will include the parent-child doc; though in-place updates are not supported for forwarding in CDCR. I can see how much effort is required for that. Jira: SOLR-12105. > CDCR does not replicate to Collections with TLOG Replicas > - > > Key: SOLR-12057 > URL: https://issues.apache.org/jira/browse/SOLR-12057 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR >Affects Versions: 7.2 >Reporter: Webster Homer >Assignee: Varun Thacker >Priority: Major > Attachments: SOLR-12057.patch, SOLR-12057.patch, SOLR-12057.patch, > SOLR-12057.patch, cdcr-fail-with-tlog-pull.patch, > cdcr-fail-with-tlog-pull.patch > > > We created a collection using TLOG replicas in our QA clouds. > We have a locally hosted solrcloud with 2 nodes, all our collections have 2 > shards. We use CDCR to replicate the collections from this environment to 2 > data centers hosted in Google cloud. This seems to work fairly well for our > collections with NRT replicas. However the new TLOG collection has problems. > > The google cloud solrclusters have 4 nodes each (3 separate Zookeepers). 2 > shards per collection with 2 replicas per shard. > > We never see data show up in the cloud collections, but we do see tlog files > show up on the cloud servers. I can see that all of the servers have cdcr > started, buffers are disabled. 
> The cdcr source configuration is: > > "requestHandler":{"/cdcr":{ > "name":"/cdcr", > "class":"solr.CdcrRequestHandler", > "replica":[ > { > > "zkHost":"[xxx-mzk01.sial.com:2181|http://xxx-mzk01.sial.com:2181/],[xxx-mzk02.sial.com:2181|http://xxx-mzk02.sial.com:2181/],[xxx-mzk03.sial.com:2181/solr|http://xxx-mzk03.sial.com:2181/solr];, > "source":"b2b-catalog-material-180124T", > "target":"b2b-catalog-material-180124T"}, > { > > "zkHost":"[-mzk01.sial.com:2181|http://-mzk01.sial.com:2181/],[-mzk02.sial.com:2181|http://-mzk02.sial.com:2181/],[-mzk03.sial.com:2181/solr|http://-mzk03.sial.com:2181/solr];, > "source":"b2b-catalog-material-180124T", > "target":"b2b-catalog-material-180124T"}], > "replicator":{ > "threadPoolSize":4, > "schedule":500, > "batchSize":250}, > "updateLogSynchronizer":\{"schedule":6 >
[jira] [Comment Edited] (SOLR-12057) CDCR does not replicate to Collections with TLOG Replicas
[ https://issues.apache.org/jira/browse/SOLR-12057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662648#comment-16662648 ] Amrit Sarkar edited comment on SOLR-12057 at 10/24/18 6:23 PM:
---
Thanks Varun for the detailed feedback. The new test {{CdcrWithDiffReplicaTypesTest}} in this patch is a copy of {{CdcrBidirectionalTest}}, which in turn gets its framework from {{CdcrBootstrapTest}}, keeping the uniformity in place. All the points mentioned above are essentially framework snippets from {{CdcrBootstrapTest}}. I strongly agree with consolidating {{CdcrBidirectionalTest}} with the test in this patch, and potentially with the CDCR-support-for-pull-replicas fix too. Seeking advice on whether we should do it under this Jira or create a new one. Other points:
bq. After CdcrTestsUtil.cdcrStart(cluster1SolrClient); do we need to sleep for 2 seconds?
Not really, we can remove this safely from all tests; the 2-second sleep is there to let the CDCR components load and to avoid a few potential retries.
bq. I really like how this test checks for all operations to make sure they work correctly. perhaps we could expand it to add a parent-child document and an in-place update as well?
Sure, I will include the parent-child doc; in-place updates, however, are not supported for forwarding in CDCR. I can check how much effort is required for that. Jira: SOLR-12105.

> CDCR does not replicate to Collections with TLOG Replicas
> ---------------------------------------------------------
>
> Key: SOLR-12057
> URL: https://issues.apache.org/jira/browse/SOLR-12057
> Project: Solr
> Issue Type: Bug
> Security Level: Public (Default Security Level. Issues are Public)
> Components: CDCR
> Affects Versions: 7.2
> Reporter: Webster Homer
> Assignee: Varun Thacker
> Priority: Major
> Attachments: SOLR-12057.patch, SOLR-12057.patch, SOLR-12057.patch, SOLR-12057.patch, cdcr-fail-with-tlog-pull.patch, cdcr-fail-with-tlog-pull.patch
>
> We created a collection using TLOG replicas in our QA clouds.
> We have a locally hosted solrcloud with 2 nodes; all our collections have 2 shards. We use CDCR to replicate the collections from this environment to 2 data centers hosted in Google cloud. This seems to work fairly well for our collections with NRT replicas. However, the new TLOG collection has problems.
>
> The Google cloud solrclusters have 4 nodes each (3 separate Zookeepers), 2 shards per collection with 2 replicas per shard.
>
> We never see data show up in the cloud collections, but we do see tlog files show up on the cloud servers. I can see that all of the servers have cdcr started and buffers are disabled.
> The cdcr source configuration is:
>
> "requestHandler":{"/cdcr":{
>   "name":"/cdcr",
>   "class":"solr.CdcrRequestHandler",
>   "replica":[
>     {"zkHost":"xxx-mzk01.sial.com:2181,xxx-mzk02.sial.com:2181,xxx-mzk03.sial.com:2181/solr",
>      "source":"b2b-catalog-material-180124T",
>      "target":"b2b-catalog-material-180124T"},
>     {"zkHost":"-mzk01.sial.com:2181,-mzk02.sial.com:2181,-mzk03.sial.com:2181/solr",
>      "source":"b2b-catalog-material-180124T",
>      "target":"b2b-catalog-material-180124T"}],
>   "replicator":{
>     "threadPoolSize":4,
>     "schedule":500,
>     "batchSize":250},
>   "updateLogSynchronizer":{"schedule":6
>
[jira] [Updated] (SOLR-12105) Support Inplace updates in CDCR.
[ https://issues.apache.org/jira/browse/SOLR-12105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-12105: Description: Inplace updates (https://lucene.apache.org/solr/guide/6_6/updating-parts-of-documents.html#UpdatingPartsofDocuments-In-PlaceUpdates) are not forwarded to target clusters as of today. It would be a nice addition to have. (was: Inplace updates (https://lucene.apache.org/solr/guide/6_6/updating-parts-of-documents.html#UpdatingPartsofDocuments-In-PlaceUpdates) are not forwarded to target clusters as of today. It would be nice addition to have.) > Support Inplace updates in CDCR. > > > Key: SOLR-12105 > URL: https://issues.apache.org/jira/browse/SOLR-12105 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR >Reporter: Amrit Sarkar >Priority: Minor > > Inplace updates > (https://lucene.apache.org/solr/guide/6_6/updating-parts-of-documents.html#UpdatingPartsofDocuments-In-PlaceUpdates) > are not forwarded to target clusters as of today. It would be a nice > addition to have. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
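For context on what the issue asks for: an in-place update is an atomic update that Solr can apply by rewriting only docValues, without re-indexing the whole document. A minimal sketch of building such a request body follows; the field name "popularity" and the schema preconditions noted in the comments are assumptions for illustration, not taken from the issue.

```java
// Sketch (assumptions: a field "popularity" declared single-valued,
// docValues=true, indexed=false, stored=false -- the preconditions Solr
// requires before an atomic update can be executed in-place).
public class InPlaceUpdateSketch {

    // Builds the JSON body of an atomic "inc" update. Whether Solr applies
    // it in-place (docValues rewrite only) or as a full re-index depends on
    // the target field's schema, not on the request itself.
    static String incPayload(String id, String field, int delta) {
        return "[{\"id\":\"" + id + "\",\"" + field + "\":{\"inc\":" + delta + "}}]";
    }

    public static void main(String[] args) {
        // POST this to /update with Content-Type: application/json
        System.out.println(incPayload("doc1", "popularity", 1));
    }
}
```

The point of SOLR-12105 is that such updates, even when applied in-place on the source cluster, are not forwarded to CDCR target clusters.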
[JENKINS] Lucene-Solr-Tests-master - Build # 2897 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2897/ 3 tests failed. FAILED: org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica Error Message: Expected new active leader null Live Nodes: [127.0.0.1:34450_solr, 127.0.0.1:37321_solr, 127.0.0.1:38657_solr] Last available state: DocCollection(raceDeleteReplica_true//collections/raceDeleteReplica_true/state.json/13)={ "pullReplicas":"0", "replicationFactor":"2", "shards":{"shard1":{ "range":"8000-7fff", "state":"active", "replicas":{ "core_node4":{ "core":"raceDeleteReplica_true_shard1_replica_n2", "base_url":"https://127.0.0.1:42199/solr", "node_name":"127.0.0.1:42199_solr", "state":"down", "type":"NRT", "leader":"true"}, "core_node6":{ "core":"raceDeleteReplica_true_shard1_replica_n5", "base_url":"https://127.0.0.1:42199/solr", "node_name":"127.0.0.1:42199_solr", "state":"down", "type":"NRT"}, "core_node3":{ "core":"raceDeleteReplica_true_shard1_replica_n1", "base_url":"https://127.0.0.1:37321/solr", "node_name":"127.0.0.1:37321_solr", "state":"down", "type":"NRT", "router":{"name":"compositeId"}, "maxShardsPerNode":"1", "autoAddReplicas":"false", "nrtReplicas":"2", "tlogReplicas":"0"} Stack Trace: java.lang.AssertionError: Expected new active leader null Live Nodes: [127.0.0.1:34450_solr, 127.0.0.1:37321_solr, 127.0.0.1:38657_solr] Last available state: DocCollection(raceDeleteReplica_true//collections/raceDeleteReplica_true/state.json/13)={ "pullReplicas":"0", "replicationFactor":"2", "shards":{"shard1":{ "range":"8000-7fff", "state":"active", "replicas":{ "core_node4":{ "core":"raceDeleteReplica_true_shard1_replica_n2", "base_url":"https://127.0.0.1:42199/solr", "node_name":"127.0.0.1:42199_solr", "state":"down", "type":"NRT", "leader":"true"}, "core_node6":{ "core":"raceDeleteReplica_true_shard1_replica_n5", "base_url":"https://127.0.0.1:42199/solr", "node_name":"127.0.0.1:42199_solr", "state":"down", "type":"NRT"}, "core_node3":{ 
"core":"raceDeleteReplica_true_shard1_replica_n1", "base_url":"https://127.0.0.1:37321/solr", "node_name":"127.0.0.1:37321_solr", "state":"down", "type":"NRT", "router":{"name":"compositeId"}, "maxShardsPerNode":"1", "autoAddReplicas":"false", "nrtReplicas":"2", "tlogReplicas":"0"} at __randomizedtesting.SeedInfo.seed([29EA438F69AA7667:43FC225F01583CAD]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:280) at org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:328) at org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:223) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at
[jira] [Commented] (SOLR-12875) ArrayIndexOutOfBoundsException when using uniqueBlock(_root_) in JSON Facets
[ https://issues.apache.org/jira/browse/SOLR-12875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662635#comment-16662635 ] Tim Underwood commented on SOLR-12875: -- [~mkhludnev] Can you review this? > ArrayIndexOutOfBoundsException when using uniqueBlock(_root_) in JSON Facets > > > Key: SOLR-12875 > URL: https://issues.apache.org/jira/browse/SOLR-12875 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Facet Module >Affects Versions: 7.5 >Reporter: Tim Underwood >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > I'm seeing java.lang.ArrayIndexOutOfBoundsException exceptions for some > requests when trying to make use of > {noformat} > uniqueBlock(_root_){noformat} > within JSON Facets. > Here are some example Stack Traces: > {noformat} > 2018-10-12 14:08:50.587 ERROR (qtp215078753-3353) [ x:my_core] > o.a.s.s.HttpSolrCall null:java.lang.ArrayIndexOutOfBoundsException: Index 13 > out of bounds for length 8 > at > org.apache.solr.search.facet.UniqueBlockAgg$UniqueBlockSlotAcc.collectOrdToSlot(UniqueBlockAgg.java:40) > at > org.apache.solr.search.facet.UniqueSinglevaluedSlotAcc.collect(UniqueSinglevaluedSlotAcc.java:85) > at > org.apache.solr.search.facet.FacetFieldProcessor.collectFirstPhase(FacetFieldProcessor.java:243) > at > org.apache.solr.search.facet.FacetFieldProcessorByHashDV.collectValFirstPhase(FacetFieldProcessorByHashDV.java:432) > at > org.apache.solr.search.facet.FacetFieldProcessorByHashDV.access$100(FacetFieldProcessorByHashDV.java:50) > at > org.apache.solr.search.facet.FacetFieldProcessorByHashDV$5.collect(FacetFieldProcessorByHashDV.java:395) > at > org.apache.solr.search.DocSetUtil.collectSortedDocSet(DocSetUtil.java:284) > at > org.apache.solr.search.facet.FacetFieldProcessorByHashDV.collectDocs(FacetFieldProcessorByHashDV.java:376) > at > org.apache.solr.search.facet.FacetFieldProcessorByHashDV.calcFacets(FacetFieldProcessorByHashDV.java:247) 
> at > org.apache.solr.search.facet.FacetFieldProcessorByHashDV.process(FacetFieldProcessorByHashDV.java:214) > at > org.apache.solr.search.facet.FacetRequest.process(FacetRequest.java:368) > at > org.apache.solr.search.facet.FacetProcessor.processSubs(FacetProcessor.java:472) > at > org.apache.solr.search.facet.FacetProcessor.fillBucket(FacetProcessor.java:429) > at > org.apache.solr.search.facet.FacetQueryProcessor.process(FacetQuery.java:64) > at > org.apache.solr.search.facet.FacetRequest.process(FacetRequest.java:368) > at > org.apache.solr.search.facet.FacetModule.process(FacetModule.java:139) > at > org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199) > {noformat} > > Here is another one at a different location in UniqueBlockAgg: > > {noformat} > 2018-10-12 21:37:57.322 ERROR (qtp215078753-4072) [ x:my_core] > o.a.s.h.RequestHandlerBase java.lang.ArrayIndexOutOfBoundsException: Index 23 > out of bounds for length 16 > at > org.apache.solr.search.facet.UniqueBlockAgg$UniqueBlockSlotAcc.getValue(UniqueBlockAgg.java:59) > at org.apache.solr.search.facet.SlotAcc.setValues(SlotAcc.java:146) > at > org.apache.solr.search.facet.FacetFieldProcessor.fillBucket(FacetFieldProcessor.java:431) > at > org.apache.solr.search.facet.FacetFieldProcessor.findTopSlots(FacetFieldProcessor.java:381) > at > org.apache.solr.search.facet.FacetFieldProcessorByHashDV.calcFacets(FacetFieldProcessorByHashDV.java:249) > at > org.apache.solr.search.facet.FacetFieldProcessorByHashDV.process(FacetFieldProcessorByHashDV.java:214) > at > org.apache.solr.search.facet.FacetRequest.process(FacetRequest.java:368) > at > org.apache.solr.search.facet.FacetProcessor.processSubs(FacetProcessor.java:472) > at > org.apache.solr.search.facet.FacetProcessor.fillBucket(FacetProcessor.java:429) > at > org.apache.solr.search.facet.FacetQueryProcessor.process(FacetQuery.java:64) > 
at > org.apache.solr.search.facet.FacetRequest.process(FacetRequest.java:368) > at > org.apache.solr.search.facet.FacetModule.process(FacetModule.java:139) > at > org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199) > {noformat} > > > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
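For readers unfamiliar with the aggregation, a JSON Facets request of roughly the following shape (field names here are hypothetical, not from the report) nests uniqueBlock(_root_) inside a terms facet, which is the code path the stack traces above go through:

```json
{
  "categories": {
    "type": "terms",
    "field": "category",
    "facet": {
      "productCount": "uniqueBlock(_root_)"
    }
  }
}
```

This is sent as the json.facet request parameter; uniqueBlock counts each parent block once, no matter how many of its child documents match.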
[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_172) - Build # 23086 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23086/ Java: 32bit/jdk1.8.0_172 -server -XX:+UseParallelGC 16 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.BasicZkTest Error Message: SolrCore.getOpenCount()==2 Stack Trace: java.lang.RuntimeException: SolrCore.getOpenCount()==2 at __randomizedtesting.SeedInfo.seed([250E9960E0FE53A4]:0) at org.apache.solr.util.TestHarness.close(TestHarness.java:380) at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:802) at org.apache.solr.cloud.AbstractZkTestCase.azt_afterClass(AbstractZkTestCase.java:147) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:898) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: junit.framework.TestSuite.org.apache.solr.cloud.BasicZkTest Error Message: SolrCore.getOpenCount()==2 Stack Trace: java.lang.RuntimeException: SolrCore.getOpenCount()==2 at __randomizedtesting.SeedInfo.seed([250E9960E0FE53A4]:0) at org.apache.solr.util.TestHarness.close(TestHarness.java:380) at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:802) at org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:297) at sun.reflect.GeneratedMethodAccessor37.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:898) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Assigned] (SOLR-12057) CDCR does not replicate to Collections with TLOG Replicas
[ https://issues.apache.org/jira/browse/SOLR-12057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Thacker reassigned SOLR-12057: Assignee: Varun Thacker

> CDCR does not replicate to Collections with TLOG Replicas
> ---------------------------------------------------------

-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12057) CDCR does not replicate to Collections with TLOG Replicas
[ https://issues.apache.org/jira/browse/SOLR-12057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662607#comment-16662607 ] Varun Thacker commented on SOLR-12057:
--
Hi Amrit, Thanks for the patch! Here's some feedback on just the test case:
* CdcrWithDiffReplicaTypesTest -> CdcrReplicaTypeTest - maybe this is enough to convey the test's intention?
* Some unused imports need to be removed.
* Any reason we're hardcoding StandardDirectoryFactory instead of letting the test framework pick one?
* After CdcrTestsUtil.cdcrStart(cluster1SolrClient); do we need to sleep for 2 seconds? Looking at the usages of cdcrStart, some have a 2s sleep and some don't.
* Can we simplify the variable naming in this loop? It's adding batches of docs, right? "docs" is essentially how many batches of 100 docs we will index - maybe numBatches?
{code:java}
int docs = (TEST_NIGHTLY ? 100 : 10);
int numDocs_c1 = 0;
for (int k = 0; k < docs; k++) {
  req = new UpdateRequest();
  for (; numDocs_c1 < (k + 1) * 100; numDocs_c1++) {
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "cluster1_" + numDocs_c1);
    doc.addField("xyz", numDocs_c1);
    req.add(doc);
  }
  req.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);
  log.info("Adding " + docs + " docs with commit=true, numDocs=" + numDocs_c1);
  req.process(cluster1SolrClient);
}
{code}
* It would be really cool if we pulled the meat of the test into a separate method that takes two cloud Solr client objects (for the two clusters). That way we could test all 3 replica types in the same place by calling this method. Perhaps consolidate CdcrBidirectionalTest as well?
* I really like how this test checks all operations to make sure they work correctly. Perhaps we could expand it to add a parent-child document and an in-place update as well?
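The renaming suggested above can be sketched as follows, with the SolrJ client calls stubbed out as comments so only the loop shape remains; all names here are illustrative, not the patch's actual code:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: "numBatches" instead of "docs", one batch of BATCH_SIZE ids per
// (stubbed) UpdateRequest. Returns the ids the loop would send, so the
// batching logic itself can be checked without a Solr cluster.
public class BatchIndexingSketch {
    static final int BATCH_SIZE = 100;

    static List<String> batchIds(int numBatches) {
        List<String> ids = new ArrayList<>();
        int docId = 0;
        for (int batch = 0; batch < numBatches; batch++) {
            // UpdateRequest req = new UpdateRequest();
            for (int i = 0; i < BATCH_SIZE; i++, docId++) {
                ids.add("cluster1_" + docId); // doc.addField("id", ...)
            }
            // req.setAction(ACTION.COMMIT, true, true); req.process(client);
        }
        return ids;
    }

    public static void main(String[] args) {
        List<String> ids = batchIds(3);
        System.out.println(ids.size());   // 300
        System.out.println(ids.get(299)); // cluster1_299
    }
}
```

Keeping the inner counter local, as here, also removes the cross-batch `numDocs_c1` bookkeeping that the original loop threads through both levels.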
> CDCR does not replicate to Collections with TLOG Replicas
> ---------------------------------------------------------
[jira] [Commented] (SOLR-12259) Robustly upgrade indexes
[ https://issues.apache.org/jira/browse/SOLR-12259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662584#comment-16662584 ] Steve Rowe commented on SOLR-12259: --- In addition to the issues mentioned on SOLR-11127 (upgrading the .system collection schema from Trie to Points fields), Solr somehow needs to be able to re-index the .system collection on major version upgrade, because of the unsupported index version N-2 problem. > Robustly upgrade indexes > > > Key: SOLR-12259 > URL: https://issues.apache.org/jira/browse/SOLR-12259 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Erick Erickson >Assignee: Erick Erickson >Priority: Major > > The general problem statement is that the current upgrade path is trappy and > cumbersome. It would be a great help "in the field" to make the upgrade > process less painful. > Additionally one of the most common things users want to do is enable > docValues, but currently they often have to re-index. > Issues: > 1> if I upgrade from 5x to 6x and then 7x, there's no guarantee that when I go > to 7x all the segments have been rewritten in 6x format. Say I have a segment > at max size that has no deletions. It'll never be rewritten until it has > deleted docs. And perhaps 50% deleted docs currently. > 2> IndexUpgraderTool explicitly does a forcemerge to 1 segment, which is bad. > 3> in a large distributed system, running IndexUpgraderTool on all the nodes > is cumbersome even if <2> is acceptable. > 4> Users who realize specifying docValues on a field would be A Good Thing > have to re-index. We have UninvertDocValuesMergePolicyFactory. Wouldn't it be > nice to be able to have this done all at once without forceMerging to one > segment. > Proposal: > Somehow avoid the above. Currently LUCENE-7976 is a start in that direction. > It will make TMP respect max segment size so we can avoid forceMerges that > result in one segment. 
What it does _not_ do is rewrite segments with zero > (or a small percentage) deleted documents. > So it doesn't seem like a huge stretch to be able to specify to TMP the > option to rewrite segments that have no deleted documents. Perhaps a new > parameter to optimize? > This would likely require another change to TMP or whatever. > So upgrading to a new Solr would look like > 1> install the new Solr > 2> execute > "http://node:port/solr/collection_or_core/update?optimize=true=true; > What's not clear to me is whether we'd require > UninvertDocValuesMergePolicyFactory to be specified and wrap TMP or not. > Anyway, let's discuss. I'll create yet another LUCENE JIRA for TMP to rewrite > all segments that I'll link. > I'll also link several other JIRAs in here, they're coalescing. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: BadApple reports
Thank you for this effort nonetheless, Erick. Dawid On Wed, Oct 24, 2018 at 6:53 PM Erick Erickson wrote: > > We're making forward progress on the whole testing situation, so I'll > stop doing BadApple commits. I'll continue to pull down Hoss' reports > weekly to have a history that we can mine, but I'd rather put my > efforts in to the root problem than record-keeping. > > By collecting Hoss' reports I can always pick this up again if desirable. > > - > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org > For additional commands, e-mail: dev-h...@lucene.apache.org > - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12829) Add plist (parallel list) Streaming Expression
[ https://issues.apache.org/jira/browse/SOLR-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662561#comment-16662561 ] Varun Thacker commented on SOLR-12829: -- Hi Joel, I noticed that the commit doesn't have a CHANGES entry. > Add plist (parallel list) Streaming Expression > -- > > Key: SOLR-12829 > URL: https://issues.apache.org/jira/browse/SOLR-12829 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein >Priority: Major > Attachments: SOLR-12829.patch, SOLR-12829.patch > > > The *plist* Streaming Expression wraps any number of streaming expressions > and opens them in parallel. The results of each of the streams are then > iterated in the order they appear in the list. Since many streams perform > heavy pushed down operations when opened, like the FacetStream, this will > result in the parallelization of these operations. For example plist could > wrap several facet() expressions and open them each in parallel, which would > cause the facets to be run in parallel, on different replicas in the cluster. > Here is sample syntax: > {code:java} > plist(tuple(facet1=facet(...)), > tuple(facet2=facet(...)), > tuple(facet3=facet(...))) {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12259) Robustly upgrade indexes
[ https://issues.apache.org/jira/browse/SOLR-12259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662554#comment-16662554 ] Erick Erickson commented on SOLR-12259: --- bq. can deal with "safe" X+2 upgrades if there ever are any. Certainly X+2 is not supported. If (and only if) there are situations where it _would_ be safe, then it could conceivably do some low-level tricks to spoof the rewritten segments with version stamp X. That is _not_ something I advocate be part of Solr, but with this notion it's possible to do if an existing Solr installation wants to. I'm not sure index locking is necessary; conceptually it's simpler, and I wanted to throw it out. Another idea is to rewrite just a few (or just one) segment(s) at a time that need whatever modification the merge policy is implementing, and optionally do a commit periodically to flush the fieldCache. Or flush the field cache as a byproduct. Either way, the point would be to keep the fieldCache from getting too big. One thing I'm not sure of (this is brainstorming after all) is how, _if_ we do use a new endpoint that rewrites segments and assuming we don't lock out updates, to ensure that any segments rewritten by this new endpoint don't _also_ get merged by the background merging process. > Robustly upgrade indexes > > > Key: SOLR-12259 > URL: https://issues.apache.org/jira/browse/SOLR-12259 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Erick Erickson >Assignee: Erick Erickson >Priority: Major > > The general problem statement is that the current upgrade path is trappy and > cumbersome. It would be a great help "in the field" to make the upgrade > process less painful. > Additionally one of the most common things users want to do is enable > docValues, but currently they often have to re-index. 
> Issues: > 1> if I upgrade from 5x to 6x and then 7x, there's no guarantee that when I go > to 7x all the segments have been rewritten in 6x format. Say I have a segment > at max size that has no deletions. It'll never be rewritten until it has > deleted docs. And perhaps 50% deleted docs currently. > 2> IndexUpgraderTool explicitly does a forcemerge to 1 segment, which is bad. > 3> in a large distributed system, running IndexUpgraderTool on all the nodes > is cumbersome even if <2> is acceptable. > 4> Users who realize specifying docValues on a field would be A Good Thing > have to re-index. We have UninvertDocValuesMergePolicyFactory. Wouldn't it be > nice to be able to have this done all at once without forceMerging to one > segment. > Proposal: > Somehow avoid the above. Currently LUCENE-7976 is a start in that direction. > It will make TMP respect max segment size so we can avoid forceMerges that > result in one segment. What it does _not_ do is rewrite segments with zero > (or a small percentage) deleted documents. > So it doesn't seem like a huge stretch to be able to specify to TMP the > option to rewrite segments that have no deleted documents. Perhaps a new > parameter to optimize? > This would likely require another change to TMP or whatever. > So upgrading to a new Solr would look like > 1> install the new Solr > 2> execute > "http://node:port/solr/collection_or_core/update?optimize=true=true; > What's not clear to me is whether we'd require > UninvertDocValuesMergePolicyFactory to be specified and wrap TMP or not. > Anyway, let's discuss. I'll create yet another LUCENE JIRA for TMP to rewrite > all segments that I'll link. > I'll also link several other JIRAs in here, they're coalescing. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_172) - Build # 7582 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7582/ Java: 64bit/jdk1.8.0_172 -XX:-UseCompressedOops -XX:+UseParallelGC 5 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.BasicZkTest Error Message: SolrCore.getOpenCount()==2 Stack Trace: java.lang.RuntimeException: SolrCore.getOpenCount()==2 at __randomizedtesting.SeedInfo.seed([6D403BDD2588C84A]:0) at org.apache.solr.util.TestHarness.close(TestHarness.java:380) at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:802) at org.apache.solr.cloud.AbstractZkTestCase.azt_afterClass(AbstractZkTestCase.java:147) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:898) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: junit.framework.TestSuite.org.apache.solr.cloud.BasicZkTest Error Message: SolrCore.getOpenCount()==2 Stack Trace: java.lang.RuntimeException: SolrCore.getOpenCount()==2 at __randomizedtesting.SeedInfo.seed([6D403BDD2588C84A]:0) at org.apache.solr.util.TestHarness.close(TestHarness.java:380) at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:802) at org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:297) at sun.reflect.GeneratedMethodAccessor60.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:898) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
BadApple reports
We're making forward progress on the whole testing situation, so I'll stop doing BadApple commits. I'll continue to pull down Hoss' reports weekly to have a history that we can mine, but I'd rather put my efforts in to the root problem than record-keeping. By collecting Hoss' reports I can always pick this up again if desirable. - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-12915) Fix all Solr tests to work with ant beast.
Mark Miller created SOLR-12915: -- Summary: Fix all Solr tests to work with ant beast. Key: SOLR-12915 URL: https://issues.apache.org/jira/browse/SOLR-12915 Project: Solr Issue Type: Sub-task Security Level: Public (Default Security Level. Issues are Public) Components: Tests Reporter: Mark Miller Assignee: Mark Miller Many tests will fail when beasted by the built-in method because you must be able to run a test method multiple times in a row without @BeforeClass/@AfterClass being called in between. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
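An illustrative plain-Java sketch of the failure mode described above (this is a stand-in for the JUnit lifecycle, not Solr test code): static state initialized once, as @BeforeClass state is, leaks across repeated runs of the same method unless the method resets it itself.

```java
import java.util.ArrayList;
import java.util.List;

// "Beasting" re-runs a test method without re-running class-level setup
// in between, so any method that assumes fresh @BeforeClass state breaks.
public class BeastSafetySketch {
    static List<String> docs = new ArrayList<>(); // set up once, like @BeforeClass state

    // Fragile: assumes an empty list, which only holds on the first run.
    static int fragileRun() {
        docs.add("doc");
        return docs.size();
    }

    // Beast-safe: resets the shared state itself, so every run is independent.
    static int safeRun() {
        docs.clear();
        docs.add("doc");
        return docs.size();
    }

    public static void main(String[] args) {
        System.out.println(fragileRun()); // prints 1 only on a fresh JVM
        System.out.println(fragileRun()); // prints 2: stale state leaked in
        System.out.println(safeRun());    // prints 1 on every run
    }
}
```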
[jira] [Commented] (SOLR-12879) Query Parser for MinHash/LSH
[ https://issues.apache.org/jira/browse/SOLR-12879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662504#comment-16662504 ] Steve Rowe commented on SOLR-12879: --- {{QueryEqualityTest}} is failing 100% of the time w/o a seed, e.g. from [https://builds.apache.org/job/Lucene-Solr-Tests-master/2895]: {noformat} Checking out Revision 3e89b7a771639aacaed6c21406624a2b27231dd7 (refs/remotes/origin/master) [...] [junit4] 2> NOTE: reproduce with: ant test -Dtestcase=QueryEqualityTest -Dtests.seed=40E6483843AE2CD1 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=en-SG -Dtests.timezone=America/Los_Angeles -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [junit4] ERROR 0.00s J1 | QueryEqualityTest (suite) <<< [junit4]> Throwable #1: java.lang.AssertionError: testParserCoverage was run w/o any other method explicitly testing qparser: min_hash [junit4]>at __randomizedtesting.SeedInfo.seed([40E6483843AE2CD1]:0) [junit4]>at org.apache.solr.search.QueryEqualityTest.afterClassParserCoverageTest(QueryEqualityTest.java:59) [junit4]>at java.lang.Thread.run(Thread.java:748) {noformat} > Query Parser for MinHash/LSH > > > Key: SOLR-12879 > URL: https://issues.apache.org/jira/browse/SOLR-12879 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) > Components: query parsers >Affects Versions: master (8.0) >Reporter: Andy Hind >Assignee: Tommaso Teofili >Priority: Major > Fix For: master (8.0) > > Attachments: minhash.filter.adoc.fragment, minhash.patch > > > Following on from https://issues.apache.org/jira/browse/LUCENE-6968, provide > a query parser that builds queries that provide a measure of Jaccard > similarity. The initial patch includes banded queries that were also proposed > on the original issue. > > I have one outstanding question: > * Should the score from the overall query be normalised? > Note that the band count is currently approximate and may be one less than > in practice. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12057) CDCR does not replicate to Collections with TLOG Replicas
[ https://issues.apache.org/jira/browse/SOLR-12057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662483#comment-16662483 ] Amrit Sarkar commented on SOLR-12057: - Uploaded fresh patch, which can be applied against master branch. Running beast tests to see if the solution is intact. With the current solution, CdcrUpdateProcessorFactory is unnecessary; as we can check at DistributedUpdateProcessor level whether the incoming update is from CDCR or not. > CDCR does not replicate to Collections with TLOG Replicas > - > > Key: SOLR-12057 > URL: https://issues.apache.org/jira/browse/SOLR-12057 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR >Affects Versions: 7.2 >Reporter: Webster Homer >Priority: Major > Attachments: SOLR-12057.patch, SOLR-12057.patch, SOLR-12057.patch, > SOLR-12057.patch, cdcr-fail-with-tlog-pull.patch, > cdcr-fail-with-tlog-pull.patch > > > We created a collection using TLOG replicas in our QA clouds. > We have a locally hosted solrcloud with 2 nodes, all our collections have 2 > shards. We use CDCR to replicate the collections from this environment to 2 > data centers hosted in Google cloud. This seems to work fairly well for our > collections with NRT replicas. However the new TLOG collection has problems. > > The google cloud solrclusters have 4 nodes each (3 separate Zookeepers). 2 > shards per collection with 2 replicas per shard. > > We never see data show up in the cloud collections, but we do see tlog files > show up on the cloud servers. I can see that all of the servers have cdcr > started, buffers are disabled. 
> The cdcr source configuration is: > > "requestHandler":{"/cdcr":{ > "name":"/cdcr", > "class":"solr.CdcrRequestHandler", > "replica":[ > { > > "zkHost":"[xxx-mzk01.sial.com:2181|http://xxx-mzk01.sial.com:2181/],[xxx-mzk02.sial.com:2181|http://xxx-mzk02.sial.com:2181/],[xxx-mzk03.sial.com:2181/solr|http://xxx-mzk03.sial.com:2181/solr];, > "source":"b2b-catalog-material-180124T", > "target":"b2b-catalog-material-180124T"}, > { > > "zkHost":"[-mzk01.sial.com:2181|http://-mzk01.sial.com:2181/],[-mzk02.sial.com:2181|http://-mzk02.sial.com:2181/],[-mzk03.sial.com:2181/solr|http://-mzk03.sial.com:2181/solr];, > "source":"b2b-catalog-material-180124T", > "target":"b2b-catalog-material-180124T"}], > "replicator":{ > "threadPoolSize":4, > "schedule":500, > "batchSize":250}, > "updateLogSynchronizer":\{"schedule":6 > > The target configurations in the 2 clouds are the same: > "requestHandler":{"/cdcr":{ "name":"/cdcr", > "class":"solr.CdcrRequestHandler", "buffer":{"defaultState":"disabled"}}} > > All of our collections have a timestamp field, index_date. In the source > collection all the records have a date of 2/28/2018 but the target > collections have a latest date of 1/26/2018 > > I don't see cdcr errors in the logs, but we use logstash to search them, and > we're still perfecting that. > > We have a number of similar collections that behave correctly. This is the > only collection that is a TLOG collection. It appears that CDCR doesn't > support TLOG collections. > > It looks like the data is getting to the target servers. I see tlog files > with the right timestamps. Looking at the timestamps on the documents in the > collection none of the data appears to have been loaded.In the solr.log I see > lots of /cdcr messages action=LASTPROCESSEDVERSION, > action=COLLECTIONCHECKPOINT, and action=SHARDCHECKPOINT > > no errors > > Target collections autoCommit is set to 6 I tried sending a commit > explicitly no difference. 
cdcr is uploading data, but no new data appears in > the collection. > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-repro - Build # 1767 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-repro/1767/ [...truncated 28 lines...] [repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/353/consoleText [repro] Revision: ffabbaf1f2a34a29dd9416cfd84fbfe93b7ad227 [repro] Ant options: -DsmokeTestRelease.java9=/home/jenkins/tools/java/latest1.9 [repro] Repro line: ant test -Dtestcase=SystemLogListenerTest -Dtests.method=test -Dtests.seed=4C0CC9C824AAFE61 -Dtests.multiplier=2 -Dtests.locale=el -Dtests.timezone=America/Pangnirtung -Dtests.asserts=true -Dtests.file.encoding=US-ASCII [repro] git rev-parse --abbrev-ref HEAD [repro] git rev-parse HEAD [repro] Initial local git branch/revision: bd714ca1edc681cd871e6d385bd98a9cb9d7ffe6 [repro] git fetch [repro] git checkout ffabbaf1f2a34a29dd9416cfd84fbfe93b7ad227 [...truncated 2 lines...] [repro] git merge --ff-only [...truncated 1 lines...] [repro] ant clean [...truncated 6 lines...] [repro] Test suites by module: [repro]solr/core [repro] SystemLogListenerTest [repro] ant compile-test [...truncated 3436 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.SystemLogListenerTest" -Dtests.showOutput=onerror -DsmokeTestRelease.java9=/home/jenkins/tools/java/latest1.9 -Dtests.seed=4C0CC9C824AAFE61 -Dtests.multiplier=2 -Dtests.locale=el -Dtests.timezone=America/Pangnirtung -Dtests.asserts=true -Dtests.file.encoding=US-ASCII [...truncated 2406 lines...] [repro] Setting last failure code to 256 [repro] Failures: [repro] 2/5 failed: org.apache.solr.cloud.autoscaling.SystemLogListenerTest [repro] git checkout bd714ca1edc681cd871e6d385bd98a9cb9d7ffe6 [...truncated 2 lines...] [repro] Exiting with code 256 [...truncated 5 lines...] - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-12057) CDCR does not replicate to Collections with TLOG Replicas
[ https://issues.apache.org/jira/browse/SOLR-12057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-12057: Attachment: SOLR-12057.patch > CDCR does not replicate to Collections with TLOG Replicas > - > > Key: SOLR-12057 > URL: https://issues.apache.org/jira/browse/SOLR-12057 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR >Affects Versions: 7.2 >Reporter: Webster Homer >Priority: Major > Attachments: SOLR-12057.patch, SOLR-12057.patch, SOLR-12057.patch, > SOLR-12057.patch, cdcr-fail-with-tlog-pull.patch, > cdcr-fail-with-tlog-pull.patch > > > We created a collection using TLOG replicas in our QA clouds. > We have a locally hosted solrcloud with 2 nodes, all our collections have 2 > shards. We use CDCR to replicate the collections from this environment to 2 > data centers hosted in Google cloud. This seems to work fairly well for our > collections with NRT replicas. However the new TLOG collection has problems. > > The google cloud solrclusters have 4 nodes each (3 separate Zookeepers). 2 > shards per collection with 2 replicas per shard. > > We never see data show up in the cloud collections, but we do see tlog files > show up on the cloud servers. I can see that all of the servers have cdcr > started, buffers are disabled. 
> The cdcr source configuration is: > > "requestHandler":{"/cdcr":{ > "name":"/cdcr", > "class":"solr.CdcrRequestHandler", > "replica":[ > { > > "zkHost":"[xxx-mzk01.sial.com:2181|http://xxx-mzk01.sial.com:2181/],[xxx-mzk02.sial.com:2181|http://xxx-mzk02.sial.com:2181/],[xxx-mzk03.sial.com:2181/solr|http://xxx-mzk03.sial.com:2181/solr];, > "source":"b2b-catalog-material-180124T", > "target":"b2b-catalog-material-180124T"}, > { > > "zkHost":"[-mzk01.sial.com:2181|http://-mzk01.sial.com:2181/],[-mzk02.sial.com:2181|http://-mzk02.sial.com:2181/],[-mzk03.sial.com:2181/solr|http://-mzk03.sial.com:2181/solr];, > "source":"b2b-catalog-material-180124T", > "target":"b2b-catalog-material-180124T"}], > "replicator":{ > "threadPoolSize":4, > "schedule":500, > "batchSize":250}, > "updateLogSynchronizer":\{"schedule":6 > > The target configurations in the 2 clouds are the same: > "requestHandler":{"/cdcr":{ "name":"/cdcr", > "class":"solr.CdcrRequestHandler", "buffer":{"defaultState":"disabled"}}} > > All of our collections have a timestamp field, index_date. In the source > collection all the records have a date of 2/28/2018 but the target > collections have a latest date of 1/26/2018 > > I don't see cdcr errors in the logs, but we use logstash to search them, and > we're still perfecting that. > > We have a number of similar collections that behave correctly. This is the > only collection that is a TLOG collection. It appears that CDCR doesn't > support TLOG collections. > > It looks like the data is getting to the target servers. I see tlog files > with the right timestamps. Looking at the timestamps on the documents in the > collection none of the data appears to have been loaded.In the solr.log I see > lots of /cdcr messages action=LASTPROCESSEDVERSION, > action=COLLECTIONCHECKPOINT, and action=SHARDCHECKPOINT > > no errors > > Target collections autoCommit is set to 6 I tried sending a commit > explicitly no difference. 
cdcr is uploading data, but no new data appears in > the collection. > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-12914) Solr crashes in /terms request handler
[ https://issues.apache.org/jira/browse/SOLR-12914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vadim Miller updated SOLR-12914: Description: TermsComponent class always tries to fetch all terms from all shards for further processing. There is a {{java.lang.OutOfMemoryError}} exception if the resulting list is too long. Solr stops working on this shard after this exception; only a restart helps. There is a very common use case when the full terms list is not required: a client needs to see the next N terms in an alphabetically sorted list starting with a given value. Usually, this is needed for some autocomplete field on a page. Example URL: {{[http://localhost:8983/solr/mycollection/terms?terms.fl=fulltext=index=cat=50]}} In this example TermsComponent needs to fetch only 50 terms from each shard starting with a value provided in the {{terms.lower}} URL parameter. So, it should not reset the TermsParams.TERMS_LIMIT parameter when it generates a shard query in the createSmartShardQuery() method. The patch is attached. was: TermsComponent class always tries to fetch all terms from all shards for further processing. There is a {{java.lang.OutOfMemoryError}} exception if the resulting list is too long. There is a very common use case when the full terms list is not required: a client needs to see the next N terms in an alphabetically sorted list starting with a given value. Usually, this is needed for some autocomplete field on a page. Example URL: {{[http://localhost:8983/solr/mycollection/terms?terms.fl=fulltext=index=cat=50]}} In this example TermsComponent needs to fetch only 50 terms from each shard starting with a value provided in the {{terms.lower}} URL parameter. So, it should not reset the TermsParams.TERMS_LIMIT parameter when it generates a shard query in the createSmartShardQuery() method. The patch is attached. 
> Solr crashes in /terms request handler > -- > > Key: SOLR-12914 > URL: https://issues.apache.org/jira/browse/SOLR-12914 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 7.5 >Reporter: Vadim Miller >Priority: Major > Labels: terms > Attachments: terms.patch > > > TermsComponent class always tries to fetch all terms from all shards for a > further processing. There is {{java.lang.OutOfMemoryError}} exception if > the resulting list is too long. Solr stops working on this shard after this > exception, only restart helps. > There is a very common use case when the full terms list is not required: a > client needs to see next N terms in alphabetically sorted list starting with > a given value. Usually, this is needed for some autocomplete field on a page. > Example URL: > > {{[http://localhost:8983/solr/mycollection/terms?terms.fl=fulltext=index=cat=50]}} > > In this example TermsComponent needs to fetch only 50 terms from each shard > starting with a value provided in {{terms.lower}} URL parameter. So, it > should not reset TermsParams.TERMS_LIMIT parameter when generates a shard > query in createSmartShardQuery() method. > The patch is attached. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
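The shape of the fix proposed in this issue can be sketched abstractly. This is illustrative only and does not use Solr's actual TermsComponent/ShardRequest API; the method and the parameter map are hypothetical stand-ins for createSmartShardQuery():

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch, not Solr's real API: build shard-request params from
// the client's params, optionally preserving the client's terms.limit
// instead of resetting it to -1 (unlimited).
public class ShardTermsParamsSketch {
    static Map<String, String> shardParams(Map<String, String> clientParams,
                                           boolean preserveLimit) {
        Map<String, String> shard = new HashMap<>(clientParams);
        if (!preserveLimit) {
            // Behavior described in the issue: every shard is asked for ALL
            // terms, which is what exhausts the heap for huge term dictionaries.
            shard.put("terms.limit", "-1");
        }
        return shard;
    }

    public static void main(String[] args) {
        Map<String, String> client = new HashMap<>();
        client.put("terms.limit", "50");
        System.out.println(shardParams(client, true).get("terms.limit"));  // 50
        System.out.println(shardParams(client, false).get("terms.limit")); // -1
    }
}
```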
[jira] [Updated] (SOLR-12914) Solr crashes in /terms request handler
[ https://issues.apache.org/jira/browse/SOLR-12914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vadim Miller updated SOLR-12914: Description: TermsComponent class always tries to fetch all terms from all shards for a further processing. There is {{java.lang.OutOfMemoryError}} exception if the resulting list is too long. There is a very common use case when the full terms list is not required: a client needs to see next N terms in alphabetically sorted list starting with a given value. Usually, this is needed for some autocomplete field on a page. Example URL: {{[http://localhost:8983/solr/mycollection/terms?terms.fl=fulltext=index=cat=50]}} In this example TermsComponent needs to fetch only 50 terms from each shard starting with a value provided in {{terms.lower}} URL parameter. So, it should not reset TermsParams.TERMS_LIMIT parameter when generates a shard query in createSmartShardQuery() method. The patch is attached. was: TermsComponent class always tries to fetch all terms from all shards for a further processing. There is {{java.lang.OutOfMemoryError}} __ exception if the resulting list is too long. There is a very common use case when the full terms list is not required: a client needs to see next N terms in alphabetically sorted list starting with a given value. Usually, this is needed for some autocomplete field on a page. Example URL: {{[http://localhost:8983/solr/mycollection/terms?terms.fl=fulltext=index=cat=50]}} In this example TermsComponent needs to fetch only 50 terms from each shard starting with a value provided in {{terms.lower}} URL parameter. So, it should not reset TermsParams.TERMS_LIMIT parameter when generates a shard query in createSmartShardQuery() method. The patch is attached. > Solr crashes in /terms request handler > -- > > Key: SOLR-12914 > URL: https://issues.apache.org/jira/browse/SOLR-12914 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. 
Issues are Public) > Affects Versions: 7.5 > Reporter: Vadim Miller > Priority: Major > Labels: terms > Attachments: terms.patch > > > The TermsComponent class always tries to fetch all terms from all shards for > further processing. A {{java.lang.OutOfMemoryError}} is thrown if > the resulting list is too long. > There is a very common use case where the full terms list is not required: a > client needs to see the next N terms of an alphabetically sorted list starting with > a given value. Usually this is needed for an autocomplete field on a page. > Example URL: > > {{http://localhost:8983/solr/mycollection/terms?terms.fl=fulltext&terms.sort=index&terms.lower=cat&terms.limit=50}} > > In this example TermsComponent needs to fetch only 50 terms from each shard, > starting with the value provided in the {{terms.lower}} URL parameter. So it > should not reset the TermsParams.TERMS_LIMIT parameter when generating a shard > query in the createSmartShardQuery() method. > The patch is attached. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-12914) Solr crashes in /terms request handler
Vadim Miller created SOLR-12914: --- Summary: Solr crashes in /terms request handler Key: SOLR-12914 URL: https://issues.apache.org/jira/browse/SOLR-12914 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Affects Versions: 7.5 Reporter: Vadim Miller The TermsComponent class always tries to fetch all terms from all shards for further processing. A {{java.lang.OutOfMemoryError}} is thrown if the resulting list is too long. There is a very common use case where the full terms list is not required: a client needs to see the next N terms of an alphabetically sorted list starting with a given value. Usually this is needed for an autocomplete field on a page. Example URL: {{http://localhost:8983/solr/mycollection/terms?terms.fl=fulltext&terms.sort=index&terms.lower=cat&terms.limit=50}} In this example TermsComponent needs to fetch only 50 terms from each shard, starting with the value provided in the {{terms.lower}} URL parameter. So it should not reset the TermsParams.TERMS_LIMIT parameter when generating a shard query in the createSmartShardQuery() method. The patch is attached. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
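The SOLR-12914 report above boils down to a bounded sorted-merge problem: with index-sorted terms, each shard only ever needs to contribute at most terms.limit terms at or after terms.lower, and the coordinator keeps just the first terms.limit of the merged result. A minimal sketch of that merge, with hypothetical names (TermsMerge, mergeTopN) rather than Solr's actual TermsComponent code:

```java
import java.util.*;

// Hypothetical sketch: merge index-sorted term lists from several shards,
// keeping only the first `limit` distinct terms at or after `lower`. This
// mirrors the behavior the report asks for (terms.sort=index, terms.lower,
// terms.limit) without using Solr's actual classes.
public class TermsMerge {
    public static List<String> mergeTopN(List<List<String>> shardTerms,
                                         String lower, int limit) {
        // A TreeSet de-duplicates terms across shards and keeps them sorted.
        TreeSet<String> merged = new TreeSet<>();
        for (List<String> terms : shardTerms) {
            int taken = 0;
            for (String t : terms) {
                if (t.compareTo(lower) < 0) continue;  // respect terms.lower
                if (taken++ >= limit) break;           // cap per-shard work
                merged.add(t);
            }
        }
        // Keep only the first `limit` terms of the merged, sorted view.
        List<String> out = new ArrayList<>();
        for (String t : merged) {
            if (out.size() >= limit) break;
            out.add(t);
        }
        return out;
    }

    public static void main(String[] args) {
        List<List<String>> shards = List.of(
            List.of("ant", "cat", "cow", "dog"),
            List.of("bat", "cat", "eel"));
        System.out.println(mergeTopN(shards, "cat", 2)); // [cat, cow]
    }
}
```

The point of the issue is the inner `break`: capping the per-shard scan keeps memory bounded no matter how many terms the field holds, which is exactly what resetting TERMS_LIMIT in the shard query defeats.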
[jira] [Commented] (SOLR-12908) Add a default set of cluster preferences
[ https://issues.apache.org/jira/browse/SOLR-12908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16662437#comment-16662437 ] Varun Thacker commented on SOLR-12908: -- I missed that! Sorry for the noise. It would be really nice if this was not in code but defined explicitly in the autoscaling.json file. What do you think? > Add a default set of cluster preferences > > > Key: SOLR-12908 > URL: https://issues.apache.org/jira/browse/SOLR-12908 > Project: Solr > Issue Type: Improvement > Security Level: Public (Default Security Level. Issues are Public) > Components: AutoScaling > Reporter: Varun Thacker > Priority: Major > > Similar to SOLR-12845, where we want to add a set of default cluster policies, > we should add some default cluster preferences as well. > > We should always be trying to minimize cores and maximize freedisk, for example. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
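The preferences discussed above ("minimize cores, maximize freedisk") amount to a lexicographic ordering over nodes: fewer cores wins, and ties are broken by more free disk. A hedged sketch of those sort semantics — the Node class and method names here are illustrative, not Solr's autoscaling API:

```java
import java.util.*;

// Illustration of the preference ordering discussed above: prefer nodes with
// fewer cores, breaking ties by more free disk. These are not Solr's
// autoscaling classes, just a sketch of the sort semantics of
// [{"minimize":"cores"},{"maximize":"freedisk"}].
public class NodePreferences {
    static final class Node {
        final String name; final int cores; final long freediskGb;
        Node(String name, int cores, long freediskGb) {
            this.name = name; this.cores = cores; this.freediskGb = freediskGb;
        }
    }

    // Minimize cores first, then maximize freedisk (descending).
    static final Comparator<Node> PREFS =
        Comparator.comparingInt((Node n) -> n.cores)
                  .thenComparing(Comparator.comparingLong((Node n) -> n.freediskGb).reversed());

    // Returns node names from most preferred to least preferred.
    public static List<String> order(List<Node> nodes) {
        List<Node> sorted = new ArrayList<>(nodes);
        sorted.sort(PREFS);
        List<String> names = new ArrayList<>();
        for (Node n : sorted) names.add(n.name);
        return names;
    }

    public static void main(String[] args) {
        List<Node> nodes = List.of(
            new Node("n1", 4, 100), new Node("n2", 2, 50), new Node("n3", 2, 200));
        System.out.println(order(nodes)); // [n3, n2, n1]
    }
}
```

With a default like this defined (whether in code or, as suggested above, in autoscaling.json), new replicas would land on the least-loaded node with the most headroom.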
[JENKINS] Lucene-Solr-repro - Build # 1766 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-repro/1766/ [...truncated 28 lines...] [repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/190/consoleText [repro] Revision: 3e89b7a771639aacaed6c21406624a2b27231dd7 [repro] Repro line: ant test -Dtestcase=QueryEqualityTest -Dtests.seed=7AE2C0271ACC0C42 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=ru -Dtests.timezone=PLT -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [repro] git rev-parse --abbrev-ref HEAD [repro] git rev-parse HEAD [repro] Initial local git branch/revision: bd714ca1edc681cd871e6d385bd98a9cb9d7ffe6 [repro] git fetch [repro] git checkout 3e89b7a771639aacaed6c21406624a2b27231dd7 [...truncated 2 lines...] [repro] git merge --ff-only [...truncated 1 lines...] [repro] ant clean [...truncated 6 lines...] [repro] Test suites by module: [repro]solr/core [repro] QueryEqualityTest [repro] ant compile-test [...truncated 3423 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.QueryEqualityTest" -Dtests.showOutput=onerror -Dtests.seed=7AE2C0271ACC0C42 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=ru -Dtests.timezone=PLT -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [...truncated 1824 lines...] [repro] Setting last failure code to 256 [repro] Failures: [repro] 5/5 failed: org.apache.solr.search.QueryEqualityTest [repro] Re-testing 100% failures at the tip of master [repro] git fetch [repro] git checkout master [...truncated 4 lines...] [repro] git merge --ff-only [...truncated 8 lines...] [repro] ant clean [...truncated 8 lines...] [repro] Test suites by module: [repro]solr/core [repro] QueryEqualityTest [repro] ant compile-test [...truncated 3567 lines...] 
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.QueryEqualityTest" -Dtests.showOutput=onerror -Dtests.seed=7AE2C0271ACC0C42 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=ru -Dtests.timezone=PLT -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [...truncated 1823 lines...] [repro] Setting last failure code to 256 [repro] Failures at the tip of master: [repro] 5/5 failed: org.apache.solr.search.QueryEqualityTest [repro] Re-testing 100% failures at the tip of master without a seed [repro] ant clean [...truncated 8 lines...] [repro] Test suites by module: [repro]solr/core [repro] QueryEqualityTest [repro] ant compile-test [...truncated 3567 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.QueryEqualityTest" -Dtests.showOutput=onerror -Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=ru -Dtests.timezone=PLT -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [...truncated 1837 lines...] [repro] Setting last failure code to 256 [repro] Failures at the tip of master without a seed: [repro] 5/5 failed: org.apache.solr.search.QueryEqualityTest [repro] git checkout bd714ca1edc681cd871e6d385bd98a9cb9d7ffe6 [...truncated 8 lines...] [repro] Exiting with code 256 [...truncated 5 lines...] - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-12846) Policy rules do not support host variable
[ https://issues.apache.org/jira/browse/SOLR-12846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul resolved SOLR-12846. --- Resolution: Fixed > Policy rules do not support host variable > - > > Key: SOLR-12846 > URL: https://issues.apache.org/jira/browse/SOLR-12846 > Project: Solr > Issue Type: Bug > Security Level: Public (Default Security Level. Issues are Public) > Components: AutoScaling > Affects Versions: 7.5 > Reporter: Shalin Shekhar Mangar > Assignee: Noble Paul > Priority: Major > Fix For: 7.6, master (8.0) > > > The policy documentation says that there is a host variable supported in > policy rules at > https://lucene.apache.org/solr/guide/7_5/solrcloud-autoscaling-policy-preferences.html#node-selection-attributes > But there is no mention of it in the code. Perhaps it got lost during > refactorings and there were no tests for it? In any case, we should add it > back. It'd be great if we can support #EACH for host so that one can write a > rule to distribute replicas across hosts and not just nodes. This would be > very useful when one runs multiple Solr JVMs in the same physical node. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-12913) Streaming Expressions / Math Expressions docs for 7.6 release
[ https://issues.apache.org/jira/browse/SOLR-12913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-12913: -- Attachment: SOLR-12913.patch > Streaming Expressions / Math Expressions docs for 7.6 release > - > > Key: SOLR-12913 > URL: https://issues.apache.org/jira/browse/SOLR-12913 > Project: Solr > Issue Type: New Feature > Security Level: Public (Default Security Level. Issues are Public) > Reporter: Joel Bernstein > Priority: Major > Attachments: SOLR-12913.patch > > > This ticket will add docs for the new Streaming Expressions and Math Expressions > for Solr 7.6. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-repro - Build # 1765 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-repro/1765/ [...truncated 38 lines...] [repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/970/consoleText [repro] Revision: ffabbaf1f2a34a29dd9416cfd84fbfe93b7ad227 [repro] Repro line: ant test -Dtestcase=DeleteReplicaTest -Dtests.method=raceConditionOnDeleteAndRegisterReplica -Dtests.seed=7F9063EAEFBD3359 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=sr -Dtests.timezone=Poland -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [repro] Repro line: ant test -Dtestcase=DeleteReplicaTest -Dtests.method=deleteReplicaByCountForAllShards -Dtests.seed=7F9063EAEFBD3359 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=sr -Dtests.timezone=Poland -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [repro] git rev-parse --abbrev-ref HEAD [repro] git rev-parse HEAD [repro] Initial local git branch/revision: bd714ca1edc681cd871e6d385bd98a9cb9d7ffe6 [repro] git fetch [repro] git checkout ffabbaf1f2a34a29dd9416cfd84fbfe93b7ad227 [...truncated 2 lines...] [repro] git merge --ff-only [...truncated 1 lines...] [repro] ant clean [...truncated 6 lines...] [repro] Test suites by module: [repro]solr/core [repro] DeleteReplicaTest [repro] ant compile-test [...truncated 3436 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.DeleteReplicaTest" -Dtests.showOutput=onerror -Dtests.seed=7F9063EAEFBD3359 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=sr -Dtests.timezone=Poland -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [...truncated 11643 lines...] [repro] Setting last failure code to 256 [repro] Failures: [repro] 2/5 failed: org.apache.solr.cloud.DeleteReplicaTest [repro] git checkout bd714ca1edc681cd871e6d385bd98a9cb9d7ffe6 [...truncated 2 lines...] [repro] Exiting with code 256 [...truncated 5 lines...] - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (LUCENE-8531) QueryBuilder hard-codes inOrder=true for generated sloppy span near queries
[ https://issues.apache.org/jira/browse/LUCENE-8531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16662397#comment-16662397 ] Michael Gibney edited comment on LUCENE-8531 at 10/24/18 3:06 PM: -- Thanks [~thetaphi], and I agree that it would be a separate issue (^"Would it be worth opening a new issue to consider introducing the ability to specifically request construction of {{SpanNearQuery}} and/or {{inOrder=true}} behavior?"). I've created LUCENE-8543, so discussion can move there if anyone's interested. I also created LUCENE-8544, proposing the addition of support for {{(Multi)PhraseQuery}} phrase semantics in {{SpanNearQuery}}. I think it should be achievable, at least in the context of a proposed patch for [LUCENE-7398|https://issues.apache.org/jira/browse/LUCENE-7398?focusedCommentId=16630529#comment-16630529]. was (Author: mgibney): (the same comment, with the LUCENE-7398 reference as a plain issue key rather than a deep link) > QueryBuilder hard-codes inOrder=true for generated sloppy span near queries > --- > > Key: LUCENE-8531 > URL: https://issues.apache.org/jira/browse/LUCENE-8531 > Project: Lucene - Core > Issue Type: Bug > Components: core/queryparser > Reporter: Steve Rowe > Assignee: Steve Rowe > Priority: Major > Fix For: 7.6, master (8.0) > > Attachments: LUCENE-8531.patch > > > QueryBuilder.analyzeGraphPhrase() generates SpanNearQuery-s with passed-in > phraseSlop, but hard-codes the inOrder ctor param as true. 
> Before multi-term synonym support and graph token streams introduced the > possibility of generating SpanNearQuery-s, QueryBuilder generated > (Multi)PhraseQuery-s, which always interpret slop as allowing reordering > edits. Solr's eDismax query parser generates phrase queries when its > pf/pf2/pf3 params are specified, and when multi-term synonyms are used with a > graph-aware synonym filter, SpanNearQuery-s are generated that require > clauses to be in order; unlike with (Multi)PhraseQuery-s, reordering edits > are not allowed, so this is a kind of regression. See SOLR-12243 for edismax > pf/pf2/pf3 context. (Note that the patch on SOLR-12243 also addresses > another problem that blocks eDismax from generating queries *at all* under > the above-described circumstances.) > I propose adding a new analyzeGraphPhrase() method that allows configuration > of inOrder, which would allow eDismax to specify inOrder=false. The existing > analyzeGraphPhrase() method would remain with its hard-coded inOrder=true, so > existing client behavior would remain unchanged. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
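The regression described in LUCENE-8531 above is about match semantics: a SpanNearQuery built with inOrder=true rejects occurrences where the clauses appear out of document order, while (Multi)PhraseQuery slop treats a reordering as just another edit. A deliberately simplified two-term illustration over token positions — this is not Lucene's span-matching or edit-distance slop algorithm, and the gap accounting here is cruder than Lucene's:

```java
// Simplified sketch of the semantic difference discussed above: with
// inOrder=true the first term must precede the second, while the unordered
// case accepts either arrangement within the gap budget. Not Lucene code.
public class NearMatch {
    // True if some occurrence of a and b lie within `slop` intervening
    // positions, with a required to come before b when inOrder is set.
    public static boolean matches(int[] aPos, int[] bPos, int slop, boolean inOrder) {
        for (int a : aPos) {
            for (int b : bPos) {
                int gap = inOrder ? b - a - 1          // b must follow a
                                  : Math.abs(b - a) - 1;
                if (gap >= 0 && gap <= slop) return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Simulate the text "... brown quick ...": "quick" at position 3,
        // "brown" at position 2, i.e. the terms appear reversed.
        int[] quick = {3}, brown = {2};
        System.out.println(matches(quick, brown, 1, true));   // false: wrong order
        System.out.println(matches(quick, brown, 1, false));  // true: reordering allowed
    }
}
```

This is the gap eDismax pf/pf2/pf3 users fell into: the same query text that used to tolerate reordered terms (phrase-query semantics) stopped matching once graph token streams caused SpanNearQuery-s with hard-coded inOrder=true to be generated.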
[jira] [Commented] (SOLR-12839) add a 'resort' option to JSON faceting
[ https://issues.apache.org/jira/browse/SOLR-12839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16662398#comment-16662398 ] Lucene/Solr QA commented on SOLR-12839: --- | (x) *-1 overall* | || Vote || Subsystem || Runtime || Comment || || Prechecks || | +1 | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. | || master Compile Tests || | +1 | compile | 3m 32s | master passed | || Patch Compile Tests || | +1 | compile | 3m 46s | the patch passed | | +1 | javac | 3m 46s | the patch passed | | +1 | Release audit (RAT) | 3m 46s | the patch passed | | +1 | Check forbidden APIs | 3m 46s | the patch passed | | -1 | Validate source patterns | 3m 47s | validate-source-patterns failed | || Other Tests || | -1 | unit | 82m 56s | core in the patch failed. | | | | 93m 44s | | || Reason || Tests || | Failed junit tests | solr.search.QueryEqualityTest | || Subsystem || Report/Notes || | JIRA Issue | SOLR-12839 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12945318/SOLR-12839.patch | | Optional Tests | compile javac unit ratsources checkforbiddenapis validatesourcepatterns | | uname | Linux lucene2-us-west.apache.org 4.4.0-112-generic #135-Ubuntu SMP Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | ant | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh | | git revision | master / e083b15 | | ant | version: Apache Ant(TM) version 1.9.6 compiled on July 20 2018 | | Default Java | 1.8.0_172 | | Validate source patterns | https://builds.apache.org/job/PreCommit-SOLR-Build/211/artifact/out/patch-validate-source-patterns-root.txt | | unit | https://builds.apache.org/job/PreCommit-SOLR-Build/211/artifact/out/patch-unit-solr_core.txt | | Test Results | https://builds.apache.org/job/PreCommit-SOLR-Build/211/testReport/ | | modules | C: solr/core U: solr/core | | Console output | https://builds.apache.org/job/PreCommit-SOLR-Build/211/console | | Powered by | Apache Yetus 0.7.0 http://yetus.apache.org | This message was automatically generated. > add a 'resort' option to JSON faceting > -- > > Key: SOLR-12839 > URL: https://issues.apache.org/jira/browse/SOLR-12839 > Project: Solr > Issue Type: Improvement > Security Level: Public (Default Security Level. Issues are Public) > Components: Facet Module > Reporter: Hoss Man > Assignee: Hoss Man > Priority: Major > Attachments: SOLR-12839.patch, SOLR-12839.patch, SOLR-12839.patch > > > As discussed in SOLR-9480 ... > bq. Similar to how the {{rerank}} request param allows people to collect & > score documents using a "cheap" query, and then re-score the top N using a > more expensive query, I think it would be handy if JSON Facets supported a > {{resort}} option that could be used on any FacetRequestSorted instance right > alongside the {{sort}} param, using the same JSON syntax, so that clients > could have Solr internally sort all the facet buckets by something simple > (like count) and then "re-sort" the top N (=limit, or maybe > N=limit+overrequest?) using a more expensive function like skg() -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
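The proposal quoted in SOLR-12839 above is a two-stage sort: rank every bucket by a cheap key (count), truncate to the top N, and pay the expensive scoring cost only for the survivors. A minimal sketch under assumed names (Bucket, the score function) that are not the JSON Facets API:

```java
import java.util.*;
import java.util.function.ToDoubleFunction;

// Hypothetical sketch of the "resort" idea: sort all buckets by count,
// keep the top `limit`, then re-sort just those by a more expensive
// function. Names here are illustrative, not Solr's facet classes.
public class Resort {
    static final class Bucket {
        final String val; final long count;
        Bucket(String val, long count) { this.val = val; this.count = count; }
    }

    public static List<String> topByCountThenResort(
            List<Bucket> buckets, int limit, ToDoubleFunction<Bucket> expensive) {
        // Stage 1: cheap sort by count, descending.
        List<Bucket> byCount = new ArrayList<>(buckets);
        byCount.sort(Comparator.comparingLong((Bucket b) -> b.count).reversed());
        List<Bucket> top = new ArrayList<>(
            byCount.subList(0, Math.min(limit, byCount.size())));
        // Stage 2: only the surviving `limit` buckets pay the expensive cost.
        top.sort(Comparator.comparingDouble(expensive).reversed());
        List<String> vals = new ArrayList<>();
        for (Bucket b : top) vals.add(b.val);
        return vals;
    }

    public static void main(String[] args) {
        List<Bucket> buckets = List.of(
            new Bucket("a", 100), new Bucket("b", 90), new Bucket("c", 10));
        // The expensive score prefers "b" over "a"; "c" never makes the cut.
        System.out.println(topByCountThenResort(buckets, 2,
            b -> b.val.equals("b") ? 1.0 : 0.0)); // [b, a]
    }
}
```

As with {{rerank}} for documents, the trade-off is that a bucket outside the cheap top N (or top N+overrequest) can never be rescued by the expensive function, no matter how well it would have scored.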