[jira] [Updated] (SOLR-11864) Create Collection API allows creating collection with trailing space
[ https://issues.apache.org/jira/browse/SOLR-11864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hem updated SOLR-11864:
-----------------------
    Summary: Create Collection API allows creating collection with trailing space  (was: API inconsistency in create and delete collection)

> Create Collection API allows creating collection with trailing space
> --------------------------------------------------------------------
>
>                 Key: SOLR-11864
>                 URL: https://issues.apache.org/jira/browse/SOLR-11864
>             Project: Solr
>          Issue Type: Bug
>      Security Level: Public (Default Security Level. Issues are Public)
>          Components: v2 API
>    Affects Versions: 5.3.1
>            Reporter: Hem
>            Priority: Major
>
> When I create a collection through Java using the client jar, and the
> collection name has a trailing space, the collection is still created.
> But when I then try to delete that same collection (with the space) via the
> delete API from the browser, I get a "collection not found" error.
> Steps to reproduce:
> # Create a collection through the client with a collection name that ends in a space.
> # Try to delete it using the collection delete API from a browser.
> Another issue with a trailing-space collection name is that its shards are
> always in recovery state and after some time shift to degraded state.
> The API should be able to delete the collection. And if the collection name
> must not contain the trailing space, it should either be trimmed internally
> or rejected with a validation error.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
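The browser failure described above is consistent with the trailing space never reaching Solr intact: in a URL the space has to be percent-encoded, and a raw name typed into an address bar is typically trimmed or mangled. A minimal sketch of the encoding involved (the collection name `mycollection ` is hypothetical):

```python
from urllib.parse import quote, urlencode

# Hypothetical collection name with a trailing space, as the client jar
# apparently accepts when creating the collection.
name = "mycollection "

# In a URL path segment the space must become %20; in an
# application/x-www-form-urlencoded query string it becomes "+".
print(quote(name))                                    # mycollection%20
print(urlencode({"action": "DELETE", "name": name}))  # action=DELETE&name=mycollection+
```

Whether Solr's delete handler then matches the encoded name against the stored one is exactly what the issue questions; the sketch only shows why an unencoded browser request cannot.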
[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-10-ea+37) - Build # 1196 - Failure!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1196/
Java: 64bit/jdk-10-ea+37 -XX:+UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 14361 lines...]
   [junit4] JVM J0: stdout was not empty, see: /home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/temp/junit4-J0-20180118_062754_18310491007905767235260.sysout
   [junit4] >>> JVM J0 emitted unexpected output (verbatim)
   [junit4] java.lang.OutOfMemoryError: Java heap space
   [junit4] Dumping heap to /home/jenkins/workspace/Lucene-Solr-7.x-Linux/heapdumps/java_pid4014.hprof ...
   [junit4] Heap dump file created [544241760 bytes in 1.272 secs]
   [junit4] <<< JVM J0: EOF
[...truncated 8577 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/build.xml:836: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/build.xml:788: Some of the tests produced a heap dump, but did not fail. Maybe a suppressed OutOfMemoryError? Dumps created:
 * java_pid4014.hprof

Total time: 99 minutes 18 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
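When a run ends this way, the console output is the quickest pointer to the dump file. A small sketch of scraping the `.hprof` path from a saved log line (the line is copied from the output above; the dump would then be pulled off the Jenkins node and opened in a heap analyzer such as Eclipse MAT):

```python
import re

# Console line copied from the build log above.
line = ("[junit4] Dumping heap to /home/jenkins/workspace/"
        "Lucene-Solr-7.x-Linux/heapdumps/java_pid4014.hprof ...")

# Extract the .hprof path so the dump can be fetched for offline analysis.
match = re.search(r"Dumping heap to (\S+\.hprof)", line)
print(match.group(1))  # /home/jenkins/workspace/Lucene-Solr-7.x-Linux/heapdumps/java_pid4014.hprof
```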
[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk1.8.0) - Build # 406 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/406/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

2 tests failed.

FAILED:  junit.framework.TestSuite.org.apache.solr.ltr.feature.TestUserTermScoreWithQ

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.ltr.feature.TestUserTermScoreWithQ:
   1) Thread[id=89, name=qtp531507010-89, state=TIMED_WAITING, group=TGRP-TestUserTermScoreWithQ]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
        at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.ltr.feature.TestUserTermScoreWithQ:
   1) Thread[id=89, name=qtp531507010-89, state=TIMED_WAITING, group=TGRP-TestUserTermScoreWithQ]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
        at java.lang.Thread.run(Thread.java:748)
        at __randomizedtesting.SeedInfo.seed([DFE82B7B342E5B03]:0)

FAILED:  junit.framework.TestSuite.org.apache.solr.ltr.feature.TestUserTermScoreWithQ

Error Message:
There are still zombie threads that couldn't be terminated:
   1) Thread[id=89, name=qtp531507010-89, state=TIMED_WAITING, group=TGRP-TestUserTermScoreWithQ]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
        at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie threads that couldn't be terminated:
   1) Thread[id=89, name=qtp531507010-89, state=TIMED_WAITING, group=TGRP-TestUserTermScoreWithQ]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
        at java.lang.Thread.run(Thread.java:748)
        at __randomizedtesting.SeedInfo.seed([DFE82B7B342E5B03]:0)

Build Log:
[...truncated 21610 lines...]
   [junit4] Suite: org.apache.solr.ltr.feature.TestUserTermScoreWithQ
   [junit4]   2> Creating dataDir: /Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/solr/build/contrib/solr-ltr/test/J0/temp/solr.ltr.feature.TestUserTermScoreWithQ_DFE82B7B342E5B03-001/init-core-data-001
   [junit4]   2> 23955 WARN  (SUITE-TestUserTermScoreWithQ-seed#[DFE82B7B342E5B03]-worker) [] o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=3 numCloses=3
   [junit4]   2> 23955 INFO  (SUITE-TestUserTermScoreWithQ-seed#[DFE82B7B342E5B03]-worker) [] o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) w/NUMERIC_DOCVALUES_SYSPROP=true
   [junit4]   2> 23960 INFO
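Both suite-level failures above are the randomizedtesting thread-leak detector firing on a leftover Jetty `QueuedThreadPool` worker (the `qtp...` thread) that was still parked after the suite finished. The check itself amounts to snapshotting live threads before the suite and diffing afterwards; a language-agnostic sketch in Python (names are illustrative, not the randomizedtesting API):

```python
import threading
import time

def find_leaked_threads(baseline, settle_secs=0.1):
    """Return threads alive now that were not in the pre-suite snapshot."""
    time.sleep(settle_secs)  # give executors a moment to wind down
    return [t for t in threading.enumerate()
            if t not in baseline and t.is_alive()]

# Snapshot before the "suite" runs.
baseline = set(threading.enumerate())

# Simulate a suite that forgets to shut down its worker pool, like the
# leaked qtp531507010-89 thread in the report above.
leak = threading.Thread(target=lambda: time.sleep(5),
                        name="qtp-worker", daemon=True)
leak.start()

leaked = find_leaked_threads(baseline)
print([t.name for t in leaked])  # ['qtp-worker']
```

The real detector additionally tries to interrupt the stragglers, and only escalates to the "zombie threads" failure seen above when they refuse to die.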
[jira] [Commented] (SOLR-10697) Improve defaults for maxConnectionsPerHost
[ https://issues.apache.org/jira/browse/SOLR-10697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16330072#comment-16330072 ]

Shalin Shekhar Mangar commented on SOLR-10697:
----------------------------------------------

+1 for bumping up maxConnectionsPerHost for HttpShardHandlerFactory to be the same as the one for UpdateShardHandler. This becomes a bottleneck under heavy query load and leads to all sorts of difficult-to-troubleshoot problems such as slowness and deadlocks.

> Improve defaults for maxConnectionsPerHost
> ------------------------------------------
>
>                 Key: SOLR-10697
>                 URL: https://issues.apache.org/jira/browse/SOLR-10697
>             Project: Solr
>          Issue Type: Improvement
>      Security Level: Public (Default Security Level. Issues are Public)
>            Reporter: Varun Thacker
>            Assignee: Varun Thacker
>            Priority: Minor
>
> Twice recently I've increased {{HttpShardHandlerFactory#maxConnectionsPerHost}}
> at a client and it helped improve query latencies a lot.
> Should we increase the default to, say, 100?
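For anyone following along, the knob under discussion is configured on the shard handler factory in solr.xml. A sketch of what raising it would look like (the value 100 is the figure floated in the issue, not a shipped default):

```xml
<solr>
  <!-- Shard handler used for distributed (inter-node) query requests. -->
  <shardHandlerFactory name="shardHandlerFactory" class="HttpShardHandlerFactory">
    <!-- Max concurrent HTTP connections allowed per remote host. -->
    <int name="maxConnectionsPerHost">100</int>
  </shardHandlerFactory>
</solr>
```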
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_144) - Build # 21292 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21292/
Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseParallelGC

2 tests failed.

FAILED:  junit.framework.TestSuite.org.apache.solr.handler.component.SpatialHeatmapFacetsTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.handler.component.SpatialHeatmapFacetsTest:
   1) Thread[id=1523, name=qtp609466789-1523, state=TIMED_WAITING, group=TGRP-SpatialHeatmapFacetsTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
        at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.handler.component.SpatialHeatmapFacetsTest:
   1) Thread[id=1523, name=qtp609466789-1523, state=TIMED_WAITING, group=TGRP-SpatialHeatmapFacetsTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
        at java.lang.Thread.run(Thread.java:748)
        at __randomizedtesting.SeedInfo.seed([F1E5AB317FCE552E]:0)

FAILED:  junit.framework.TestSuite.org.apache.solr.handler.component.SpatialHeatmapFacetsTest

Error Message:
There are still zombie threads that couldn't be terminated:
   1) Thread[id=1523, name=qtp609466789-1523, state=TIMED_WAITING, group=TGRP-SpatialHeatmapFacetsTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
        at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie threads that couldn't be terminated:
   1) Thread[id=1523, name=qtp609466789-1523, state=TIMED_WAITING, group=TGRP-SpatialHeatmapFacetsTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
        at java.lang.Thread.run(Thread.java:748)
        at __randomizedtesting.SeedInfo.seed([F1E5AB317FCE552E]:0)

Build Log:
[...truncated 11736 lines...]
   [junit4] Suite: org.apache.solr.handler.component.SpatialHeatmapFacetsTest
   [junit4]   2> 84409 INFO  (SUITE-SpatialHeatmapFacetsTest-seed#[F1E5AB317FCE552E]-worker) [] o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> Creating dataDir: /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.handler.component.SpatialHeatmapFacetsTest_F1E5AB317FCE552E-001/init-core-data-001
   [junit4]   2> 84409 WARN  (SUITE-SpatialHeatmapFacetsTest-seed#[F1E5AB317FCE552E]-worker) []
[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4392 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4392/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

10 tests failed.

FAILED:  junit.framework.TestSuite.org.apache.solr.analytics.legacy.LegacyNoFacetCloudTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.analytics.legacy.LegacyNoFacetCloudTest:
   1) Thread[id=1262, name=qtp1610573316-1262, state=TIMED_WAITING, group=TGRP-LegacyNoFacetCloudTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
        at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.analytics.legacy.LegacyNoFacetCloudTest:
   1) Thread[id=1262, name=qtp1610573316-1262, state=TIMED_WAITING, group=TGRP-LegacyNoFacetCloudTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
        at java.lang.Thread.run(Thread.java:748)
        at __randomizedtesting.SeedInfo.seed([4D7B84631397BDC4]:0)

FAILED:  junit.framework.TestSuite.org.apache.solr.analytics.legacy.LegacyNoFacetCloudTest

Error Message:
There are still zombie threads that couldn't be terminated:
   1) Thread[id=1262, name=qtp1610573316-1262, state=TIMED_WAITING, group=TGRP-LegacyNoFacetCloudTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
        at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie threads that couldn't be terminated:
   1) Thread[id=1262, name=qtp1610573316-1262, state=TIMED_WAITING, group=TGRP-LegacyNoFacetCloudTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
        at java.lang.Thread.run(Thread.java:748)
        at __randomizedtesting.SeedInfo.seed([4D7B84631397BDC4]:0)

FAILED:  junit.framework.TestSuite.org.apache.solr.ltr.TestLTRWithFacet

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.ltr.TestLTRWithFacet:
   1) Thread[id=317, name=qtp2055074640-317, state=TIMED_WAITING, group=TGRP-TestLTRWithFacet]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
        at
[JENKINS] Lucene-Solr-7.x-Windows (32bit/jdk1.8.0_144) - Build # 407 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/407/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseConcMarkSweepGC

8 tests failed.

FAILED:  junit.framework.TestSuite.org.apache.lucene.index.TestBackwardsCompatibility

Error Message:
Could not remove the following files (in the order of attempts):
   C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_40FB7B1974C80F-001\3.4.0-cfs-001: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_40FB7B1974C80F-001\3.4.0-cfs-001

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of attempts):
   C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_40FB7B1974C80F-001\3.4.0-cfs-001: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_40FB7B1974C80F-001\3.4.0-cfs-001
        at __randomizedtesting.SeedInfo.seed([40FB7B1974C80F]:0)
        at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
        at org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
        at com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at java.lang.Thread.run(Thread.java:748)

FAILED:  junit.framework.TestSuite.org.apache.lucene.store.TestSimpleFSDirectory

Error Message:
Could not remove the following files (in the order of attempts):
   C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_D23E528CA48F5B6E-001\testDirectoryFilter-001: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_D23E528CA48F5B6E-001\testDirectoryFilter-001

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of attempts):
   C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_D23E528CA48F5B6E-001\testDirectoryFilter-001: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_D23E528CA48F5B6E-001\testDirectoryFilter-001
        at __randomizedtesting.SeedInfo.seed([D23E528CA48F5B6E]:0)
        at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
        at org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
        at com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at java.lang.Thread.run(Thread.java:748)

FAILED:  junit.framework.TestSuite.org.apache.solr.analytics.facet.ValueFacetTest

Error Message:
Could not remove the following files (in the order of attempts):
[JENKINS] Lucene-Solr-Tests-master - Build # 2263 - Still unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2263/

7 tests failed.

FAILED:  junit.framework.TestSuite.org.apache.solr.TestHighlightDedupGrouping

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.TestHighlightDedupGrouping:
   1) Thread[id=13780, name=qtp1195393389-13780, state=TIMED_WAITING, group=TGRP-TestHighlightDedupGrouping]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
        at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.TestHighlightDedupGrouping:
   1) Thread[id=13780, name=qtp1195393389-13780, state=TIMED_WAITING, group=TGRP-TestHighlightDedupGrouping]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
        at java.lang.Thread.run(Thread.java:748)
        at __randomizedtesting.SeedInfo.seed([C49CA55F857A8729]:0)

FAILED:  junit.framework.TestSuite.org.apache.solr.TestHighlightDedupGrouping

Error Message:
There are still zombie threads that couldn't be terminated:
   1) Thread[id=13780, name=qtp1195393389-13780, state=TIMED_WAITING, group=TGRP-TestHighlightDedupGrouping]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
        at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie threads that couldn't be terminated:
   1) Thread[id=13780, name=qtp1195393389-13780, state=TIMED_WAITING, group=TGRP-TestHighlightDedupGrouping]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
        at java.lang.Thread.run(Thread.java:748)
        at __randomizedtesting.SeedInfo.seed([C49CA55F857A8729]:0)

FAILED:  org.apache.solr.cloud.RecoveryAfterSoftCommitTest.test

Error Message:
KeeperErrorCode = Session expired for /clusterstate.json

Stack Trace:
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /clusterstate.json
        at __randomizedtesting.SeedInfo.seed([C49CA55F857A8729:4CC89A852B86EAD1]:0)
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
        at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1212)
        at org.apache.solr.common.cloud.SolrZkClient.lambda$getData$5(SolrZkClient.java:339)
        at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)
[JENKINS] Lucene-Solr-Tests-7.x - Build # 319 - Still unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/319/

6 tests failed.

FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeWithPullReplicasTest.test

Error Message:
Timed out waiting for replica core_node54 (1516235137951) to replicate from leader core_node46 (0)

Stack Trace:
java.lang.AssertionError: Timed out waiting for replica core_node54 (1516235137951) to replicate from leader core_node46 (0)
        at __randomizedtesting.SeedInfo.seed([4384683B4A8E81CC:CBD057E1E472EC34]:0)
        at org.junit.Assert.fail(Assert.java:93)
        at org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForReplicationFromReplicas(AbstractFullDistribZkTestBase.java:2143)
        at org.apache.solr.cloud.ChaosMonkeyNothingIsSafeWithPullReplicasTest.test(ChaosMonkeyNothingIsSafeWithPullReplicasTest.java:268)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
        at
[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-10-ea+37) - Build # 21291 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21291/ Java: 64bit/jdk-10-ea+37 -XX:+UseCompressedOops -XX:+UseParallelGC 2 tests failed. FAILED: org.apache.solr.cloud.AddReplicaTest.test Error Message: core_node6:{"core":"addreplicatest_coll_shard1_replica_n5","base_url":"https://127.0.0.1:34889/solr","node_name":"127.0.0.1:34889_solr","state":"active","type":"NRT"} Stack Trace: java.lang.AssertionError: core_node6:{"core":"addreplicatest_coll_shard1_replica_n5","base_url":"https://127.0.0.1:34889/solr","node_name":"127.0.0.1:34889_solr","state":"active","type":"NRT"} at __randomizedtesting.SeedInfo.seed([4DC117C5F39006AD:C595281F5D6C6B55]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.cloud.AddReplicaTest.test(AddReplicaTest.java:84) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:844) FAILED: org.apache.solr.cloud.autoscaling.AutoAddReplicasPlanActionTest.testSimple
[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 398 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/398/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC 3 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation Error Message: 2 threads leaked from SUITE scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) Thread[id=9956, name=jetty-launcher-1514-thread-2-EventThread, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037) at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323) at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105) at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41) at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244) at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44) at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61) at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505) 2) Thread[id=9952, name=jetty-launcher-1514-thread-1-EventThread, state=TIMED_WAITING, 
group=TGRP-TestSolrCloudWithSecureImpersonation] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037) at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323) at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105) at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41) at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244) at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44) at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61) at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) Thread[id=9956, name=jetty-launcher-1514-thread-2-EventThread, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323) at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105) at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279) at
[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-9.0.1) - Build # 7120 - Still Failing!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7120/ Java: 64bit/jdk-9.0.1 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC No tests ran. Build Log: [...truncated 11 lines...] FATAL: Could not delete file C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.handler.component.DistributedQueryComponentCustomSortTest_F3775CD4E3D2B44-001\tempDir-001\shard0\collection1\conf java.io.IOException: Could not delete file C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.handler.component.DistributedQueryComponentCustomSortTest_F3775CD4E3D2B44-001\tempDir-001\shard0\collection1\conf at org.eclipse.jgit.util.FileUtils.delete(FileUtils.java:197) at org.eclipse.jgit.util.FileUtils.delete(FileUtils.java:166) at org.eclipse.jgit.util.FileUtils.delete(FileUtils.java:166) at org.eclipse.jgit.util.FileUtils.delete(FileUtils.java:166) at org.eclipse.jgit.util.FileUtils.delete(FileUtils.java:166) at org.eclipse.jgit.util.FileUtils.delete(FileUtils.java:166) at org.eclipse.jgit.util.FileUtils.delete(FileUtils.java:166) at org.eclipse.jgit.util.FileUtils.delete(FileUtils.java:166) at org.eclipse.jgit.util.FileUtils.delete(FileUtils.java:166) at org.eclipse.jgit.util.FileUtils.delete(FileUtils.java:166) at org.eclipse.jgit.api.CleanCommand.cleanPath(CleanCommand.java:176) at org.eclipse.jgit.api.CleanCommand.call(CleanCommand.java:133) Also: hudson.remoting.Channel$CallSiteStackTrace: Remote call to Windows VBOX at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1696) at hudson.remoting.UserResponse.retrieve(UserRequest.java:313) at hudson.remoting.Channel.call(Channel.java:909) at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:281) at com.sun.proxy.$Proxy80.clean(Unknown Source) at org.jenkinsci.plugins.gitclient.RemoteGitImpl.clean(RemoteGitImpl.java:450) at 
hudson.plugins.git.extensions.impl.CleanBeforeCheckout.decorateFetchCommand(CleanBeforeCheckout.java:30) at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:858) at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1129) at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1160) at hudson.scm.SCM.checkout(SCM.java:495) at hudson.model.AbstractProject.checkout(AbstractProject.java:1203) at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574) at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86) at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499) at hudson.model.Run.execute(Run.java:1727) at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43) at hudson.model.ResourceController.execute(ResourceController.java:97) at hudson.model.Executor.run(Executor.java:429) Caused: org.eclipse.jgit.api.errors.JGitInternalException: Could not delete file C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.handler.component.DistributedQueryComponentCustomSortTest_F3775CD4E3D2B44-001\tempDir-001\shard0\collection1\conf at org.eclipse.jgit.api.CleanCommand.call(CleanCommand.java:136) at org.jenkinsci.plugins.gitclient.JGitAPIImpl.clean(JGitAPIImpl.java:1290) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at hudson.remoting.RemoteInvocationHandler$RPCRequest.perform(RemoteInvocationHandler.java:922) at hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:896) at hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:853) at hudson.remoting.UserRequest.perform(UserRequest.java:210) at hudson.remoting.UserRequest.perform(UserRequest.java:53) at hudson.remoting.Request$2.run(Request.java:358) at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72) at java.util.concurrent.FutureTask.run(Unknown Source) at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at java.lang.Thread.run(Unknown Source) Archiving artifacts [WARNINGS] Skipping publisher since build result is FAILURE Recording test results ERROR: Step ‘Publish JUnit test result report’ failed: No test report files were found. Configuration error? Email
[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_144) - Build # 1194 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1194/ Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.cloud.api.collections.CollectionsAPIDistributedZkTest.testCollectionsAPI Error Message: Error from server at https://127.0.0.1:44823/solr/awhollynewcollection_0_shard3_replica_n4: ClusterState says we are the leader (https://127.0.0.1:44823/solr/awhollynewcollection_0_shard3_replica_n4), but locally we don't think so. Request came from null Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at https://127.0.0.1:44823/solr/awhollynewcollection_0_shard3_replica_n4: ClusterState says we are the leader (https://127.0.0.1:44823/solr/awhollynewcollection_0_shard3_replica_n4), but locally we don't think so. Request came from null at __randomizedtesting.SeedInfo.seed([2CE134CEA9AE2639:6494407AAF9D09AC]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:550) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1013) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:946) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:946) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:946) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:946) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:946) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233) at org.apache.solr.cloud.api.collections.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:462) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (SOLR-11617) Expose Alias Metadata CRUD in REST API
[ https://issues.apache.org/jira/browse/SOLR-11617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329672#comment-16329672 ] Gus Heck commented on SOLR-11617: -
* I don't really want to create two ways to modify aliases (and two places to maintain the functionality). I'll leave MODIFYALIAS as is unless additional opinions surface.
* I don't think we accept whitespace for collection names etc., and I think it's not very friendly to make whitespace significant in general. I'm generally thinking that metadata keys should be trimmed. It would be a very strange use case to want to be able to set values for keys like ' foo', 'foo ' and ' foo ', and much more common for such a thing to result in confusing debugging sessions. For values, it feels a bit trappy to set a value of ' ' instead of deleting if the person issuing the command accidentally appends a space... However, I suppose there's some possibility that someone might want to keep a metadata property that contained a delimiter and have that delimiter be whitespace... For values it's a trade-off, I guess. I could go either way.
* Multi-properties: yeah, that would be good.
* Linked Hash Map (/)
> Expose Alias Metadata CRUD in REST API > -- > > Key: SOLR-11617 > URL: https://issues.apache.org/jira/browse/SOLR-11617 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Affects Versions: master (8.0) >Reporter: Gus Heck >Priority: Major > Attachments: SOLR_11617.patch, SOLR_11617.patch > > > SOLR-11487 is adding a Java API for metadata on aliases; this task is to expose > that functionality to end-users via a REST API. > Some proposed commands, for initial discussion: > - *SETALIASMETA* - upsert, or delete if blank/null/white-space provided.
> - *GETALIASMETA* - read existing alias metadata > Given that the parent ticket to this task is going to rely on the alias > metadata, and I suspect a user would potentially completely break their time > partitioned data configuration by editing system metadata directly, we should > either document these commands as "use at your own risk, great > power/responsibility etc" or consider protecting some subset of metadata. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
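The trimming semantics discussed above (trim whitespace from metadata keys; treat a blank/null value as a delete) can be sketched in plain Java. This is an illustrative stand-alone sketch, not Solr's implementation; the class and method names are hypothetical:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the proposed alias-metadata upsert semantics:
// keys are trimmed, a blank or null value removes the property, and
// insertion order is preserved (hence LinkedHashMap, per the comment above).
public class AliasMetadataTrim {
    static Map<String, String> normalize(Map<String, String> raw) {
        Map<String, String> out = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : raw.entrySet()) {
            String key = e.getKey() == null ? null : e.getKey().trim();
            String value = e.getValue();
            if (value == null || value.trim().isEmpty()) {
                out.remove(key);      // blank/null value deletes the property
            } else {
                out.put(key, value);  // value kept verbatim; only the key is trimmed
            }
        }
        return out;
    }
}
```

Under these rules ' foo', 'foo ' and ' foo ' all collapse to the single key 'foo', which is the debugging-friendly behavior argued for above.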
[jira] [Commented] (SOLR-11868) CloudSolrClient.setIdField is confusing, it's really the routing field. Should be deprecated.
[ https://issues.apache.org/jira/browse/SOLR-11868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329651#comment-16329651 ] Erick Erickson commented on SOLR-11868: --- Possibly related to these two JIRAs. David's comment that testing the route field is rarely done is worrisome. It's at least worth looking at those two JIRAs for hints, but I suspect they're tangentially related at best, and _probably_ this Jira can be fixed independently of those other two. > CloudSolrClient.setIdField is confusing, it's really the routing field. > Should be deprecated. > - > > Key: SOLR-11868 > URL: https://issues.apache.org/jira/browse/SOLR-11868 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 7.2 >Reporter: Erick Erickson >Assignee: Erick Erickson >Priority: Major > > IIUC idField has nothing to do with the field. It's really > the field used to route documents. Agreed, this is often the "id" > field, but still > In fact, over in UpdateRequest.getRoutes(), it's passed as the "id" > field to router.getTargetSlice() and just works, even though > getTargetSlice is clearly designed to route on a field other than the > if we didn't just pass null as the "route" param. > The confusing bit is that if I have a route field defined for my > collection and want to use CloudSolrClient I have to figure out that I > need to use the setIdField method to use that field for routing. > > We should deprecate setIdField and refactor how this is used (i.e. > getRoutes). Need to beef up tests too, I suspect. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: CloudSolrClient.idField seems confused.
SOLR-11868 On Wed, Jan 17, 2018 at 2:33 PM, David Smiley wrote: > Pretty confusing indeed; I think I bumped into this. It's worth a JIRA. > > BTW this is semi-related perhaps: > https://issues.apache.org/jira/browse/SOLR-8889 > > On Wed, Jan 17, 2018 at 4:08 PM Erick Erickson > wrote: >> >> IIUC idField has nothing to do with the field. It's really >> the field used to route documents. Agreed, this is often the "id" >> field, but still >> >> In fact, over in UpdateRequest.getRoutes(), it's passed as the "id" >> field to router.getTargetSlice() and just works, even though >> getTargetSlice is clearly designed to route on a field other than the >> if we didn't just pass null as the "route" param. >> >> The confusing bit is that if I have a route field defined for my >> collection and want to use CloudSolrClient I have to figure out that I >> need to use the setIdField method to use that field for routing. >> >> Worth a JIRA? >> >> - >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org >> For additional commands, e-mail: dev-h...@lucene.apache.org >> > > > -- > Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker > LinkedIn: http://linkedin.com/in/davidwsmiley | Book: > http://www.solrenterprisesearchserver.com - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.1) - Build # 21290 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21290/ Java: 64bit/jdk-9.0.1 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 4 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.BasicZkTest Error Message: SolrCore.getOpenCount()==2 Stack Trace: java.lang.RuntimeException: SolrCore.getOpenCount()==2 at __randomizedtesting.SeedInfo.seed([CD8AB419EAB18A84]:0) at org.apache.solr.util.TestHarness.close(TestHarness.java:379) at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:792) at org.apache.solr.cloud.AbstractZkTestCase.azt_afterClass(AbstractZkTestCase.java:147) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:897) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:844) FAILED: junit.framework.TestSuite.org.apache.solr.cloud.BasicZkTest Error Message: SolrCore.getOpenCount()==2 Stack Trace: java.lang.RuntimeException: SolrCore.getOpenCount()==2 at __randomizedtesting.SeedInfo.seed([CD8AB419EAB18A84]:0) at org.apache.solr.util.TestHarness.close(TestHarness.java:379) at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:792) at org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:288) at jdk.internal.reflect.GeneratedMethodAccessor40.invoke(Unknown Source) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:897) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Created] (SOLR-11868) CloudSolrClient.setIdField is confusing, it's really the routing field. Should be deprecated.
Erick Erickson created SOLR-11868: - Summary: CloudSolrClient.setIdField is confusing, it's really the routing field. Should be deprecated. Key: SOLR-11868 URL: https://issues.apache.org/jira/browse/SOLR-11868 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Affects Versions: 7.2 Reporter: Erick Erickson Assignee: Erick Erickson IIUC idField has nothing to do with the field. It's really the field used to route documents. Agreed, this is often the "id" field, but still... In fact, over in UpdateRequest.getRoutes(), it's passed as the "id" field to router.getTargetSlice() and just works, even though getTargetSlice is clearly designed to route on a field other than the if we didn't just pass null as the "route" param. The confusing bit is that if I have a route field defined for my collection and want to use CloudSolrClient I have to figure out that I need to use the setIdField method to use that field for routing. We should deprecate setIdField and refactor how this is used (i.e. getRoutes). Need to beef up tests too, I suspect. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
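The distinction Erick draws, between the document's "id" and the field actually used to pick a shard, can be illustrated with a dependency-free sketch. This is not Solr's actual router (Solr's compositeId routing hashes differently); `String.hashCode()` and the names here are stand-ins, just to show that shard selection depends only on the route-field value, not on the "id":

```java
import java.util.List;

// Illustrative only: the target shard is computed from the *route field*
// value, never from the document id. Any field can play this role, which
// is why "setIdField" is a misleading name for configuring it.
public class RouteSketch {
    static String targetShard(String routeValue, List<String> shards) {
        // floorMod keeps the bucket non-negative even for negative hash codes
        int bucket = Math.floorMod(routeValue.hashCode(), shards.size());
        return shards.get(bucket);
    }
}
```

Two documents with different ids but the same route-field value always land on the same shard, which is the property a route field exists to guarantee.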
Re: CloudSolrClient.idField seems confused.
Pretty confusing indeed; I think I bumped into this. It's worth a JIRA. BTW this is semi-related perhaps: https://issues.apache.org/jira/browse/SOLR-8889 On Wed, Jan 17, 2018 at 4:08 PM Erick Erickson wrote: > IIUC idField has nothing to do with the field. It's really > the field used to route documents. Agreed, this is often the "id" > field, but still > > In fact, over in UpdateRequest.getRoutes(), it's passed as the "id" > field to router.getTargetSlice() and just works, even though > getTargetSlice is clearly designed to route on a field other than the > if we didn't just pass null as the "route" param. > > The confusing bit is that if I have a route field defined for my > collection and want to use CloudSolrClient I have to figure out that I > need to use the setIdField method to use that field for routing. > > Worth a JIRA? > > - > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org > For additional commands, e-mail: dev-h...@lucene.apache.org > > -- Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker LinkedIn: http://linkedin.com/in/davidwsmiley | Book: http://www.solrenterprisesearchserver.com
[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1631 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1631/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC 4 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.client.solrj.io.stream.SelectWithEvaluatorsTest Error Message: 1 thread leaked from SUITE scope at org.apache.solr.client.solrj.io.stream.SelectWithEvaluatorsTest: 1) Thread[id=1155, name=qtp1356365121-1155, state=TIMED_WAITING, group=TGRP-SelectWithEvaluatorsTest] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708) at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) at java.lang.Thread.run(Thread.java:748) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.client.solrj.io.stream.SelectWithEvaluatorsTest: 1) Thread[id=1155, name=qtp1356365121-1155, state=TIMED_WAITING, group=TGRP-SelectWithEvaluatorsTest] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708) at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) at java.lang.Thread.run(Thread.java:748) at __randomizedtesting.SeedInfo.seed([FCA7A272F2AF1622]:0) FAILED: junit.framework.TestSuite.org.apache.solr.client.solrj.io.stream.SelectWithEvaluatorsTest Error Message: There are still zombie threads that couldn't be terminated:1) Thread[id=1155, name=qtp1356365121-1155, state=TIMED_WAITING, group=TGRP-SelectWithEvaluatorsTest] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708) at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) at java.lang.Thread.run(Thread.java:748) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie threads that couldn't be terminated: 1) Thread[id=1155, name=qtp1356365121-1155, state=TIMED_WAITING, group=TGRP-SelectWithEvaluatorsTest] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708) at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) at 
java.lang.Thread.run(Thread.java:748) at __randomizedtesting.SeedInfo.seed([FCA7A272F2AF1622]:0) FAILED: org.apache.solr.TestDistributedSearch.test Error Message: Expected to find shardAddress in the up shard info: {error=org.apache.solr.client.solrj.SolrServerException: Time allowed to handle this request exceeded,trace=org.apache.solr.client.solrj.SolrServerException: Time allowed to handle this request exceeded at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:460) at org.apache.solr.handler.component.HttpShardHandlerFactory.makeLoadBalancedRequest(HttpShardHandlerFactory.java:273) at org.apache.solr.handler.component.HttpShardHandler.lambda$submit$0(HttpShardHandler.java:175) at
[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_144) - Build # 1193 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1193/ Java: 32bit/jdk1.8.0_144 -server -XX:+UseParallelGC 2 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.analytics.legacy.facet.LegacyQueryFacetCloudTest Error Message: 1 thread leaked from SUITE scope at org.apache.solr.analytics.legacy.facet.LegacyQueryFacetCloudTest: 1) Thread[id=211, name=qtp16212445-211, state=TIMED_WAITING, group=TGRP-LegacyQueryFacetCloudTest] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708) at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) at java.lang.Thread.run(Thread.java:748) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.analytics.legacy.facet.LegacyQueryFacetCloudTest: 1) Thread[id=211, name=qtp16212445-211, state=TIMED_WAITING, group=TGRP-LegacyQueryFacetCloudTest] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708) at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) at java.lang.Thread.run(Thread.java:748) at __randomizedtesting.SeedInfo.seed([4EAD7E7BDD1CFD56]:0) FAILED: junit.framework.TestSuite.org.apache.solr.analytics.legacy.facet.LegacyQueryFacetCloudTest Error Message: There are still zombie threads that couldn't be terminated:1) Thread[id=211, name=qtp16212445-211, state=TIMED_WAITING, group=TGRP-LegacyQueryFacetCloudTest] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708) at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) at java.lang.Thread.run(Thread.java:748) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie threads that couldn't be terminated: 1) Thread[id=211, name=qtp16212445-211, state=TIMED_WAITING, group=TGRP-LegacyQueryFacetCloudTest] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708) at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) at 
java.lang.Thread.run(Thread.java:748) at __randomizedtesting.SeedInfo.seed([4EAD7E7BDD1CFD56]:0) Build Log: [...truncated 17416 lines...] [junit4] Suite: org.apache.solr.analytics.legacy.facet.LegacyQueryFacetCloudTest [junit4] 2> Creating dataDir: /home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/contrib/solr-analytics/test/J1/temp/solr.analytics.legacy.facet.LegacyQueryFacetCloudTest_4EAD7E7BDD1CFD56-001/init-core-data-001 [junit4] 2> Jan 17, 2018 9:14:17 PM com.carrotsearch.randomizedtesting.ThreadLeakControl checkThreadLeaks [junit4] 2> WARNING: Will linger awaiting termination of 1 leaked thread(s). [junit4] 2> Jan 17, 2018 9:14:37 PM com.carrotsearch.randomizedtesting.ThreadLeakControl checkThreadLeaks [junit4]
[jira] [Commented] (SOLR-11766) Ref Guide: redesign Streaming Expression reference pages
[ https://issues.apache.org/jira/browse/SOLR-11766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329491#comment-16329491 ] Cassandra Targett commented on SOLR-11766: -- Re the PDF {quote}Do we see ourselves continuing to support both formats for the foreseeable future? {quote} There are reasons why we still not only have it, but consider it the official Ref Guide format, and those would need to change: # The Ref Guide is still not finished at the same time as the code for any particular release (almost, but not quite). # Assuming it was ready at the same time, there's an ASF policy that release artifacts need to be produced by the Release Manager, on a machine he/she has direct control over. We would need everyone who might be an RM to set themselves up to build the HTML, which requires a number of unique dependencies, and some have had trouble with it in the past. The PDF has no external dependencies, so it is simple for anyone to build. I'm pretty sure I'm forgetting a couple of other reasons. I think, though, that even if we decided that the HTML version is the official format, some people would still want a PDF version (maybe just as a backup in case their network is down), and we'd want to make sure that whatever we do with the content, those users would not lose anything because of the format. > Ref Guide: redesign Streaming Expression reference pages > > > Key: SOLR-11766 > URL: https://issues.apache.org/jira/browse/SOLR-11766 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation, streaming expressions >Reporter: Cassandra Targett >Assignee: Cassandra Targett >Priority: Major > Attachments: Stream-collapsed-panels.png, StreamQuickRef-sample.png, > Streaming-expanded-panel.png > > > There are a very large number of streaming expressions and they need some > special info design to be more easily accessible.
The current way we're > presenting them doesn't really work. This issue is to track ideas and POC > patches for possible approaches. > A couple of ideas I have, which may or may not all work together: > # Provide a way to filter the list of commands by expression type (would need > to figure out the types) > # Present the available expressions in smaller sections, similar in UX > concept to https://redis.io/commands. On that page, I can see 9-12 commands > above "the fold" on my laptop screen, as compared to today when I can see > only 1 expression at a time & each expression probably takes more space than > necessary. This idea would require figuring out where people go when they > click a command to get more information. > ## One solution for where people go is to put all the commands back in one > massive page, but this isn't really ideal > ## Another solution would be to have an individual .adoc file for each > expression and present them all individually. > # Some of the Bootstrap.js options may help - collapsing panels or tabs, if > properly designed, may make it easier to see an overview of available > expressions and get more information if interested. > I'll post more ideas as I come up with them. > These ideas focus on the HTML layout of expressions - ideally we come up with > a solution for PDF that's better also, but we are much more limited in what > we can do there. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9168) Add availability to specify own oom handing script
[ https://issues.apache.org/jira/browse/SOLR-9168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329490#comment-16329490 ] Shawn Heisey commented on SOLR-9168: I'd be curious as to exactly what happens and what gets logged if those new options in 8u92 are used. If both the "exit" option and the existing option are present, are both actions taken, or does one option override the other? If there's no record of the action that Java has taken with the exit option, then I think we should stick with what we have. > Add availability to specify own oom handing script > -- > > Key: SOLR-9168 > URL: https://issues.apache.org/jira/browse/SOLR-9168 > Project: Solr > Issue Type: Improvement > Components: scripts and tools >Affects Versions: 5.5.1 >Reporter: AngryDeveloper >Priority: Major > Labels: oom > Fix For: 5.5.1 > > Attachments: > 0001-SOLR-9168-Allow-users-to-specify-their-own-OnOutOfMe.patch, > SOLR-9168-userdefined.patch, SOLR-9168.patch > > > Right now the start script always uses $SOLR_TIP/bin/oom_solr.sh to handle > OutOfMemoryException. This script only kills instance of solr. > We need to do some additional things (e.g sent mail about this exception) > What do you think about adding possibility to set up own script? > Proposition: > {code} > if [ -z "$SOLR_OOM_SCRIPT" ]; then > SOLR_OOM_SCRIPT=$SOLR_TIP/bin/oom_solr.sh > fi > [...] > nohup "$JAVA" "${SOLR_START_OPTS[@]}" $SOLR_ADDL_ARGS \ > "-XX:OnOutOfMemoryError=$SOLR_OOM_SCRIPT $SOLR_PORT $SOLR_LOGS_DIR" \ > -jar start.jar "${SOLR_JETTY_CONFIG[@]}" \ > 1>"$SOLR_LOGS_DIR/solr-$SOLR_PORT-console.log" 2>&1 & echo $! > > "$SOLR_PID_DIR/solr-$SOLR_PORT.pid" > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
CloudSolrClient.idField seems confused.
IIUC, idField has nothing to do with the uniqueKey field as such. It's really the field used to route documents. Agreed, this is often the "id" field, but still. In fact, over in UpdateRequest.getRoutes(), it's passed as the "id" field to router.getTargetSlice() and just works, even though getTargetSlice is clearly designed to route on a field other than the uniqueKey if we didn't just pass null as the "route" param. The confusing bit is that if I have a route field defined for my collection and want to use CloudSolrClient, I have to figure out that I need to use the setIdField method to make routing use that field. Worth a JIRA?
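The underlying point is easy to sketch: whichever field value the client hashes determines the target shard, so routing on a shared route field keeps related documents on the same shard in a way that hashing unique ids cannot. A toy illustration of that idea (a generic hash, explicitly not Solr's actual CompositeIdRouter):

```python
import hashlib

def target_shard(route_value, num_shards):
    """Toy router: documents with the same route value always land on the
    same shard. This is NOT Solr's CompositeIdRouter, just an illustration
    of why the field CloudSolrClient hashes on (its 'idField') matters."""
    h = int(hashlib.md5(route_value.encode("utf-8")).hexdigest(), 16)
    return h % num_shards

# Routing on a shared route field keeps related docs together:
assert target_shard("customerA", 4) == target_shard("customerA", 4)
# Routing on unique ids would scatter the same customer's docs (usually).
```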
[jira] [Commented] (SOLR-11867) Add indexOf, rowCount and columnCount StreamEvaluators
[ https://issues.apache.org/jira/browse/SOLR-11867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329450#comment-16329450 ] ASF subversion and git services commented on SOLR-11867: Commit c9f524ada9cd2c62e60259313315b6df398fa91c in lucene-solr's branch refs/heads/branch_7x from [~joel.bernstein] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c9f524a ] SOLR-11867: Add indexOf, rowCount and columnCount StreamEvaluators > Add indexOf, rowCount and columnCount StreamEvaluators > -- > > Key: SOLR-11867 > URL: https://issues.apache.org/jira/browse/SOLR-11867 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein >Priority: Major > Fix For: 7.3 > > Attachments: SOLR-11867.patch > > > This ticket adds three Stream Evaluators: > indexOf : Returns the index of a value in an array. > rowCount: Returns the number of rows in a matrix > columnCount: Returns the number of columns in a matrix > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11867) Add indexOf, rowCount and columnCount StreamEvaluators
[ https://issues.apache.org/jira/browse/SOLR-11867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329440#comment-16329440 ] ASF subversion and git services commented on SOLR-11867: Commit f491fad955fc7442be99f2c44724a9c631fd638b in lucene-solr's branch refs/heads/master from [~joel.bernstein] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f491fad ] SOLR-11867: Add indexOf, rowCount and columnCount StreamEvaluators > Add indexOf, rowCount and columnCount StreamEvaluators > -- > > Key: SOLR-11867 > URL: https://issues.apache.org/jira/browse/SOLR-11867 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein >Priority: Major > Fix For: 7.3 > > Attachments: SOLR-11867.patch > > > This ticket adds three Stream Evaluators: > indexOf : Returns the index of a value in an array. > rowCount: Returns the number of rows in a matrix > columnCount: Returns the number of columns in a matrix > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9168) Add availability to specify own oom handing script
[ https://issues.apache.org/jira/browse/SOLR-9168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329438#comment-16329438 ] Erick Erickson commented on SOLR-9168: -- Waking this up after a while. As of Java 8u92, there are a couple of new options: - *ExitOnOutOfMemoryError* - *CrashOnOutOfMemoryError* See: [https://bugs.openjdk.java.net/browse/JDK-8152669] What do people think about changing how this works? It would mean requiring at least u92 for it all to work, but since we're up to 151 I don't think that's onerous. If it's an earlier version the option will just be ignored anyway. The proposal then becomes: 1> Enable ExitOnOutOfMemoryError by default. 2> Go ahead and leave oom_solr.sh as the default file, just make it a placeholder (comment-only) file. People can add whatever they want in there as one option. 3> It's still a good idea to let the script used be specified (optionally) by an environment variable; for instance, when I'm troubleshooting I could easily want to invoke different oom scripts on different runs. 4> Go ahead and add both options to foreground too. If users go in and take out the ExitOnOutOfMemoryError they can, although we'll _strongly_ discourage that; Shawn's comment is well taken. This hinges on our willingness to require 8u92, although this option would probably be ignored if run on earlier versions. Opinions? > Add availability to specify own oom handing script > -- > > Key: SOLR-9168 > URL: https://issues.apache.org/jira/browse/SOLR-9168 > Project: Solr > Issue Type: Improvement > Components: scripts and tools >Affects Versions: 5.5.1 >Reporter: AngryDeveloper >Priority: Major > Labels: oom > Fix For: 5.5.1 > > Attachments: > 0001-SOLR-9168-Allow-users-to-specify-their-own-OnOutOfMe.patch, > SOLR-9168-userdefined.patch, SOLR-9168.patch > > > Right now the start script always uses $SOLR_TIP/bin/oom_solr.sh to handle > OutOfMemoryException. This script only kills instance of solr.
> We need to do some additional things (e.g sent mail about this exception) > What do you think about adding possibility to set up own script? > Proposition: > {code} > if [ -z "$SOLR_OOM_SCRIPT" ]; then > SOLR_OOM_SCRIPT=$SOLR_TIP/bin/oom_solr.sh > fi > [...] > nohup "$JAVA" "${SOLR_START_OPTS[@]}" $SOLR_ADDL_ARGS \ > "-XX:OnOutOfMemoryError=$SOLR_OOM_SCRIPT $SOLR_PORT $SOLR_LOGS_DIR" \ > -jar start.jar "${SOLR_JETTY_CONFIG[@]}" \ > 1>"$SOLR_LOGS_DIR/solr-$SOLR_PORT-console.log" 2>&1 & echo $! > > "$SOLR_PID_DIR/solr-$SOLR_PORT.pid" > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
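The fallback in the quoted {code} block is a plain environment-variable default: use $SOLR_OOM_SCRIPT if the user set it, otherwise the bundled oom_solr.sh. The same selection logic, rendered in Python purely as a sketch of the shell above (this is not code that ships with Solr):

```python
def resolve_oom_script(env, solr_tip):
    """Sketch of the proposed bash fallback: prefer a user-supplied
    SOLR_OOM_SCRIPT, otherwise fall back to the bundled oom_solr.sh.
    (A Python rendering of the quoted shell logic, not Solr code.)"""
    script = env.get("SOLR_OOM_SCRIPT")
    if not script:  # mirrors bash's: if [ -z "$SOLR_OOM_SCRIPT" ]
        script = solr_tip + "/bin/oom_solr.sh"
    return script

print(resolve_oom_script({}, "/opt/solr"))
# -> /opt/solr/bin/oom_solr.sh
print(resolve_oom_script({"SOLR_OOM_SCRIPT": "/tmp/my_oom.sh"}, "/opt/solr"))
# -> /tmp/my_oom.sh
```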
[jira] [Updated] (SOLR-11867) Add indexOf, rowCount and columnCount StreamEvaluators
[ https://issues.apache.org/jira/browse/SOLR-11867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-11867: -- Attachment: SOLR-11867.patch > Add indexOf, rowCount and columnCount StreamEvaluators > -- > > Key: SOLR-11867 > URL: https://issues.apache.org/jira/browse/SOLR-11867 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein >Priority: Major > Fix For: 7.3 > > Attachments: SOLR-11867.patch > > > This ticket adds three Stream Evaluators: > indexOf : Returns the index of a value in an array. > rowCount: Returns the number of rows in a matrix > columnCount: Returns the number of columns in a matrix > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-11867) Add indexOf, rowCount and columnCount StreamEvaluators
Joel Bernstein created SOLR-11867: - Summary: Add indexOf, rowCount and columnCount StreamEvaluators Key: SOLR-11867 URL: https://issues.apache.org/jira/browse/SOLR-11867 Project: Solr Issue Type: New Feature Security Level: Public (Default Security Level. Issues are Public) Reporter: Joel Bernstein This ticket adds three Stream Evaluators: indexOf : Returns the index of a value in an array. rowCount: Returns the number of rows in a matrix columnCount: Returns the number of columns in a matrix -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
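The described semantics are simple enough to sketch. The following is plain Python mirroring the ticket's one-line descriptions, not Solr's StreamEvaluator implementation (in particular, 0-based indexing for indexOf is an assumption of this sketch):

```python
# Toy illustration of the three evaluators described in SOLR-11867;
# plain Python, not Solr code. Index base (0-based here) is assumed.

def index_of(array, value):
    """indexOf: returns the index of a value in an array."""
    return array.index(value)

def row_count(matrix):
    """rowCount: returns the number of rows in a matrix."""
    return len(matrix)

def column_count(matrix):
    """columnCount: returns the number of columns in a matrix."""
    return len(matrix[0]) if matrix else 0

matrix = [[1, 2, 3], [4, 5, 6]]
print(index_of([10, 20, 30], 20))  # 1
print(row_count(matrix))           # 2
print(column_count(matrix))        # 3
```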
[jira] [Updated] (SOLR-11867) Add indexOf, rowCount and columnCount StreamEvaluators
[ https://issues.apache.org/jira/browse/SOLR-11867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-11867: -- Fix Version/s: 7.3 > Add indexOf, rowCount and columnCount StreamEvaluators > -- > > Key: SOLR-11867 > URL: https://issues.apache.org/jira/browse/SOLR-11867 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein >Priority: Major > Fix For: 7.3 > > > This ticket adds three Stream Evaluators: > indexOf : Returns the index of a value in an array. > rowCount: Returns the number of rows in a matrix > columnCount: Returns the number of columns in a matrix > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Assigned] (SOLR-11867) Add indexOf, rowCount and columnCount StreamEvaluators
[ https://issues.apache.org/jira/browse/SOLR-11867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein reassigned SOLR-11867: - Assignee: Joel Bernstein > Add indexOf, rowCount and columnCount StreamEvaluators > -- > > Key: SOLR-11867 > URL: https://issues.apache.org/jira/browse/SOLR-11867 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein >Priority: Major > Fix For: 7.3 > > > This ticket adds three Stream Evaluators: > indexOf : Returns the index of a value in an array. > rowCount: Returns the number of rows in a matrix > columnCount: Returns the number of columns in a matrix > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-9272) Auto resolve zkHost for bin/solr zk for running Solr
[ https://issues.apache.org/jira/browse/SOLR-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-9272: --- Attachment: SOLR-9272.patch > Auto resolve zkHost for bin/solr zk for running Solr > > > Key: SOLR-9272 > URL: https://issues.apache.org/jira/browse/SOLR-9272 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: scripts and tools >Affects Versions: 6.2 >Reporter: Jan Høydahl >Assignee: Jan Høydahl >Priority: Major > Labels: newdev > Attachments: SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, > SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch > > > Spinoff from SOLR-9194: > We can skip requiring {{-z}} for {{bin/solr zk}} for a Solr that is already > running. We can optionally accept the {{-p}} parameter instead, and with that > use StatusTool to fetch the {{cloud/ZooKeeper}} property from there. It's > easier to remember solr port than zk string. > Example: > {noformat} > bin/solr start -c -p 9090 > bin/solr zk ls / -p 9090 > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9272) Auto resolve zkHost for bin/solr zk for running Solr
[ https://issues.apache.org/jira/browse/SOLR-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329385#comment-16329385 ] Amrit Sarkar commented on SOLR-9272: Patch uploaded. I was not able to test the solr.cmd commands on a Windows machine, but I followed the existing conventions. > Auto resolve zkHost for bin/solr zk for running Solr > > > Key: SOLR-9272 > URL: https://issues.apache.org/jira/browse/SOLR-9272 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: scripts and tools >Affects Versions: 6.2 >Reporter: Jan Høydahl >Assignee: Jan Høydahl >Priority: Major > Labels: newdev > Attachments: SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, > SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch > > > Spinoff from SOLR-9194: > We can skip requiring {{-z}} for {{bin/solr zk}} for a Solr that is already > running. We can optionally accept the {{-p}} parameter instead, and with that > use StatusTool to fetch the {{cloud/ZooKeeper}} property from there. It's > easier to remember solr port than zk string. > Example: > {noformat} > bin/solr start -c -p 9090 > bin/solr zk ls / -p 9090 > {noformat}
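The resolution the issue describes boils down to: ask the running node for its status (as StatusTool does) and read the {{cloud/ZooKeeper}} property out of the reply. A sketch of that extraction step in Python, where the payload shape and key names follow the issue description rather than verified StatusTool output (treat them as assumptions):

```python
def resolve_zk_host(status):
    """Pull the ZooKeeper connect string out of a StatusTool-style payload.
    The 'cloud'/'ZooKeeper' keys follow the SOLR-9272 description; they are
    an assumption of this sketch, not a verified API."""
    cloud = status.get("cloud")
    if not cloud:
        return None  # node is not running in SolrCloud mode
    return cloud.get("ZooKeeper")

status = {"solr_home": "/var/solr", "cloud": {"ZooKeeper": "localhost:9983"}}
print(resolve_zk_host(status))                    # localhost:9983
print(resolve_zk_host({"solr_home": "/var/solr"}))  # None (standalone node)
```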
[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk-9) - Build # 405 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/405/ Java: 64bit/jdk-9 -XX:-UseCompressedOops -XX:+UseParallelGC 2 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.analytics.facet.QueryFacetTest Error Message: 1 thread leaked from SUITE scope at org.apache.solr.analytics.facet.QueryFacetTest: 1) Thread[id=55, name=qtp1325675031-55, state=TIMED_WAITING, group=TGRP-QueryFacetTest] at java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at java.base@9/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234) at java.base@9/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2192) at app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308) at app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373) at app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708) at app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) at java.base@9/java.lang.Thread.run(Thread.java:844) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.analytics.facet.QueryFacetTest: 1) Thread[id=55, name=qtp1325675031-55, state=TIMED_WAITING, group=TGRP-QueryFacetTest] at java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at java.base@9/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234) at java.base@9/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2192) at app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308) at app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373) at app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708) at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) at java.base@9/java.lang.Thread.run(Thread.java:844) at __randomizedtesting.SeedInfo.seed([EBF9C177F7DC81AA]:0) FAILED: junit.framework.TestSuite.org.apache.solr.analytics.facet.QueryFacetTest Error Message: There are still zombie threads that couldn't be terminated:1) Thread[id=55, name=qtp1325675031-55, state=TIMED_WAITING, group=TGRP-QueryFacetTest] at java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at java.base@9/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234) at java.base@9/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2192) at app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308) at app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373) at app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708) at app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) at java.base@9/java.lang.Thread.run(Thread.java:844) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie threads that couldn't be terminated: 1) Thread[id=55, name=qtp1325675031-55, state=TIMED_WAITING, group=TGRP-QueryFacetTest] at java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at java.base@9/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234) at java.base@9/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2192) at app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308) at app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373) at app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708) at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) at java.base@9/java.lang.Thread.run(Thread.java:844) at __randomizedtesting.SeedInfo.seed([EBF9C177F7DC81AA]:0) Build Log: [...truncated 17335 lines...] [junit4] Suite: org.apache.solr.analytics.facet.QueryFacetTest [junit4] 2> Creating dataDir: /Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/solr/build/contrib/solr-analytics/test/J0/temp/solr.analytics.facet.QueryFacetTest_EBF9C177F7DC81AA-001/init-core-data-001 [junit4] 2> log4j:WARN No appenders could be found for logger (org.apache.solr.SolrTestCaseJ4). [junit4] 2> log4j:WARN Please initialize the
[jira] [Comment Edited] (SOLR-11766) Ref Guide: redesign Streaming Expression reference pages
[ https://issues.apache.org/jira/browse/SOLR-11766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329374#comment-16329374 ] Cassandra Targett edited comment on SOLR-11766 at 1/17/18 8:09 PM: --- {quote}A related but different approach would be to have a small summary line for each Streaming Expression, that expands-on-click to show more details. {quote} I've thought of this approach also, but wasn't sure how well it would work (and needed to figure out how to make it work). I've attached a couple of screenshots of what it might look like (needs more styling fixes). One has 2 panels collapsed and the other has one of the panels open. The idea of grouping by category is in there also, in the sense that these are grouped under the same main heading (I realize, of course, these may not logically go together, just trying to convey the idea). was (Author: ctargett): {quote}A related but different approach would be to have a small summary line for each Streaming Expression, that expands-on-click to show more details. {quote} I've thought of this approach also, but wasn't sure how well it would work (and needed to figure out how to make it work). I've attached a couple of screenshots of what it might look like (needs more styling fixes). One has 2 panels collapsed and the other has one of the panels open. The idea of grouping by category is in there also, in the sense that these are grouped under the same main heading (I realize, of course, these may not logically go together, just trying to convey the idea). !Stream-collapsed-panels.png! > Ref Guide: redesign Streaming Expression reference pages > > > Key: SOLR-11766 > URL: https://issues.apache.org/jira/browse/SOLR-11766 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level.
Issues are Public) > Components: documentation, streaming expressions >Reporter: Cassandra Targett >Assignee: Cassandra Targett >Priority: Major > Attachments: Stream-collapsed-panels.png, StreamQuickRef-sample.png, > Streaming-expanded-panel.png > > > There are a very large number of streaming expressions and they need some > special info design to be more easily accessible. The current way we're > presenting them doesn't really work. This issue is to track ideas and POC > patches for possible approaches. > A couple of ideas I have, which may or may not all work together: > # Provide a way to filter the list of commands by expression type (would need > to figure out the types) > # Present the available expressions in smaller sections, similar in UX > concept to https://redis.io/commands. On that page, I can see 9-12 commands > above "the fold" on my laptop screen, as compared to today when I can see > only 1 expression at a time & each expression probably takes more space than > necessary. This idea would require figuring out where people go when they > click a command to get more information. > ## One solution for where people go is to put all the commands back in one > massive page, but this isn't really ideal > ## Another solution would be to have an individual .adoc file for each > expression and present them all individually. > # Some of the Bootstrap.js options may help - collapsing panels or tabs, if > properly designed, may make it easier to see an overview of available > expressions and get more information if interested. > I'll post more ideas as I come up with them. > These ideas focus on the HTML layout of expressions - ideally we come up with > a solution for PDF that's better also, but we are much more limited in what > we can do there. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
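For readers trying to picture the collapsed-panel idea discussed above, here is a minimal sketch using Bootstrap's collapse component. The class names follow Bootstrap 3 conventions and the expression name/summary are invented; this is purely illustrative, not the attached POC:

```html
<!-- One collapsed panel per streaming expression: the heading shows a one-line
     summary; clicking it expands the body with the full reference docs. -->
<div class="panel panel-default">
  <div class="panel-heading">
    <a data-toggle="collapse" href="#expr-search">search &mdash; queries a collection and streams results</a>
  </div>
  <div id="expr-search" class="panel-collapse collapse">
    <div class="panel-body">
      Full parameter list and examples for the expression go here.
    </div>
  </div>
</div>
```

Repeating this structure under per-category headings would give the Redis-style "many commands above the fold" overview while keeping details one click away.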
[jira] [Updated] (SOLR-11766) Ref Guide: redesign Streaming Expression reference pages
[ https://issues.apache.org/jira/browse/SOLR-11766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Cassandra Targett updated SOLR-11766:
    Attachment: Streaming-expanded-panel.png
                Stream-collapsed-panels.png
[jira] [Commented] (SOLR-11766) Ref Guide: redesign Streaming Expression reference pages
[ https://issues.apache.org/jira/browse/SOLR-11766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329374#comment-16329374 ]

Cassandra Targett commented on SOLR-11766:

{quote}A related but different approach would be to have the a small summary line for each Streaming Expression, that expands-on-click to show more details. {quote}
I've thought of this approach also, but wasn't sure how well it would work (and needed to figure out how to make it work). I've attached a couple of screenshots of what it might look like (needs more styling fixes). One has 2 panels collapsed and the other has one of the panels open. The idea of grouping by category is in there also, in the sense that these are grouped under the same main heading (I realize, of course, these may not logically go together, just trying to convey the idea). !Stream-collapsed-panels.png!
[jira] [Commented] (SOLR-11834) [Ref-Guide] Wrong documentation for subquery transformer
[ https://issues.apache.org/jira/browse/SOLR-11834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329336#comment-16329336 ] ASF subversion and git services commented on SOLR-11834: Commit edb59ae49b236fd3e368c030ca290ac9b57b2dcb in lucene-solr's branch refs/heads/branch_7x from [~mkhludnev] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=edb59ae ] SOLR-11834: ref-guide: [subquery] doesn't need top level fl to repeat subq.fl > [Ref-Guide] Wrong documentation for subquery transformer > > > Key: SOLR-11834 > URL: https://issues.apache.org/jira/browse/SOLR-11834 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation >Reporter: Munendra S N >Priority: Major > Attachments: SOLR-11834.patch, SOLR-11834.png > > Original Estimate: 1h > Remaining Estimate: 1h > > Documentation for subquery transformation mentioned that to retrieve the > field, it should be specified in both fl parameter > https://lucene.apache.org/solr/guide/7_2/transforming-result-documents.html#subquery-result-fields > But there is no such restriction in code. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
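To make the doc fix above concrete, here is a sketch of the kind of request that section documents. The collection, field, and parameter values (posts, post_id, author, text) are invented for illustration; the point, per the commit, is that fields listed in the subquery's own fl do not also need to appear in the top-level fl:

```
# 'comments' is a pseudo-field populated by the [subquery] doc transformer.
# author/text appear only in comments.fl, not in the top-level fl.
http://localhost:8983/solr/posts/select
    ?q=*:*
    &fl=id,title,comments:[subquery]
    &comments.q={!terms f=post_id v=$row.id}
    &comments.fl=author,text
    &comments.rows=5
```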
[jira] [Commented] (SOLR-11834) [Ref-Guide] Wrong documentation for subquery transformer
[ https://issues.apache.org/jira/browse/SOLR-11834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329331#comment-16329331 ]

ASF subversion and git services commented on SOLR-11834:
Commit 42832f8839785eb9abefe8eba65a236360eec5e1 in lucene-solr's branch refs/heads/master from [~mkhludnev] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=42832f8 ]
SOLR-11834: ref-guide: [subquery] doesn't need top level fl to repeat subq.fl
[jira] [Commented] (SOLR-11834) [Ref-Guide] Wrong documentation for subquery transformer
[ https://issues.apache.org/jira/browse/SOLR-11834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329320#comment-16329320 ]

Mikhail Khludnev commented on SOLR-11834:
!SOLR-11834.png! fixed space.
[jira] [Updated] (SOLR-11834) [Ref-Guide] Wrong documentation for subquery transformer
[ https://issues.apache.org/jira/browse/SOLR-11834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mikhail Khludnev updated SOLR-11834:
    Attachment: SOLR-11834.png
NoSuchMethod error for recent pulls.
java.lang.NoSuchMethodException: org.eclipse.jetty.server.ServerConnector.setSelectorPriorityDelta

This is a result of SOLR-11810. Curiously, we never actually start Solr as part of any unit tests, so this one slipped through the cracks. There was a config in jetty-http.xml that triggered this; it has been deprecated for a long while, so I just removed it.

Fixed, sorry for the noise.

Erick
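For reference, the removed config was of this general shape. This is a sketch of a Jetty 9.3-era connector setting; the exact property name and default here are assumptions, not copied from the actual patch:

```xml
<!-- Jetty 9.3.x only: ServerConnector.setSelectorPriorityDelta was removed in 9.4,
     so leaving a setter like this in jetty-http.xml produces the
     NoSuchMethodException above when the XML config is applied. -->
<Set name="selectorPriorityDelta">
  <Property name="solr.jetty.http.selectorPriorityDelta" default="0"/>
</Set>
```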
[jira] [Commented] (SOLR-11810) Upgrade Jetty to 9.4.8
[ https://issues.apache.org/jira/browse/SOLR-11810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329283#comment-16329283 ] ASF subversion and git services commented on SOLR-11810: Commit 777b75c95bb762b72f982f3ebb1c72725db5de33 in lucene-solr's branch refs/heads/branch_7x from Erick Erickson [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=777b75c ] SOLR-11810: Upgrade Jetty to 9.4.8 (cherry picked from commit 2900bb5) > Upgrade Jetty to 9.4.8 > -- > > Key: SOLR-11810 > URL: https://issues.apache.org/jira/browse/SOLR-11810 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Varun Thacker >Assignee: Erick Erickson >Priority: Major > Fix For: 7.3 > > Attachments: SOLR-11801.jetty-conf.patch, SOLR-11810.patch, > SOLR-11810.patch, SOLR-11810.patch, SOLR-11810.patch > > > Jetty 9.4.x was released over a year back : > https://dev.eclipse.org/mhonarc/lists/jetty-announce/msg00097.html . Solr > doesn't use any of the major improvements listed on the announce thread but > it's the version that's in active development. > We should upgrade to Jetty 9.4.x series from 9.3.x > The latest version right now is 9.4.8.v20171121 . Upgrading it locally > required a few compile time changes only. > Under "Default Sessions" in > https://www.eclipse.org/jetty/documentation/9.4.x/upgrading-jetty.html#_upgrading_from_jetty_9_3_x_to_jetty_9_4_0 > it states that "In previous versions of Jetty this was referred to as > "hash" session management." . > The patch fixes all the compile time issues. > Currently two tests are failing: > TestRestManager > TestManagedSynonymGraphFilterFactory > Steps to upgrade the Jetty version were : > 1. Modify {{ivy-versions.properties}} to reflect the new version number > 2. 
Run {{ant jar-checksums}} to generate new JAR checksums
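The two upgrade steps above can be sketched as shell commands. The Ivy property key and file path below are assumptions for illustration; check the actual keys in {{ivy-versions.properties}} before editing:

```
# From the root of a lucene-solr checkout (property name/path assumed):
# 1. Bump the Jetty version that Ivy resolves
sed -i 's/^org\.eclipse\.jetty\.version=.*/org.eclipse.jetty.version=9.4.8.v20171121/' \
    lucene/ivy-versions.properties
# 2. Regenerate the checked-in JAR checksums so the build's checksum validation passes
ant jar-checksums
```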
[jira] [Commented] (SOLR-11810) Upgrade Jetty to 9.4.8
[ https://issues.apache.org/jira/browse/SOLR-11810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329285#comment-16329285 ]

Erick Erickson commented on SOLR-11810:
Patch fixing the Jetty deprecation warning. This setting in jetty-http.xml and jetty-https.xml has been a no-op for a long time, so it was just some cruft left over. Sorry for the noise. [~steve_rowe] also moved the note in CHANGES.txt.
[jira] [Commented] (SOLR-11810) Upgrade Jetty to 9.4.8
[ https://issues.apache.org/jira/browse/SOLR-11810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329277#comment-16329277 ]

ASF subversion and git services commented on SOLR-11810:
Commit 2900bb597db4e312fbfe828a77ba11026866ae86 in lucene-solr's branch refs/heads/master from Erick Erickson [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2900bb5 ]
SOLR-11810: Upgrade Jetty to 9.4.8
[jira] [Updated] (SOLR-11810) Upgrade Jetty to 9.4.8
[ https://issues.apache.org/jira/browse/SOLR-11810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Erick Erickson updated SOLR-11810:
    Attachment: SOLR-11801.jetty-conf.patch
[jira] [Comment Edited] (SOLR-11834) [Ref-Guide] Wrong documentation for subquery transformer
[ https://issues.apache.org/jira/browse/SOLR-11834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329214#comment-16329214 ]

Munendra S N edited comment on SOLR-11834 at 1/17/18 7:05 PM:
[~mkhludnev] Any update?? SOLR-9396 is different from SOLR-10571. Former is related to join condition in subquery while searching. Later (SOLR-10571) is related to fl parameter but I couldn't reproduce the behavior. As that JIRA was created by you, would it be possible to provide more details?

was (Author: munendrasn): [~mkhludnev] Any update?? SOLR-9396 is a bit different from SOLR-10571. Former is related to join condition in subquery while searching. Later (SOLR-10571) is related to fl parameter but I couldn't reproduce the behavior. As that JIRA was created by you, would it be possible to provide more details?
[jira] [Comment Edited] (SOLR-11834) [Ref-Guide] Wrong documentation for subquery transformer
[ https://issues.apache.org/jira/browse/SOLR-11834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329214#comment-16329214 ]

Munendra S N edited comment on SOLR-11834 at 1/17/18 7:06 PM:
[~mkhludnev] Any update?? SOLR-9396 is different from SOLR-10571. If I'm not wrong, Former is related to join condition in subquery while searching. Later (SOLR-10571) is related to fl parameter but I couldn't reproduce the behavior. As that JIRA was created by you, would it be possible to provide more details?

was (Author: munendrasn): [~mkhludnev] Any update?? SOLR-9396 is different from SOLR-10571. Former is related to join condition in subquery while searching. Later (SOLR-10571) is related to fl parameter but I couldn't reproduce the behavior. As that JIRA was created by you, would it be possible to provide more details?
[jira] [Commented] (SOLR-11834) [Ref-Guide] Wrong documentation for subquery transformer
[ https://issues.apache.org/jira/browse/SOLR-11834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329214#comment-16329214 ]

Munendra S N commented on SOLR-11834:
[~mkhludnev] Any update?? SOLR-9396 is a bit different from SOLR-10571. Former is related to join condition in subquery while searching. Later (SOLR-10571) is related to fl parameter but I couldn't reproduce the behavior. As that JIRA was created by you, would it be possible to provide more details?
[jira] [Comment Edited] (SOLR-11766) Ref Guide: redesign Streaming Expression reference pages
[ https://issues.apache.org/jira/browse/SOLR-11766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329162#comment-16329162 ]

Jason Gerlowski edited comment on SOLR-11766 at 1/17/18 6:31 PM:
A few things:

1. Link to our current Streaming Expressions documentation, for the lazy: http://lucene.apache.org/solr/guide/7_2/streaming-expressions.html

2. I'm a big fan of the Redis-inspired screenshot you attached above. It's a big improvement on making these more compact. A related but different approach would be to have a small summary line for each Streaming Expression, that expands-on-click to show more details. The default display for "Swagger" docs comes close to what I'm suggesting: http://petstore.swagger.io/#/. It may be a bit more compact, but is otherwise very similar. Not sure which people prefer aesthetically. Just suggesting an alternative.

And lastly, a more general question:
bq. ideally we come up with a solution for PDF that's better also, but we are much more limited in what we can do there.
This is the second time (that I know of) where we've run into sticking points dealing with formatting in our PDF vs HTML ref-guide (SOLR-11584 being the other). And I imagine these sorts of issues will continue to come up, as we try to find better, more helpful ways of presenting information to our users. Do we see ourselves continuing to support both formats for the foreseeable future? (I'm not questioning the utility of our PDF release format. Just curious whether anyone else is worried that it'll start to restrict our flexibility sometime soon. Maybe I should've posted this as a mailing list question instead of tacking it on here...)
[jira] [Commented] (SOLR-11766) Ref Guide: redesign Streaming Expression reference pages
[ https://issues.apache.org/jira/browse/SOLR-11766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329162#comment-16329162 ]

Jason Gerlowski commented on SOLR-11766:
----------------------------------------

A few things:

1. Link to our current Streaming Expressions documentation, for the lazy: http://lucene.apache.org/solr/guide/7_2/streaming-expressions.html

2. I'm a big fan of the Redis-inspired screenshot you attached above. It's a big improvement in making these more compact. A related but different approach would be to have a small summary line for each Streaming Expression that expands on click to show more details. The default display for "Swagger" docs comes close to what I'm suggesting: http://petstore.swagger.io/#/. It may be a bit more compact, but is otherwise very similar. Not sure which people prefer aesthetically; just suggesting an alternative.

And lastly, a more general question:

bq. These ideas focus on the HTML layout of expressions - ideally we come up with a solution for PDF that's better also, but we are much more limited in what we can do there.

This is the second time (that I know of) where we've run into sticking points dealing with formatting in our PDF vs HTML ref-guide (SOLR-11584 being the other). And I imagine these sorts of issues will continue to come up as we try to find better, more helpful ways of presenting information to our users. Do we see ourselves continuing to support both formats for the foreseeable future? (I'm not questioning the utility of our PDF release format. Just curious whether anyone else is worried that it'll start to restrict our flexibility sometime soon. Maybe I should've posted this as a mailing list question instead of tacking it on here...)

> Ref Guide: redesign Streaming Expression reference pages
> --------------------------------------------------------
>
>                 Key: SOLR-11766
>                 URL: https://issues.apache.org/jira/browse/SOLR-11766
>             Project: Solr
>          Issue Type: Improvement
>      Security Level: Public(Default Security Level. Issues are Public)
>          Components: documentation, streaming expressions
>            Reporter: Cassandra Targett
>            Assignee: Cassandra Targett
>            Priority: Major
>         Attachments: StreamQuickRef-sample.png
>
> There are a very large number of streaming expressions and they need some special info design to be more easily accessible. The current way we're presenting them doesn't really work. This issue is to track ideas and POC patches for possible approaches.
> A couple of ideas I have, which may or may not all work together:
> # Provide a way to filter the list of commands by expression type (would need to figure out the types)
> # Present the available expressions in smaller sections, similar in UX concept to https://redis.io/commands. On that page, I can see 9-12 commands above "the fold" on my laptop screen, as compared to today when I can see only 1 expression at a time & each expression probably takes more space than necessary. This idea would require figuring out where people go when they click a command to get more information.
> ## One solution for where people go is to put all the commands back in one massive page, but this isn't really ideal
> ## Another solution would be to have an individual .adoc file for each expression and present them all individually.
> # Some of the Bootstrap.js options may help - collapsing panels or tabs, if properly designed, may make it easier to see an overview of available expressions and get more information if interested.
> I'll post more ideas as I come up with them.
> These ideas focus on the HTML layout of expressions - ideally we come up with > a solution for PDF that's better also, but we are much more limited in what > we can do there. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11766) Ref Guide: redesign Streaming Expression reference pages
[ https://issues.apache.org/jira/browse/SOLR-11766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329137#comment-16329137 ]

Cassandra Targett commented on SOLR-11766:
------------------------------------------

I'm all for organizing them into categories - would you provide a list of which evaluators go in which categories? One reason why I didn't start there is that I didn't know how to split them up, but I'm guessing you already have a couple of ideas :).

The Sources and Decorators pages have similar issues, the Decorators more so. Do those split into any natural categories, in your opinion? One place to start with those is to simplify the presentation of each section.

I do think it's really worthwhile to have a single place to see a full list of all expression types, for those who haven't yet learned whether they want a source, a decorator, or an evaluator. There are ways we could make the overview "quick reference" page auto-generate (it's not in the screenshot, but I think it could be).
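The auto-generation idea mentioned above could work roughly like the sketch below: scan a directory of per-expression .adoc files (the hypothetical one-file-per-expression layout from idea 2b of the issue) and pull the first plain line of each as its quick-reference summary. Class and method names here are illustrative only, not part of the Solr ref-guide build.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class QuickRefBuilder {
    // First non-empty line that is not an AsciiDoc heading ("="),
    // attribute (":"), or comment ("//") -- used as the one-line summary.
    static String firstSummaryLine(List<String> adocLines) {
        for (String line : adocLines) {
            String s = line.trim();
            if (!s.isEmpty() && !s.startsWith("=") && !s.startsWith(":") && !s.startsWith("//")) {
                return s;
            }
        }
        return null;
    }

    // Map each expression name (taken from the file name, e.g. "search.adoc"
    // -> "search") to its summary line. Sorted so the generated quick
    // reference is stable across builds.
    static Map<String, String> buildEntries(Path docDir) {
        Map<String, String> entries = new TreeMap<>();
        try (DirectoryStream<Path> files = Files.newDirectoryStream(docDir, "*.adoc")) {
            for (Path adoc : files) {
                String summary = firstSummaryLine(Files.readAllLines(adoc));
                if (summary != null) {
                    String name = adoc.getFileName().toString().replaceAll("\\.adoc$", "");
                    entries.put(name, summary);
                }
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return entries;
    }
}
```

A build step could then render the resulting map as the boxed quick-reference grid from the screenshot, with each name linking back to the expression's own page.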
[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_144) - Build # 1192 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1192/ Java: 32bit/jdk1.8.0_144 -client -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove Error Message: No live SolrServers available to handle this request:[https://127.0.0.1:35881/solr/MoveReplicaHDFSTest_failed_coll_true, https://127.0.0.1:35929/solr/MoveReplicaHDFSTest_failed_coll_true] Stack Trace: org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[https://127.0.0.1:35881/solr/MoveReplicaHDFSTest_failed_coll_true, https://127.0.0.1:35929/solr/MoveReplicaHDFSTest_failed_coll_true] at __randomizedtesting.SeedInfo.seed([3ECB37B1E54E5056:9406E443529D8586]:0) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1104) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:991) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942) at org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:306) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Comment Edited] (SOLR-11766) Ref Guide: redesign Streaming Expression reference pages
[ https://issues.apache.org/jira/browse/SOLR-11766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329107#comment-16329107 ]

Joel Bernstein edited comment on SOLR-11766 at 1/17/18 6:02 PM:

For the Stream Evaluators I think having different sub-sections like:
* Statistics
* Probability Distributions and Simulations
* Interpolation, Derivatives and Integrals
* Linear Algebra / Vector and Matrix Math
* Machine Learning
* Regression and Curve Fitting
* Time Series Analysis
* Digital Signal Processing
* Natural Language Processing

Then in each section there could be a user guide for applying the functions and the reference for each function.

was (Author: joel.bernstein):
For the Stream Evaluators I think having different sub-sections like:
* Statistics
* Probability Distributions and Simulations
* Interpolation, Derivatives and Integrals
* Linear Algebra / Vector and Matrix Math
* Machine Learning
* Regression and Curve Fitting
* Time Series Analysis
* Digital Signal Processing

Then in each section there could be a user guide for applying the functions and the reference for each function.
[jira] [Comment Edited] (SOLR-11766) Ref Guide: redesign Streaming Expression reference pages
[ https://issues.apache.org/jira/browse/SOLR-11766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329107#comment-16329107 ]

Joel Bernstein edited comment on SOLR-11766 at 1/17/18 5:58 PM:

For the Stream Evaluators I think having different sub-sections like:
* Statistics
* Probability Distributions and Simulations
* Interpolation, Derivatives and Integrals
* Linear Algebra / Vector and Matrix Math
* Machine Learning
* Regression and Curve Fitting
* Time Series Analysis
* Digital Signal Processing

Then each section there could be a user guide for applying the functions and the reference for each function.

was (Author: joel.bernstein):
For the Stream Evaluators I think having different sub-sections like:
* Statistics
* Probability Distributions and Simulations
* Interpolation, Derivatives and Integrals
* Linear Algebra / Vector and Matrix Math
* Machine Learning
* Regression and Curve Fitting
* Time Series Analysis

Then each section there could be a user guide for applying the functions and the reference for each function.
[jira] [Comment Edited] (SOLR-11766) Ref Guide: redesign Streaming Expression reference pages
[ https://issues.apache.org/jira/browse/SOLR-11766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329107#comment-16329107 ]

Joel Bernstein edited comment on SOLR-11766 at 1/17/18 5:58 PM:

For the Stream Evaluators I think having different sub-sections like:
* Statistics
* Probability Distributions and Simulations
* Interpolation, Derivatives and Integrals
* Linear Algebra / Vector and Matrix Math
* Machine Learning
* Regression and Curve Fitting
* Time Series Analysis
* Digital Signal Processing

Then in each section there could be a user guide for applying the functions and the reference for each function.

was (Author: joel.bernstein):
For the Stream Evaluators I think having different sub-sections like:
* Statistics
* Probability Distributions and Simulations
* Interpolation, Derivatives and Integrals
* Linear Algebra / Vector and Matrix Math
* Machine Learning
* Regression and Curve Fitting
* Time Series Analysis
* Digital Signal Processing

Then each section there could be a user guide for applying the functions and the reference for each function.
[jira] [Comment Edited] (SOLR-11766) Ref Guide: redesign Streaming Expression reference pages
[ https://issues.apache.org/jira/browse/SOLR-11766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329107#comment-16329107 ]

Joel Bernstein edited comment on SOLR-11766 at 1/17/18 5:56 PM:

For the Stream Evaluators I think having different sub-sections like:
* Statistics
* Probability Distributions and Simulations
* Interpolation, Derivatives and Integrals
* Linear Algebra / Vector and Matrix Math
* Machine Learning
* Regression and Curve Fitting
* Time Series Analysis

Then each section there could be a user guide for applying the functions and the reference for each function.

was (Author: joel.bernstein):
For the Stream Evaluators I think having different sub-sections like:
* Statistics
* Probability Distributions and Simulations
* Interpolation, Derivatives and Integrals
* Linear Algebra / Vector and Matrix Math
* Machine Learning
* Regression and Curve Fitting
* Time Series Analysis

Then each section they could be a user guide for applying the functions.
[jira] [Commented] (SOLR-11766) Ref Guide: redesign Streaming Expression reference pages
[ https://issues.apache.org/jira/browse/SOLR-11766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329107#comment-16329107 ]

Joel Bernstein commented on SOLR-11766:
---------------------------------------

For the Stream Evaluators I think having different sub-sections like:
* Statistics
* Probability Distributions and Simulations
* Interpolation, Derivatives and Integrals
* Linear Algebra / Vector and Matrix Math
* Machine Learning
* Regression and Curve Fitting
* Time Series Analysis

Then each section they could be a user guide for applying the functions.
[GitHub] lucene-solr pull request #308: Add a suggester that operates on tokenized va...
GitHub user cbeer opened a pull request:

    https://github.com/apache/lucene-solr/pull/308

    Add a suggester that operates on tokenized values from a field

The `TokenizingSuggester` is suspiciously similar to the `AnalyzingInfixSuggester` (and presumably it could be merged into or extend the `AnalyzingInfixSuggester`), but with an additional feature (the `tokenizingAnalyzer`) that allows us to pre-tokenize suggestions into a manageable size (perhaps single words, shingles of multiple words, or perhaps even NLP-extracted noun phrases). Our use case is providing autocomplete suggestions for searching within OCR text of a document (searching within is powered by highlighting), and we're dealing with some page-level OCR that can easily exceed the 32k size limit for the `AnalyzingInfixSuggester`'s exacttext string field.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/cbeer/lucene-solr tokenizing-suggester-upstreamable

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/lucene-solr/pull/308.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #308

commit c516bcaabbe6214ba4938859d6775ae7992fed0a
Author: Chris Beer
Date: 2018-01-16T21:29:51Z

    Add a suggester that operates on tokenized values from a field
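The pre-tokenizing idea in the PR description can be illustrated in isolation. The sketch below is not the `TokenizingSuggester` code from the pull request; it only shows the kind of transformation a shingle-producing `tokenizingAnalyzer` would apply to a long OCR field, so that each short shingle (rather than the whole 32k+ page) becomes an individual suggestion. All names here are hypothetical.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ShingleSketch {
    // Split a long field value on whitespace and emit word shingles of
    // size n. Each shingle would be indexed as its own suggestion,
    // keeping every stored suggestion far below the 32k field-size limit
    // the PR mentions. Illustration only -- the real patch delegates
    // this to a configurable Lucene Analyzer.
    static List<String> shingles(String text, int n) {
        String[] words = text.trim().split("\\s+");
        List<String> out = new ArrayList<>();
        for (int i = 0; i + n <= words.length; i++) {
            out.add(String.join(" ", Arrays.copyOfRange(words, i, i + n)));
        }
        return out;
    }
}
```

For a page of OCR text, `shingles(pageText, 2)` yields overlapping two-word phrases; feeding those to the suggester instead of the whole page sidesteps the size limit, at the cost of suggesting fragments rather than full field values.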
[jira] [Commented] (SOLR-11766) Ref Guide: redesign Streaming Expression reference pages
[ https://issues.apache.org/jira/browse/SOLR-11766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329038#comment-16329038 ]

Cassandra Targett commented on SOLR-11766:
------------------------------------------

I've attached a "still dirty" example of one possible approach (StreamQuickRef-sample.png). The idea is that we'd make a new "Streaming Expression Reference" page with 3 expressions per row, each showing a basic example, a simple description, and a link to the main section where the expression is defined with more detail & examples. One drawback of this approach is that the list of available expressions would be in 2 places - once on the Quick Reference page and again on its main page.

I call it "dirty" because it still needs some CSS to make the boxes line up better visually as columns, and I'd prefer some space between each box on each row. But I wanted to share it as a possibility even though it's not perfect yet.
[jira] [Updated] (SOLR-11766) Ref Guide: redesign Streaming Expression reference pages
[ https://issues.apache.org/jira/browse/SOLR-11766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Cassandra Targett updated SOLR-11766:
-------------------------------------
    Attachment: StreamQuickRef-sample.png
[jira] [Updated] (SOLR-11866) Support efficient subset matching in query elevation rules
[ https://issues.apache.org/jira/browse/SOLR-11866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Smiley updated SOLR-11866: Priority: Major (was: Minor) > Support efficient subset matching in query elevation rules > -- > > Key: SOLR-11866 > URL: https://issues.apache.org/jira/browse/SOLR-11866 > Project: Solr > Issue Type: Improvement > Security Level: Public (Default Security Level. Issues are Public) > Components: SearchComponents - other > Affects Versions: master (8.0) > Reporter: Bruno Roustant > Priority: Major > > Leverages the SOLR-11865 refactoring by introducing a > SubsetMatchElevationProvider in QueryElevationComponent. This provider calls > a new util class TrieSubsetMatcher to efficiently match all query elevation > rules whose subset is contained in the current query's list of terms.
[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk1.8.0_144) - Build # 406 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/406/ Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseG1GC 11 tests failed. FAILED: junit.framework.TestSuite.org.apache.lucene.analysis.TestGraphTokenizers Error Message: Could not remove the following files (in the order of attempts): C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.analysis.TestGraphTokenizers_BBA21DBE22002ED-001\bttc-001: java.nio.file.AccessDeniedException: C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.analysis.TestGraphTokenizers_BBA21DBE22002ED-001\bttc-001 C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.analysis.TestGraphTokenizers_BBA21DBE22002ED-001: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.analysis.TestGraphTokenizers_BBA21DBE22002ED-001 Stack Trace: java.io.IOException: Could not remove the following files (in the order of attempts): C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.analysis.TestGraphTokenizers_BBA21DBE22002ED-001\bttc-001: java.nio.file.AccessDeniedException: C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.analysis.TestGraphTokenizers_BBA21DBE22002ED-001\bttc-001 C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.analysis.TestGraphTokenizers_BBA21DBE22002ED-001: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.analysis.TestGraphTokenizers_BBA21DBE22002ED-001 at __randomizedtesting.SeedInfo.seed([BBA21DBE22002ED]:0) at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329) at org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216) at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: junit.framework.TestSuite.org.apache.lucene.store.TestRAFDirectory Error Message: Could not remove the following files (in the order of attempts): C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestRAFDirectory_6326195E0655E791-001\tempDir-001: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestRAFDirectory_6326195E0655E791-001\tempDir-001 Stack Trace: java.io.IOException: Could not remove the following files (in the order of attempts): C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestRAFDirectory_6326195E0655E791-001\tempDir-001: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestRAFDirectory_6326195E0655E791-001\tempDir-001 at __randomizedtesting.SeedInfo.seed([6326195E0655E791]:0) at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329) at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216) at com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at
[jira] [Resolved] (SOLR-11592) add another language detector using OpenNLP
[ https://issues.apache.org/jira/browse/SOLR-11592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Rowe resolved SOLR-11592. --- Resolution: Implemented Fix Version/s: 7.3, master (8.0) Thanks Koji! > add another language detector using OpenNLP > --- > > Key: SOLR-11592 > URL: https://issues.apache.org/jira/browse/SOLR-11592 > Project: Solr > Issue Type: New Feature > Security Level: Public (Default Security Level. Issues are Public) > Components: contrib - LangId > Affects Versions: 7.2 > Reporter: Koji Sekiguchi > Assignee: Steve Rowe > Priority: Minor > Fix For: master (8.0), 7.3 > > Attachments: SOLR-11592.patch, SOLR-11592.patch > > > We already have two language detectors, lang-detect and Tika's lang detect. > This is a ticket that gives users a third option using OpenNLP. :)
[jira] [Commented] (SOLR-11592) add another language detector using OpenNLP
[ https://issues.apache.org/jira/browse/SOLR-11592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328957#comment-16328957 ] ASF subversion and git services commented on SOLR-11592: Commit 03095ce4d20060a1c63570d8a5214e9858693080 in lucene-solr's branch refs/heads/master from [~steve_rowe] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=03095ce ] SOLR-11592: Add OpenNLP language detection to the langid contrib > add another language detector using OpenNLP > --- > > Key: SOLR-11592 > URL: https://issues.apache.org/jira/browse/SOLR-11592 > Project: Solr > Issue Type: New Feature > Security Level: Public (Default Security Level. Issues are Public) > Components: contrib - LangId > Affects Versions: 7.2 > Reporter: Koji Sekiguchi > Assignee: Steve Rowe > Priority: Minor > Attachments: SOLR-11592.patch, SOLR-11592.patch > > > We already have two language detectors, lang-detect and Tika's lang detect. > This is a ticket that gives users a third option using OpenNLP. :)
[jira] [Commented] (SOLR-11592) add another language detector using OpenNLP
[ https://issues.apache.org/jira/browse/SOLR-11592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328956#comment-16328956 ] ASF subversion and git services commented on SOLR-11592: Commit 2123db0e26ba64a2b0924e714edb38fdd578ee17 in lucene-solr's branch refs/heads/branch_7x from [~steve_rowe] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2123db0 ] SOLR-11592: Add OpenNLP language detection to the langid contrib > add another language detector using OpenNLP > --- > > Key: SOLR-11592 > URL: https://issues.apache.org/jira/browse/SOLR-11592 > Project: Solr > Issue Type: New Feature > Security Level: Public (Default Security Level. Issues are Public) > Components: contrib - LangId > Affects Versions: 7.2 > Reporter: Koji Sekiguchi > Assignee: Steve Rowe > Priority: Minor > Attachments: SOLR-11592.patch, SOLR-11592.patch > > > We already have two language detectors, lang-detect and Tika's lang detect. > This is a ticket that gives users a third option using OpenNLP. :)
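For readers wondering how the new detector is wired in: like the existing lang-detect and Tika detectors in the langid contrib, it is configured as an update request processor. The sketch below follows the langid contrib's configuration conventions; the field names and the model file name are illustrative placeholders, so check the committed ref guide page for the exact parameters shipped in 7.3.

```xml
<!-- Hypothetical solrconfig.xml fragment for OpenNLP language detection.
     Field names and the model file name are placeholders. -->
<updateRequestProcessorChain name="langid">
  <processor class="org.apache.solr.update.processor.OpenNLPLangDetectUpdateProcessorFactory">
    <!-- Fields whose text is analyzed for language -->
    <str name="langid.fl">title,body</str>
    <!-- Field that receives the detected language code -->
    <str name="langid.langField">language_s</str>
    <!-- Pre-trained OpenNLP language detector model, placed where the
         langid contrib can load it -->
    <str name="langid.model">langdetect-183.bin</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```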
[JENKINS] Lucene-Solr-7.2-Linux (64bit/jdk1.8.0_144) - Build # 152 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.2-Linux/152/ Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseParallelGC 3 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.analytics.OverallAnalyticsTest Error Message: Could not find collection:collection1 Stack Trace: java.lang.AssertionError: Could not find collection:collection1 at __randomizedtesting.SeedInfo.seed([AB05B8F271C88084]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNotNull(Assert.java:526) at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155) at org.apache.solr.analytics.SolrAnalyticsTestCase.setupCollection(SolrAnalyticsTestCase.java:63) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: junit.framework.TestSuite.org.apache.solr.security.hadoop.TestImpersonationWithHadoopAuth Error Message: 2 threads leaked from SUITE scope at org.apache.solr.security.hadoop.TestImpersonationWithHadoopAuth: 1) Thread[id=30566, name=jetty-launcher-7464-thread-2-EventThread, state=TIMED_WAITING, group=TGRP-TestImpersonationWithHadoopAuth] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037) at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323) at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105) at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41) at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244) at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44) at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61) at
[jira] [Created] (SOLR-11866) Support efficient subset matching in query elevation rules
Bruno Roustant created SOLR-11866: - Summary: Support efficient subset matching in query elevation rules Key: SOLR-11866 URL: https://issues.apache.org/jira/browse/SOLR-11866 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Components: SearchComponents - other Affects Versions: master (8.0) Reporter: Bruno Roustant Leverages the SOLR-11865 refactoring by introducing a SubsetMatchElevationProvider in QueryElevationComponent. This provider calls a new util class TrieSubsetMatcher to efficiently match all query elevation rules whose subset is contained in the current query's list of terms.
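To make the proposal concrete: the matcher must find every elevation rule whose trigger terms are all contained in the query's terms. The sketch below is not the TrieSubsetMatcher from the patch, just a minimal illustration of the trie idea it names: rule terms are stored in canonical (sorted) order, and a traversal that is allowed to skip query terms enumerates all rules whose term set is a subset of the query's.

```python
# Minimal, hypothetical sketch of trie-based subset matching, in the spirit
# of SOLR-11866's TrieSubsetMatcher (not the actual implementation).

class SubsetMatcher:
    """Trie keyed on sorted rule terms; a rule matches when all of its
    terms appear in the query term set."""

    RULES = "__rules__"  # marker key for rules that end at a node

    def __init__(self):
        self.root = {}  # term -> child node (plain dicts as trie nodes)

    def add_rule(self, terms, rule_id):
        node = self.root
        # Sorting gives every rule a canonical path in the trie.
        for term in sorted(set(terms)):
            node = node.setdefault(term, {})
        node.setdefault(self.RULES, []).append(rule_id)

    def match(self, query_terms):
        """Return ids of all rules whose term set is a subset of query_terms."""
        terms = sorted(set(query_terms))
        results = []

        def walk(node, start):
            results.extend(node.get(self.RULES, []))
            # Skipping over query terms (advancing i without descending)
            # is what makes this subset matching rather than exact matching.
            for i in range(start, len(terms)):
                child = node.get(terms[i])
                if child is not None:
                    walk(child, i + 1)

        walk(self.root, 0)
        return results

m = SubsetMatcher()
m.add_rule(["ipod", "video"], "rule1")
m.add_rule(["ipod"], "rule2")
m.add_rule(["mp3", "video"], "rule3")
print(sorted(m.match(["ipod", "video", "case"])))  # -> ['rule1', 'rule2']
```

One traversal over the sorted query terms visits only trie paths the query can satisfy, instead of testing every rule's term set independently.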
[JENKINS] Lucene-Solr-Tests-7.x - Build # 318 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/318/ 9 tests failed. FAILED: org.apache.solr.cloud.TestCloudJSONFacetJoinDomain.testRandom Error Message: Error from server at http://127.0.0.1:48892/solr/org.apache.solr.cloud.TestCloudJSONFacetJoinDomain_collection: {"org.apache.solr.cloud.TestCloudJSONFacetJoinDomain_collection":8} Stack Trace: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:48892/solr/org.apache.solr.cloud.TestCloudJSONFacetJoinDomain_collection: {"org.apache.solr.cloud.TestCloudJSONFacetJoinDomain_collection":8} at __randomizedtesting.SeedInfo.seed([12B4E9F85E054F40:60F8CCF7EF65F933]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942) at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:957) at org.apache.solr.cloud.TestCloudJSONFacetJoinDomain.assertFacetCountsAreCorrect(TestCloudJSONFacetJoinDomain.java:461) at org.apache.solr.cloud.TestCloudJSONFacetJoinDomain.assertFacetCountsAreCorrect(TestCloudJSONFacetJoinDomain.java:465) at org.apache.solr.cloud.TestCloudJSONFacetJoinDomain.assertFacetCountsAreCorrect(TestCloudJSONFacetJoinDomain.java:429) at org.apache.solr.cloud.TestCloudJSONFacetJoinDomain.testRandom(TestCloudJSONFacetJoinDomain.java:370) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_144) - Build # 21288 - Still unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21288/ Java: 32bit/jdk1.8.0_144 -server -XX:+UseConcMarkSweepGC 2 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation Error Message: 2 threads leaked from SUITE scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) Thread[id=21782, name=jetty-launcher-3487-thread-2-EventThread, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037) at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323) at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105) at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41) at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244) at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44) at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61) at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505) 2) Thread[id=21772, name=jetty-launcher-3487-thread-1-EventThread, state=TIMED_WAITING, 
group=TGRP-TestSolrCloudWithSecureImpersonation] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037) at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323) at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105) at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41) at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244) at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44) at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61) at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) Thread[id=21782, name=jetty-launcher-3487-thread-2-EventThread, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323) at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105) at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279) at
[JENKINS] Lucene-Solr-7.2-Windows (32bit/jdk1.8.0_144) - Build # 46 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.2-Windows/46/ Java: 32bit/jdk1.8.0_144 -server -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.cloud.HttpPartitionTest.test Error Message: The partitioned replica did not get marked down expected:<[down]> but was:<[active]> Stack Trace: org.junit.ComparisonFailure: The partitioned replica did not get marked down expected:<[down]> but was:<[active]> at __randomizedtesting.SeedInfo.seed([C48AE675A0F438C6:4CDED9AF0E08553E]:0) at org.junit.Assert.assertEquals(Assert.java:125) at org.apache.solr.cloud.HttpPartitionTest.testMinRf(HttpPartitionTest.java:240) at org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:126) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
Re: Solr block join query not giving results
> So, _root_ value is created by me not internally by solr. Would that create a problem. I think this is the reason the index is corrupted. For the same range query, if I search in parts, all parts work; if the index is corrupt, a particular part should never work On Jan 17, 2018 7:37 PM, "Ishan Chattopadhyaya" wrote: > So, _root_ value is created by me not internally by solr. Would that create a problem. I think this is the reason the index is corrupted. On Wed, Jan 17, 2018 at 5:20 PM, Aashish Agarwal wrote: > No it should not be the case because the query is working for price:[0 TO > 10] and price:[10 TO 20] so the index is fine for price:[0 TO 20] still query > fails. > > I used the csv to import data as described in > https://gist.github.com/mkhludnev/6406734#file-t-shirts-xml > So, _root_ value is created by me not internally by solr. Would that > create a problem. > > Thanks, > Aashish > > On Jan 17, 2018 4:46 PM, "Mikhail Khludnev" wrote: > >> Sounds like corrupted index. >> https://issues.apache.org/jira/browse/SOLR-7606 >> >> On Wed, Jan 17, 2018 at 9:00 AM, Aashish Agarwal >> wrote: >> >>> Hi, >>> >>> I am using block join query to get parent object using filter on child. >>> But when the number of results is large, the query fails with >>> ArrayIndexOutOfBoundsException. e.g in range query price:[0 TO 20] fails but >>> price:[0 TO 10], price:[10 TO 20] works fine. I am using solr 4.6.0. >>> >>> Thanks, >>> Aashish >>> >> >> >> >> -- >> Sincerely yours >> Mikhail Khludnev >> >
Re: Solr block join query not giving results
> So, _root_ value is created by me not internally by solr. Would that create a problem. On Wed, Jan 17, 2018 at 5:20 PM, Aashish Agarwal wrote: > No it should not be the case because the query is working for price:[0 TO > 10] and price:[10 TO 20] so the index is fine for price:[0 TO 20] still query > fails. > > I used the csv to import data as described in > https://gist.github.com/mkhludnev/6406734#file-t-shirts-xml > So, _root_ value is created by me not internally by solr. Would that > create a problem. > > Thanks, > Aashish > > On Jan 17, 2018 4:46 PM, "Mikhail Khludnev" wrote: > >> Sounds like corrupted index. >> https://issues.apache.org/jira/browse/SOLR-7606 >> >> On Wed, Jan 17, 2018 at 9:00 AM, Aashish Agarwal >> wrote: >> >>> Hi, >>> >>> I am using block join query to get parent object using filter on child. >>> But when the number of results is large, the query fails with >>> ArrayIndexOutOfBoundsException. e.g in range query price:[0 TO 20] fails but >>> price:[0 TO 10], price:[10 TO 20] works fine. I am using solr 4.6.0. >>> >>> Thanks, >>> Aashish >>> >> >> >> >> -- >> Sincerely yours >> Mikhail Khludnev >> >
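Context for the thread above: block join relies on parents and children being indexed together as one block, with the internal `_root_` field maintained by Solr itself; setting `_root_` by hand can break those invariants and produce exactly this kind of failure. A minimal sketch of the intended indexing shape, in the style of the t-shirts gist linked in the thread (field names here are illustrative, not from the original data):

```xml
<!-- Hypothetical nested-document block: children are indexed inside their
     parent so Solr writes them contiguously and manages _root_ itself. -->
<add>
  <doc>
    <field name="id">10</field>
    <field name="type_s">parent</field>
    <doc>
      <field name="id">11</field>
      <field name="type_s">child</field>
      <field name="price_f">5.0</field>
    </doc>
    <doc>
      <field name="id">12</field>
      <field name="type_s">child</field>
      <field name="price_f">15.0</field>
    </doc>
  </doc>
</add>
```

With that structure, a parent block-join query such as q={!parent which="type_s:parent"}price_f:[0 TO 20] should return parents whose children match the child filter, without depending on hand-written `_root_` values.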
[JENKINS] Lucene-Solr-Tests-master - Build # 2261 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2261/ 2 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.client.solrj.TestLBHttpSolrClient Error Message: 1 thread leaked from SUITE scope at org.apache.solr.client.solrj.TestLBHttpSolrClient: 1) Thread[id=380, name=qtp73155718-380, state=TIMED_WAITING, group=TGRP-TestLBHttpSolrClient] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708) at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) at java.lang.Thread.run(Thread.java:748) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.client.solrj.TestLBHttpSolrClient: 1) Thread[id=380, name=qtp73155718-380, state=TIMED_WAITING, group=TGRP-TestLBHttpSolrClient] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708) at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) at java.lang.Thread.run(Thread.java:748) at __randomizedtesting.SeedInfo.seed([CBB33F860C1E9E8]:0) 
FAILED: junit.framework.TestSuite.org.apache.solr.client.solrj.TestLBHttpSolrClient Error Message: There are still zombie threads that couldn't be terminated:1) Thread[id=380, name=qtp73155718-380, state=TIMED_WAITING, group=TGRP-TestLBHttpSolrClient] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708) at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) at java.lang.Thread.run(Thread.java:748) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie threads that couldn't be terminated: 1) Thread[id=380, name=qtp73155718-380, state=TIMED_WAITING, group=TGRP-TestLBHttpSolrClient] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708) at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) at java.lang.Thread.run(Thread.java:748) at __randomizedtesting.SeedInfo.seed([CBB33F860C1E9E8]:0) Build Log: [...truncated 14771 lines...] 
[junit4] Suite: org.apache.solr.client.solrj.TestLBHttpSolrClient [junit4] 2> Creating dataDir: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build/solr-solrj/test/J1/temp/solr.client.solrj.TestLBHttpSolrClient_CBB33F860C1E9E8-001/init-core-data-001 [junit4] 2> 48907 WARN (SUITE-TestLBHttpSolrClient-seed#[CBB33F860C1E9E8]-worker) [] o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=7 numCloses=7 [junit4] 2> 48907 INFO (SUITE-TestLBHttpSolrClient-seed#[CBB33F860C1E9E8]-worker) [] o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) w/NUMERIC_DOCVALUES_SYSPROP=false [junit4] 2> 48908 INFO (SUITE-TestLBHttpSolrClient-seed#[CBB33F860C1E9E8]-worker) []
[jira] [Commented] (SOLR-11859) CloneFieldUpdateProcessorFactory should not add {set=} to content when cloned to multivalued field
[ https://issues.apache.org/jira/browse/SOLR-11859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328769#comment-16328769 ] Jaap de Jong commented on SOLR-11859: - Removing the "set" modifier in $document->setField() actually helped. > CloneFieldUpdateProcessorFactory should not add {set=} to content when cloned > to multivalued field > -- > > Key: SOLR-11859 > URL: https://issues.apache.org/jira/browse/SOLR-11859 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: UpdateRequestProcessors >Affects Versions: 7.2 >Reporter: Jaap de Jong >Priority: Minor > > I'm using the CloneFieldUpdateProcessorFactory to copy content from all > string fields _except some predefined fields_ to a multivalued "text_final" > field. This seems to work, however each value is prepended with > "\{set=" and appended with "}". > Expected result > Just clone all the original values into the multivalued field +without > "\{set=}".+ > In my schema this field is defined as: > {{ multiValued="true"/>}} > The fieldType is defined as: > {{ positionIncrementGap="100">}} > {{}} > {{ replacement=' ' />}} > {{}} > {{}} > {{ words="lang/stopwords_nl.txt"}} > {{format="snowball"/>}} > {{ {{dictionary="lang/nederlands/nl_NL.dic"}} > {{affix="lang/nederlands/nl_NL.aff"}} > {{ignoreCase="true"/>}} > {{}} > {{}} > In my updateRequestProcessorChain the processor is defined as: > {{}} > {{}} > {{s_.*}} > {{}} > {{s_description}} > {{s_image_link}} > {{s_link}} > {{}} > {{}} > {{text_final}} > {{}}
[jira] [Resolved] (SOLR-11859) CloneFieldUpdateProcessorFactory should not add {set=} to content when cloned to multivalued field
[ https://issues.apache.org/jira/browse/SOLR-11859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jaap de Jong resolved SOLR-11859. - Resolution: Works for Me > CloneFieldUpdateProcessorFactory should not add {set=} to content when cloned > to multivalued field > -- > > Key: SOLR-11859 > URL: https://issues.apache.org/jira/browse/SOLR-11859 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: UpdateRequestProcessors >Affects Versions: 7.2 >Reporter: Jaap de Jong >Priority: Minor > > I'm using the CloneFieldUpdateProcessorFactory to copy content from all > string fields _except some predefined fields_ to a multivalued "text_final" > field. This seems to work, however each value is prepended with > "\{set=" and appended with "}". > Expected result > Just clone all the original values into the multivalued field +without > "\{set=}".+ > In my schema this field is defined as: > {{ multiValued="true"/>}} > The fieldType is defined as: > {{ positionIncrementGap="100">}} > {{}} > {{ replacement=' ' />}} > {{}} > {{}} > {{ words="lang/stopwords_nl.txt"}} > {{format="snowball"/>}} > {{ {{dictionary="lang/nederlands/nl_NL.dic"}} > {{affix="lang/nederlands/nl_NL.aff"}} > {{ignoreCase="true"/>}} > {{}} > {{}} > In my updateRequestProcessorChain the processor is defined as: > {{}} > {{}} > {{s_.*}} > {{}} > {{s_description}} > {{s_image_link}} > {{s_link}} > {{}} > {{}} > {{text_final}} > {{}}
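The resolution above (the `{set=}` wrappers disappeared once the client stopped sending atomic-update modifiers) can be illustrated with a small sketch. This is not Solr's implementation; it is a hypothetical Python model of why cloning fields from an atomic-update document copies the modifier map along with the value:

```python
# Plain update: field values are the values themselves.
plain_doc = {"id": "1", "s_title": "hello"}

# Atomic update: each value is wrapped in a modifier map such as {"set": ...}.
atomic_doc = {"id": "1", "s_title": {"set": "hello"}}

def naive_clone(doc, sources, dest):
    # Copies the raw stored values into dest. If the document was built as
    # an atomic update, the {"set": ...} wrapper travels with the value,
    # which is exactly the symptom reported in the issue.
    out = dict(doc)
    out[dest] = [out[f] for f in sources if f in out]
    return out

cloned_plain = naive_clone(plain_doc, ["s_title"], "text_final")
cloned_atomic = naive_clone(atomic_doc, ["s_title"], "text_final")
```

Sending plain field values (no `set` modifier) is what makes the cloned multivalued field contain the bare strings.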
[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_144) - Build # 7119 - Still Failing!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7119/ Java: 32bit/jdk1.8.0_144 -client -XX:+UseParallelGC No tests ran. Build Log: [...truncated 11 lines...] FATAL: Could not delete file C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.handler.component.DistributedQueryComponentCustomSortTest_F3775CD4E3D2B44-001\tempDir-001\shard0\collection1\conf java.io.IOException: Could not delete file C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.handler.component.DistributedQueryComponentCustomSortTest_F3775CD4E3D2B44-001\tempDir-001\shard0\collection1\conf at org.eclipse.jgit.util.FileUtils.delete(FileUtils.java:197) at org.eclipse.jgit.util.FileUtils.delete(FileUtils.java:166) at org.eclipse.jgit.util.FileUtils.delete(FileUtils.java:166) at org.eclipse.jgit.util.FileUtils.delete(FileUtils.java:166) at org.eclipse.jgit.util.FileUtils.delete(FileUtils.java:166) at org.eclipse.jgit.util.FileUtils.delete(FileUtils.java:166) at org.eclipse.jgit.util.FileUtils.delete(FileUtils.java:166) at org.eclipse.jgit.util.FileUtils.delete(FileUtils.java:166) at org.eclipse.jgit.util.FileUtils.delete(FileUtils.java:166) at org.eclipse.jgit.util.FileUtils.delete(FileUtils.java:166) at org.eclipse.jgit.api.CleanCommand.cleanPath(CleanCommand.java:176) at org.eclipse.jgit.api.CleanCommand.call(CleanCommand.java:133) Also: hudson.remoting.Channel$CallSiteStackTrace: Remote call to Windows VBOX at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1696) at hudson.remoting.UserResponse.retrieve(UserRequest.java:313) at hudson.remoting.Channel.call(Channel.java:909) at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:281) at com.sun.proxy.$Proxy80.clean(Unknown Source) at org.jenkinsci.plugins.gitclient.RemoteGitImpl.clean(RemoteGitImpl.java:450) at hudson.plugins.git.extensions.impl.CleanBeforeCheckout.decorateFetchCommand(CleanBeforeCheckout.java:30) 
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:858) at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1129) at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1160) at hudson.scm.SCM.checkout(SCM.java:495) at hudson.model.AbstractProject.checkout(AbstractProject.java:1203) at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574) at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86) at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499) at hudson.model.Run.execute(Run.java:1727) at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43) at hudson.model.ResourceController.execute(ResourceController.java:97) at hudson.model.Executor.run(Executor.java:429) Caused: org.eclipse.jgit.api.errors.JGitInternalException: Could not delete file C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.handler.component.DistributedQueryComponentCustomSortTest_F3775CD4E3D2B44-001\tempDir-001\shard0\collection1\conf at org.eclipse.jgit.api.CleanCommand.call(CleanCommand.java:136) at org.jenkinsci.plugins.gitclient.JGitAPIImpl.clean(JGitAPIImpl.java:1290) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at hudson.remoting.RemoteInvocationHandler$RPCRequest.perform(RemoteInvocationHandler.java:922) at hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:896) at hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:853) at hudson.remoting.UserRequest.perform(UserRequest.java:210) at hudson.remoting.UserRequest.perform(UserRequest.java:53) at hudson.remoting.Request$2.run(Request.java:358) at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72) at 
java.util.concurrent.FutureTask.run(Unknown Source) at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at java.lang.Thread.run(Unknown Source) Archiving artifacts [WARNINGS] Skipping publisher since build result is FAILURE Recording test results ERROR: Step ‘Publish JUnit test result report’ failed: No test report files were found. Configuration error? Email was triggered for:
[jira] [Commented] (SOLR-9272) Auto resolve zkHost for bin/solr zk for running Solr
[ https://issues.apache.org/jira/browse/SOLR-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328725#comment-16328725 ] Amrit Sarkar commented on SOLR-9272: [~janhoy], Thank you for the feedback, and yes, not elegant :) Sorry about the debug lines, my bad. I like defaulting to "-p 8983" when both -z and -p are not specified. I will improve and clean up the current patch, thank you. > Auto resolve zkHost for bin/solr zk for running Solr > > > Key: SOLR-9272 > URL: https://issues.apache.org/jira/browse/SOLR-9272 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: scripts and tools >Affects Versions: 6.2 >Reporter: Jan Høydahl >Assignee: Jan Høydahl >Priority: Major > Labels: newdev > Attachments: SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, > SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch > > > Spinoff from SOLR-9194: > We can skip requiring {{-z}} for {{bin/solr zk}} for a Solr that is already > running. We can optionally accept the {{-p}} parameter instead, and with that > use StatusTool to fetch the {{cloud/ZooKeeper}} property from there. It's > easier to remember the Solr port than the ZK string. > Example: > {noformat} > bin/solr start -c -p 9090 > bin/solr zk ls / -p 9090 > {noformat}
[jira] [Commented] (SOLR-7964) suggest.highlight=true does not work when using context filter query
[ https://issues.apache.org/jira/browse/SOLR-7964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328712#comment-16328712 ] Amrit Sarkar commented on SOLR-7964: Implemented the same patch by [~arcadius] on trunk and uploaded it. All tests run successfully, verified via a beast round of 100. > suggest.highlight=true does not work when using context filter query > > > Key: SOLR-7964 > URL: https://issues.apache.org/jira/browse/SOLR-7964 > Project: Solr > Issue Type: Improvement > Components: Suggester >Affects Versions: 5.4 >Reporter: Arcadius Ahouansou >Priority: Minor > Labels: suggester > Attachments: SOLR-7964.patch, SOLR_7964.patch, SOLR_7964.patch > > > When using the new suggester context filtering query param > {{suggest.contextFilterQuery}} introduced in SOLR-7888, the param > {{suggest.highlight=true}} has no effect.
[jira] [Updated] (SOLR-7964) suggest.highlight=true does not work when using context filter query
[ https://issues.apache.org/jira/browse/SOLR-7964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-7964: --- Attachment: SOLR-7964.patch > suggest.highlight=true does not work when using context filter query > > > Key: SOLR-7964 > URL: https://issues.apache.org/jira/browse/SOLR-7964 > Project: Solr > Issue Type: Improvement > Components: Suggester >Affects Versions: 5.4 >Reporter: Arcadius Ahouansou >Priority: Minor > Labels: suggester > Attachments: SOLR-7964.patch, SOLR_7964.patch, SOLR_7964.patch > > > When using the new suggester context filtering query param > {{suggest.contextFilterQuery}} introduced in SOLR-7888, the param > {{suggest.highlight=true}} has no effect.
[jira] [Commented] (SOLR-9272) Auto resolve zkHost for bin/solr zk for running Solr
[ https://issues.apache.org/jira/browse/SOLR-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328709#comment-16328709 ] Jan Høydahl commented on SOLR-9272: --- Nice work. Sorry for the long silence. * You have a few {{System.out.println}} debug prints still in the patch * The HTTP->HTTPS workaround I guess is acceptable, if not the most elegant :) Perhaps the tool can assume port 8983 if neither -z nor -p is given? Then {{bin/solr zk ls /}} would work if Solr is running locally on the default port. Most other operations such as start, stop etc. will attempt to work with port 8983 if nothing else is specified, so it would be nice if we did the same here. > Auto resolve zkHost for bin/solr zk for running Solr > > > Key: SOLR-9272 > URL: https://issues.apache.org/jira/browse/SOLR-9272 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: scripts and tools >Affects Versions: 6.2 >Reporter: Jan Høydahl >Assignee: Jan Høydahl >Priority: Major > Labels: newdev > Attachments: SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, > SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch > > > Spinoff from SOLR-9194: > We can skip requiring {{-z}} for {{bin/solr zk}} for a Solr that is already > running. We can optionally accept the {{-p}} parameter instead, and with that > use StatusTool to fetch the {{cloud/ZooKeeper}} property from there. It's > easier to remember the Solr port than the ZK string. > Example: > {noformat} > bin/solr start -c -p 9090 > bin/solr zk ls / -p 9090 > {noformat}
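The resolution order discussed in SOLR-9272 (explicit -z wins; otherwise ask the running Solr on -p, defaulting to port 8983, for its cloud/ZooKeeper property) can be sketched as follows. This is a Python model with a mocked status lookup, not the actual StatusTool code:

```python
DEFAULT_PORT = 8983

def resolve_zk_host(explicit_zk, port, fetch_status):
    # An explicit -z always wins.
    if explicit_zk:
        return explicit_zk
    # Otherwise query the running Solr on -p (default 8983) and read the
    # cloud/ZooKeeper property it advertises, as the issue proposes.
    status = fetch_status(port or DEFAULT_PORT)
    return status["cloud"]["ZooKeeper"]

# Mocked status response standing in for a live StatusTool call.
fake_status = lambda port: {"cloud": {"ZooKeeper": "localhost:2181"}}
zk_host = resolve_zk_host(None, None, fake_status)
```

With this shape, `bin/solr zk ls /` with no flags would resolve against a locally running Solr on 8983, matching Jan's suggestion above.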
[JENKINS] Lucene-Solr-7.2-Linux (64bit/jdk-9.0.1) - Build # 151 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.2-Linux/151/ Java: 64bit/jdk-9.0.1 -XX:-UseCompressedOops -XX:+UseSerialGC 3 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.core.TestLazyCores Error Message: 1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 1) Thread[id=15161, name=searcherExecutor-4525-thread-1, state=WAITING, group=TGRP-TestLazyCores] at java.base@9.0.1/jdk.internal.misc.Unsafe.park(Native Method) at java.base@9.0.1/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194) at java.base@9.0.1/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2062) at java.base@9.0.1/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435) at java.base@9.0.1/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1092) at java.base@9.0.1/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152) at java.base@9.0.1/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641) at java.base@9.0.1/java.lang.Thread.run(Thread.java:844) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 1) Thread[id=15161, name=searcherExecutor-4525-thread-1, state=WAITING, group=TGRP-TestLazyCores] at java.base@9.0.1/jdk.internal.misc.Unsafe.park(Native Method) at java.base@9.0.1/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194) at java.base@9.0.1/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2062) at java.base@9.0.1/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435) at java.base@9.0.1/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1092) at java.base@9.0.1/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152) at 
java.base@9.0.1/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641) at java.base@9.0.1/java.lang.Thread.run(Thread.java:844) at __randomizedtesting.SeedInfo.seed([9AF14106C9B9B212]:0) FAILED: junit.framework.TestSuite.org.apache.solr.core.TestLazyCores Error Message: There are still zombie threads that couldn't be terminated:1) Thread[id=15161, name=searcherExecutor-4525-thread-1, state=WAITING, group=TGRP-TestLazyCores] at java.base@9.0.1/jdk.internal.misc.Unsafe.park(Native Method) at java.base@9.0.1/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194) at java.base@9.0.1/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2062) at java.base@9.0.1/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435) at java.base@9.0.1/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1092) at java.base@9.0.1/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152) at java.base@9.0.1/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641) at java.base@9.0.1/java.lang.Thread.run(Thread.java:844) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie threads that couldn't be terminated: 1) Thread[id=15161, name=searcherExecutor-4525-thread-1, state=WAITING, group=TGRP-TestLazyCores] at java.base@9.0.1/jdk.internal.misc.Unsafe.park(Native Method) at java.base@9.0.1/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194) at java.base@9.0.1/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2062) at java.base@9.0.1/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435) at java.base@9.0.1/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1092) at java.base@9.0.1/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152) at 
java.base@9.0.1/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641) at java.base@9.0.1/java.lang.Thread.run(Thread.java:844) at __randomizedtesting.SeedInfo.seed([9AF14106C9B9B212]:0) FAILED: org.apache.solr.core.TestLazyCores.testNoCommit Error Message: Exception during query Stack Trace: java.lang.RuntimeException: Exception during query at __randomizedtesting.SeedInfo.seed([9AF14106C9B9B212:4591E0D7029ED1B7]:0) at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:901) at org.apache.solr.core.TestLazyCores.check10(TestLazyCores.java:847) at org.apache.solr.core.TestLazyCores.testNoCommit(TestLazyCores.java:829) at
[jira] [Updated] (SOLR-11865) Refactor QueryElevationComponent to prepare query subset matching
[ https://issues.apache.org/jira/browse/SOLR-11865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bruno Roustant updated SOLR-11865: -- Attachment: SOLR-11865.patch 0001-Refactor-QueryElevationComponent-to-introduce-Elevat.patch > Refactor QueryElevationComponent to prepare query subset matching > - > > Key: SOLR-11865 > URL: https://issues.apache.org/jira/browse/SOLR-11865 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: SearchComponents - other >Affects Versions: master (8.0) >Reporter: Bruno Roustant >Priority: Minor > Labels: QueryComponent > Fix For: master (8.0) > > Attachments: > 0001-Refactor-QueryElevationComponent-to-introduce-Elevat.patch, > SOLR-11865.patch > > > The goal is to prepare a second improvement to support query terms subset > matching for query elevation rules. > Before that, we need to refactor the QueryElevationComponent to make it > extensible. We introduce the ElevationProvider interface, which will be > implemented later in a second patch to support subset matching. The current > full-query match policy becomes a default simple MapElevationProvider. > - Add overridable methods to handle exceptions during the component > initialization. > - Add overridable methods to provide the default values for config properties. > - No functional change beyond refactoring. > - Adapt unit tests.
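The refactoring idea above (an ElevationProvider abstraction, with the current full-query match policy as a simple map-backed default) can be sketched in a few lines. This is a hypothetical Python model of the design, not the Java patch; the names follow the issue description:

```python
class MapElevationProvider:
    # Default provider: elevates document ids only on an exact
    # (case-insensitive) match of the full query string.
    def __init__(self, rules):
        # rules: query string -> list of elevated doc ids
        self._rules = {q.lower(): ids for q, ids in rules.items()}

    def elevation_for(self, query):
        return self._rules.get(query.lower(), [])

provider = MapElevationProvider({"ipod": ["MA147LL/A"]})
elevated = provider.elevation_for("iPod")
```

A later subset-matching provider would implement the same `elevation_for` lookup against query-term subsets instead of the full string, which is why pulling the policy behind an interface first is useful.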
Re: Solr block join query not giving results
No, it should not be the case, because the query works for price:[0 TO 10] and price:[10 TO 20], so the indexes are fine; still, the query for price:[0 TO 20] fails. I used the csv to import data as described in https://gist.github.com/mkhludnev/6406734#file-t-shirts-xml So, the _root_ value is created by me, not internally by Solr. Would that create a problem? Thanks, Aashish On Jan 17, 2018 4:46 PM, "Mikhail Khludnev" wrote: > Sounds like a corrupted index. > https://issues.apache.org/jira/browse/SOLR-7606 > > On Wed, Jan 17, 2018 at 9:00 AM, Aashish Agarwal > wrote: > >> Hi, >> >> I am using a block join query to get parent objects using a filter on the child. >> But when the number of results is large, the query fails with an >> ArrayIndexOutOfBoundsException, e.g. the range query price:[0 TO 20] fails but >> price:[0 TO 10] and price:[10 TO 20] work fine. I am using Solr 4.6.0. >> >> Thanks, >> Aashish >> > > > > -- > Sincerely yours > Mikhail Khludnev >
Re: Solr block join query not giving results
Sounds like a corrupted index. https://issues.apache.org/jira/browse/SOLR-7606 On Wed, Jan 17, 2018 at 9:00 AM, Aashish Agarwal wrote: > Hi, > > I am using a block join query to get parent objects using a filter on the child. > But when the number of results is large, the query fails with an > ArrayIndexOutOfBoundsException, e.g. the range query price:[0 TO 20] fails but > price:[0 TO 10] and price:[10 TO 20] work fine. I am using Solr 4.6.0. > > Thanks, > Aashish > -- Sincerely yours Mikhail Khludnev
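The block-join query discussed in this thread can be sketched as follows. This is a hedged illustration: the `doc_type:parent` marker field is an assumption (Aashish's schema is not shown), but the `{!parent which=...}` shape is standard Solr block-join syntax:

```python
from urllib.parse import urlencode

# Hypothetical field name: doc_type is assumed to mark parent documents.
def block_join_parent_query(child_filter, parent_filter="doc_type:parent"):
    # {!parent which=...} returns parents whose children match child_filter;
    # `which` must match all parent documents and no child documents.
    q = "{!parent which='%s'}%s" % (parent_filter, child_filter)
    return {"q": q, "wt": "json"}

params = block_join_parent_query("price:[0 TO 20]")
query_string = urlencode(params)  # ready to append to /select?
```

Note that block joins require parent/child documents to be indexed together as one block (hence the `_root_` discussion above); a `_root_` value set by the client rather than by Solr's nested-document indexing is exactly the kind of mismatch Mikhail is probing for.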
[jira] [Commented] (SOLR-11712) Streaming throws IndexOutOfBoundsException against an alias when a shard is down
[ https://issues.apache.org/jira/browse/SOLR-11712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328620#comment-16328620 ] Amrit Sarkar commented on SOLR-11712: - [~varunthacker], As per our offline discussion, I tried optimising the tests as much as I could and moved the helper functions into a utils class. Since TestStreamErrorHandling needs more than one collection, the {{configureCluster}} method is overridden. Let me know if I overdid the optimisation. > Streaming throws IndexOutOfBoundsException against an alias when a shard is > down > > > Key: SOLR-11712 > URL: https://issues.apache.org/jira/browse/SOLR-11712 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Varun Thacker >Assignee: Varun Thacker >Priority: Major > Attachments: SOLR-11712-with-fix.patch, SOLR-11712-without-fix.patch, > SOLR-11712.patch, SOLR-11712.patch, SOLR-11712.patch, SOLR-11712.patch > > > I have an alias against multiple collections. 
If any one of the shards the > underlying collection is down then the stream handler throws an > IndexOutOfBoundsException > {code} > {"result-set":{"docs":[{"EXCEPTION":"java.lang.IndexOutOfBoundsException: > Index: 0, Size: 0","EOF":true,"RESPONSE_TIME":11}]}} > {code} > From the Solr logs: > {code} > 2017-12-01 01:42:07.573 ERROR (qtp736709391-29) [c:collection s:shard1 > r:core_node13 x:collection_shard1_replica1] o.a.s.c.s.i.s.ExceptionStream > java.io.IOException: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0 > at > org.apache.solr.client.solrj.io.stream.CloudSolrStream.constructStreams(CloudSolrStream.java:414) > at > org.apache.solr.client.solrj.io.stream.CloudSolrStream.open(CloudSolrStream.java:305) > at > org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:51) > at > org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:535) > at > org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:83) > at > org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:547) > at > org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:193) > at > org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:209) > at > org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:325) > at > org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:120) > at > org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:71) > at > org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65) > at > org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305) > at > 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213) > at > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) > at > org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) > at org.eclipse.jetty.server.Server.handle(Server.java:534) > at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320) > at >
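The failure mode in this issue (an `IndexOutOfBoundsException` surfacing from `CloudSolrStream.constructStreams` when a shard has no live replicas) is the classic empty-candidate-list bug. A hypothetical Python sketch of the defensive pattern, with mocked replica data since the actual SolrJ code is in Java:

```python
import random

def pick_replica_url(replicas):
    # Filter to active replicas first; indexing into an empty candidate
    # list is the kind of failure that surfaces as an
    # IndexOutOfBoundsException. Checking emptiness up front lets the
    # caller raise a clear, actionable error instead.
    active = [r["url"] for r in replicas if r.get("state") == "active"]
    if not active:
        raise IOError("no active replicas for shard; is the collection up?")
    return random.choice(active)

# Mocked replica list: one active, one down.
replicas = [
    {"url": "http://host1:8983/solr/col_shard1_replica1", "state": "active"},
    {"url": "http://host2:8983/solr/col_shard1_replica2", "state": "down"},
]
chosen = pick_replica_url(replicas)
```

Raising an `IOException`-style error with a message naming the shard is what turns the opaque `Index: 0, Size: 0` response above into something a user of the streaming API can act on.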
[jira] [Updated] (SOLR-11712) Streaming throws IndexOutOfBoundsException against an alias when a shard is down
[ https://issues.apache.org/jira/browse/SOLR-11712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Amrit Sarkar updated SOLR-11712:
--------------------------------
    Attachment: SOLR-11712.patch

> Streaming throws IndexOutOfBoundsException against an alias when a shard is down
> --------------------------------------------------------------------------------
>
>                 Key: SOLR-11712
>                 URL: https://issues.apache.org/jira/browse/SOLR-11712
>             Project: Solr
>          Issue Type: Bug
>      Security Level: Public (Default Security Level. Issues are Public)
>            Reporter: Varun Thacker
>            Assignee: Varun Thacker
>            Priority: Major
>         Attachments: SOLR-11712-with-fix.patch, SOLR-11712-without-fix.patch, SOLR-11712.patch, SOLR-11712.patch, SOLR-11712.patch, SOLR-11712.patch
>
> I have an alias against multiple collections. If any one of the shards of the underlying collection is down, then the stream handler throws an IndexOutOfBoundsException:
> {code}
> {"result-set":{"docs":[{"EXCEPTION":"java.lang.IndexOutOfBoundsException: Index: 0, Size: 0","EOF":true,"RESPONSE_TIME":11}]}}
> {code}
> From the Solr logs:
> {code}
> 2017-12-01 01:42:07.573 ERROR (qtp736709391-29) [c:collection s:shard1 r:core_node13 x:collection_shard1_replica1] o.a.s.c.s.i.s.ExceptionStream java.io.IOException: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
>   at org.apache.solr.client.solrj.io.stream.CloudSolrStream.constructStreams(CloudSolrStream.java:414)
>   at org.apache.solr.client.solrj.io.stream.CloudSolrStream.open(CloudSolrStream.java:305)
>   at org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:51)
>   at org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:535)
>   at org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:83)
>   at org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:547)
>   at org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:193)
>   at org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:209)
>   at org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:325)
>   at org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:120)
>   at org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:71)
>   at org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
>   at org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538)
>   at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
>   at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>   at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>   at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>   at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>   at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:534)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>   at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>   at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
>   at
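The trace above points at CloudSolrStream.constructStreams, which presumably indexes into a per-shard replica list that is empty when the shard is down. This is not Solr's actual code, just a minimal standalone illustration of that failure mode and a defensive alternative (class and method names are hypothetical):

```java
import java.util.Collections;
import java.util.List;

public class ReplicaPick {
    // Mirrors the reported failure mode: get(0) on an empty list throws
    // IndexOutOfBoundsException ("Index: 0, Size: 0" on JDK 8 ArrayList;
    // exact message wording varies by JDK).
    static String pickUnchecked(List<String> replicas) {
        return replicas.get(0);
    }

    // Defensive variant: report which shard has no live replicas
    // instead of surfacing a bare index error to the client.
    static String pickChecked(String shard, List<String> replicas) {
        if (replicas.isEmpty()) {
            throw new IllegalStateException("No active replicas for shard " + shard);
        }
        return replicas.get(0);
    }

    public static void main(String[] args) {
        try {
            pickUnchecked(Collections.<String>emptyList());
        } catch (IndexOutOfBoundsException e) {
            System.out.println("unchecked failure: " + e);
        }
        try {
            pickChecked("shard1", Collections.<String>emptyList());
        } catch (IllegalStateException e) {
            System.out.println("checked failure: " + e.getMessage());
        }
    }
}
```

The checked variant is the kind of fix the attached patches plausibly aim for: a descriptive error rather than a raw IndexOutOfBoundsException in the {{result-set}} response.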
[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 122 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/122/

3 tests failed.

FAILED:  org.apache.solr.cloud.RestartWhileUpdatingTest.test

Error Message:
shard1 is not consistent. Got 541 from http://127.0.0.1:36482/wnj/sf/collection1_shard1_replica_n23 (previous client) and got 542 from http://127.0.0.1:41935/wnj/sf/collection1_shard1_replica_n25

Stack Trace:
java.lang.AssertionError: shard1 is not consistent. Got 541 from http://127.0.0.1:36482/wnj/sf/collection1_shard1_replica_n23 (previous client) and got 542 from http://127.0.0.1:41935/wnj/sf/collection1_shard1_replica_n25
	at __randomizedtesting.SeedInfo.seed([B17D75384A106537:39294AE2E4EC08CF]:0)
	at org.junit.Assert.fail(Assert.java:93)
	at org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1330)
	at org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1309)
	at org.apache.solr.cloud.RestartWhileUpdatingTest.test(RestartWhileUpdatingTest.java:155)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at
[jira] [Created] (SOLR-11865) Refactor QueryElevationComponent to prepare query subset matching
Bruno Roustant created SOLR-11865:
-------------------------------------

             Summary: Refactor QueryElevationComponent to prepare query subset matching
                 Key: SOLR-11865
                 URL: https://issues.apache.org/jira/browse/SOLR-11865
             Project: Solr
          Issue Type: Improvement
      Security Level: Public (Default Security Level. Issues are Public)
          Components: SearchComponents - other
    Affects Versions: master (8.0)
            Reporter: Bruno Roustant
             Fix For: master (8.0)

The goal is to prepare a second improvement to support query-term subset matching for query elevation rules. Before that, we need to refactor the QueryElevationComponent to make it extensible. We introduce the ElevationProvider interface, which will be implemented later in a second patch to support subset matching. The current full-query match policy becomes a default simple MapElevationProvider.
- Add overridable methods to handle exceptions during the component initialization.
- Add overridable methods to provide the default values for config properties.
- No functional change beyond refactoring.
- Adapt unit test.
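The refactoring described above centers on an ElevationProvider abstraction whose default implementation keeps today's exact-match behavior. As a rough, hypothetical sketch (all signatures here are illustrative, not the actual patch; only the ElevationProvider and MapElevationProvider names come from the issue):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical shape: maps an incoming query string to the document
// IDs that should be elevated for it.
interface ElevationProvider {
    List<String> elevatedIds(String query);
}

// Full-query match policy: only an exact query-string match elevates
// documents, mirroring the default behavior the issue says becomes
// a simple MapElevationProvider. A later subset-matching provider
// would plug in behind the same interface.
class MapElevationProvider implements ElevationProvider {
    private final Map<String, List<String>> elevations;

    MapElevationProvider(Map<String, List<String>> elevations) {
        this.elevations = elevations;
    }

    @Override
    public List<String> elevatedIds(String query) {
        List<String> ids = elevations.get(query);
        return ids != null ? ids : Collections.<String>emptyList();
    }
}

public class ElevationDemo {
    public static void main(String[] args) {
        Map<String, List<String>> rules = new HashMap<>();
        rules.put("ipod", Arrays.asList("MA147LL/A"));
        ElevationProvider provider = new MapElevationProvider(rules);
        System.out.println(provider.elevatedIds("ipod"));      // [MA147LL/A]
        System.out.println(provider.elevatedIds("ipod nano")); // [] -- exact match only
    }
}
```

The design point is that the component's matching policy becomes a pluggable strategy: the component asks the provider for elevations, and query-subset matching can be added in the follow-up patch without touching the component itself.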
[jira] [Updated] (SOLR-11795) Add Solr metrics exporter for Prometheus
[ https://issues.apache.org/jira/browse/SOLR-11795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Minoru Osuka updated SOLR-11795:
--------------------------------
    Attachment: SOLR-11795-4.patch

> Add Solr metrics exporter for Prometheus
> ----------------------------------------
>
>                 Key: SOLR-11795
>                 URL: https://issues.apache.org/jira/browse/SOLR-11795
>             Project: Solr
>          Issue Type: Improvement
>      Security Level: Public (Default Security Level. Issues are Public)
>          Components: metrics
>    Affects Versions: 7.2
>            Reporter: Minoru Osuka
>            Assignee: Koji Sekiguchi
>            Priority: Minor
>         Attachments: SOLR-11795-2.patch, SOLR-11795-3.patch, SOLR-11795-4.patch, SOLR-11795.patch, solr-dashboard.png, solr-exporter-diagram.png
>
> I'd like to monitor Solr using Prometheus and Grafana. I've already created a Solr metrics exporter for Prometheus, and I'd like to contribute it to the contrib directory if you don't mind.
> !solr-exporter-diagram.png|thumbnail!
> !solr-dashboard.png|thumbnail!
[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_144) - Build # 21286 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21286/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseSerialGC

1 tests failed.

FAILED:  junit.framework.TestSuite.org.apache.solr.security.hadoop.TestImpersonationWithHadoopAuth

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.security.hadoop.TestImpersonationWithHadoopAuth:
  1) Thread[id=25706, name=jetty-launcher-5576-thread-2-EventThread, state=TIMED_WAITING, group=TGRP-TestImpersonationWithHadoopAuth]
	at sun.misc.Unsafe.park(Native Method)
	at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
	at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
	at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
	at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
	at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
	at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
	at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
	at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
	at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
	at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
	at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
	at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)
	at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.security.hadoop.TestImpersonationWithHadoopAuth:
  1) Thread[id=25706, name=jetty-launcher-5576-thread-2-EventThread, state=TIMED_WAITING, group=TGRP-TestImpersonationWithHadoopAuth]
	at sun.misc.Unsafe.park(Native Method)
	at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
	at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
	at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
	at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
	at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
	at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
	at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
	at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
	at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
	at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
	at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
	at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)
	at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)
	at __randomizedtesting.SeedInfo.seed([AF8674FC178103B6]:0)

Build Log:
[...truncated 14168 lines...]
   [junit4] Suite: org.apache.solr.security.hadoop.TestImpersonationWithHadoopAuth
   [junit4]   2> Creating dataDir: /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.security.hadoop.TestImpersonationWithHadoopAuth_AF8674FC178103B6-001/init-core-data-001
   [junit4]   2> 2951075 INFO  (SUITE-TestImpersonationWithHadoopAuth-seed#[AF8674FC178103B6]-worker) [] o.a.s.SolrTestCaseJ4 Using TrieFields (NUMERIC_POINTS_SYSPROP=false) w/NUMERIC_DOCVALUES_SYSPROP=false
   [junit4]   2> 2951077 INFO  (SUITE-TestImpersonationWithHadoopAuth-seed#[AF8674FC178103B6]-worker) [] o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (true) via: @org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN)
   [junit4]   2> 2951077 INFO  (SUITE-TestImpersonationWithHadoopAuth-seed#[AF8674FC178103B6]-worker) [] o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: test.solr.allowed.securerandom=null &
[jira] [Updated] (SOLR-11592) add another language detector using OpenNLP
[ https://issues.apache.org/jira/browse/SOLR-11592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Koji Sekiguchi updated SOLR-11592:
----------------------------------
    Affects Version/s:     (was: 7.1)
                       7.2

> add another language detector using OpenNLP
> -------------------------------------------
>
>                 Key: SOLR-11592
>                 URL: https://issues.apache.org/jira/browse/SOLR-11592
>             Project: Solr
>          Issue Type: New Feature
>      Security Level: Public (Default Security Level. Issues are Public)
>          Components: contrib - LangId
>    Affects Versions: 7.2
>            Reporter: Koji Sekiguchi
>            Assignee: Steve Rowe
>            Priority: Minor
>         Attachments: SOLR-11592.patch, SOLR-11592.patch
>
> We already have two language detectors, lang-detect and Tika's lang detect. This is a ticket that gives users a third option using OpenNLP. :)
[jira] [Assigned] (SOLR-11592) add another language detector using OpenNLP
[ https://issues.apache.org/jira/browse/SOLR-11592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Koji Sekiguchi reassigned SOLR-11592:
-------------------------------------
    Assignee: Steve Rowe

> add another language detector using OpenNLP
> -------------------------------------------
>
>                 Key: SOLR-11592
>                 URL: https://issues.apache.org/jira/browse/SOLR-11592
>             Project: Solr
>          Issue Type: New Feature
>      Security Level: Public (Default Security Level. Issues are Public)
>          Components: contrib - LangId
>    Affects Versions: 7.1
>            Reporter: Koji Sekiguchi
>            Assignee: Steve Rowe
>            Priority: Minor
>         Attachments: SOLR-11592.patch, SOLR-11592.patch
>
> We already have two language detectors, lang-detect and Tika's lang detect. This is a ticket that gives users a third option using OpenNLP. :)
[jira] [Commented] (SOLR-11592) add another language detector using OpenNLP
[ https://issues.apache.org/jira/browse/SOLR-11592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328494#comment-16328494 ]

Koji Sekiguchi commented on SOLR-11592:
---------------------------------------

Looks good to me. :)

> add another language detector using OpenNLP
> -------------------------------------------
>
>                 Key: SOLR-11592
>                 URL: https://issues.apache.org/jira/browse/SOLR-11592
>             Project: Solr
>          Issue Type: New Feature
>      Security Level: Public (Default Security Level. Issues are Public)
>          Components: contrib - LangId
>    Affects Versions: 7.1
>            Reporter: Koji Sekiguchi
>            Priority: Minor
>         Attachments: SOLR-11592.patch, SOLR-11592.patch
>
> We already have two language detectors, lang-detect and Tika's lang detect. This is a ticket that gives users a third option using OpenNLP. :)
[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 120 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/120/

No tests ran.

Build Log:
[...truncated 28344 lines...]
prepare-release-no-sign:
    [mkdir] Created dir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist
     [copy] Copying 491 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/lucene
     [copy] Copying 215 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker]
   [smoker] Load release URL "file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/"...
   [smoker]
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker]     0.2 MB in 0.03 sec (8.9 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-7.3.0-src.tgz...
   [smoker]     31.7 MB in 0.99 sec (32.0 MB/sec)
   [smoker]     verify md5/sha1 digests
   [smoker]   download lucene-7.3.0.tgz...
   [smoker]     73.1 MB in 2.05 sec (35.6 MB/sec)
   [smoker]     verify md5/sha1 digests
   [smoker]   download lucene-7.3.0.zip...
   [smoker]     83.6 MB in 2.62 sec (31.9 MB/sec)
   [smoker]     verify md5/sha1 digests
   [smoker]   unpack lucene-7.3.0.tgz...
   [smoker]     verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker]     test demo with 1.8...
   [smoker]       got 6284 hits for query "lucene"
   [smoker]     checkindex with 1.8...
   [smoker]     check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.3.0.zip...
   [smoker]     verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker]     test demo with 1.8...
   [smoker]       got 6284 hits for query "lucene"
   [smoker]     checkindex with 1.8...
   [smoker]     check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.3.0-src.tgz...
   [smoker]     make sure no JARs/WARs in src dist...
   [smoker]     run "ant validate"
   [smoker]     run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker]     test demo with 1.8...
   [smoker]       got 215 hits for query "lucene"
   [smoker]     checkindex with 1.8...
   [smoker]     generate javadocs w/ Java 8...
   [smoker]
   [smoker]   Crawl/parse...
   [smoker]
   [smoker]   Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker]     find all past Lucene releases...
   [smoker]     run TestBackwardsCompatibility..
   [smoker]     success!
   [smoker]
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker]     0.2 MB in 0.05 sec (4.7 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-7.3.0-src.tgz...
   [smoker]     54.0 MB in 2.16 sec (25.0 MB/sec)
   [smoker]     verify md5/sha1 digests
   [smoker]   download solr-7.3.0.tgz...
   [smoker]     150.4 MB in 4.96 sec (30.3 MB/sec)
   [smoker]     verify md5/sha1 digests
   [smoker]   download solr-7.3.0.zip...
   [smoker]     151.4 MB in 5.91 sec (25.6 MB/sec)
   [smoker]     verify md5/sha1 digests
   [smoker]   unpack solr-7.3.0.tgz...
   [smoker]     verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker]     unpack lucene-7.3.0.tgz...
   [smoker] **WARNING**: skipping check of /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.3.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar: it has javax.* classes
   [smoker] **WARNING**: skipping check of /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.3.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar: it has javax.* classes
   [smoker]     copying unpacked distribution for Java 8 ...
   [smoker]     test solr example w/ Java 8...
   [smoker]       start Solr instance (log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.3.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker] Running techproducts example on port 8983 from /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.3.0-java8
   [smoker] *** [WARN] *** Your open file limit is currently 6.
   [smoker]  It should be set to 65000 to avoid operational disruption.
   [smoker]  If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in your profile or solr.in.sh
   [smoker] *** [WARN] *** Your Max Processes Limit is currently 10240.
   [smoker]  It should be set to 65000 to avoid operational disruption.
   [smoker]  If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in your profile or solr.in.sh
   [smoker] Creating Solr home directory
[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1630 - Still unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1630/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.

FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.api.collections.ShardSplitTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.cloud.api.collections.ShardSplitTest:
  1) Thread[id=15545, name=qtp434204556-15545, state=TIMED_WAITING, group=TGRP-ShardSplitTest]
	at sun.misc.Unsafe.park(Native Method)
	at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
	at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
	at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
	at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
	at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.cloud.api.collections.ShardSplitTest:
  1) Thread[id=15545, name=qtp434204556-15545, state=TIMED_WAITING, group=TGRP-ShardSplitTest]
	at sun.misc.Unsafe.park(Native Method)
	at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
	at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
	at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
	at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
	at java.lang.Thread.run(Thread.java:748)
	at __randomizedtesting.SeedInfo.seed([BC5DCE0B2476A3F7]:0)

FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.api.collections.ShardSplitTest

Error Message:
There are still zombie threads that couldn't be terminated:
  1) Thread[id=15545, name=qtp434204556-15545, state=TIMED_WAITING, group=TGRP-ShardSplitTest]
	at sun.misc.Unsafe.park(Native Method)
	at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
	at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
	at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
	at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
	at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie threads that couldn't be terminated:
  1) Thread[id=15545, name=qtp434204556-15545, state=TIMED_WAITING, group=TGRP-ShardSplitTest]
	at sun.misc.Unsafe.park(Native Method)
	at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
	at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
	at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
	at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
	at java.lang.Thread.run(Thread.java:748)
	at __randomizedtesting.SeedInfo.seed([BC5DCE0B2476A3F7]:0)

FAILED:  org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigReplication

Error Message:
Index: 0, Size: 0

Stack Trace:
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
	at __randomizedtesting.SeedInfo.seed([BC5DCE0B2476A3F7:A815955E07711EE9]:0)
	at java.util.ArrayList.rangeCheck(ArrayList.java:657)
	at java.util.ArrayList.get(ArrayList.java:433)
	at org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigReplication(TestReplicationHandler.java:561)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at