[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4742 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4742/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.

FAILED: org.apache.solr.search.TestStressRecovery.testStressRecovery

Error Message:
Captured an uncaught exception in thread: Thread[id=34250, name=READER2, state=RUNNABLE, group=TGRP-TestStressRecovery]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=34250, name=READER2, state=RUNNABLE, group=TGRP-TestStressRecovery]
	at __randomizedtesting.SeedInfo.seed([5C167C15AD34DCCB:E62C154832DC63C5]:0)
Caused by: java.lang.RuntimeException: java.lang.NullPointerException
	at __randomizedtesting.SeedInfo.seed([5C167C15AD34DCCB]:0)
	at org.apache.solr.search.TestStressRecovery$2.run(TestStressRecovery.java:333)
Caused by: java.lang.NullPointerException
	at org.apache.solr.update.TransactionLog$FSReverseReader.<init>(TransactionLog.java:788)
	at org.apache.solr.update.TransactionLog.getReverseReader(TransactionLog.java:626)
	at org.apache.solr.update.UpdateLog$RecentUpdates.update(UpdateLog.java:1473)
	at org.apache.solr.update.UpdateLog$RecentUpdates.<init>(UpdateLog.java:1409)
	at org.apache.solr.update.UpdateLog.getRecentUpdates(UpdateLog.java:1587)
	at org.apache.solr.search.TestRTGBase.getLatestVersions(TestRTGBase.java:103)
	at org.apache.solr.search.TestStressRecovery$2.run(TestStressRecovery.java:327)

Build Log:
[...truncated 14828 lines...]
[junit4] Suite: org.apache.solr.search.TestStressRecovery [junit4] 2> Creating dataDir: /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/build/solr-core/test/J1/temp/solr.search.TestStressRecovery_5C167C15AD34DCCB-001/init-core-data-001 [junit4] 2> 3120427 INFO (SUITE-TestStressRecovery-seed#[5C167C15AD34DCCB]-worker) [] o.a.s.c.SolrResourceLoader [null] Added 2 libs to classloader, from paths: [/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/test-files/solr/collection1/lib, /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/test-files/solr/collection1/lib/classes] [junit4] 2> 3120467 INFO (SUITE-TestStressRecovery-seed#[5C167C15AD34DCCB]-worker) [] o.a.s.c.SolrConfig Using Lucene MatchVersion: 8.0.0 [junit4] 2> 3120476 INFO (SUITE-TestStressRecovery-seed#[5C167C15AD34DCCB]-worker) [] o.a.s.s.IndexSchema [null] Schema name=test [junit4] 2> 3120522 INFO (SUITE-TestStressRecovery-seed#[5C167C15AD34DCCB]-worker) [] o.a.s.s.IndexSchema Loaded schema test/1.6 with uniqueid field id [junit4] 2> 3120662 INFO (SUITE-TestStressRecovery-seed#[5C167C15AD34DCCB]-worker) [] o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 2147483647 transient cores [junit4] 2> 3120662 INFO (SUITE-TestStressRecovery-seed#[5C167C15AD34DCCB]-worker) [] o.a.s.h.a.MetricsHistoryHandler No .system collection, keeping metrics history in memory. 
[junit4] 2> 3120681 INFO (SUITE-TestStressRecovery-seed#[5C167C15AD34DCCB]-worker) [] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.node' (registry 'solr.node') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@586c812a [junit4] 2> 3120686 INFO (SUITE-TestStressRecovery-seed#[5C167C15AD34DCCB]-worker) [] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jvm' (registry 'solr.jvm') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@586c812a [junit4] 2> 3120686 INFO (SUITE-TestStressRecovery-seed#[5C167C15AD34DCCB]-worker) [] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jetty' (registry 'solr.jetty') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@586c812a [junit4] 2> 3120688 INFO (coreLoadExecutor-15556-thread-1) [ x:collection1] o.a.s.c.SolrResourceLoader [null] Added 2 libs to classloader, from paths: [/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/test-files/solr/collection1/lib, /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/test-files/solr/collection1/lib/classes] [junit4] 2> 3120734 INFO (coreLoadExecutor-15556-thread-1) [ x:collection1] o.a.s.c.SolrConfig Using Lucene MatchVersion: 8.0.0 [junit4] 2> 3120741 INFO (coreLoadExecutor-15556-thread-1) [ x:collection1] o.a.s.s.IndexSchema [collection1] Schema name=test [junit4] 2> 3120789 INFO (coreLoadExecutor-15556-thread-1) [ x:collection1] o.a.s.s.IndexSchema Loaded schema test/1.6 with uniqueid field id [junit4] 2> 3120795 INFO (coreLoadExecutor-15556-thread-1) [ x:collection1] o.a.s.c.CoreContainer Creating SolrCore 'collection1' using configuration from instancedir /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/test-files/solr/collection1, trusted=true [junit4] 2> 3120795 INFO (coreLoadExecutor-15556-thread-1) [ x:collection1] o.a.s.m.r.SolrJmxReporter JMX monitoring for
[jira] [Commented] (SOLR-12357) TRA: Pre-emptively create next collection
[ https://issues.apache.org/jira/browse/SOLR-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16548763#comment-16548763 ]

Gus Heck commented on SOLR-12357:
---------------------------------

Ok, well, the GitHub bot was just a minute or two slower than I expected; sorry for the pull request spam.

> TRA: Pre-emptively create next collection
> -----------------------------------------
>
>                 Key: SOLR-12357
>                 URL: https://issues.apache.org/jira/browse/SOLR-12357
>             Project: Solr
>          Issue Type: Sub-task
>   Security Level: Public (Default Security Level. Issues are Public)
>       Components: SolrCloud
>            Reporter: David Smiley
>            Priority: Major
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> When adding data to a Time Routed Alias (TRA), we sometimes need to create new collections. Today we only do this synchronously, on-demand when a document is coming in. But this can add delays, as inbound documents are held up waiting for a collection to be created. And there may be a problem, such as a lack of resources (e.g. ample SolrCloud nodes with space) as defined by the policy framework. Such problems could be rectified sooner rather than later, assuming there is log alerting in place (definitely out of scope here).
> Pre-emptive TRA collection creation needs a time-window configuration parameter, perhaps named something like "preemptiveCreateWindowMs". If a document's timestamp is within this time window _from the end time of the head/lead collection_, then the collection can be created pre-emptively. If no data is being sent to the TRA, no collections will be auto-created, nor will it happen if older data is being added. It may be convenient to effectively limit this time setting to the _smaller_ of this value and the TRA interval window, which I think is a fine limitation.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

- To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
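The windowing rule proposed above can be sketched as a simple predicate. This is a hypothetical illustration, not the actual Solr implementation; the function and parameter names (e.g. `preemptive_create_window_ms`, mirroring the proposed `preemptiveCreateWindowMs`) are made up for the sketch:

```python
def should_preemptively_create(doc_timestamp_ms: int,
                               head_collection_end_ms: int,
                               preemptive_create_window_ms: int,
                               tra_interval_ms: int) -> bool:
    """Decide whether the next TRA collection should be created ahead of time.

    A new collection is created pre-emptively only when the incoming
    document's timestamp falls within the configured window, measured
    back from the end time of the head (lead) collection. The window is
    capped at the TRA interval, per the limitation discussed above.
    Documents past the head collection's end time still require the
    existing synchronous creation path.
    """
    window = min(preemptive_create_window_ms, tra_interval_ms)
    # Older data (before the window) never triggers creation, and an
    # idle alias (no documents arriving) never auto-creates anything.
    return doc_timestamp_ms >= head_collection_end_ms - window

# Head collection covers up to t=10_000_000 ms; 60s window, 1h interval.
print(should_preemptively_create(9_970_000, 10_000_000, 60_000, 3_600_000))  # within window
print(should_preemptively_create(9_000_000, 10_000_000, 60_000, 3_600_000))  # old data
```

Note the `min(...)` cap implements the "smaller of this value and the TRA interval window" limitation in the last sentence of the issue description.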
[JENKINS] Lucene-Solr-repro - Build # 992 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-repro/992/ [...truncated 29 lines...] [repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/103/consoleText [repro] Revision: df08f56420738aa38b8db011ef956b3b488b86c5 [repro] Repro line: ant test -Dtestcase=ShardSplitTest -Dtests.method=test -Dtests.seed=4112D9BCB6B83DAC -Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=pt -Dtests.timezone=Asia/Macao -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [repro] Repro line: ant test -Dtestcase=ShardSplitTest -Dtests.method=testSplitWithChaosMonkey -Dtests.seed=4112D9BCB6B83DAC -Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=pt -Dtests.timezone=Asia/Macao -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [repro] Repro line: ant test -Dtestcase=ReplicationFactorTest -Dtests.method=test -Dtests.seed=4112D9BCB6B83DAC -Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=be-BY -Dtests.timezone=Asia/Dhaka -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [repro] Repro line: ant test -Dtestcase=ReplicationFactorTest -Dtests.seed=4112D9BCB6B83DAC -Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=be-BY -Dtests.timezone=Asia/Dhaka -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [repro] git rev-parse --abbrev-ref HEAD [repro] git rev-parse HEAD [repro] Initial local git branch/revision: d443ed088d2d6f6d6fd0c8965e27bde660ad440f [repro] git fetch [repro] git checkout df08f56420738aa38b8db011ef956b3b488b86c5 [...truncated 2 lines...] [repro] git merge --ff-only [...truncated 1 lines...] [repro] ant clean [...truncated 6 lines...] [repro] Test suites by module: [repro]solr/core [repro] ReplicationFactorTest [repro] ShardSplitTest [repro] ant compile-test [...truncated 3319 lines...] 
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 -Dtests.class="*.ReplicationFactorTest|*.ShardSplitTest" -Dtests.showOutput=onerror -Dtests.seed=4112D9BCB6B83DAC -Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=be-BY -Dtests.timezone=Asia/Dhaka -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [...truncated 117074 lines...] [repro] Setting last failure code to 256 [repro] Failures: [repro] 0/5 failed: org.apache.solr.cloud.ReplicationFactorTest [repro] 5/5 failed: org.apache.solr.cloud.api.collections.ShardSplitTest [repro] Re-testing 100% failures at the tip of branch_7x [repro] git fetch [repro] git checkout branch_7x [...truncated 4 lines...] [repro] git merge --ff-only [...truncated 1 lines...] [repro] ant clean [...truncated 8 lines...] [repro] Test suites by module: [repro]solr/core [repro] ShardSplitTest [repro] ant compile-test [...truncated 3318 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.ShardSplitTest" -Dtests.showOutput=onerror -Dtests.seed=4112D9BCB6B83DAC -Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=pt -Dtests.timezone=Asia/Macao -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [...truncated 18396 lines...] [repro] Setting last failure code to 256 [repro] Failures at the tip of branch_7x: [repro] 1/5 failed: org.apache.solr.cloud.api.collections.ShardSplitTest [repro] git checkout d443ed088d2d6f6d6fd0c8965e27bde660ad440f [...truncated 8 lines...] [repro] Exiting with code 256 [...truncated 5 lines...] - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] lucene-solr issue #423: Solr 12357
Github user nsoft commented on the issue: https://github.com/apache/lucene-solr/pull/423 bleh, can't seem to get rid of the commit with the wrong message. Maybe this should get converted to a simple patch file... --- - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] lucene-solr pull request #423: Solr 12357
GitHub user nsoft opened a pull request: https://github.com/apache/lucene-solr/pull/423 Solr 12357 Premptive Creation of collections in Time Routed Aliases You can merge this pull request into a Git repository by running: $ git pull https://github.com/nsoft/lucene-solr SOLR-12357 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/lucene-solr/pull/423.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #423 commit cb8845336c9cfef6d587d6e605a02a138c124384 Author: Gus Heck Date: 2018-07-19T03:29:14Z SOLR-12521 commit 687b86db748e7c1efcb0c3de289cc6a560090a35 Author: Gus Heck Date: 2018-07-19T03:29:14Z SOLR-12357 Premptively create collections in time routed aliases commit 4f6ffacb3db9685789746b22932fe62e5620d1ad Author: Gus Heck Date: 2018-07-19T03:40:16Z Merge branch 'SOLR-12357' of github.com:nsoft/lucene-solr into SOLR-12357 commit dcdc7c79e7b6390a0bd41a47a2d0d6e06c6e58b2 Author: Gus Heck Date: 2018-07-19T03:29:14Z SOLR-12357 Premptively create collections in time routed aliases commit 9f326ee312fcf3ad77c87503183abc19a724dd0f Author: Gus Heck Date: 2018-07-19T03:51:30Z Merge branch 'SOLR-12357' of github.com:nsoft/lucene-solr into SOLR-12357 --- - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-12563) Unable to delete failed/completed async request statuses
Jerry Bao created SOLR-12563:
--------------------------------

             Summary: Unable to delete failed/completed async request statuses
                 Key: SOLR-12563
                 URL: https://issues.apache.org/jira/browse/SOLR-12563
             Project: Solr
          Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
          Components: SolrCloud
    Affects Versions: 7.3.1
            Reporter: Jerry Bao

/admin/collections?action=DELETESTATUS&flush=true

{code}
{ "responseHeader": { "status": 500, "QTime": 5 }, "error": { "msg": "KeeperErrorCode = Directory not empty for /overseer/collection-map-completed/mn-node_lost_trigger", "trace": "org.apache.zookeeper.KeeperException$NotEmptyException: KeeperErrorCode = Directory not empty for /overseer/collection-map-completed/mn-node_lost_trigger\n\tat org.apache.zookeeper.KeeperException.create(KeeperException.java:128)\n\tat org.apache.zookeeper.KeeperException.create(KeeperException.java:54)\n\tat org.apache.zookeeper.ZooKeeper.delete(ZooKeeper.java:876)\n\tat org.apache.solr.common.cloud.SolrZkClient.lambda$delete$1(SolrZkClient.java:244)\n\tat org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)\n\tat org.apache.solr.common.cloud.SolrZkClient.delete(SolrZkClient.java:243)\n\tat org.apache.solr.cloud.DistributedMap.remove(DistributedMap.java:98)\n\tat org.apache.solr.handler.admin.CollectionsHandler$CollectionOperation$1.execute(CollectionsHandler.java:753)\n\tat org.apache.solr.handler.admin.CollectionsHandler$CollectionOperation.execute(CollectionsHandler.java:1114)\n\tat org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:242)\n\tat org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:230)\n\tat org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:195)\n\tat org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:736)\n\tat org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:717)\n\tat
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:498)\n\tat org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:384)\n\tat org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:330)\n\tat org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1629)\n\tat org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)\n\tat org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:190)\n\tat org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)\n\tat org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)\n\tat org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)\n\tat org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)\n\tat org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)\n\tat org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)\n\tat org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:166)\n\tat org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)\n\tat org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)\n\tat org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)\n\tat org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)\n\tat org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat 
org.eclipse.jetty.server.Server.handle(Server.java:530)\n\tat org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:347)\n\tat org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:256)\n\tat org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:279)\n\tat org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)\n\tat org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:124)\n\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:247)\n\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:140)\n\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)\n\tat
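The `NotEmptyException` above is ZooKeeper's standard refusal to delete a znode that still has children: `DistributedMap.remove` deletes only the map entry's node, so if children were created underneath it, the delete fails. The usual client-side remedy is a depth-first delete. The following is a minimal sketch over an in-memory path tree (a stand-in, not the real ZooKeeper or SolrZkClient API), illustrating only the ordering requirement:

```python
def delete_recursively(tree: dict, path: str) -> None:
    """Depth-first delete: children must go before their parent,
    otherwise ZooKeeper raises KeeperErrorCode = Directory not empty.

    `tree` maps each znode path to a list of child node names; this is
    an in-memory stand-in used purely to demonstrate the traversal.
    """
    # Copy the child list: recursion mutates tree[path] as children vanish.
    for child in list(tree.get(path, [])):
        delete_recursively(tree, f"{path}/{child}")
    tree.pop(path, None)
    parent, _, name = path.rpartition("/")
    if parent in tree and name in tree[parent]:
        tree[parent].remove(name)

# Paths modeled on the failing znode from the bug report.
tree = {
    "/overseer/collection-map-completed": ["mn-node_lost_trigger"],
    "/overseer/collection-map-completed/mn-node_lost_trigger": ["mvn-0001"],
    "/overseer/collection-map-completed/mn-node_lost_trigger/mvn-0001": [],
}
delete_recursively(tree, "/overseer/collection-map-completed/mn-node_lost_trigger")
print(tree)  # only the parent map node remains, now childless
```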
[GitHub] lucene-solr issue #422: SOLR-12357 Premptive creation of collections in Time...
Github user nsoft commented on the issue:

    https://github.com/apache/lucene-solr/pull/422

    Bleh, this didn't connect up because the wrong issue number got copied into the title. Closing; will try again.

---

- To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] lucene-solr pull request #422: SOLR-12357 Premptive creation of collections ...
Github user nsoft closed the pull request at: https://github.com/apache/lucene-solr/pull/422 --- - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-11-ea+22) - Build # 2347 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2347/ Java: 64bit/jdk-11-ea+22 -XX:+UseCompressedOops -XX:+UseParallelGC 83 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.client.solrj.io.stream.StreamDecoratorTest Error Message: 20 threads leaked from SUITE scope at org.apache.solr.client.solrj.io.stream.StreamDecoratorTest: 1) Thread[id=2454, name=Connection evictor, state=TIMED_WAITING, group=TGRP-StreamDecoratorTest] at java.base@11-ea/java.lang.Thread.sleep(Native Method) at app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66) at java.base@11-ea/java.lang.Thread.run(Thread.java:834)2) Thread[id=1337, name=TEST-StreamDecoratorTest.testParallelExecutorStream-seed#[8965A29E059A07FA]-SendThread(127.0.0.1:35235), state=TIMED_WAITING, group=TGRP-StreamDecoratorTest] at java.base@11-ea/java.lang.Thread.sleep(Native Method) at app//org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:105) at app//org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1000) at app//org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1063)3) Thread[id=3567, name=Connection evictor, state=TIMED_WAITING, group=TGRP-StreamDecoratorTest] at java.base@11-ea/java.lang.Thread.sleep(Native Method) at app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66) at java.base@11-ea/java.lang.Thread.run(Thread.java:834)4) Thread[id=3568, name=Connection evictor, state=TIMED_WAITING, group=TGRP-StreamDecoratorTest] at java.base@11-ea/java.lang.Thread.sleep(Native Method) at app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66) at java.base@11-ea/java.lang.Thread.run(Thread.java:834)5) Thread[id=2463, name=Connection evictor, state=TIMED_WAITING, group=TGRP-StreamDecoratorTest] at java.base@11-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66) at java.base@11-ea/java.lang.Thread.run(Thread.java:834)6) Thread[id=3564, name=zkConnectionManagerCallback-1835-thread-1, state=WAITING, group=TGRP-StreamDecoratorTest] at java.base@11-ea/jdk.internal.misc.Unsafe.park(Native Method) at java.base@11-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194) at java.base@11-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081) at java.base@11-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433) at java.base@11-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054) at java.base@11-ea/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114) at java.base@11-ea/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base@11-ea/java.lang.Thread.run(Thread.java:834)7) Thread[id=2449, name=TEST-StreamDecoratorTest.testExecutorStream-seed#[8965A29E059A07FA]-SendThread(127.0.0.1:35235), state=TIMED_WAITING, group=TGRP-StreamDecoratorTest] at java.base@11-ea/java.lang.Thread.sleep(Native Method) at app//org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:105) at app//org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1000) at app//org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1063)8) Thread[id=1342, name=Connection evictor, state=TIMED_WAITING, group=TGRP-StreamDecoratorTest] at java.base@11-ea/java.lang.Thread.sleep(Native Method) at app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66) at java.base@11-ea/java.lang.Thread.run(Thread.java:834)9) Thread[id=1343, name=Connection evictor, state=TIMED_WAITING, group=TGRP-StreamDecoratorTest] at java.base@11-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66) at java.base@11-ea/java.lang.Thread.run(Thread.java:834) 10) Thread[id=2451, name=zkConnectionManagerCallback-1270-thread-1, state=WAITING, group=TGRP-StreamDecoratorTest] at java.base@11-ea/jdk.internal.misc.Unsafe.park(Native Method) at java.base@11-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194) at java.base@11-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081) at java.base@11-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433) at
[GitHub] lucene-solr issue #422: SOLR-12521 Premptive creation of collections in Time...
Github user nsoft commented on the issue: https://github.com/apache/lucene-solr/pull/422 rebased to fix commit message --- - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] lucene-solr pull request #422: SOLR-12521 Premptive creation of collections ...
GitHub user nsoft opened a pull request: https://github.com/apache/lucene-solr/pull/422 SOLR-12521 Premptive creation of collections in Time Routed Aliases You can merge this pull request into a Git repository by running: $ git pull https://github.com/nsoft/lucene-solr SOLR-12357 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/lucene-solr/pull/422.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #422 commit cb8845336c9cfef6d587d6e605a02a138c124384 Author: Gus Heck Date: 2018-07-19T03:29:14Z SOLR-12521 --- - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-8414) CI fails TestIndexWriter#testSoftUpdateDocuments
[ https://issues.apache.org/jira/browse/LUCENE-8414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nhat Nguyen updated LUCENE-8414: Description: Elastic CI found the following issue. {noformat} [junit4] Suite: org.apache.lucene.index.TestIndexWriter [junit4] 2> NOTE: reproduce with: ant test -Dtestcase=TestIndexWriter -Dtests.method=testSoftUpdateDocuments -Dtests.seed=AA5B403FFC4459A5 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=fr-BE -Dtests.timezone=Antarctica/Mawson -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [junit4] FAILURE 0.08s J1 | TestIndexWriter.testSoftUpdateDocuments <<< [junit4] > Throwable #1: java.lang.AssertionError: expected:<0> but was:<2> [junit4] > at __randomizedtesting.SeedInfo.seed([AA5B403FFC4459A5:6F9256CD24240312]:0) [junit4] > at org.apache.lucene.index.TestIndexWriter.testSoftUpdateDocuments(TestIndexWriter.java:3168) [junit4] > at java.lang.Thread.run(Thread.java:748) [junit4] 2> NOTE: leaving temporary files on disk at: /var/lib/jenkins/workspace/apache+lucene-solr+branch_7x/lucene/build/core/test/J1/temp/lucene.index.TestIndexWriter_AA5B403FFC4459A5-001{noformat} I can reproduce this by mucking an unlucky schedule (see unlucky-schedule.patch). was: Elastic CI found the following issue. 
{noformat} [junit4] Suite: org.apache.lucene.index.TestIndexWriter [junit4] 2> NOTE: reproduce with: ant test -Dtestcase=TestIndexWriter -Dtests.method=testSoftUpdateDocuments -Dtests.seed=AA5B403FFC4459A5 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=fr-BE -Dtests.timezone=Antarctica/Mawson -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [junit4] FAILURE 0.08s J1 | TestIndexWriter.testSoftUpdateDocuments <<< [junit4] > Throwable #1: java.lang.AssertionError: expected:<0> but was:<2> [junit4] > at __randomizedtesting.SeedInfo.seed([AA5B403FFC4459A5:6F9256CD24240312]:0) [junit4] > at org.apache.lucene.index.TestIndexWriter.testSoftUpdateDocuments(TestIndexWriter.java:3168) [junit4] > at java.lang.Thread.run(Thread.java:748) [junit4] 2> NOTE: leaving temporary files on disk at: /var/lib/jenkins/workspace/apache+lucene-solr+branch_7x/lucene/build/core/test/J1/temp/lucene.index.TestIndexWriter_AA5B403FFC4459A5-001{noformat} I can reproduce by mucking an unlucky schedule (see unlucky-schedule.patch). > CI fails TestIndexWriter#testSoftUpdateDocuments > > > Key: LUCENE-8414 > URL: https://issues.apache.org/jira/browse/LUCENE-8414 > Project: Lucene - Core > Issue Type: Bug > Components: core/index >Affects Versions: master (8.0), 7.5 >Reporter: Nhat Nguyen >Priority: Minor > Attachments: LUCENE-8414-unlucky-schedule.patch, LUCENE-8414.patch > > > Elastic CI found the following issue. 
> {noformat} > [junit4] Suite: org.apache.lucene.index.TestIndexWriter > [junit4] 2> NOTE: reproduce with: ant test -Dtestcase=TestIndexWriter > -Dtests.method=testSoftUpdateDocuments -Dtests.seed=AA5B403FFC4459A5 > -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=fr-BE > -Dtests.timezone=Antarctica/Mawson -Dtests.asserts=true > -Dtests.file.encoding=ISO-8859-1 > [junit4] FAILURE 0.08s J1 | TestIndexWriter.testSoftUpdateDocuments <<< > [junit4] > Throwable #1: java.lang.AssertionError: expected:<0> but was:<2> > [junit4] > at > __randomizedtesting.SeedInfo.seed([AA5B403FFC4459A5:6F9256CD24240312]:0) > [junit4] > at > org.apache.lucene.index.TestIndexWriter.testSoftUpdateDocuments(TestIndexWriter.java:3168) > [junit4] > at java.lang.Thread.run(Thread.java:748) > [junit4] 2> NOTE: leaving temporary files on disk at: > /var/lib/jenkins/workspace/apache+lucene-solr+branch_7x/lucene/build/core/test/J1/temp/lucene.index.TestIndexWriter_AA5B403FFC4459A5-001{noformat} > I can reproduce this by mucking an unlucky schedule (see > unlucky-schedule.patch). -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8414) CI fails TestIndexWriter#testSoftUpdateDocuments
[ https://issues.apache.org/jira/browse/LUCENE-8414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16548719#comment-16548719 ]

Nhat Nguyen commented on LUCENE-8414:
-------------------------------------

The problem is that SegmentInfos is accessed/updated under the IW lock. However, testSoftUpdateDocuments may see a partial update of merge changes to SegmentInfos. /cc [~simonw]

> CI fails TestIndexWriter#testSoftUpdateDocuments
> ------------------------------------------------
>
>                 Key: LUCENE-8414
>                 URL: https://issues.apache.org/jira/browse/LUCENE-8414
>             Project: Lucene - Core
>          Issue Type: Bug
>          Components: core/index
>    Affects Versions: master (8.0), 7.5
>            Reporter: Nhat Nguyen
>            Priority: Minor
>         Attachments: LUCENE-8414-unlucky-schedule.patch, LUCENE-8414.patch
>
> Elastic CI found the following issue.
> {noformat}
> [junit4] Suite: org.apache.lucene.index.TestIndexWriter
> [junit4] 2> NOTE: reproduce with: ant test -Dtestcase=TestIndexWriter -Dtests.method=testSoftUpdateDocuments -Dtests.seed=AA5B403FFC4459A5 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=fr-BE -Dtests.timezone=Antarctica/Mawson -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
> [junit4] FAILURE 0.08s J1 | TestIndexWriter.testSoftUpdateDocuments <<<
> [junit4] > Throwable #1: java.lang.AssertionError: expected:<0> but was:<2>
> [junit4] > at __randomizedtesting.SeedInfo.seed([AA5B403FFC4459A5:6F9256CD24240312]:0)
> [junit4] > at org.apache.lucene.index.TestIndexWriter.testSoftUpdateDocuments(TestIndexWriter.java:3168)
> [junit4] > at java.lang.Thread.run(Thread.java:748)
> [junit4] 2> NOTE: leaving temporary files on disk at: /var/lib/jenkins/workspace/apache+lucene-solr+branch_7x/lucene/build/core/test/J1/temp/lucene.index.TestIndexWriter_AA5B403FFC4459A5-001{noformat}
> I can reproduce this by mucking an unlucky schedule (see unlucky-schedule.patch).

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

- To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
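The race described in the comment above follows a general pattern: when a multi-field structure is mutated under a lock, readers that skip the same lock can observe one field updated and another not. The sketch below illustrates the correct pattern in Python with a toy stand-in for the shared state (this is not Lucene's SegmentInfos or IndexWriter code; all names are invented for the illustration):

```python
import threading

class SegmentState:
    """Toy stand-in for shared multi-field state such as SegmentInfos.

    apply_merge() changes two fields as one logical update; a reader that
    bypassed the lock could see the new segment list with the old
    generation (or vice versa) -- the kind of partial merge update the
    test failure above is attributed to.
    """
    def __init__(self):
        self._lock = threading.Lock()
        self._segments = ["_0"]
        self._generation = 0

    def apply_merge(self, merged_name: str) -> None:
        with self._lock:               # both fields change atomically
            self._segments = [merged_name]
            self._generation += 1

    def snapshot(self):
        with self._lock:               # read under the SAME lock
            return list(self._segments), self._generation

state = SegmentState()
threads = [threading.Thread(target=state.apply_merge, args=(f"_m{i}",))
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
segments, gen = state.snapshot()
print(segments, gen)  # one merged segment, generation advanced 8 times
```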
[jira] [Assigned] (LUCENE-8414) CI fails TestIndexWriter#testSoftUpdateDocuments
[ https://issues.apache.org/jira/browse/LUCENE-8414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nhat Nguyen reassigned LUCENE-8414: --- Assignee: Simon Willnauer > CI fails TestIndexWriter#testSoftUpdateDocuments > > > Key: LUCENE-8414 > URL: https://issues.apache.org/jira/browse/LUCENE-8414 > Project: Lucene - Core > Issue Type: Bug > Components: core/index >Affects Versions: master (8.0), 7.5 >Reporter: Nhat Nguyen >Assignee: Simon Willnauer >Priority: Minor > Attachments: LUCENE-8414-unlucky-schedule.patch, LUCENE-8414.patch > > > Elastic CI found the following issue. > {noformat} > [junit4] Suite: org.apache.lucene.index.TestIndexWriter > [junit4] 2> NOTE: reproduce with: ant test -Dtestcase=TestIndexWriter > -Dtests.method=testSoftUpdateDocuments -Dtests.seed=AA5B403FFC4459A5 > -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=fr-BE > -Dtests.timezone=Antarctica/Mawson -Dtests.asserts=true > -Dtests.file.encoding=ISO-8859-1 > [junit4] FAILURE 0.08s J1 | TestIndexWriter.testSoftUpdateDocuments <<< > [junit4] > Throwable #1: java.lang.AssertionError: expected:<0> but was:<2> > [junit4] > at > __randomizedtesting.SeedInfo.seed([AA5B403FFC4459A5:6F9256CD24240312]:0) > [junit4] > at > org.apache.lucene.index.TestIndexWriter.testSoftUpdateDocuments(TestIndexWriter.java:3168) > [junit4] > at java.lang.Thread.run(Thread.java:748) > [junit4] 2> NOTE: leaving temporary files on disk at: > /var/lib/jenkins/workspace/apache+lucene-solr+branch_7x/lucene/build/core/test/J1/temp/lucene.index.TestIndexWriter_AA5B403FFC4459A5-001{noformat} > I can reproduce this by mucking an unlucky schedule (see > unlucky-schedule.patch). -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12297) Add Http2SolrClient, capable of HTTP/1.1, HTTP/2, and asynchronous requests.
[ https://issues.apache.org/jira/browse/SOLR-12297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16548716#comment-16548716 ]

Cao Manh Dat commented on SOLR-12297:
-------------------------------------

I attached a patch that passes {{ant precommit}}. The question I want to discuss here is how we commit the patch. There are a few ways:
# Commit the patch on master/branch_7x: since {{Http2SolrClient}} is marked {{@experimental}} and can't yet be used to talk to any Solr servers, users won't use it.
# Commit the patch on master and enable HTTP/2 support in "bin/solr" by modifying {{jetty-http.xml}}. Users may try to use it, and we may get some reports about problems with the new client.
# Commit the patch on the jira/http2 branch and cook everything related to HTTP/2 there.

> Add Http2SolrClient, capable of HTTP/1.1, HTTP/2, and asynchronous requests.
> ----------------------------------------------------------------------------
>
>                 Key: SOLR-12297
>                 URL: https://issues.apache.org/jira/browse/SOLR-12297
>             Project: Solr
>          Issue Type: New Feature
>   Security Level: Public (Default Security Level. Issues are Public)
>            Reporter: Mark Miller
>            Assignee: Mark Miller
>            Priority: Major
>         Attachments: SOLR-12297.patch, SOLR-12297.patch, SOLR-12297.patch, starburst-ivy-fixes.patch
>
> Blocking or async support as well as HTTP2 compatible with multiplexing. Once it supports enough and is stable, replace internal usage, allowing async, and eventually move to HTTP2 connector and allow multiplexing. Could support HTTP1.1 and HTTP2 on different ports depending on state of the world then.
> The goal of the client itself is to work against HTTP1.1 or HTTP2 with minimal or no code path differences and the same for async requests (should initially work for both 1.1 and 2 and share the majority of code).
> The client should also be able to replace HttpSolrClient and plug into the other clients the same way.
> I doubt it would make sense to keep ConcurrentUpdateSolrClient eventually though.
> I evaluated some clients and while there are a few options, I went with > Jetty's HttpClient. It's more mature than Apache HttpClient's support (in 5 > beta) and we would have to update to a new API for Apache HttpClient anyway. > Meanwhile, the Jetty guys have been very supportive of helping Solr with any > issues and I like having the client and server from the same project. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-8414) CI fails TestIndexWriter#testSoftUpdateDocuments
[ https://issues.apache.org/jira/browse/LUCENE-8414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nhat Nguyen updated LUCENE-8414: Attachment: LUCENE-8414-unlucky-schedule.patch
[jira] [Created] (LUCENE-8414) CI fails TestIndexWriter#testSoftUpdateDocuments
Nhat Nguyen created LUCENE-8414: --- Summary: CI fails TestIndexWriter#testSoftUpdateDocuments Key: LUCENE-8414 URL: https://issues.apache.org/jira/browse/LUCENE-8414 Project: Lucene - Core Issue Type: Bug Components: core/index Affects Versions: master (8.0), 7.5 Reporter: Nhat Nguyen Elastic CI found the following issue. {noformat} [junit4] Suite: org.apache.lucene.index.TestIndexWriter [junit4] 2> NOTE: reproduce with: ant test -Dtestcase=TestIndexWriter -Dtests.method=testSoftUpdateDocuments -Dtests.seed=AA5B403FFC4459A5 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=fr-BE -Dtests.timezone=Antarctica/Mawson -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [junit4] FAILURE 0.08s J1 | TestIndexWriter.testSoftUpdateDocuments <<< [junit4] > Throwable #1: java.lang.AssertionError: expected:<0> but was:<2> [junit4] > at __randomizedtesting.SeedInfo.seed([AA5B403FFC4459A5:6F9256CD24240312]:0) [junit4] > at org.apache.lucene.index.TestIndexWriter.testSoftUpdateDocuments(TestIndexWriter.java:3168) [junit4] > at java.lang.Thread.run(Thread.java:748) [junit4] 2> NOTE: leaving temporary files on disk at: /var/lib/jenkins/workspace/apache+lucene-solr+branch_7x/lucene/build/core/test/J1/temp/lucene.index.TestIndexWriter_AA5B403FFC4459A5-001{noformat} I can reproduce this by mocking an unlucky schedule (see unlucky-schedule.patch).
[jira] [Updated] (SOLR-12297) Add Http2SolrClient, capable of HTTP/1.1, HTTP/2, and asynchronous requests.
[ https://issues.apache.org/jira/browse/SOLR-12297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Cao Manh Dat updated SOLR-12297: Attachment: SOLR-12297.patch
[JENKINS-EA] Lucene-Solr-master-Windows (64bit/jdk-11-ea+22) - Build # 7426 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7426/ Java: 64bit/jdk-11-ea+22 -XX:+UseCompressedOops -XX:+UseSerialGC 30 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest Error Message: 4 threads leaked from SUITE scope at org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest: 1) Thread[id=1685, name=TEST-ChaosMonkeyNothingIsSafeTest.test-seed#[1C2CFB6298AA3C25]-EventThread, state=WAITING, group=TGRP-ChaosMonkeyNothingIsSafeTest] at java.base@11-ea/jdk.internal.misc.Unsafe.park(Native Method) at java.base@11-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194) at java.base@11-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081) at java.base@11-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433) at app//org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:502)2) Thread[id=1686, name=zkConnectionManagerCallback-557-thread-1, state=WAITING, group=TGRP-ChaosMonkeyNothingIsSafeTest] at java.base@11-ea/jdk.internal.misc.Unsafe.park(Native Method) at java.base@11-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194) at java.base@11-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081) at java.base@11-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433) at java.base@11-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054) at java.base@11-ea/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114) at java.base@11-ea/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base@11-ea/java.lang.Thread.run(Thread.java:834)3) Thread[id=1684, name=TEST-ChaosMonkeyNothingIsSafeTest.test-seed#[1C2CFB6298AA3C25]-SendThread(127.0.0.1:52134), state=RUNNABLE, group=TGRP-ChaosMonkeyNothingIsSafeTest] at 
java.base@11-ea/sun.nio.ch.WindowsSelectorImpl$SubSelector.poll0(Native Method) at java.base@11-ea/sun.nio.ch.WindowsSelectorImpl$SubSelector.poll(WindowsSelectorImpl.java:339) at java.base@11-ea/sun.nio.ch.WindowsSelectorImpl.doSelect(WindowsSelectorImpl.java:167) at java.base@11-ea/sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:124) at java.base@11-ea/sun.nio.ch.SelectorImpl.select(SelectorImpl.java:136) at app//org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349) at app//org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1144)4) Thread[id=1683, name=Connection evictor, state=TIMED_WAITING, group=TGRP-ChaosMonkeyNothingIsSafeTest] at java.base@11-ea/java.lang.Thread.sleep(Native Method) at app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66) at java.base@11-ea/java.lang.Thread.run(Thread.java:834) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 4 threads leaked from SUITE scope at org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest: 1) Thread[id=1685, name=TEST-ChaosMonkeyNothingIsSafeTest.test-seed#[1C2CFB6298AA3C25]-EventThread, state=WAITING, group=TGRP-ChaosMonkeyNothingIsSafeTest] at java.base@11-ea/jdk.internal.misc.Unsafe.park(Native Method) at java.base@11-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194) at java.base@11-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081) at java.base@11-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433) at app//org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:502) 2) Thread[id=1686, name=zkConnectionManagerCallback-557-thread-1, state=WAITING, group=TGRP-ChaosMonkeyNothingIsSafeTest] at java.base@11-ea/jdk.internal.misc.Unsafe.park(Native Method) at java.base@11-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194) at 
java.base@11-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081) at java.base@11-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433) at java.base@11-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054) at java.base@11-ea/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114) at java.base@11-ea/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base@11-ea/java.lang.Thread.run(Thread.java:834) 3) Thread[id=1684, name=TEST-ChaosMonkeyNothingIsSafeTest.test-seed#[1C2CFB6298AA3C25]-SendThread(127.0.0.1:52134),
[jira] [Commented] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.
[ https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16548627#comment-16548627 ] Varun Thacker commented on SOLR-11598: -- To recap the changes in the final patch, to make reviewing easier: * The inner classes have been extracted into their own files; they all live in the export package. * There is no longer a limit on the number of sort fields. Earlier it was 4. * One optimization around the queue size: if the total hit count is less than 30k, size the queue to totalHits {code:java} int queueSize = 30000; if (totalHits < 30000) { queueSize = totalHits; }{code} * For docs that have the same sort value, we respect index order. This required setting docBase in SortDoc. * In DoubleValue#setCurrentValue, the following check now throws an AssertionError; LongValue, IntValue, and FloatValue do the same {code:java} if (docId < lastDocID) { throw new AssertionError("docs were sent out-of-order: lastDocID=" + lastDocID + " vs doc=" + docId); }{code} * Some superficial changes to StringValue, which borrows a technique from FacetFieldProcessorByHashDV. * More tests. {quote} Today when creating the priority queue, we do doc-value seeks for all the sort fields. When we stream out docs we again make doc-value seeks against the fl fields. In most common use-cases I'd imagine fl = sort fields, so if we can pre-collect the values while sorting, we can reduce the doc-value seeks, potentially bringing speed improvements. I believe Amrit is already working on this and it can be tackled in another Jira {quote} We'll create a separate follow-up Jira for this. I've run 100 iterations of TestExportWriter and StreamingTest. TODO: * fix precommit; it's currently failing while building the ref guide. Feedback welcome! > Export Writer needs to support more than 4 Sort fields - Say 10, ideally it > should not be bound at all, but 4 seems to really short sell the StreamRollup > capabilities. 
> --- > > Key: SOLR-11598 > URL: https://issues.apache.org/jira/browse/SOLR-11598 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Affects Versions: 6.6.1, 7.0 >Reporter: Aroop >Assignee: Varun Thacker >Priority: Major > Labels: patch > Attachments: SOLR-11598-6_6-streamtests, SOLR-11598-6_6.patch, > SOLR-11598-master.patch, SOLR-11598.patch, SOLR-11598.patch, > SOLR-11598.patch, SOLR-11598.patch, SOLR-11598.patch, SOLR-11598.patch, > SOLR-11598.patch, SOLR-11598.patch, SOLR-11598.patch, streaming-export > reports.xlsx > > > I am a user of Streaming and I am currently trying to use rollups on an 10 > dimensional document. > I am unable to get correct results on this query as I am bounded by the > limitation of the export handler which supports only 4 sort fields. > I do not see why this needs to be the case, as it could very well be 10 or 20. > My current needs would be satisfied with 10, but one would want to ask why > can't it be any decent integer n, beyond which we know performance degrades, > but even then it should be caveat emptor. 
> [~varunthacker] > Code Link: > https://github.com/apache/lucene-solr/blob/19db1df81a18e6eb2cce5be973bf2305d606a9f8/solr/core/src/java/org/apache/solr/handler/ExportWriter.java#L455 > Error > null:java.io.IOException: A max of 4 sorts can be specified > at > org.apache.solr.handler.ExportWriter.getSortDoc(ExportWriter.java:452) > at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:228) > at > org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:219) > at > org.apache.solr.common.util.JavaBinCodec.writeIterator(JavaBinCodec.java:664) > at > org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:333) > at > org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223) > at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394) > at > org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:219) > at > org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437) > at > org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:354) > at > org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223) > at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394) > at > org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:217) > at > org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437) >
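The queue-sizing optimization recapped above (cap the priority queue at 30k slots, but never allocate more slots than there are hits) can be sketched as a standalone helper. This is an illustrative sketch only; the class and method names are hypothetical and not the actual ExportWriter code.

```java
// Hypothetical sketch of the queue-size bound described in the recap above:
// allocate at most MAX_QUEUE_SIZE slots, but shrink the queue to totalHits
// when the result set is small, so tiny exports don't pay for a 30k queue.
public class QueueSizing {
    static final int MAX_QUEUE_SIZE = 30_000; // the "30k" from the comment

    static int boundedQueueSize(int totalHits) {
        int queueSize = MAX_QUEUE_SIZE;
        if (totalHits < MAX_QUEUE_SIZE) {
            queueSize = totalHits; // small result sets get an exactly-sized queue
        }
        return queueSize;
    }

    public static void main(String[] args) {
        System.out.println(boundedQueueSize(100));       // prints 100
        System.out.println(boundedQueueSize(1_000_000)); // prints 30000
    }
}
```

Equivalently, `Math.min(totalHits, MAX_QUEUE_SIZE)`; the if-form mirrors the snippet in the patch recap.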
[jira] [Updated] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.
[ https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Thacker updated SOLR-11598: - Attachment: SOLR-11598.patch
[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_172) - Build # 2346 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2346/ Java: 32bit/jdk1.8.0_172 -server -XX:+UseSerialGC 1 tests failed. FAILED: org.apache.solr.handler.component.InfixSuggestersTest.testShutdownDuringBuild Error Message: junit.framework.AssertionFailedError: Unexpected wrapped exception type, expected CoreIsClosedException Stack Trace: java.util.concurrent.ExecutionException: junit.framework.AssertionFailedError: Unexpected wrapped exception type, expected CoreIsClosedException at __randomizedtesting.SeedInfo.seed([A7A9A98A427B0D16:7824CB357C125874]:0) at java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.util.concurrent.FutureTask.get(FutureTask.java:192) at org.apache.solr.handler.component.InfixSuggestersTest.testShutdownDuringBuild(InfixSuggestersTest.java:130) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) Caused by: junit.framework.AssertionFailedError: Unexpected wrapped
[jira] [Commented] (SOLR-12555) Replace try-fail-catch test patterns
[ https://issues.apache.org/jira/browse/SOLR-12555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16548544#comment-16548544 ] Bar Rotstein commented on SOLR-12555: - I don't mind working on this too if you need help; perhaps we can split the work across packages, with each of us taking designated packages, if that's fine by you. > Replace try-fail-catch test patterns > > > Key: SOLR-12555 > URL: https://issues.apache.org/jira/browse/SOLR-12555 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Affects Versions: master (8.0) >Reporter: Jason Gerlowski >Assignee: Jason Gerlowski >Priority: Trivial > Attachments: SOLR-12555.txt > > > I recently added some test code through SOLR-12427 which used the following > test anti-pattern: > {code} > try { > actionExpectedToThrowException(); > fail("I expected this to throw an exception, but it didn't"); > } catch (Exception e) { > assertOnThrownException(e); > } > {code} > Hoss (rightfully) objected that this should instead be written using the > formulation below, which is clearer and more concise. > {code} > SolrException e = expectThrows(SolrException.class, () -> {...}); > {code} > We should remove many of these older formulations where it makes sense. Many > of them were written before {{expectThrows}} was introduced, and having the > old style assertions around makes it easier for them to continue creeping in.
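The two patterns contrasted in the issue can be shown side by side in a self-contained sketch. Note the `expectThrows` below is a minimal hand-rolled stand-in written for illustration, not the actual `LuceneTestCase.expectThrows` helper, and `ExpectThrowsDemo` is a hypothetical class name.

```java
// Minimal stand-in for the expectThrows helper the issue advocates:
// run the lambda, return the exception if it matches the expected type,
// fail if nothing (or the wrong thing) was thrown.
public class ExpectThrowsDemo {
    @FunctionalInterface
    interface ThrowingRunnable { void run() throws Throwable; }

    static <T extends Throwable> T expectThrows(Class<T> expected, ThrowingRunnable r) {
        try {
            r.run();
        } catch (Throwable t) {
            if (expected.isInstance(t)) {
                return expected.cast(t); // success: hand the exception back for further assertions
            }
            throw new AssertionError("Unexpected exception type: " + t.getClass(), t);
        }
        throw new AssertionError("Expected " + expected.getName() + " but nothing was thrown");
    }

    public static void main(String[] args) {
        // The old try-fail-catch anti-pattern: verbose, and easy to forget the fail().
        try {
            Integer.parseInt("not a number");
            throw new AssertionError("expected NumberFormatException");
        } catch (NumberFormatException e) {
            // expected
        }

        // The preferred one-liner: intent is explicit and the exception is captured.
        NumberFormatException e =
            expectThrows(NumberFormatException.class, () -> Integer.parseInt("not a number"));
        System.out.println(e.getClass().getSimpleName()); // prints NumberFormatException
    }
}
```

Besides being shorter, the helper makes the failure mode unmissable: forgetting a `fail()` in the old pattern silently turns the test into a no-op.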
[jira] [Commented] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.
[ https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16548537#comment-16548537 ] Joel Bernstein commented on SOLR-11598: --- Yep, I see it too. It looks like the old code was using just the segment-level docId as a tie-breaker rather than the global docId. Good catch!
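The tie-break Joel describes can be sketched in isolation: when two docs compare equal on the sort value, the comparison must fall back to the global docId (segment docBase plus segment-local docId), not the segment-local id alone, or docs from later segments can jump ahead of earlier ones. The class and method names below are hypothetical, not the actual SortDoc code.

```java
// Illustrative sketch of sorting with a global-docId tie-break.
// docBase is the segment's offset into the global doc-id space, so
// globalId = docBase + segment-local docId preserves index order across segments.
public class TieBreakDemo {
    static int compareDocs(long sortValueA, int docBaseA, int docIdA,
                           long sortValueB, int docBaseB, int docIdB) {
        int cmp = Long.compare(sortValueA, sortValueB);
        if (cmp != 0) {
            return cmp; // primary sort key decides when values differ
        }
        // Equal sort values: break the tie on the GLOBAL doc id.
        // Comparing docIdA vs docIdB alone (the old bug) would ignore segment order.
        return Integer.compare(docBaseA + docIdA, docBaseB + docIdB);
    }

    public static void main(String[] args) {
        // Same sort value: doc with global id 1 (segment 0) sorts before
        // doc with global id 2 (segment 1, docBase=2, local id 0).
        System.out.println(compareDocs(7, 0, 1, 7, 2, 0) < 0); // prints true
    }
}
```

With docBase left at -1 (the bug Varun found), both operands of the tie-break are shifted wrongly per segment, which produces interleavings like the 1,3,2,4 ordering reported below.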
[jira] [Commented] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.
[ https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16548510#comment-16548510 ] Varun Thacker commented on SOLR-11598: -- Hi [~joel.bernstein] , I figured out the issue! In SortDoc we were never setting docBase . So it was always -1 . Setting this correctly fixes the problem If we index 4 documents in this fashion {code:java} assertU(adoc("id","1", "stringdv","a")); assertU(adoc("id","2", "stringdv","a")); assertU(commit()); assertU(adoc("id","3", "stringdv","a")); assertU(adoc("id","4", "stringdv","a")); assertU(commit());{code} And then sort on either "stringdv asc" or "stringdv desc" we will always get docs in this order: 1,2,3,4 If you try this on 7.4 the order will be : 1,3,2,4 I'll clean up my local patch and upload it shortly. > Export Writer needs to support more than 4 Sort fields - Say 10, ideally it > should not be bound at all, but 4 seems to really short sell the StreamRollup > capabilities. > --- > > Key: SOLR-11598 > URL: https://issues.apache.org/jira/browse/SOLR-11598 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Affects Versions: 6.6.1, 7.0 >Reporter: Aroop >Assignee: Varun Thacker >Priority: Major > Labels: patch > Attachments: SOLR-11598-6_6-streamtests, SOLR-11598-6_6.patch, > SOLR-11598-master.patch, SOLR-11598.patch, SOLR-11598.patch, > SOLR-11598.patch, SOLR-11598.patch, SOLR-11598.patch, SOLR-11598.patch, > SOLR-11598.patch, SOLR-11598.patch, streaming-export reports.xlsx > > > I am a user of Streaming and I am currently trying to use rollups on an 10 > dimensional document. > I am unable to get correct results on this query as I am bounded by the > limitation of the export handler which supports only 4 sort fields. > I do not see why this needs to be the case, as it could very well be 10 or 20. 
> My current needs would be satisfied with 10, but one would want to ask why > can't it be any decent integer n, beyond which we know performance degrades, > but even then it should be caveat emptor. > [~varunthacker] > Code Link: > https://github.com/apache/lucene-solr/blob/19db1df81a18e6eb2cce5be973bf2305d606a9f8/solr/core/src/java/org/apache/solr/handler/ExportWriter.java#L455 > Error > null:java.io.IOException: A max of 4 sorts can be specified > at > org.apache.solr.handler.ExportWriter.getSortDoc(ExportWriter.java:452) > at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:228) > at > org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:219) > at > org.apache.solr.common.util.JavaBinCodec.writeIterator(JavaBinCodec.java:664) > at > org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:333) > at > org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223) > at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394) > at > org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:219) > at > org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437) > at > org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:354) > at > org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223) > at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394) > at > org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:217) > at > org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437) > at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:215) > at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2601) > at > org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49) > at > org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538) > at > 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) > at >
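The docBase behavior described above can be illustrated with a minimal, self-contained sketch (class and field names are purely illustrative, not Solr's actual SortDoc). When every sort value ties, the export writer falls back to the global document id, i.e. segment docBase plus per-segment docId; if docBase is never set it is effectively a constant (-1) for every document, so the tie-break degrades to the per-segment docId alone and documents from different segments come out interleaved:

```java
import java.util.*;

public class TieBreakSketch {
    static final class Doc {
        final String id;   // uniqueKey value, for display only
        final int docId;   // per-segment Lucene doc id
        final int docBase; // segment's offset into the global doc-id space
        Doc(String id, int docId, int docBase) {
            this.id = id; this.docId = docId; this.docBase = docBase;
        }
    }

    // Mirrors the quoted tie-break condition docId+docBase > sd.docId+sd.docBase:
    // lower global id sorts first when the sort values are all equal.
    // With docBaseSet=false, docBase is replaced by the constant -1 everywhere,
    // modeling the "never set" bug; a constant offset leaves only docId to decide.
    static List<String> order(List<Doc> docs, boolean docBaseSet) {
        List<Doc> copy = new ArrayList<>(docs);
        copy.sort(Comparator.comparingInt(d -> d.docId + (docBaseSet ? d.docBase : -1)));
        List<String> ids = new ArrayList<>();
        for (Doc d : copy) ids.add(d.id);
        return ids;
    }

    public static void main(String[] args) {
        // Two segments of two docs each, as in the adoc/commit sequence above.
        List<Doc> docs = Arrays.asList(
            new Doc("1", 0, 0), new Doc("2", 1, 0),   // first segment, base 0
            new Doc("3", 0, 2), new Doc("4", 1, 2));  // second segment, base 2
        System.out.println(order(docs, false)); // broken tie-break: [1, 3, 2, 4]
        System.out.println(order(docs, true));  // fixed tie-break:  [1, 2, 3, 4]
    }
}
```

With the constant offset, the stable sort preserves segment interleaving (1,3,2,4); with real docBase values, global index order (1,2,3,4) is restored, matching the orderings reported in the comment.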
Custom Solr Facet Sort
I posted this in the solr-user group, but perhaps this is the right group to ask questions about Solr internals? Hello all, if I want to plug in my own sorting for facets, what would be the best approach? I know that, out of the box, Solr supports sorting facets by count and alphabetically. I want to plug in my own sorting (say, by relevancy). Is there a way to do that? Where should I start if I need to write a custom facet component? Basically, I want to take the scores calculated in earlier steps for the matched documents, do some kind of aggregation of the scores of the documents that fall under each facet, and use that aggregate score to rank the facets instead of ranking them by counts. I am assuming this is possible. Just looking for some guidance on where to start. I appreciate any responses from the community. Thanks
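Independent of Solr's component APIs, the aggregation being asked about can be sketched as follows. This is an illustrative standalone sketch only (all names are made up); an actual implementation would need to hook into Solr's faceting, e.g. via a custom search component, which is not shown here:

```java
import java.util.*;

public class ScoreRankedFacets {
    // buckets: facet value -> relevance scores of the matched docs under it.
    // Returns facet values ordered by summed score, highest first, instead
    // of by document count as the default facet sort would.
    static List<String> rank(Map<String, List<Double>> buckets) {
        Map<String, Double> agg = new HashMap<>();
        buckets.forEach((facet, scores) ->
            agg.put(facet, scores.stream().mapToDouble(Double::doubleValue).sum()));
        List<String> order = new ArrayList<>(agg.keySet());
        order.sort((a, b) -> Double.compare(agg.get(b), agg.get(a)));
        return order;
    }

    public static void main(String[] args) {
        Map<String, List<Double>> buckets = new HashMap<>();
        buckets.put("laptops", Arrays.asList(0.9, 0.8));      // 2 docs, sum 1.7
        buckets.put("cables", Arrays.asList(0.1, 0.1, 0.1));  // 3 docs, sum 0.3
        // A count-based sort would rank "cables" first; score-based
        // aggregation ranks "laptops" first.
        System.out.println(rank(buckets)); // [laptops, cables]
    }
}
```

The sum is just one choice of aggregate; max or mean of the per-document scores would slot into the same structure.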
[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1977 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1977/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC 1 tests failed. FAILED: org.apache.solr.cloud.cdcr.CdcrBidirectionalTest.testBiDir Error Message: Captured an uncaught exception in thread: Thread[id=33044, name=cdcr-replicator-12099-thread-1, state=RUNNABLE, group=TGRP-CdcrBidirectionalTest] Stack Trace: com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=33044, name=cdcr-replicator-12099-thread-1, state=RUNNABLE, group=TGRP-CdcrBidirectionalTest] Caused by: java.lang.AssertionError: 1606364740387340288 != 1606364740384194560 at __randomizedtesting.SeedInfo.seed([8FF7A505CA5F3397]:0) at org.apache.solr.update.CdcrUpdateLog$CdcrLogReader.forwardSeek(CdcrUpdateLog.java:611) at org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:125) at org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Build Log: [...truncated 14186 lines...] 
[junit4] Suite: org.apache.solr.cloud.cdcr.CdcrBidirectionalTest [junit4] 2> Creating dataDir: /export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/J0/temp/solr.cloud.cdcr.CdcrBidirectionalTest_8FF7A505CA5F3397-001/init-core-data-001 [junit4] 2> 3568691 INFO (SUITE-CdcrBidirectionalTest-seed#[8FF7A505CA5F3397]-worker) [] o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) w/NUMERIC_DOCVALUES_SYSPROP=true [junit4] 2> 3568691 INFO (SUITE-CdcrBidirectionalTest-seed#[8FF7A505CA5F3397]-worker) [] o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: @org.apache.solr.util.RandomizeSSL(reason=, value=NaN, ssl=NaN, clientAuth=NaN) [junit4] 2> 3568691 INFO (SUITE-CdcrBidirectionalTest-seed#[8FF7A505CA5F3397]-worker) [] o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom [junit4] 2> 3568693 INFO (TEST-CdcrBidirectionalTest.testBiDir-seed#[8FF7A505CA5F3397]) [] o.a.s.SolrTestCaseJ4 ###Starting testBiDir [junit4] 2> 3568693 INFO (TEST-CdcrBidirectionalTest.testBiDir-seed#[8FF7A505CA5F3397]) [] o.a.s.c.MiniSolrCloudCluster Starting cluster of 1 servers in /export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/J0/temp/solr.cloud.cdcr.CdcrBidirectionalTest_8FF7A505CA5F3397-001/cdcr-cluster2-001 [junit4] 2> 3568693 INFO (TEST-CdcrBidirectionalTest.testBiDir-seed#[8FF7A505CA5F3397]) [] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER [junit4] 2> 3568693 INFO (Thread-6464) [] o.a.s.c.ZkTestServer client port:0.0.0.0/0.0.0.0:0 [junit4] 2> 3568693 INFO (Thread-6464) [] o.a.s.c.ZkTestServer Starting server [junit4] 2> 3568696 ERROR (Thread-6464) [] o.a.z.s.ZooKeeperServer ZKShutdownHandler is not registered, so ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes [junit4] 2> 3568798 INFO (TEST-CdcrBidirectionalTest.testBiDir-seed#[8FF7A505CA5F3397]) [] o.a.s.c.ZkTestServer start zk server on 
port:33287 [junit4] 2> 3568800 INFO (zkConnectionManagerCallback-9904-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected [junit4] 2> 3568804 INFO (jetty-launcher-9901-thread-1) [] o.e.j.s.Server jetty-9.4.11.v20180605; built: 2018-06-05T18:24:03.829Z; git: d5fc0523cfa96bfebfbda19606cad384d772f04c; jvm 1.8.0_172-b11 [junit4] 2> 3568805 INFO (jetty-launcher-9901-thread-1) [] o.e.j.s.session DefaultSessionIdManager workerName=node0 [junit4] 2> 3568805 INFO (jetty-launcher-9901-thread-1) [] o.e.j.s.session No SessionScavenger set, using defaults [junit4] 2> 3568805 INFO (jetty-launcher-9901-thread-1) [] o.e.j.s.session node0 Scavenging every 66ms [junit4] 2> 3568805 INFO (jetty-launcher-9901-thread-1) [] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@6edaed9c{/solr,null,AVAILABLE} [junit4] 2> 3568806 INFO (jetty-launcher-9901-thread-1) [] o.e.j.s.AbstractConnector Started ServerConnector@11ff6c98{HTTP/1.1,[http/1.1]}{127.0.0.1:46660} [junit4] 2> 3568806 INFO (jetty-launcher-9901-thread-1) [] o.e.j.s.Server Started @3570696ms [junit4] 2> 3568806 INFO (jetty-launcher-9901-thread-1) [] o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, hostPort=46660} [junit4] 2> 3568806 ERROR (jetty-launcher-9901-thread-1) [] o.a.s.u.StartupLoggingUtils Missing Java Option
Re: Explore alternative build systems - LUCENE-5755
if you're thinking of scoping (modules used only for certain parts of the build) then +1 vote for maven; if you're thinking of using multiple implementations for your artifact then +1 for ivy Martin Gainty From: Shawn Heisey Sent: Tuesday, July 17, 2018 11:12 AM To: dev@lucene.apache.org Subject: Re: Explore alternative build systems - LUCENE-5755 On 7/17/2018 3:35 AM, Diego Ceccarelli (BLOOMBERG/ LONDON) wrote: > Thanks Dawid, I agree it is a massive job, I was curious to know if > there is interest and we can put together a group of people interested > in doing the job. If that work gives us a build system that is easier for everyone to understand and change, it would be worth pursuing. If it doesn't make things easier, then I don't think we should change it. Dependency management is only a small part of what the build system does. I have no particular attachment to ant/ivy. The current build system does work, and it would be important that any new build system be tortured to make sure it's more robust than the ant/ivy build. It would indeed be a big job. If you tackle it, I would suggest setting it up so it exists in parallel with the ant build system for at least all of 8.x. In the interests of stability, I don't think the work should affect branch_7x. I really think that any significant work on the build system, whether we switch systems or not, should involve some detailed design and howto documentation, probably on the project wikis. Thanks, Shawn - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-BadApples-Tests-7.x - Build # 104 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/104/ 13 tests failed. FAILED: org.apache.solr.cloud.api.collections.ShardSplitTest.testSplitMixedReplicaTypes Error Message: unexpected shard state expected: but was: Stack Trace: java.lang.AssertionError: unexpected shard state expected: but was: at __randomizedtesting.SeedInfo.seed([9810CE06F6BAA875:20D39AA60A617D00]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.apache.solr.cloud.api.collections.ShardSplitTest.verifyShard(ShardSplitTest.java:380) at org.apache.solr.cloud.api.collections.ShardSplitTest.testSplitMixedReplicaTypes(ShardSplitTest.java:372) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1008) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:983) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
[jira] [Commented] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.
[ https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16548422#comment-16548422 ] Varun Thacker commented on SOLR-11598: -- I believe ties are handled with this condition in SortDoc#lessThan {code:java} return docId+docBase > sd.docId+sd.docBase;{code} In the patch I've added TestExportWriter#testEqualValue, where this condition is triggered.
[jira] [Commented] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.
[ https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16548418#comment-16548418 ] Joel Bernstein commented on SOLR-11598: --- That's interesting that the ties are handled differently. I guess there is a different tie breaker or no tie breaker in the export handler?
[jira] [Commented] (SOLR-12507) Modify collection API should support un-setting properties
[ https://issues.apache.org/jira/browse/SOLR-12507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16548396#comment-16548396 ] ASF subversion and git services commented on SOLR-12507: Commit ad3c00ccc8e84d05517cf60e9fb0e49d4cca0283 in lucene-solr's branch refs/heads/branch_7x from [~ctargett] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ad3c00c ] SOLR-12507: clean up italics in MODIFYCOLLECTION example > Modify collection API should support un-setting properties > -- > > Key: SOLR-12507 > URL: https://issues.apache.org/jira/browse/SOLR-12507 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: Shalin Shekhar Mangar >Assignee: Shalin Shekhar Mangar >Priority: Minor > Fix For: master (8.0), 7.5 > > Attachments: SOLR-12507-null-values.patch, SOLR-12507.patch, > SOLR-12507.patch > > > Modify Collection API can only create/set values today. There should be a way > to delete values using the API. > Proposed syntax: > {code} > /admin/collections?action=MODIFYCOLLECTION=policy= > {code} > A blank parameter value will delete the property from the collection. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11453) Create separate logger for slow requests
[ https://issues.apache.org/jira/browse/SOLR-11453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16548398#comment-16548398 ] ASF subversion and git services commented on SOLR-11453: Commit 2cb716952054132a2c267285063658681a9116bb in lucene-solr's branch refs/heads/branch_7x from [~ctargett] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2cb7169 ] SOLR-11453: Ref Guide: add info on location of solr_slow_requests.log > Create separate logger for slow requests > > > Key: SOLR-11453 > URL: https://issues.apache.org/jira/browse/SOLR-11453 > Project: Solr > Issue Type: Improvement > Components: logging >Reporter: Shawn Heisey >Assignee: Varun Thacker >Priority: Minor > Fix For: 7.4, master (8.0) > > Attachments: SOLR-11453.patch, SOLR-11453.patch, SOLR-11453.patch, > SOLR-11453.patch, SOLR-11453.patch, SOLR-11453.patch, > slowlog-informational.patch, slowlog-informational.patch, > slowlog-informational.patch > > > There is some desire on the mailing list to create a separate logfile for > slow queries. Currently it is not possible to do this cleanly, because the > WARN level used by slow query logging within the SolrCore class is also used > for other events that SolrCore can log. Those messages would be out of place > in a slow query log. They should typically stay in main solr logfile. > I propose creating a custom logger for slow queries, similar to what has been > set up for request logging. In the SolrCore class, which is > org.apache.solr.core.SolrCore, there is a special logger at > org.apache.solr.core.SolrCore.Request. This is not a real class, just a > logger which makes it possible to handle those log messages differently than > the rest of Solr's logging. I propose setting up another custom logger > within SolrCore which could be org.apache.solr.core.SolrCore.SlowRequest. 
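The proposal above routes slow-query messages to a dedicated logger name rather than a real class, which lets log4j2 send them to their own file. A configuration for that could look like the following sketch (appender name, file name, rollover sizes, and pattern are illustrative; the logger name is the one proposed in the issue):

```xml
<!-- Appender for slow-request messages only; names and sizes are illustrative. -->
<RollingFile name="SlowFile"
             fileName="${sys:solr.log.dir}/solr_slow_requests.log"
             filePattern="${sys:solr.log.dir}/solr_slow_requests.log.%i">
  <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} %-5p (%t) [%X{collection} %X{core}] %c{1.} %m%n"/>
  <Policies>
    <SizeBasedTriggeringPolicy size="32 MB"/>
  </Policies>
  <DefaultRolloverStrategy max="10"/>
</RollingFile>

<!-- additivity="false" keeps slow-request lines out of the main solr.log. -->
<Logger name="org.apache.solr.core.SolrCore.SlowRequest" level="info" additivity="false">
  <AppenderRef ref="SlowFile"/>
</Logger>
```

This mirrors how the existing org.apache.solr.core.SolrCore.Request logger mentioned in the issue is handled: the logger name exists only to give the configuration a separate routing target.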
[jira] [Commented] (SOLR-11277) Add auto hard commit setting based on tlog size
[ https://issues.apache.org/jira/browse/SOLR-11277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16548397#comment-16548397 ] ASF subversion and git services commented on SOLR-11277: Commit e32e91d1c866bd43fd917056f7ab0ed6c84f546c in lucene-solr's branch refs/heads/branch_7x from [~ctargett] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e32e91d ] SOLR-11277: Ref Guide: add parameters for defining kilo/mega/gigabyte suffixes > Add auto hard commit setting based on tlog size > --- > > Key: SOLR-11277 > URL: https://issues.apache.org/jira/browse/SOLR-11277 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Rupa Shankar >Assignee: Anshum Gupta >Priority: Major > Fix For: 7.4, master (8.0) > > Attachments: SOLR-11277.01.patch, SOLR-11277.02.patch, > SOLR-11277.patch, SOLR-11277.patch, SOLR-11277.patch, SOLR-11277.patch, > SOLR-11277.patch, SOLR-11277.patch, SOLR-11277.patch, > max_size_auto_commit.patch > > Time Spent: 20m > Remaining Estimate: 0h > > When indexing documents of variable sizes and at variable schedules, it can > be hard to estimate the optimal auto hard commit maxDocs or maxTime settings. > We’ve had some occurrences of really huge tlogs, resulting in serious issues, > so in an attempt to avoid this, it would be great to have a “maxSize” setting > based on the tlog size on disk. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
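In solrconfig.xml terms, the tlog-size-based trigger this issue describes could be combined with the existing time-based trigger roughly as follows (a hedged sketch: the element values are illustrative, and the size suffixes follow the kilo/mega/gigabyte convention the Ref Guide commit above refers to):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxTime>60000</maxTime>       <!-- hard commit at least once a minute -->
    <maxSize>512m</maxSize>        <!-- ...or once the tlog reaches ~512 MB on disk -->
    <openSearcher>false</openSearcher>
  </autoCommit>
</updateHandler>
```

Whichever threshold is hit first triggers the hard commit, which is what bounds the tlog size when document sizes and indexing schedules are hard to predict.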
[jira] [Commented] (SOLR-12507) Modify collection API should support un-setting properties
[ https://issues.apache.org/jira/browse/SOLR-12507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16548392#comment-16548392 ] ASF subversion and git services commented on SOLR-12507: Commit d6afe1d016bfd3226c21605fa707bacf7bc7d163 in lucene-solr's branch refs/heads/master from [~ctargett] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d6afe1d ] SOLR-12507: clean up italics in MODIFYCOLLECTION example
[jira] [Commented] (SOLR-11277) Add auto hard commit setting based on tlog size
[ https://issues.apache.org/jira/browse/SOLR-11277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16548393#comment-16548393 ] ASF subversion and git services commented on SOLR-11277: Commit 722f7dabd02342cea60aacd96d38f7199877a29c in lucene-solr's branch refs/heads/master from [~ctargett] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=722f7da ] SOLR-11277: Ref Guide: add parameters for defining kilo/mega/gigabyte suffixes
[jira] [Commented] (SOLR-11453) Create separate logger for slow requests
[ https://issues.apache.org/jira/browse/SOLR-11453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16548394#comment-16548394 ] ASF subversion and git services commented on SOLR-11453: Commit d443ed088d2d6f6d6fd0c8965e27bde660ad440f in lucene-solr's branch refs/heads/master from [~ctargett] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d443ed0 ] SOLR-11453: Ref Guide: add info on location of solr_slow_requests.log > Create separate logger for slow requests > > > Key: SOLR-11453 > URL: https://issues.apache.org/jira/browse/SOLR-11453 > Project: Solr > Issue Type: Improvement > Components: logging >Reporter: Shawn Heisey >Assignee: Varun Thacker >Priority: Minor > Fix For: 7.4, master (8.0) > > Attachments: SOLR-11453.patch, SOLR-11453.patch, SOLR-11453.patch, > SOLR-11453.patch, SOLR-11453.patch, SOLR-11453.patch, > slowlog-informational.patch, slowlog-informational.patch, > slowlog-informational.patch > > > There is some desire on the mailing list to create a separate logfile for > slow queries. Currently it is not possible to do this cleanly, because the > WARN level used by slow query logging within the SolrCore class is also used > for other events that SolrCore can log. Those messages would be out of place > in a slow query log. They should typically stay in main solr logfile. > I propose creating a custom logger for slow queries, similar to what has been > set up for request logging. In the SolrCore class, which is > org.apache.solr.core.SolrCore, there is a special logger at > org.apache.solr.core.SolrCore.Request. This is not a real class, just a > logger which makes it possible to handle those log messages differently than > the rest of Solr's logging. I propose setting up another custom logger > within SolrCore which could be org.apache.solr.core.SolrCore.SlowRequest. 
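A custom logger name like `org.apache.solr.core.SolrCore.SlowRequest` is separated from the main logfile purely through logging configuration. A hedged log4j2 sketch of that wiring; the appender name, layout pattern, and rollover settings are illustrative, not necessarily what Solr ships in its log4j2.xml:

```xml
<!-- Route the dedicated slow-request logger to its own file. The logger
     name must match the one used inside SolrCore; additivity="false"
     keeps these messages out of the main solr logfile. -->
<RollingFile name="SlowFile"
             fileName="${sys:solr.log.dir}/solr_slow_requests.log"
             filePattern="${sys:solr.log.dir}/solr_slow_requests.log.%i">
  <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} %-5p (%t) %c{1.} %m%n"/>
  <Policies>
    <SizeBasedTriggeringPolicy size="32 MB"/>
  </Policies>
  <DefaultRolloverStrategy max="10"/>
</RollingFile>

<Logger name="org.apache.solr.core.SolrCore.SlowRequest" level="info" additivity="false">
  <AppenderRef ref="SlowFile"/>
</Logger>
```

This mirrors how the existing `org.apache.solr.core.SolrCore.Request` logger is handled separately from the rest of Solr's logging.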
[jira] [Commented] (LUCENE-8312) Leverage impacts for SynonymQuery
[ https://issues.apache.org/jira/browse/LUCENE-8312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16548361#comment-16548361 ] Michael McCandless commented on LUCENE-8312: Yay nightly benchmarks! > Leverage impacts for SynonymQuery > - > > Key: LUCENE-8312 > URL: https://issues.apache.org/jira/browse/LUCENE-8312 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Adrien Grand >Priority: Minor > Fix For: master (8.0) > > Attachments: LUCENE-8312.patch, LUCENE-8312.patch > > > Now that we expose raw impacts, we could leverage them for synonym queries. > It would be a matter of summing up term frequencies for each unique norm > value. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
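The description's "summing up term frequencies for each unique norm value" can be sketched in isolation. This is an illustrative stand-in, not Lucene's actual `SynonymQuery`/`ImpactsSource` code: each term's impacts are modeled as a map from norm value to term frequency, and the merge sums frequencies per norm:

```java
import java.util.List;
import java.util.Map;
import java.util.SortedMap;
import java.util.TreeMap;

// Illustrative sketch of the merge idea: given each synonym term's impacts
// as (norm -> term frequency), sum the frequencies for each unique norm,
// keeping the result ordered by norm value.
public class SynonymImpactSketch {
    static SortedMap<Long, Long> mergeImpacts(List<Map<Long, Long>> perTermImpacts) {
        SortedMap<Long, Long> merged = new TreeMap<>();
        for (Map<Long, Long> impacts : perTermImpacts) {
            for (Map.Entry<Long, Long> e : impacts.entrySet()) {
                merged.merge(e.getKey(), e.getValue(), Long::sum); // sum freq per norm
            }
        }
        return merged;
    }
}
```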
[jira] [Commented] (LUCENE-8412) Remove trackDocScores from TopFieldCollector factory methods
[ https://issues.apache.org/jira/browse/LUCENE-8412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16548359#comment-16548359 ] Michael McCandless commented on LUCENE-8412: +1, nice. > Remove trackDocScores from TopFieldCollector factory methods > > > Key: LUCENE-8412 > URL: https://issues.apache.org/jira/browse/LUCENE-8412 > Project: Lucene - Core > Issue Type: Task >Reporter: Adrien Grand >Priority: Minor > Attachments: LUCENE-8412.patch > > > Computing scores on top hits is fine, but the current way it is implemented - > at collection time - requires to read/decode more freqs/norms and compute > more scores than necessary. It would be more efficient to compute scores of > top hits as a post-collection step by only advancing the scorer to hits that > made the top-N list. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk1.8.0) - Build # 751 - Failure!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/751/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC No tests ran. Build Log: [...truncated 4969 lines...] ERROR: command execution failed. ERROR: Step ‘Archive the artifacts’ failed: no workspace for Lucene-Solr-7.x-MacOSX #751 ERROR: Step ‘Scan for compiler warnings’ failed: no workspace for Lucene-Solr-7.x-MacOSX #751 ERROR: Step ‘Publish JUnit test result report’ failed: no workspace for Lucene-Solr-7.x-MacOSX #751 Email was triggered for: Failure - Any Sending email for trigger: Failure - Any ERROR: MacOSX VBOX is offline; cannot locate ANT 1.8.2 Setting ANT_1_8_2_HOME= ERROR: MacOSX VBOX is offline; cannot locate ANT 1.8.2 Setting ANT_1_8_2_HOME= ERROR: MacOSX VBOX is offline; cannot locate ANT 1.8.2 Setting ANT_1_8_2_HOME= ERROR: MacOSX VBOX is offline; cannot locate ANT 1.8.2 Setting ANT_1_8_2_HOME= - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.
[ https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16548329#comment-16548329 ] Varun Thacker commented on SOLR-11598: -- I wrote a test that validates the export handler against the select handler (rows=num_docs). The assert never passed, although the results of the export are correct. The difference is that for docs with the same value, the ordering is different between export and select. [~joel.bernstein] does that sound right to you? > Export Writer needs to support more than 4 Sort fields - Say 10, ideally it > should not be bound at all, but 4 seems to really short sell the StreamRollup > capabilities. > --- > > Key: SOLR-11598 > URL: https://issues.apache.org/jira/browse/SOLR-11598 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Affects Versions: 6.6.1, 7.0 >Reporter: Aroop >Assignee: Varun Thacker >Priority: Major > Labels: patch > Attachments: SOLR-11598-6_6-streamtests, SOLR-11598-6_6.patch, > SOLR-11598-master.patch, SOLR-11598.patch, SOLR-11598.patch, > SOLR-11598.patch, SOLR-11598.patch, SOLR-11598.patch, SOLR-11598.patch, > SOLR-11598.patch, SOLR-11598.patch, streaming-export reports.xlsx > > > I am a user of Streaming and I am currently trying to use rollups on a 10 > dimensional document. > I am unable to get correct results on this query as I am bounded by the > limitation of the export handler which supports only 4 sort fields. > I do not see why this needs to be the case, as it could very well be 10 or 20. > My current needs would be satisfied with 10, but one would want to ask why > can't it be any decent integer n, beyond which we know performance degrades, > but even then it should be caveat emptor. 
> [~varunthacker] > Code Link: > https://github.com/apache/lucene-solr/blob/19db1df81a18e6eb2cce5be973bf2305d606a9f8/solr/core/src/java/org/apache/solr/handler/ExportWriter.java#L455 > Error > null:java.io.IOException: A max of 4 sorts can be specified > at > org.apache.solr.handler.ExportWriter.getSortDoc(ExportWriter.java:452) > at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:228) > at > org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:219) > at > org.apache.solr.common.util.JavaBinCodec.writeIterator(JavaBinCodec.java:664) > at > org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:333) > at > org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223) > at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394) > at > org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:219) > at > org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437) > at > org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:354) > at > org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223) > at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394) > at > org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:217) > at > org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437) > at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:215) > at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2601) > at > org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49) > at > org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305) > at > 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512) > at >
[jira] [Updated] (SOLR-10981) Allow update to load gzip files
[ https://issues.apache.org/jira/browse/SOLR-10981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Lundgren updated SOLR-10981: --- Description: We currently import large CSV files. We store them in gzip files as they compress at around 80%. To import them we must gunzip them and then import them. After that we no longer need the decompressed files. This patch allows directly opening either URL, or local files that are gzipped. For URLs, to determine if the file is gzipped, it will check the content encoding=="gzip" or if the file ends in ".gz" For files, if the file ends in ".gz" then it will assume the file is gzipped. I have tested the patch with 4.10.4, 6.6.0, 7.0.1 and master from git. was: We currently import large CSV files. We store them in gzip files as they compress at around 80%. To import them we must gunzip them and then import them. After that we no longer need the decompressed files. This patch allows directly opening either URL, or local files that are gzipped. For URLs, to determine if the file is gzipped, it will check the content encoding=="gzip" or if the file ends in ".gz" For files, if the file ends in ".gz" then it will assume the file is gzipped. I have tested the patch with 4.10.4, 6.6.0 and master from git. > Allow update to load gzip files > > > Key: SOLR-10981 > URL: https://issues.apache.org/jira/browse/SOLR-10981 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrJ >Affects Versions: 6.6 >Reporter: Andrew Lundgren >Priority: Major > Labels: patch > Fix For: master (8.0), 7.5 > > Attachments: SOLR-10981.patch, SOLR-10981.patch, SOLR-10981.patch, > SOLR-10981.patch > > > We currently import large CSV files. We store them in gzip files as they > compress at around 80%. > To import them we must gunzip them and then import them. After that we no > longer need the decompressed files. 
> This patch allows directly opening either URL, or local files that are > gzipped. > For URLs, to determine if the file is gzipped, it will check the content > encoding=="gzip" or if the file ends in ".gz" > For files, if the file ends in ".gz" then it will assume the file is gzipped. > I have tested the patch with 4.10.4, 6.6.0, 7.0.1 and master from git. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
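The detection rule the patch describes (wrap when the name ends in ".gz"; for URLs, also when Content-Encoding is "gzip") can be sketched with the JDK's `GZIPInputStream`. This is a hedged illustration, not the patch's actual code; `maybeGunzip` is a hypothetical helper name:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.GZIPInputStream;

// Illustrative sketch of the gzip-detection rule described above:
// wrap the raw stream in a GZIPInputStream when the source looks gzipped,
// otherwise pass it through unchanged.
public class GzipOpenSketch {
    static InputStream maybeGunzip(String name, String contentEncoding, InputStream raw)
            throws IOException {
        boolean gzipped = name.endsWith(".gz") || "gzip".equals(contentEncoding);
        return gzipped ? new GZIPInputStream(raw) : raw;
    }
}
```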
[jira] [Commented] (SOLR-12555) Replace try-fail-catch test patterns
[ https://issues.apache.org/jira/browse/SOLR-12555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16548267#comment-16548267 ] Jason Gerlowski commented on SOLR-12555: [^SOLR-12555.txt] Attaching a file with all the instances of this I could find with some simple greps. There's a lot more of it than I anticipated. Will probably have to get to this in a piece-meal fashion when I get the time. > Replace try-fail-catch test patterns > > > Key: SOLR-12555 > URL: https://issues.apache.org/jira/browse/SOLR-12555 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Affects Versions: master (8.0) >Reporter: Jason Gerlowski >Assignee: Jason Gerlowski >Priority: Trivial > Attachments: SOLR-12555.txt > > > I recently added some test code through SOLR-12427 which used the following > test anti-pattern: > {code} > try { > actionExpectedToThrowException(); > fail("I expected this to throw an exception, but it didn't"); > } catch (Exception e) { > assertOnThrownException(e); > } > {code} > Hoss (rightfully) objected that this should instead be written using the > formulation below, which is clearer and more concise. > {code} > SolrException e = expectThrows(() -> {...}); > {code} > We should remove many of these older formulations where it makes sense. Many > of them were written before {{expectThrows}} was introduced, and having the > old style assertions around makes it easier for them to continue creeping in. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-12555) Replace try-fail-catch test patterns
[ https://issues.apache.org/jira/browse/SOLR-12555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Gerlowski updated SOLR-12555: --- Attachment: SOLR-12555.txt > Replace try-fail-catch test patterns > > > Key: SOLR-12555 > URL: https://issues.apache.org/jira/browse/SOLR-12555 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Affects Versions: master (8.0) >Reporter: Jason Gerlowski >Assignee: Jason Gerlowski >Priority: Trivial > Attachments: SOLR-12555.txt > > > I recently added some test code through SOLR-12427 which used the following > test anti-pattern: > {code} > try { > actionExpectedToThrowException(); > fail("I expected this to throw an exception, but it didn't"); > } catch (Exception e) { > assertOnThrownException(e); > } > {code} > Hoss (rightfully) objected that this should instead be written using the > formulation below, which is clearer and more concise. > {code} > SolrException e = expectThrows(() -> {...}); > {code} > We should remove many of these older formulations where it makes sense. Many > of them were written before {{expectThrows}} was introduced, and having the > old style assertions around makes it easier for them to continue creeping in. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
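The `expectThrows` helper referred to above lives in Lucene's test framework (`LuceneTestCase`). To show the mechanics of the preferred pattern, here is a minimal standalone sketch of an expectThrows-style helper; it is illustrative, not the framework's actual code:

```java
// Minimal standalone sketch of an expectThrows-style helper: run the action,
// assert that it throws the expected type, and return the caught exception
// so the caller can make further assertions on it.
public class ExpectThrowsSketch {
    interface ThrowingRunnable { void run() throws Throwable; }

    static <T extends Throwable> T expectThrows(Class<T> expected, ThrowingRunnable r) {
        try {
            r.run();
        } catch (Throwable t) {
            if (expected.isInstance(t)) {
                return expected.cast(t);
            }
            throw new AssertionError("Unexpected exception type: " + t, t);
        }
        throw new AssertionError("Expected " + expected.getSimpleName() + " but nothing was thrown");
    }

    public static void main(String[] args) {
        IllegalArgumentException e = expectThrows(IllegalArgumentException.class,
                () -> { throw new IllegalArgumentException("bad"); });
        System.out.println(e.getMessage()); // bad
    }
}
```

Compared to the try-fail-catch formulation, a forgotten `fail(...)` can never silently pass here: the helper itself fails when nothing is thrown.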
[jira] [Created] (SOLR-12562) Remove redundant RealTimeGetCompnent.toSolrDoc and use DocStreamer.convertLuceneDocToSolrDoc
Erick Erickson created SOLR-12562: - Summary: Remove redundant RealTimeGetCompnent.toSolrDoc and use DocStreamer.convertLuceneDocToSolrDoc Key: SOLR-12562 URL: https://issues.apache.org/jira/browse/SOLR-12562 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Reporter: Erick Erickson Assignee: Erick Erickson This code looks really redundant so we should remove one. The one in RealTimeGet is the only one used locally so my vote is to remove that one. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Ignore the Lucene JIRA I just raised by mistake.
Just raised a JIRA in Lucene that should have been in Solr. Deleted it. Sorry for the noise - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Deleted] (LUCENE-8413) Remove redundant RealTimeGetCompnent.toSolrDoc and use DocStreamer.convertLuceneDocToSolrDoc
[ https://issues.apache.org/jira/browse/LUCENE-8413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson deleted LUCENE-8413: --- > Remove redundant RealTimeGetCompnent.toSolrDoc and use > DocStreamer.convertLuceneDocToSolrDoc > - > > Key: LUCENE-8413 > URL: https://issues.apache.org/jira/browse/LUCENE-8413 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Erick Erickson >Assignee: Erick Erickson >Priority: Major > > This code looks really redundant so we should remove one. The one in > RealTimeGet is the only usage so my vote is to remove that one. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (LUCENE-8413) Remove redundant RealTimeGetCompnent.toSolrDoc and use DocStreamer.convertLuceneDocToSolrDoc
Erick Erickson created LUCENE-8413: -- Summary: Remove redundant RealTimeGetCompnent.toSolrDoc and use DocStreamer.convertLuceneDocToSolrDoc Key: LUCENE-8413 URL: https://issues.apache.org/jira/browse/LUCENE-8413 Project: Lucene - Core Issue Type: Improvement Reporter: Erick Erickson Assignee: Erick Erickson This code looks really redundant so we should remove one. The one in RealTimeGet is the only usage so my vote is to remove that one. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-12561) Port ExtractionDateUtil to java.time API
[ https://issues.apache.org/jira/browse/SOLR-12561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Smiley updated SOLR-12561: Attachment: SOLR-12561.patch > Port ExtractionDateUtil to java.time API > > > Key: SOLR-12561 > URL: https://issues.apache.org/jira/browse/SOLR-12561 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: contrib - Solr Cell (Tika extraction) >Reporter: David Smiley >Assignee: David Smiley >Priority: Minor > Fix For: master (8.0) > > Attachments: SOLR-12561.patch, SOLR-12561.patch > > > The ExtractionDateUtil class in the extraction contrib uses > SimpleDateFormatter. The Java 8 java.time API is superior; you can find > articles out there why. One thing that comes to mind is less timezone > bugginess – SOLR-10243. Although the API may be a bit baroque IMO > (over-engineered). Here I'd like to switch over the API and furthermore have > the patterns be pre-parsed so that at runtime we don't need to re-parse the > patterns. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
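The core of the proposal, pre-parsing patterns so they are not re-parsed at runtime, follows naturally from `java.time`: `DateTimeFormatter` instances are immutable and thread-safe, so they can be built once and held in constants, unlike `SimpleDateFormat`. A hedged sketch under those assumptions; the class name, pattern choice, and `parse` helper are illustrative, not ExtractionDateUtil's actual code:

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

// Illustrative sketch: build the formatter once (DateTimeFormatter is
// immutable and thread-safe) and reuse it for every parse, instead of
// re-parsing the pattern or creating a SimpleDateFormat per call.
public class DateUtilSketch {
    static final DateTimeFormatter ISO_NO_MILLIS =
            DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss'Z'").withZone(ZoneOffset.UTC);

    static Instant parse(String s) {
        return Instant.from(ISO_NO_MILLIS.parse(s));
    }
}
```

Pinning the zone with `withZone(ZoneOffset.UTC)` also sidesteps the default-timezone dependence that makes `SimpleDateFormat` prone to bugs like the one in SOLR-10243.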
[jira] [Commented] (LUCENE-8412) Remove trackDocScores from TopFieldCollector factory methods
[ https://issues.apache.org/jira/browse/LUCENE-8412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16548182#comment-16548182 ] Alan Woodward commented on LUCENE-8412: --- +1, nice cleanup > Remove trackDocScores from TopFieldCollector factory methods > > > Key: LUCENE-8412 > URL: https://issues.apache.org/jira/browse/LUCENE-8412 > Project: Lucene - Core > Issue Type: Task >Reporter: Adrien Grand >Priority: Minor > Attachments: LUCENE-8412.patch > > > Computing scores on top hits is fine, but the current way it is implemented - > at collection time - requires to read/decode more freqs/norms and compute > more scores than necessary. It would be more efficient to compute scores of > top hits as a post-collection step by only advancing the scorer to hits that > made the top-N list. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-7.x-Windows (32bit/jdk1.8.0_172) - Build # 698 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/698/ Java: 32bit/jdk1.8.0_172 -server -XX:+UseG1GC 135 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.analytics.NoFacetTest Error Message: Could not get the port for ZooKeeper server Stack Trace: java.lang.RuntimeException: Could not get the port for ZooKeeper server at __randomizedtesting.SeedInfo.seed([6CCB00D423468B2F]:0) at org.apache.solr.cloud.ZkTestServer.run(ZkTestServer.java:521) at org.apache.solr.cloud.MiniSolrCloudCluster.(MiniSolrCloudCluster.java:229) at org.apache.solr.cloud.SolrCloudTestCase$Builder.build(SolrCloudTestCase.java:198) at org.apache.solr.cloud.SolrCloudTestCase$Builder.configure(SolrCloudTestCase.java:190) at org.apache.solr.analytics.SolrAnalyticsTestCase.setupCollection(SolrAnalyticsTestCase.java:60) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: junit.framework.TestSuite.org.apache.solr.analytics.NoFacetTest Error Message: Captured an uncaught exception in thread: Thread[id=507, name=Thread-86, state=RUNNABLE, group=TGRP-NoFacetTest] Stack Trace: com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=507, name=Thread-86, state=RUNNABLE, group=TGRP-NoFacetTest] Caused by: java.lang.RuntimeException: java.io.IOException: Unable to establish loopback connection at __randomizedtesting.SeedInfo.seed([6CCB00D423468B2F]:0) at org.apache.solr.cloud.ZkTestServer$2.run(ZkTestServer.java:498) Caused by: java.io.IOException: Unable to establish loopback connection at sun.nio.ch.PipeImpl$Initializer.run(PipeImpl.java:94) at sun.nio.ch.PipeImpl$Initializer.run(PipeImpl.java:61) at 
java.security.AccessController.doPrivileged(Native Method) at sun.nio.ch.PipeImpl.(PipeImpl.java:171) at sun.nio.ch.SelectorProviderImpl.openPipe(SelectorProviderImpl.java:50) at java.nio.channels.Pipe.open(Pipe.java:155) at sun.nio.ch.WindowsSelectorImpl.(WindowsSelectorImpl.java:127) at sun.nio.ch.WindowsSelectorProvider.openSelector(WindowsSelectorProvider.java:44) at java.nio.channels.Selector.open(Selector.java:227) at org.apache.zookeeper.server.NIOServerCnxnFactory.(NIOServerCnxnFactory.java:56) at org.apache.solr.cloud.ZkTestServer$ZKServerMain$TestServerCnxnFactory.(ZkTestServer.java:249) at org.apache.solr.cloud.ZkTestServer$ZKServerMain.runFromConfig(ZkTestServer.java:309) at
[jira] [Commented] (LUCENE-8312) Leverage impacts for SynonymQuery
[ https://issues.apache.org/jira/browse/LUCENE-8312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16548157#comment-16548157 ] Adrien Grand commented on LUCENE-8312: -- It looks like this caused the slowdown at http://people.apache.org/~mikemccand/lucenebench/TermDTSort.html. I'll look into it. > Leverage impacts for SynonymQuery > - > > Key: LUCENE-8312 > URL: https://issues.apache.org/jira/browse/LUCENE-8312 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Adrien Grand >Priority: Minor > Fix For: master (8.0) > > Attachments: LUCENE-8312.patch, LUCENE-8312.patch > > > Now that we expose raw impacts, we could leverage them for synonym queries. > It would be a matter of summing up term frequencies for each unique norm > value. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8412) Remove trackDocScores from TopFieldCollector factory methods
[ https://issues.apache.org/jira/browse/LUCENE-8412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16548112#comment-16548112 ] Adrien Grand commented on LUCENE-8412: -- Here is a patch. It removes trackDocScores from TopFieldCollector, adds a new TopFieldCollector#populateScores utility method and uses it in call sites where trackDocScores was set to true. In addition to improved performance, I like that this change combined with LUCENE-8405 and LUCENE-8411 make TopFieldCollector factory methods look better and the bodies of the collect() methods easier to read. > Remove trackDocScores from TopFieldCollector factory methods > > > Key: LUCENE-8412 > URL: https://issues.apache.org/jira/browse/LUCENE-8412 > Project: Lucene - Core > Issue Type: Task >Reporter: Adrien Grand >Priority: Minor > Attachments: LUCENE-8412.patch > > > Computing scores on top hits is fine, but the current way it is implemented - > at collection time - requires to read/decode more freqs/norms and compute > more scores than necessary. It would be more efficient to compute scores of > top hits as a post-collection step by only advancing the scorer to hits that > made the top-N list. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
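The post-collection idea can be shown in miniature. This is an illustrative sketch of the concept only, not the actual `TopFieldCollector#populateScores` from the patch; the `SimpleScorer` interface is a stand-in for a Lucene Scorer, which can only advance forward in docID order:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Conceptual sketch: score only the hits that survived into the top-N list,
// visiting them in docID order, rather than scoring every collected hit.
public class PopulateScoresSketch {
    interface SimpleScorer { float score(int docId); } // stand-in for a real Scorer

    static float[] populateScores(int[] topDocs, SimpleScorer scorer) {
        int[] inOrder = topDocs.clone();
        Arrays.sort(inOrder); // a real scorer advances forward through docIDs once
        Map<Integer, Float> byDoc = new HashMap<>();
        for (int doc : inOrder) {
            byDoc.put(doc, scorer.score(doc));
        }
        float[] scores = new float[topDocs.length];
        for (int i = 0; i < topDocs.length; i++) {
            scores[i] = byDoc.get(topDocs[i]); // restore the caller's rank order
        }
        return scores;
    }
}
```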
[jira] [Updated] (LUCENE-8412) Remove trackDocScores from TopFieldCollector factory methods
[ https://issues.apache.org/jira/browse/LUCENE-8412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adrien Grand updated LUCENE-8412: - Attachment: LUCENE-8412.patch > Remove trackDocScores from TopFieldCollector factory methods > > > Key: LUCENE-8412 > URL: https://issues.apache.org/jira/browse/LUCENE-8412 > Project: Lucene - Core > Issue Type: Task >Reporter: Adrien Grand >Priority: Minor > Attachments: LUCENE-8412.patch > > > Computing scores on top hits is fine, but the current way it is implemented - > at collection time - requires to read/decode more freqs/norms and compute > more scores than necessary. It would be more efficient to compute scores of > top hits as a post-collection step by only advancing the scorer to hits that > made the top-N list. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8306) Allow iteration over the term positions of a Match
[ https://issues.apache.org/jira/browse/LUCENE-8306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16548104#comment-16548104 ] Jim Ferenczi commented on LUCENE-8306: -- Would it be easier if getSubMatches returns null when matches are already leaves ? I look at this sub matches as a way to split a big interval so it should be possible to split at multiple levels. I don't think we should assume that the first level is a top level and the next level is always for terms, e.g. if the top level matches is already a term, getSubMatches should return null or EMPTY. > Allow iteration over the term positions of a Match > -- > > Key: LUCENE-8306 > URL: https://issues.apache.org/jira/browse/LUCENE-8306 > Project: Lucene - Core > Issue Type: New Feature >Reporter: Alan Woodward >Assignee: Alan Woodward >Priority: Major > Attachments: LUCENE-8306.patch, LUCENE-8306.patch, LUCENE-8306.patch > > > For multi-term queries such as phrase queries, the matches API currently just > returns information about the span of the whole match. It would be useful to > also expose information about the matching terms within the phrase. The same > would apply to Spans and Interval queries. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-repro - Build # 990 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-repro/990/ [...truncated 28 lines...] [repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1070/consoleText [repro] Revision: 7d8fc543f01a31c8439c369b26c3c8c5924ba4ea [repro] Ant options: -DsmokeTestRelease.java9=/home/jenkins/tools/java/latest1.9 [repro] Repro line: ant test -Dtestcase=CdcrBidirectionalTest -Dtests.method=testBiDir -Dtests.seed=6965D5CE79C79774 -Dtests.multiplier=2 -Dtests.locale=es-UY -Dtests.timezone=Asia/Pontianak -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [repro] git rev-parse --abbrev-ref HEAD [repro] git rev-parse HEAD [repro] Initial local git branch/revision: eafc9ffc6f90d267b87d35b795e89eaa7a60182a [repro] git fetch [repro] git checkout 7d8fc543f01a31c8439c369b26c3c8c5924ba4ea [...truncated 2 lines...] [repro] git merge --ff-only [...truncated 1 lines...] [repro] ant clean [...truncated 6 lines...] [repro] Test suites by module: [repro]solr/core [repro] CdcrBidirectionalTest [repro] ant compile-test [...truncated 3301 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.CdcrBidirectionalTest" -Dtests.showOutput=onerror -DsmokeTestRelease.java9=/home/jenkins/tools/java/latest1.9 -Dtests.seed=6965D5CE79C79774 -Dtests.multiplier=2 -Dtests.locale=es-UY -Dtests.timezone=Asia/Pontianak -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [...truncated 2395 lines...] [repro] Setting last failure code to 256 [repro] Failures: [repro] 1/5 failed: org.apache.solr.cloud.cdcr.CdcrBidirectionalTest [repro] git checkout eafc9ffc6f90d267b87d35b795e89eaa7a60182a [...truncated 2 lines...] [repro] Exiting with code 256 [...truncated 5 lines...] - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-10243) Fix TestExtractionDateUtil.testParseDate sporadic failures
[ https://issues.apache.org/jira/browse/SOLR-10243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16548098#comment-16548098 ] David Smiley commented on SOLR-10243: - SOLR-12561 replaces the API, adds tests, and thus fixes the problem. If SOLR-12561 doesn't land on 7x then I can mark this test method @Ignored for 7x. > Fix TestExtractionDateUtil.testParseDate sporadic failures > -- > > Key: SOLR-10243 > URL: https://issues.apache.org/jira/browse/SOLR-10243 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) > Components: contrib - Solr Cell (Tika extraction) >Reporter: David Smiley >Assignee: David Smiley >Priority: Minor > Attachments: SimpleDateFormatTimeZoneBug.java > > > Jenkins test failure: > {{ant test -Dtestcase=TestExtractionDateUtil -Dtests.method=testParseDate > -Dtests.seed=B72AC4792F31F74B -Dtests.slow=true -Dtests.locale=lv > -Dtests.timezone=America/Metlakatla -Dtests.asserts=true > -Dtests.file.encoding=UTF-8}} It reproduces on 6x for me but not master. > I reviewed this briefly and there seems to be a locale assumption in the test. > 1 tests failed. 
> FAILED: > org.apache.solr.handler.extraction.TestExtractionDateUtil.testParseDate > Error Message: > Incorrect parsed timestamp: 1226583351000 != 1226579751000 (Thu Nov 13 > 04:35:51 AKST 2008) > Stack Trace: > java.lang.AssertionError: Incorrect parsed timestamp: 1226583351000 != > 1226579751000 (Thu Nov 13 04:35:51 AKST 2008) > at > __randomizedtesting.SeedInfo.seed([B72AC4792F31F74B:FD33BC4C549880FE]:0) > at org.junit.Assert.fail(Assert.java:93) > at org.junit.Assert.assertTrue(Assert.java:43) > at > org.apache.solr.handler.extraction.TestExtractionDateUtil.assertParsedDate(TestExtractionDateUtil.java:59) > at > org.apache.solr.handler.extraction.TestExtractionDateUtil.testParseDate(TestExtractionDateUtil.java:54) -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
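The one-hour discrepancy in the assertion above is characteristic of a time-zone assumption: a date string that carries no zone resolves against whatever zone the formatter (or the JVM default) happens to use. A minimal illustration, not the actual test code (the string and zones here are chosen for the demo):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Locale;
import java.util.TimeZone;

// A zone-less date string parses to different epoch millis depending on
// the formatter's time zone, which makes such tests environment-sensitive.
public class ZonelessParseDemo {
    static long parseEpoch(String text, TimeZone tz) {
        SimpleDateFormat fmt = new SimpleDateFormat("EEE MMM dd HH:mm:ss yyyy", Locale.US);
        fmt.setTimeZone(tz); // without this, the JVM default zone silently applies
        try {
            return fmt.parse(text).getTime();
        } catch (ParseException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String text = "Thu Nov 13 04:35:51 2008";
        long utc  = parseEpoch(text, TimeZone.getTimeZone("UTC"));
        long akst = parseEpoch(text, TimeZone.getTimeZone("GMT-09:00")); // AKST offset
        // Same text, 9 hours apart in epoch millis.
        System.out.println(akst - utc == 9L * 3600 * 1000); // prints true
    }
}
```

Randomized test locales and zones (here America/Metlakatla) surface exactly this kind of implicit-default dependence.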
[jira] [Commented] (SOLR-12561) Port ExtractionDateUtil to java.time API
[ https://issues.apache.org/jira/browse/SOLR-12561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16548094#comment-16548094 ] David Smiley commented on SOLR-12561: - Patch: * Use java.time thoroughly; no java.text remnants nor use of Date. Always Instant. * Enhanced TestExtractionDateUtil substantially to be more thorough and to test more of the supported patterns and their idiosyncrasies. I want to ensure we don't break back-compat here! These better tests helped uncover some issues during development of this switch. * Two of the default patterns had a lowercase "hh" for hour of AM/PM instead of "HH" for hour of day. SimpleDateFormat seemed to deal with this, but I think they are fundamentally invalid without an AM/PM qualifier. I switched them to HH. If someone custom-configures the patterns in their Solr config, they'll need to use the correct designator. * Use parsed DateTimeFormatter instances instead of Strings in SolrContentHandler and its factory. Since order might be significant or might be used for performance reasons, I also switched from HashSet to LinkedHashSet for the impl in ExtractingRequestHandler's config parser. This seems safe for 7.x; any break would seem to be very obscure IMO. On the other hand, 8.0 will be out this fall or so. > Port ExtractionDateUtil to java.time API > > > Key: SOLR-12561 > URL: https://issues.apache.org/jira/browse/SOLR-12561 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: contrib - Solr Cell (Tika extraction) > Reporter: David Smiley > Assignee: David Smiley > Priority: Minor > Fix For: master (8.0) > > Attachments: SOLR-12561.patch > > > The ExtractionDateUtil class in the extraction contrib uses > SimpleDateFormat. The Java 8 java.time API is superior; you can find > articles out there why. One thing that comes to mind is less timezone > bugginess – SOLR-10243. Although the API may be a bit baroque IMO > (over-engineered). 
Here I'd like to switch over the API and furthermore have > the patterns be pre-parsed so that at runtime we don't need to re-parse the > patterns.
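The "hh" vs "HH" point, and the value of pre-parsing patterns, are easy to see with java.time directly. A small sketch, independent of the patch itself (the class and constant names here are illustrative):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

public class HourPatternDemo {
    // Pre-parse patterns once, so nothing is re-parsed at runtime.
    static final DateTimeFormatter HOUR_OF_DAY = DateTimeFormatter.ofPattern("HH:mm", Locale.ROOT);
    static final DateTimeFormatter CLOCK_HOUR  = DateTimeFormatter.ofPattern("hh:mm", Locale.ROOT);

    static String format(DateTimeFormatter fmt, LocalDateTime dt) {
        return fmt.format(dt);
    }

    public static void main(String[] args) {
        LocalDateTime afternoon = LocalDateTime.of(2008, 11, 13, 13, 35);
        // "HH" is hour-of-day (0-23); "hh" is clock-hour-of-AM/PM (1-12)
        // and loses information without an accompanying AM/PM ("a") designator.
        System.out.println(format(HOUR_OF_DAY, afternoon)); // 13:35
        System.out.println(format(CLOCK_HOUR, afternoon));  // 01:35
    }
}
```

This is why a pattern using "hh" without "a" is fundamentally ambiguous: 01:35 could be either 01:35 or 13:35.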
[jira] [Updated] (SOLR-12561) Port ExtractionDateUtil to java.time API
[ https://issues.apache.org/jira/browse/SOLR-12561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Smiley updated SOLR-12561: Attachment: SOLR-12561.patch > Port ExtractionDateUtil to java.time API > > > Key: SOLR-12561 > URL: https://issues.apache.org/jira/browse/SOLR-12561 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: contrib - Solr Cell (Tika extraction) >Reporter: David Smiley >Assignee: David Smiley >Priority: Minor > Fix For: master (8.0) > > Attachments: SOLR-12561.patch > > > The ExtractionDateUtil class in the extraction contrib uses > SimpleDateFormatter. The Java 8 java.time API is superior; you can find > articles out there why. One thing that comes to mind is less timezone > bugginess – SOLR-10243. Although the API may be a bit baroque IMO > (over-engineered). Here I'd like to switch over the API and furthermore have > the patterns be pre-parsed so that at runtime we don't need to re-parse the > patterns. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-12561) Port ExtractionDateUtil to java.time API
David Smiley created SOLR-12561: --- Summary: Port ExtractionDateUtil to java.time API Key: SOLR-12561 URL: https://issues.apache.org/jira/browse/SOLR-12561 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Components: contrib - Solr Cell (Tika extraction) Reporter: David Smiley Assignee: David Smiley Fix For: master (8.0) The ExtractionDateUtil class in the extraction contrib uses SimpleDateFormatter. The Java 8 java.time API is superior; you can find articles out there why. One thing that comes to mind is less timezone bugginess – SOLR-10243. Although the API may be a bit baroque IMO (over-engineered). Here I'd like to switch over the API and furthermore have the patterns be pre-parsed so that at runtime we don't need to re-parse the patterns.
[jira] [Commented] (SOLR-11770) NPE in tvrh if no field is specified and document doesn't contain any fields with term vectors
[ https://issues.apache.org/jira/browse/SOLR-11770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16548071#comment-16548071 ] David Smiley commented on SOLR-11770: - Just curious, what do people actually use TVRH for? Purely diagnostics or some greater production purpose? > NPE in tvrh if no field is specified and document doesn't contain any fields > with term vectors > -- > > Key: SOLR-11770 > URL: https://issues.apache.org/jira/browse/SOLR-11770 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SearchComponents - other >Affects Versions: 6.6.2 >Reporter: Nikolay Martynov >Assignee: Erick Erickson >Priority: Major > > It looks like if {{tvrh}} request doesn't contain {{fl}} parameter and > document doesn't have any fields with term vectors then Solr returns NPE. > Request: > {{tvrh?shards.qt=/tvrh=field%3Avalue=json=id%3A123=true}}. > On our 'old' schema we had some fields with {{termVectors}} and even more > fields with position data. In our new schema we tried to remove unused data > so we dropped a lot of position data and some term vectors. > Our documents are 'sparsely' populated - not all documents contain all fields. > Above request was returning fine for our 'old' schema and returns 500 for our > 'new' schema - on exactly same Solr (6.6.2). 
> Stack trace: > {code} > 2017-12-18 01:15:00.958 ERROR (qtp255041198-46697) [c:test s:shard3 > r:core_node11 x:test_shard3_replica1] o.a.s.h.RequestHandlerBase > java.lang.NullPointerException >at > org.apache.solr.handler.component.TermVectorComponent.process(TermVectorComponent.java:324) >at > org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:296) >at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173) >at org.apache.solr.core.SolrCore.execute(SolrCore.java:2482) >at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:723) >at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:529) >at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361) >at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305) >at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691) >at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) >at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) >at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) >at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226) >at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180) >at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512) >at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) >at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112) >at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) >at > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213) >at > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119) >at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) 
>at > org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335) >at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) >at org.eclipse.jetty.server.Server.handle(Server.java:534) >at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320) >at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251) >at > org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273) >at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95) >at > org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) >at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303) >at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) >at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136) >at > org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) >at >
[jira] [Created] (LUCENE-8412) Remove trackDocScores from TopFieldCollector factory methods
Adrien Grand created LUCENE-8412: Summary: Remove trackDocScores from TopFieldCollector factory methods Key: LUCENE-8412 URL: https://issues.apache.org/jira/browse/LUCENE-8412 Project: Lucene - Core Issue Type: Task Reporter: Adrien Grand Computing scores on top hits is fine, but the current way it is implemented - at collection time - requires reading/decoding more freqs/norms and computing more scores than necessary. It would be more efficient to compute scores of top hits as a post-collection step by only advancing the scorer to hits that made the top-N list.
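The proposed shape can be sketched outside Lucene. This is a toy illustration of the two-phase idea only (all names are hypothetical; the real change would live in the collector/scorer plumbing): select the top N by the cheap sort key first, then run the expensive scoring function over just those N survivors instead of every collected hit.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch of post-collection scoring: only the top-N docs by sort key
// ever reach the (expensive) scoring function.
public class PostCollectionScoringDemo {
    static int scoreCalls = 0; // counts how often the expensive scorer runs

    static double expensiveScore(int doc) {
        scoreCalls++;
        return doc * 0.1; // stand-in for freq/norm decoding + similarity math
    }

    static List<Integer> topNThenScore(int[] docs, int[] sortKeys, int n) {
        scoreCalls = 0;
        // Phase 1: rank every candidate by the cheap sort key (no scoring).
        List<Integer> order = new ArrayList<>();
        for (int i = 0; i < docs.length; i++) order.add(i);
        order.sort(Comparator.comparingInt((Integer i) -> sortKeys[i]).reversed());

        // Phase 2: score only the n survivors.
        List<Integer> topDocs = new ArrayList<>();
        for (int i = 0; i < Math.min(n, docs.length); i++) {
            int doc = docs[order.get(i)];
            expensiveScore(doc);
            topDocs.add(doc);
        }
        return topDocs;
    }

    public static void main(String[] args) {
        int[] docs = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
        int[] keys = {5, 9, 1, 7, 3, 8, 2, 6, 4, 0};
        topNThenScore(docs, keys, 3);
        System.out.println(scoreCalls); // prints 3, not 10
    }
}
```

With trackDocScores at collection time, every collected hit pays the scoring cost; deferring to post-collection caps that cost at N.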
[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1070 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1070/ No tests ran. Build Log: [...truncated 22957 lines...] [asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: invalid part, must have at least one section (e.g., chapter, appendix, etc.) [asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid part, must have at least one section (e.g., chapter, appendix, etc.) [java] Processed 2240 links (1794 relative) to 3134 anchors in 246 files [echo] Validated Links & Anchors via: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/ -dist-changes: [copy] Copying 4 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes -dist-keys: [get] Getting: http://home.apache.org/keys/group/lucene.asc [get] To: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/KEYS package: -unpack-solr-tgz: -ensure-solr-tgz-exists: [mkdir] Created dir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked [untar] Expanding: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-8.0.0.tgz into /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked generate-maven-artifacts: resolve: resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. 
-ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml resolve: resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. 
-ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file =
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-10) - Build # 22476 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22476/ Java: 64bit/jdk-10 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove Error Message: No live SolrServers available to handle this request:[http://127.0.0.1:36903/solr/MoveReplicaHDFSTest_failed_coll_true, http://127.0.0.1:43531/solr/MoveReplicaHDFSTest_failed_coll_true] Stack Trace: org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[http://127.0.0.1:36903/solr/MoveReplicaHDFSTest_failed_coll_true, http://127.0.0.1:43531/solr/MoveReplicaHDFSTest_failed_coll_true] at __randomizedtesting.SeedInfo.seed([F51D257ADD48F910:5FD0F6886A9B2CC0]:0) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:993) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942) at org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:290) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-10.0.1) - Build # 7425 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7425/ Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseSerialGC 223 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.analytics.NoFacetTest Error Message: Could not get the port for ZooKeeper server Stack Trace: java.lang.RuntimeException: Could not get the port for ZooKeeper server at __randomizedtesting.SeedInfo.seed([C0FF984AEDA7DC2E]:0) at org.apache.solr.cloud.ZkTestServer.run(ZkTestServer.java:521) at org.apache.solr.cloud.MiniSolrCloudCluster.(MiniSolrCloudCluster.java:229) at org.apache.solr.cloud.SolrCloudTestCase$Builder.build(SolrCloudTestCase.java:198) at org.apache.solr.cloud.SolrCloudTestCase$Builder.configure(SolrCloudTestCase.java:190) at org.apache.solr.analytics.SolrAnalyticsTestCase.setupCollection(SolrAnalyticsTestCase.java:60) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:844) FAILED: junit.framework.TestSuite.org.apache.solr.analytics.NoFacetTest Error Message: Captured an uncaught exception in thread: Thread[id=370, name=Thread-42, state=RUNNABLE, group=TGRP-NoFacetTest] Stack Trace: com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=370, name=Thread-42, state=RUNNABLE, group=TGRP-NoFacetTest] Caused by: java.lang.RuntimeException: java.io.IOException: Unable to establish loopback connection at __randomizedtesting.SeedInfo.seed([C0FF984AEDA7DC2E]:0) at org.apache.solr.cloud.ZkTestServer$2.run(ZkTestServer.java:498) Caused by: java.io.IOException: Unable to establish loopback connection at java.base/sun.nio.ch.PipeImpl$Initializer.run(PipeImpl.java:94) at java.base/sun.nio.ch.PipeImpl$Initializer.run(PipeImpl.java:61) at java.base/java.security.AccessController.doPrivileged(Native Method) at 
java.base/sun.nio.ch.PipeImpl.(PipeImpl.java:171) at java.base/sun.nio.ch.SelectorProviderImpl.openPipe(SelectorProviderImpl.java:50) at java.base/java.nio.channels.Pipe.open(Pipe.java:155) at java.base/sun.nio.ch.WindowsSelectorImpl.(WindowsSelectorImpl.java:127) at java.base/sun.nio.ch.WindowsSelectorProvider.openSelector(WindowsSelectorProvider.java:44) at java.base/java.nio.channels.Selector.open(Selector.java:227) at org.apache.zookeeper.server.NIOServerCnxnFactory.(NIOServerCnxnFactory.java:56) at org.apache.solr.cloud.ZkTestServer$ZKServerMain$TestServerCnxnFactory.(ZkTestServer.java:249) at
[jira] [Commented] (LUCENE-8401) Add PassageBuilder to help construct highlights using MatchesIterator
[ https://issues.apache.org/jira/browse/LUCENE-8401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547916#comment-16547916 ] David Smiley commented on LUCENE-8401: -- Yeah it's up to a pluggable passage formatter to either layer the fragments (for coloring) or effectively merge them. The latter (merge) is what I referred to in my last comment, and the source for that is here [https://github.com/dsmiley/lucene-solr/blob/0e2a579ad652b658e74f5c60da94e58a869e55de/lucene/highlighter/src/java/org/apache/lucene/search/uhighlight/DefaultPassageFormatter.java] which I'm thinking of committing in its own issue as it's not fundamental to Weight Matches in LUCENE-8286. > Add PassageBuilder to help construct highlights using MatchesIterator > - > > Key: LUCENE-8401 > URL: https://issues.apache.org/jira/browse/LUCENE-8401 > Project: Lucene - Core > Issue Type: New Feature > Components: modules/highlighter >Reporter: Alan Woodward >Assignee: Alan Woodward >Priority: Major > Attachments: LUCENE-8401.patch > > > Jim and I discussed a while back the idea of adding highlighter components, > rather than a fully-fledged highlighter, which would allow users to build > their own specialised highlighters. To that end, I'd like to add a > PassageBuilder class that uses the Matches API to break text up into passages > containing hits. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
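For the "merge" option David mentions, the core operation is interval merging over match offsets. A standalone sketch of that step, not the DefaultPassageFormatter code he links to (class and method names are illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// Merge overlapping [start, end) match fragments so a formatter can emit
// one highlighted region instead of layered or nested ones.
public class FragmentMerger {
    static List<int[]> merge(List<int[]> fragments) {
        List<int[]> sorted = new ArrayList<>(fragments);
        sorted.sort(Comparator.comparingInt((int[] f) -> f[0]));
        List<int[]> merged = new ArrayList<>();
        for (int[] f : sorted) {
            if (!merged.isEmpty() && f[0] <= merged.get(merged.size() - 1)[1]) {
                // Overlaps (or touches) the previous fragment: extend it.
                int[] last = merged.get(merged.size() - 1);
                last[1] = Math.max(last[1], f[1]);
            } else {
                merged.add(new int[] { f[0], f[1] });
            }
        }
        return merged;
    }

    public static void main(String[] args) {
        List<int[]> merged = merge(Arrays.asList(
                new int[] {0, 5}, new int[] {3, 8}, new int[] {10, 12}));
        System.out.println(merged.size()); // prints 2: [0,8) and [10,12)
    }
}
```

A layering formatter (for coloring) would instead keep the original fragments and nest markup; merging is the simpler default when overlapping tags are unwanted.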
[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-10.0.1) - Build # 2343 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2343/ Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.testTrigger Error Message: expected:<3> but was:<2> Stack Trace: java.lang.AssertionError: expected:<3> but was:<2> at __randomizedtesting.SeedInfo.seed([9B7248BEF25E1C0A:F8B97E3C6B916F27]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at org.junit.Assert.assertEquals(Assert.java:456) at org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.scheduledTriggerTest(ScheduledTriggerTest.java:112) at org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.testTrigger(ScheduledTriggerTest.java:65) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:844) Build Log:
[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module
[ https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547839#comment-16547839 ] Uwe Schindler commented on LUCENE-2562: --- We cannot link to GPL so this is a bummer. I don't think JavaFX, unless shipped with the JDK, is a viable option for Apache projects. I think we should ask ASF legal how to proceed? > Make Luke a Lucene/Solr Module > -- > > Key: LUCENE-2562 > URL: https://issues.apache.org/jira/browse/LUCENE-2562 > Project: Lucene - Core > Issue Type: Task >Reporter: Mark Miller >Priority: Major > Labels: gsoc2014 > Attachments: LUCENE-2562-Ivy.patch, LUCENE-2562-Ivy.patch, > LUCENE-2562-Ivy.patch, LUCENE-2562-ivy.patch, LUCENE-2562.patch, > LUCENE-2562.patch, Luke-ALE-1.png, Luke-ALE-2.png, Luke-ALE-3.png, > Luke-ALE-4.png, Luke-ALE-5.png, luke-javafx1.png, luke-javafx2.png, > luke-javafx3.png, luke1.jpg, luke2.jpg, luke3.jpg, lukeALE-documents.png > > Time Spent: 10m > Remaining Estimate: 0h > > see > "RE: Luke - in need of maintainer": > http://markmail.org/message/m4gsto7giltvrpuf > "Web-based Luke": http://markmail.org/message/4xwps7p7ifltme5q > I think it would be great if there was a version of Luke that always worked > with trunk - and it would also be great if it was easier to match Luke jars > with Lucene versions. > While I'd like to get GWT Luke into the mix as well, I think the easiest > starting point is to straight port Luke to another UI toolkit before > abstracting out DTO objects that both GWT Luke and Pivot Luke could share. > I've started slowly converting Luke's use of thinlet to Apache Pivot. I > haven't/don't have a lot of time for this at the moment, but I've plugged > away here and there over the past week or two. There is still a *lot* to do. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (LUCENE-8407) Add SpanTermQuery.getTermStates()
[ https://issues.apache.org/jira/browse/LUCENE-8407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Smiley resolved LUCENE-8407. -- Resolution: Fixed Fix Version/s: 7.5 Note: in 7.x the method is getTermContext() > Add SpanTermQuery.getTermStates() > - > > Key: LUCENE-8407 > URL: https://issues.apache.org/jira/browse/LUCENE-8407 > Project: Lucene - Core > Issue Type: Improvement >Reporter: David Smiley >Assignee: David Smiley >Priority: Minor > Fix For: 7.5 > > > Adding a getTermStates() to the TermStates in a SpanTermQuery would be > useful, just like we have similarly for a TermQuery – LUCENE-8379. It would > be useful for LUCENE-6513 to avoid a needless inner ScoreTerm class when a > SpanTermQuery would suffice. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8407) Add SpanTermQuery.getTermStates()
[ https://issues.apache.org/jira/browse/LUCENE-8407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547821#comment-16547821 ] ASF subversion and git services commented on LUCENE-8407: - Commit affe0cf354d703ac32ffc51c433d930b3fc23978 in lucene-solr's branch refs/heads/branch_7x from [~dsmiley] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=affe0cf ] LUCENE-8407: Add SpanTermQuery.getTermStates getter (getTermContext() for 7.x) (cherry picked from commit eafc9ff) > Add SpanTermQuery.getTermStates() > - > > Key: LUCENE-8407 > URL: https://issues.apache.org/jira/browse/LUCENE-8407 > Project: Lucene - Core > Issue Type: Improvement >Reporter: David Smiley >Assignee: David Smiley >Priority: Minor > > Adding a getTermStates() to the TermStates in a SpanTermQuery would be > useful, just like we have similarly for a TermQuery – LUCENE-8379. It would > be useful for LUCENE-6513 to avoid a needless inner ScoreTerm class when a > SpanTermQuery would suffice. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8407) Add SpanTermQuery.getTermStates()
[ https://issues.apache.org/jira/browse/LUCENE-8407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547810#comment-16547810 ] ASF subversion and git services commented on LUCENE-8407: - Commit eafc9ffc6f90d267b87d35b795e89eaa7a60182a in lucene-solr's branch refs/heads/master from [~dsmiley] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=eafc9ff ] LUCENE-8407: Add SpanTermQuery.getTermStates getter > Add SpanTermQuery.getTermStates() > - > > Key: LUCENE-8407 > URL: https://issues.apache.org/jira/browse/LUCENE-8407 > Project: Lucene - Core > Issue Type: Improvement >Reporter: David Smiley >Assignee: David Smiley >Priority: Minor > > Adding a getTermStates() to the TermStates in a SpanTermQuery would be > useful, just like we have similarly for a TermQuery – LUCENE-8379. It would > be useful for LUCENE-6513 to avoid a needless inner ScoreTerm class when a > SpanTermQuery would suffice. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 730 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/730/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.handler.component.InfixSuggestersTest.testShutdownDuringBuild Error Message: junit.framework.AssertionFailedError: Unexpected wrapped exception type, expected CoreIsClosedException Stack Trace: java.util.concurrent.ExecutionException: junit.framework.AssertionFailedError: Unexpected wrapped exception type, expected CoreIsClosedException at __randomizedtesting.SeedInfo.seed([DAADFCD426AC1155:5209E6B18C54437]:0) at java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.util.concurrent.FutureTask.get(FutureTask.java:192) at org.apache.solr.handler.component.InfixSuggestersTest.testShutdownDuringBuild(InfixSuggestersTest.java:130) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) 
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) Caused by: junit.framework.AssertionFailedError: Unexpected
[jira] [Commented] (LUCENE-8410) Soft delete optimization never kicks in
[ https://issues.apache.org/jira/browse/LUCENE-8410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547793#comment-16547793 ] Nhat Nguyen commented on LUCENE-8410: - +1. LGTM. > Soft delete optimization never kicks in > --- > > Key: LUCENE-8410 > URL: https://issues.apache.org/jira/browse/LUCENE-8410 > Project: Lucene - Core > Issue Type: Bug >Reporter: Adrien Grand >Priority: Minor > Attachments: LUCENE-8410.patch > > > SoftDeletesRetentionMergePolicy and SoftDeletesDirectoryReaderWrapper have an > optimized code path in case live docs implement FixedBitSet, which is never > true anymore since LUCENE-8309. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-12495) Enhance the Autoscaling policy syntax to evenly distribute replicas among nodes
[ https://issues.apache.org/jira/browse/SOLR-12495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul resolved SOLR-12495. --- Resolution: Fixed Fix Version/s: 7.5 master (8.0) > Enhance the Autoscaling policy syntax to evenly distribute replicas among > nodes > --- > > Key: SOLR-12495 > URL: https://issues.apache.org/jira/browse/SOLR-12495 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) > Components: AutoScaling >Reporter: Noble Paul >Assignee: Noble Paul >Priority: Major > Fix For: master (8.0), 7.5 > > > Support a new function value for {{replica= "#EQUAL"}} > {{#EQUAL}} means an equal no. of replicas on each {{"node"}} > the value of replica will be calculated as {{<= > Math.ceil(number_of_replicas/number_of_valid_nodes) }} > *example 1:* > {code:java} > {"replica" : "#EQUAL" , "shard" : "#EACH" , "node" : "#ANY"} > {code} > *case 1* : nodes=3, replicationFactor=4 > the value of replica will be calculated as {{Math.ceil(4/3) = Math.ceil(1.33) = 2}} > current state : nodes=3, replicationFactor=2 > this is equivalent to the hard coded rule > {code:java} > {"replica" : "<3" , "shard" : "#EACH" , "node" : "#ANY"} > {code} > *case 2* : > current state : nodes=3, replicationFactor=2 > this is equivalent to the hard coded rule > {code:java} > {"replica" : "<3" , "shard" : "#EACH" , "node" : "#ANY"} > {code} > *example:2* > {code} > {"replica" : "#EQUAL" , "node" : "#ANY"}{code} > *case 1*: numShards = 2, replicationFactor=3, nodes = 5 > this is equivalent to the hard coded rule > {code:java} > {"replica" : "1.2" , "node" : "#ANY"} > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
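The {{#EQUAL}} upper bound described in the issue can be illustrated in plain Java (a standalone sketch, not Solr's implementation; the class and method names are made up here):

```java
public class EqualReplicaBound {
    // #EQUAL caps the replica count per node at
    // ceil(number_of_replicas / number_of_valid_nodes).
    public static int maxReplicasPerNode(int numReplicas, int numNodes) {
        return (int) Math.ceil(numReplicas / (double) numNodes);
    }

    public static void main(String[] args) {
        // case 1 from the issue: nodes=3, replicationFactor=4 -> ceil(1.33) = 2,
        // i.e. equivalent to the hard-coded rule {"replica" : "<3"}
        System.out.println(maxReplicasPerNode(4, 3)); // prints 2
        // nodes=3, replicationFactor=2 -> ceil(0.67) = 1
        System.out.println(maxReplicasPerNode(2, 3)); // prints 1
    }
}
```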
[jira] [Updated] (SOLR-12536) Enhance autoscaling policy to equally distribute replicas on the basis of arbitrary properties
[ https://issues.apache.org/jira/browse/SOLR-12536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul updated SOLR-12536: -- Component/s: AutoScaling > Enhance autoscaling policy to equally distribute replicas on the basis of > arbitrary properties > -- > > Key: SOLR-12536 > URL: https://issues.apache.org/jira/browse/SOLR-12536 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: AutoScaling >Reporter: Noble Paul >Assignee: Noble Paul >Priority: Major > > *example:1* > {code:java} > {"replica" : "#EQUAL" , "shard" : "#EACH" , "port" : "#EACH" } > //if the ports are "8983", "7574", "7575", the above rule is equivalent to > {"replica" : "#EQUAL" , "shard" : "#EACH" , "port" : ["8983", "7574", > "7575"]}{code} > *case 1*: {{numShards =2, replicationFactor=3}} . In this case all the nodes > are divided into 3 buckets each containing nodes in that port. Each bucket > must contain {{3 * 2 /3 =2}} replicas > > *example : 2* > {code:java} > {"replica" : "#EQUAL" , "shard" : "#EACH" , "sysprop.zone" : "#EACH" } > //if the zones are "east_1", "east_2", "west_1", the above rule is equivalent > to > {"replica" : "#EQUAL" , "shard" : "#EACH" , "sysprop.zone" : ["east_1", > "east_2", "west_1"]}{code} > The behavior is similar to example 1 , just that in this case we apply it to > a system property -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-12536) Enhance autoscaling policy to equally distribute replicas on the basis of arbitrary properties
[ https://issues.apache.org/jira/browse/SOLR-12536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul updated SOLR-12536: -- Issue Type: Improvement (was: Sub-task) Parent: (was: SOLR-12495) > Enhance autoscaling policy to equally distribute replicas on the basis of > arbitrary properties > -- > > Key: SOLR-12536 > URL: https://issues.apache.org/jira/browse/SOLR-12536 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: AutoScaling >Reporter: Noble Paul >Assignee: Noble Paul >Priority: Major > > *example:1* > {code:java} > {"replica" : "#EQUAL" , "shard" : "#EACH" , "port" : "#EACH" } > //if the ports are "8983", "7574", "7575", the above rule is equivalent to > {"replica" : "#EQUAL" , "shard" : "#EACH" , "port" : ["8983", "7574", > "7575"]}{code} > *case 1*: {{numShards =2, replicationFactor=3}} . In this case all the nodes > are divided into 3 buckets each containing nodes in that port. Each bucket > must contain {{3 * 2 /3 =2}} replicas > > *example : 2* > {code:java} > {"replica" : "#EQUAL" , "shard" : "#EACH" , "sysprop.zone" : "#EACH" } > //if the zones are "east_1", "east_2", "west_1", the above rule is equivalent > to > {"replica" : "#EQUAL" , "shard" : "#EACH" , "sysprop.zone" : ["east_1", > "east_2", "west_1"]}{code} > The behavior is similar to example 1 , just that in this case we apply it to > a system property -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-12522) Support a runtime function `#ALL` for 'replica' in autoscaling policies
[ https://issues.apache.org/jira/browse/SOLR-12522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul resolved SOLR-12522. --- Resolution: Fixed Fix Version/s: 7.5 master (8.0) > Support a runtime function `#ALL` for 'replica' in autoscaling policies > --- > > Key: SOLR-12522 > URL: https://issues.apache.org/jira/browse/SOLR-12522 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: AutoScaling, SolrCloud >Reporter: Noble Paul >Assignee: Noble Paul >Priority: Major > Fix For: master (8.0), 7.5 > > > today we have to use the convoluted rule to place all TLOG replicas in nodes > with ssd disk > {code} > { "replica": 0, "type" : "TLOG" ,"diskType" : "!ssd" } > {code} > Ideally it should be expressed better as > {code} > { "replica": "#ALL", "type" : "TLOG" , "diskType" : "ssd"} > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7557) Solrj doesn't handle child documents when using queryAndStreamResponse
[ https://issues.apache.org/jira/browse/SOLR-7557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547744#comment-16547744 ] Marvin Bredal Lillehaug commented on SOLR-7557: --- This is still present in 7.4.0. The above pull request points to some other issue, so I created a new one: https://github.com/apache/lucene-solr/pull/421 > Solrj doesn't handle child documents when using queryAndStreamResponse > -- > > Key: SOLR-7557 > URL: https://issues.apache.org/jira/browse/SOLR-7557 > Project: Solr > Issue Type: Bug > Components: SolrJ >Affects Versions: 5.4 >Reporter: Stian Østerhaug >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > Getting the results from a query that has child documents results in "parsing > error" when using SolrClient's queryAndStreamResponse(), but not when using > the regular query() method. > The bug is caused by StreamingBinaryResponseParser's JavaBinCodec override of > readSolrDocument which always returns null (the document is instead passed to > the callback), but JavaBinCodec uses the method to also decode child > documents. > I have fixed this by using a simple nestedLevel variable that can be used to > detect if the document being parsed is a child (and should be returned) or a > parent (and should be passed to the callback). > The fix feels kinda hacky, but it is simple. Link to a Github pull request to > follow. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
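The nesting-level idea described in the issue can be modeled outside SolrJ (a simplified standalone sketch; the class, method, and field names here are assumptions, not the actual JavaBinCodec API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of the SOLR-7557 fix: a streaming decoder normally returns null for
// each decoded document (handing it to a callback instead), but child
// documents must be *returned* so the enclosing parent can embed them.
// A nesting counter distinguishes the two cases.
public class NestedDocReader {
    private final Consumer<String> callback;
    private int nestedLevel = 0; // 0 only while no document is being decoded

    public NestedDocReader(Consumer<String> callback) {
        this.callback = callback;
    }

    // Decodes a document with the given child documents (modeled as strings).
    public String readDoc(String name, List<String> childNames) {
        nestedLevel++;
        List<String> children = new ArrayList<>();
        for (String child : childNames) {
            children.add(readDoc(child, List.of())); // recursive decode
        }
        nestedLevel--;
        String doc = childNames.isEmpty() ? name : name + children;
        if (nestedLevel > 0) {
            return doc;           // child document: return it to the parent
        }
        callback.accept(doc);     // top-level document: stream via callback
        return null;              // mimic the streaming-parser contract
    }
}
```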
[GitHub] lucene-solr pull request #421: SOLR-7557: Fix parsing of child documents usi...
GitHub user computerlove opened a pull request: https://github.com/apache/lucene-solr/pull/421 SOLR-7557: Fix parsing of child documents using queryAndStreamResponse You can merge this pull request into a Git repository by running: $ git pull https://github.com/nvdb-vegdata/lucene-solr master Alternatively you can review and apply these changes as the patch at: https://github.com/apache/lucene-solr/pull/421.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #421 commit 5d92f332dd42504485d85b7399f8bef7b3e02375 Author: Marvin Bredal Lillehaug Date: 2018-07-18T12:21:51Z SOLR-7557: Fix parsing of child documents using queryAndStreamResponse --- - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-BadApples-NightlyTests-7.x - Build # 18 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-7.x/18/ 13 tests failed. FAILED: org.apache.solr.cloud.RestartWhileUpdatingTest.test Error Message: There are still nodes recoverying - waited for 320 seconds Stack Trace: java.lang.AssertionError: There are still nodes recoverying - waited for 320 seconds at __randomizedtesting.SeedInfo.seed([3652887FA814C23B:BE06B7A506E8AFC3]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:185) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:921) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForThingsToLevelOut(AbstractFullDistribZkTestBase.java:1478) at org.apache.solr.cloud.RestartWhileUpdatingTest.test(RestartWhileUpdatingTest.java:145) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1008) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:983) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at
[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module
[ https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547733#comment-16547733 ] Robert Muir commented on LUCENE-2562: - I also found this issue related to the java 11 changes: https://github.com/javafxports/openjdk-jfx/issues/52 Of course there are licensing aspects to think of too? > Make Luke a Lucene/Solr Module > -- > > Key: LUCENE-2562 > URL: https://issues.apache.org/jira/browse/LUCENE-2562 > Project: Lucene - Core > Issue Type: Task >Reporter: Mark Miller >Priority: Major > Labels: gsoc2014 > Attachments: LUCENE-2562-Ivy.patch, LUCENE-2562-Ivy.patch, > LUCENE-2562-Ivy.patch, LUCENE-2562-ivy.patch, LUCENE-2562.patch, > LUCENE-2562.patch, Luke-ALE-1.png, Luke-ALE-2.png, Luke-ALE-3.png, > Luke-ALE-4.png, Luke-ALE-5.png, luke-javafx1.png, luke-javafx2.png, > luke-javafx3.png, luke1.jpg, luke2.jpg, luke3.jpg, lukeALE-documents.png > > Time Spent: 10m > Remaining Estimate: 0h > > see > "RE: Luke - in need of maintainer": > http://markmail.org/message/m4gsto7giltvrpuf > "Web-based Luke": http://markmail.org/message/4xwps7p7ifltme5q > I think it would be great if there was a version of Luke that always worked > with trunk - and it would also be great if it was easier to match Luke jars > with Lucene versions. > While I'd like to get GWT Luke into the mix as well, I think the easiest > starting point is to straight port Luke to another UI toolkit before > abstracting out DTO objects that both GWT Luke and Pivot Luke could share. > I've started slowly converting Luke's use of thinlet to Apache Pivot. I > haven't/don't have a lot of time for this at the moment, but I've plugged > away here and there over the past week or two. There is still a *lot* to do. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8306) Allow iteration over the term positions of a Match
[ https://issues.apache.org/jira/browse/LUCENE-8306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547693#comment-16547693 ] Alan Woodward commented on LUCENE-8306: --- Thanks for the review David! Re label() : yes, that's a good idea, I'll change it to return the leaf query. Re SloppyPhraseMatcher, you're correct, I found some bugs there. We can't work directly on PhrasePositions because the submatches need to be returned in order, although I suppose it might be simpler to just have a sorted array of ords. I'll work something up. Re TermMatchesIterator.getSubMatches(), I've gone back and forth on this. On the one hand, yes, it's already a leaf, but on the other I think it will be simpler for consumers to just assume that the top level matches are for intervals and the next level are for terms, so for example you can always call getSubMatches() to find individual terms when highlighting. > Allow iteration over the term positions of a Match > -- > > Key: LUCENE-8306 > URL: https://issues.apache.org/jira/browse/LUCENE-8306 > Project: Lucene - Core > Issue Type: New Feature >Reporter: Alan Woodward >Assignee: Alan Woodward >Priority: Major > Attachments: LUCENE-8306.patch, LUCENE-8306.patch, LUCENE-8306.patch > > > For multi-term queries such as phrase queries, the matches API currently just > returns information about the span of the whole match. It would be useful to > also expose information about the matching terms within the phrase. The same > would apply to Spans and Interval queries. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11990) Make it possible to co-locate replicas of multiple collections together in a node
[ https://issues.apache.org/jira/browse/SOLR-11990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547692#comment-16547692 ] Shalin Shekhar Mangar commented on SOLR-11990: -- I added more changes on top of Noble's last patch and pushed it to a branch {{jira/solr-11990}}. Changes: # Removed nocommit in CreateCollectionCmd. Now we do all input validations before we try to assign replicas so that in case the collection could not be created, we are not left with dangling collection references in the cluster state # Added more tests in {{TestWithCollection.testNodeAdded}} and {{TestPolicy.testWithColl}} > Make it possible to co-locate replicas of multiple collections together in a > node > - > > Key: SOLR-11990 > URL: https://issues.apache.org/jira/browse/SOLR-11990 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) > Components: AutoScaling, SolrCloud >Reporter: Shalin Shekhar Mangar >Assignee: Shalin Shekhar Mangar >Priority: Major > Fix For: master (8.0), 7.5 > > Attachments: SOLR-11990.patch, SOLR-11990.patch, SOLR-11990.patch, > SOLR-11990.patch, SOLR-11990.patch, SOLR-11990.patch, SOLR-11990.patch > > > It is necessary to co-locate replicas of different collections together in a > node when cross-collection joins are performed. > While creating a collection, specify the parameter > {{withCollection=other-collection-name}} . This ensures that Solr always > keeps at least one replica of {{other-collection}} present with this > collection's replicas > This requires changing create collection, create shard and add replica APIs > as well because we want a replica of collection A to be created first before > a replica of collection B is created so that join queries etc are always > possible. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-11990) Make it possible to co-locate replicas of multiple collections together in a node
[ https://issues.apache.org/jira/browse/SOLR-11990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shalin Shekhar Mangar updated SOLR-11990: - Summary: Make it possible to co-locate replicas of multiple collections together in a node (was: Make it possible to co-locate replicas of multiple collections together in a node using policy) > Make it possible to co-locate replicas of multiple collections together in a > node > - > > Key: SOLR-11990 > URL: https://issues.apache.org/jira/browse/SOLR-11990 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) > Components: AutoScaling, SolrCloud >Reporter: Shalin Shekhar Mangar >Assignee: Shalin Shekhar Mangar >Priority: Major > Fix For: master (8.0), 7.5 > > Attachments: SOLR-11990.patch, SOLR-11990.patch, SOLR-11990.patch, > SOLR-11990.patch, SOLR-11990.patch, SOLR-11990.patch, SOLR-11990.patch > > > It is necessary to co-locate replicas of different collections together in a > node when cross-collection joins are performed. > While creating a collection, specify the parameter > {{withCollection=other-collection-name}} . This ensures that Solr always > keeps at least one replica of {{other-collection}} present with this > collection's replicas > This requires changing create collection, create shard and add replica APIs > as well because we want a replica of collection A to be created first before > a replica of collection B is created so that join queries etc are always > possible. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (LUCENE-8411) Remove fillFields from TopFieldCollector factory methods
[ https://issues.apache.org/jira/browse/LUCENE-8411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adrien Grand resolved LUCENE-8411. -- Resolution: Fixed Fix Version/s: master (8.0) > Remove fillFields from TopFieldCollector factory methods > > > Key: LUCENE-8411 > URL: https://issues.apache.org/jira/browse/LUCENE-8411 > Project: Lucene - Core > Issue Type: Task >Reporter: Adrien Grand >Priority: Minor > Fix For: master (8.0) > > Attachments: LUCENE-8411.patch > > > Adding sort values to FieldDoc instances is cheap, so there is no reason to > disable it. It's also important, e.g., to merge top hits from multiple shards > with TopDocs#merge.
[jira] [Commented] (LUCENE-8411) Remove fillFields from TopFieldCollector factory methods
[ https://issues.apache.org/jira/browse/LUCENE-8411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547686#comment-16547686 ] ASF subversion and git services commented on LUCENE-8411: - Commit 7d8fc543f01a31c8439c369b26c3c8c5924ba4ea in lucene-solr's branch refs/heads/master from [~jpountz] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7d8fc54 ] LUCENE-8411: Remove fillFields from TopFieldCollector factory methods. > Remove fillFields from TopFieldCollector factory methods > > > Key: LUCENE-8411 > URL: https://issues.apache.org/jira/browse/LUCENE-8411 > Project: Lucene - Core > Issue Type: Task >Reporter: Adrien Grand >Priority: Minor > Fix For: master (8.0) > > Attachments: LUCENE-8411.patch > > > Adding sort values to FieldDoc instances is cheap, so there is no reason to > disable it. It's also important, e.g., to merge top hits from multiple shards > with TopDocs#merge.
[jira] [Comment Edited] (SOLR-12297) Add Http2SolrClient, capable of HTTP/1.1, HTTP/2, and asynchronous requests.
[ https://issues.apache.org/jira/browse/SOLR-12297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547635#comment-16547635 ] Cao Manh Dat edited comment on SOLR-12297 at 7/18/18 10:03 AM: --- My idea for introducing http2 to Solr is # Slowly introduce http2 support to the *test environment* only (by setting System.property variables), so users can't use any http2 features in the solr server build. # All public classes belonging to http2 must be annotated as {{@experimental}} # When all the features are mature enough, we will do a full switch from the old http1 to http2 # All commits are done on master/branch_7x instead of baking a temporary branch (ie: http2 branch), reasons ** It will make merging a lot easier since we seem to touch frequently updated classes ** It will force us to commit well-tested code ** It will force us to not abandon it ** We can leverage the current jenkins (as well as its reports) for knowing the status of new tests for http2 *Note (need agreement on this)*: The only downside of this approach is that solrj and the solr server will include several jars that they do not need (around 2 MB in total for both). was (Author: caomanhdat): My idea for introducing http2 to Solr is # Slowly introduce http2 support to the *test environment* only (by setting System.property variables), so users can't use any http2 features in the solr server build.
# All public classes belonging to http2 must be annotated as {{@experimental}} # When all the features are mature enough, we will do a full switch from the old http1 to http2 # All commits are done on master/branch_7x instead of baking a temporary branch (ie: http2 branch), reasons ** It will make merging a lot easier since we seem to touch frequently updated classes ** It will force us to commit well-tested code ** It will force us to not abandon it ** We can leverage the current jenkins (as well as its reports) for knowing the status of new tests for http2 The only downside of this approach is that solrj and the solr server will include several jars that they do not need (around 2 MB in total for both). > Add Http2SolrClient, capable of HTTP/1.1, HTTP/2, and asynchronous requests. > > > Key: SOLR-12297 > URL: https://issues.apache.org/jira/browse/SOLR-12297 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mark Miller >Assignee: Mark Miller >Priority: Major > Attachments: SOLR-12297.patch, SOLR-12297.patch, > starburst-ivy-fixes.patch > > > Blocking or async support, as well as HTTP2 compatibility with multiplexing. > Once it supports enough and is stable, replace internal usage, allowing > async, and eventually move to an HTTP2 connector and allow multiplexing. Could > support HTTP1.1 and HTTP2 on different ports depending on the state of the world > then.
It's more mature than Apache HttpClient's support (in 5 > beta) and we would have to update to a new API for Apache HttpClient anyway. > Meanwhile, the Jetty guys have been very supportive of helping Solr with any > issues and I like having the client and server from the same project.
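The test-only gating in step 1 of the plan above could be sketched as follows; the property name {{solr.http2.enable}} is an assumption for illustration, not an actual Solr flag:

```java
// Sketch of gating http2 code paths behind a test-only system property.
// The property name "solr.http2.enable" is hypothetical.
public class Http2Gate {

    static final String HTTP2_PROP = "solr.http2.enable";

    /** True only when a test opts in, e.g. via -Dsolr.http2.enable=true. */
    static boolean http2Enabled() {
        // Boolean.getBoolean returns true only if the property equals "true".
        return Boolean.getBoolean(HTTP2_PROP);
    }

    public static void main(String[] args) {
        // Default build: the flag is unset, so http2 paths stay off.
        System.out.println("http2 enabled: " + http2Enabled());

        // A test would set the property before creating clients.
        System.setProperty(HTTP2_PROP, "true");
        System.out.println("http2 enabled: " + http2Enabled());
    }
}
```

Because the flag defaults to off, the production server build never exercises the new code paths, which matches the "test environment only" constraint.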
[jira] [Commented] (SOLR-12297) Add Http2SolrClient, capable of HTTP/1.1, HTTP/2, and asynchronous requests.
[ https://issues.apache.org/jira/browse/SOLR-12297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547635#comment-16547635 ] Cao Manh Dat commented on SOLR-12297: - My idea for introducing http2 to Solr is # Slowly introduce http2 support to the *test environment* only (by setting System.property variables), so users can't use any http2 features in the solr server build. # All public classes belonging to http2 must be annotated as {{@experimental}} # When all the features are mature enough, we will do a full switch from the old http1 to http2 # All commits are done on master/branch_7x instead of baking a temporary branch (ie: http2 branch), reasons ** It will make merging a lot easier since we seem to touch frequently updated classes ** It will force us to commit well-tested code ** It will force us to not abandon it ** We can leverage the current jenkins (as well as its reports) for knowing the status of new tests for http2 The only downside of this approach is that solrj and the solr server will include several jars that they do not need (around 2 MB in total for both). > Add Http2SolrClient, capable of HTTP/1.1, HTTP/2, and asynchronous requests. > > > Key: SOLR-12297 > URL: https://issues.apache.org/jira/browse/SOLR-12297 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mark Miller >Assignee: Mark Miller >Priority: Major > Attachments: SOLR-12297.patch, SOLR-12297.patch, > starburst-ivy-fixes.patch > > > Blocking or async support, as well as HTTP2 compatibility with multiplexing. > Once it supports enough and is stable, replace internal usage, allowing > async, and eventually move to an HTTP2 connector and allow multiplexing. Could > support HTTP1.1 and HTTP2 on different ports depending on the state of the world > then.
> The goal of the client itself is to work against HTTP1.1 or HTTP2 with > minimal or no code path differences, and the same for async requests (should > initially work for both 1.1 and 2 and share the majority of code). > The client should also be able to replace HttpSolrClient and plug into the > other clients the same way. > I doubt it would make sense to keep ConcurrentUpdateSolrClient eventually > though. > I evaluated some clients and while there are a few options, I went with > Jetty's HttpClient. It's more mature than Apache HttpClient's support (in 5 > beta) and we would have to update to a new API for Apache HttpClient anyway. > Meanwhile, the Jetty guys have been very supportive of helping Solr with any > issues and I like having the client and server from the same project.
[jira] [Resolved] (LUCENE-8400) Make BytesRefHash.compact public and @lucene.internal
[ https://issues.apache.org/jira/browse/LUCENE-8400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael McCandless resolved LUCENE-8400. Resolution: Fixed Fix Version/s: 7.5 master (8.0) > Make BytesRefHash.compact public and @lucene.internal > - > > Key: LUCENE-8400 > URL: https://issues.apache.org/jira/browse/LUCENE-8400 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Michael McCandless >Assignee: Michael McCandless >Priority: Major > Fix For: master (8.0), 7.5 > > > {{BytesRefHash}} is an internal Lucene class, essentially providing a compact > representation of {{Map}}. It already has a public {{sort}} > method, which destructively compacts all entries and returns them, but its > {{compact}} method (which compacts all entries without the cost of sorting) > is not public ... I'd like to make it public, and also mark it > {{@lucene.internal}}.
[jira] [Commented] (LUCENE-8400) Make BytesRefHash.compact public and @lucene.internal
[ https://issues.apache.org/jira/browse/LUCENE-8400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547632#comment-16547632 ] ASF subversion and git services commented on LUCENE-8400: - Commit decab011212425b70f31149eac0a273a8b769707 in lucene-solr's branch refs/heads/branch_7x from Mike McCandless [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=decab01 ] LUCENE-8400: make BytesRefHash.compact public > Make BytesRefHash.compact public and @lucene.internal > - > > Key: LUCENE-8400 > URL: https://issues.apache.org/jira/browse/LUCENE-8400 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Michael McCandless >Assignee: Michael McCandless >Priority: Major > Fix For: master (8.0), 7.5 > > > {{BytesRefHash}} is an internal Lucene class, essentially providing a compact > representation of {{Map}}. It already has a public {{sort}} > method, which destructively compacts all entries and returns them, but its > {{compact}} method (which compacts all entries without the cost of sorting) > is not public ... I'd like to make it public, and also mark it > {{@lucene.internal}}.
[jira] [Commented] (LUCENE-8400) Make BytesRefHash.compact public and @lucene.internal
[ https://issues.apache.org/jira/browse/LUCENE-8400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547627#comment-16547627 ] ASF subversion and git services commented on LUCENE-8400: - Commit a2f113c5c65ad2a9cacc45a397f4e33eb403cadb in lucene-solr's branch refs/heads/master from Mike McCandless [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a2f113c ] LUCENE-8400: make BytesRefHash.compact public > Make BytesRefHash.compact public and @lucene.internal > - > > Key: LUCENE-8400 > URL: https://issues.apache.org/jira/browse/LUCENE-8400 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Michael McCandless >Assignee: Michael McCandless >Priority: Major > > {{BytesRefHash}} is an internal Lucene class, essentially providing a compact > representation of {{Map}}. It already has a public {{sort}} > method, which destructively compacts all entries and returns them, but its > {{compact}} method (which compacts all entries without the cost of sorting) > is not public ... I'd like to make it public, and also mark it > {{@lucene.internal}}.
[jira] [Commented] (LUCENE-8407) Add SpanTermQuery.getTermStates()
[ https://issues.apache.org/jira/browse/LUCENE-8407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547624#comment-16547624 ] Michael McCandless commented on LUCENE-8407: +1 > Add SpanTermQuery.getTermStates() > - > > Key: LUCENE-8407 > URL: https://issues.apache.org/jira/browse/LUCENE-8407 > Project: Lucene - Core > Issue Type: Improvement >Reporter: David Smiley >Assignee: David Smiley >Priority: Minor > > Adding a getTermStates() to the TermStates in a SpanTermQuery would be > useful, just like we have similarly for a TermQuery – LUCENE-8379. It would > be useful for LUCENE-6513 to avoid a needless inner ScoreTerm class when a > SpanTermQuery would suffice.
[jira] [Commented] (LUCENE-8409) Cannot delete individual fields from index
[ https://issues.apache.org/jira/browse/LUCENE-8409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547619#comment-16547619 ] Michael McCandless commented on LUCENE-8409: You should be able to make a {{FilterDirectoryReader}} that "pretends" your single valued field was in fact multi valued, then wrap that in {{SlowCodecReaderWrapper}} then pass that to {{IndexWriter.addIndexes}}? It will be quite costly though, since it's a full rewrite of the index. > Cannot delete individual fields from index > -- > > Key: LUCENE-8409 > URL: https://issues.apache.org/jira/browse/LUCENE-8409 > Project: Lucene - Core > Issue Type: Bug >Reporter: Cetra Free >Priority: Major > > This is based upon the following tickets: > https://issues.apache.org/jira/browse/SOLR-12185 > https://issues.apache.org/jira/browse/LUCENE-8235 > I'd like a way to be able to clear and recreate a specific field so I don't > have to completely reindex if I change a field type. > It's a real pain if you change a specific field from single valued to > multivalued, you have to delete the entire index from disk and start from > scratch. > As being able to modify a field is not an [intended > feature|https://issues.apache.org/jira/browse/LUCENE-8235?focusedCommentId=16530918=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16530918], > It'd be preferable if a field could be at least deleted and recreated to > deal with this scenario.
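The reader-wrapping approach Michael describes could look roughly like this. It is an outline only: it needs lucene-core on the classpath, and {{MyMultiValuedPretender}} is a hypothetical custom {{FilterDirectoryReader}} subclass, not a real class:

```java
// Outline of the suggested full rewrite; not runnable standalone.
DirectoryReader reader = DirectoryReader.open(oldDir);
// 1. Wrap the reader so the target field "pretends" to be multi-valued
//    (a custom FilterDirectoryReader overriding the relevant field access).
DirectoryReader wrapped = new MyMultiValuedPretender(reader, "myField");
// 2. Feed each wrapped leaf back through addIndexes; this performs a
//    full, costly rewrite of the index into newDir.
try (IndexWriter writer = new IndexWriter(newDir, new IndexWriterConfig())) {
  for (LeafReaderContext ctx : wrapped.leaves()) {
    writer.addIndexes(SlowCodecReaderWrapper.wrap(ctx.reader()));
  }
}
```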
[jira] [Commented] (LUCENE-8411) Remove fillFields from TopFieldCollector factory methods
[ https://issues.apache.org/jira/browse/LUCENE-8411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547615#comment-16547615 ] Michael McCandless commented on LUCENE-8411: +1 > Remove fillFields from TopFieldCollector factory methods > > > Key: LUCENE-8411 > URL: https://issues.apache.org/jira/browse/LUCENE-8411 > Project: Lucene - Core > Issue Type: Task >Reporter: Adrien Grand >Priority: Minor > Attachments: LUCENE-8411.patch > > > Adding sort values to FieldDoc instances is cheap, so there is no reason to > disable it. It's also important, e.g., to merge top hits from multiple shards > with TopDocs#merge.
[jira] [Commented] (LUCENE-8343) BlendedInfixSuggester bad score calculus for certain suggestion weights
[ https://issues.apache.org/jira/browse/LUCENE-8343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547613#comment-16547613 ] Michael McCandless commented on LUCENE-8343: Thank you [~alessandro.benedetti]! > BlendedInfixSuggester bad score calculus for certain suggestion weights > --- > > Key: LUCENE-8343 > URL: https://issues.apache.org/jira/browse/LUCENE-8343 > Project: Lucene - Core > Issue Type: Bug > Components: core/search >Affects Versions: 7.3.1 >Reporter: Alessandro Benedetti >Priority: Major > Attachments: LUCENE-8343.patch, LUCENE-8343.patch, LUCENE-8343.patch, > LUCENE-8343.patch > > Time Spent: 20m > Remaining Estimate: 0h > > Currently the BlendedInfixSuggester returns a (long) score to rank the > suggestions. > This score is calculated as a multiplication between: > long *Weight*: the suggestion weight, coming from a document field; it can > be any long value (including 1, 0, ...) > double *Coefficient*: 0<=x<=1, calculated based on the position match; > earlier is better > The resulting score is a long, which means that at the moment, any weight<10 > can bring inconsistencies. > *Edge cases* > Weight = 1 > Score = 1 (if we have a match at the beginning of the suggestion) or 0 (for > any other match) > Weight = 0 > Score = 0 (independently of the position match coefficient)
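The edge cases described in the issue follow directly from the long cast; this standalone sketch mirrors the weight-times-coefficient multiplication (the {{score}} method is illustrative, not the suggester's actual code):

```java
// Demonstrates why a long score of weight * coefficient loses information
// for small weights: the fractional part is truncated by the cast.
public class ScoreTruncationDemo {

    // Mirrors score = (long) (weight * coefficient), where 0 <= coefficient <= 1
    // rewards earlier position matches.
    static long score(long weight, double coefficient) {
        return (long) (weight * coefficient);
    }

    public static void main(String[] args) {
        System.out.println(score(1, 1.0));  // 1: match at the very beginning
        System.out.println(score(1, 0.5));  // 0: any later match collapses to zero
        System.out.println(score(0, 1.0));  // 0: weight 0 ignores position entirely
        System.out.println(score(9, 0.99)); // 8: even weight 9 barely separates positions
    }
}
```

The truncation means all position coefficients below 1.0 map to the same score when weight is 1, which is exactly the inconsistency reported for weights below 10.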
[jira] [Updated] (LUCENE-8343) BlendedInfixSuggester bad score calculus for certain suggestion weights
[ https://issues.apache.org/jira/browse/LUCENE-8343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alessandro Benedetti updated LUCENE-8343: - Attachment: LUCENE-8343.patch > BlendedInfixSuggester bad score calculus for certain suggestion weights > --- > > Key: LUCENE-8343 > URL: https://issues.apache.org/jira/browse/LUCENE-8343 > Project: Lucene - Core > Issue Type: Bug > Components: core/search >Affects Versions: 7.3.1 >Reporter: Alessandro Benedetti >Priority: Major > Attachments: LUCENE-8343.patch, LUCENE-8343.patch, LUCENE-8343.patch, > LUCENE-8343.patch > > Time Spent: 20m > Remaining Estimate: 0h > > Currently the BlendedInfixSuggester returns a (long) score to rank the > suggestions. > This score is calculated as a multiplication between: > long *Weight*: the suggestion weight, coming from a document field; it can > be any long value (including 1, 0, ...) > double *Coefficient*: 0<=x<=1, calculated based on the position match; > earlier is better > The resulting score is a long, which means that at the moment, any weight<10 > can bring inconsistencies. > *Edge cases* > Weight = 1 > Score = 1 (if we have a match at the beginning of the suggestion) or 0 (for > any other match) > Weight = 0 > Score = 0 (independently of the position match coefficient)
[jira] [Commented] (LUCENE-8343) BlendedInfixSuggester bad score calculus for certain suggestion weights
[ https://issues.apache.org/jira/browse/LUCENE-8343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547600#comment-16547600 ] Alessandro Benedetti commented on LUCENE-8343: -- Hi [~mikemccand], I just merged the branch with the latest master, no conflicts: [https://github.com/apache/lucene-solr/pull/398] I am now pushing the patch here so that all tests will be executed by Jenkins automatically. If this doesn't work in the next day or so, I will proceed to run the tests locally as well. > BlendedInfixSuggester bad score calculus for certain suggestion weights > --- > > Key: LUCENE-8343 > URL: https://issues.apache.org/jira/browse/LUCENE-8343 > Project: Lucene - Core > Issue Type: Bug > Components: core/search >Affects Versions: 7.3.1 >Reporter: Alessandro Benedetti >Priority: Major > Attachments: LUCENE-8343.patch, LUCENE-8343.patch, LUCENE-8343.patch, > LUCENE-8343.patch > > Time Spent: 20m > Remaining Estimate: 0h > > Currently the BlendedInfixSuggester returns a (long) score to rank the > suggestions. > This score is calculated as a multiplication between: > long *Weight*: the suggestion weight, coming from a document field; it can > be any long value (including 1, 0, ...) > double *Coefficient*: 0<=x<=1, calculated based on the position match; > earlier is better > The resulting score is a long, which means that at the moment, any weight<10 > can bring inconsistencies. > *Edge cases* > Weight = 1 > Score = 1 (if we have a match at the beginning of the suggestion) or 0 (for > any other match) > Weight = 0 > Score = 0 (independently of the position match coefficient)
[jira] [Commented] (SOLR-11431) Leader candidate cannot become leader if replica responds 500 to PeerSync
[ https://issues.apache.org/jira/browse/SOLR-11431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547567#comment-16547567 ] Peter Somogyi commented on SOLR-11431: -- [~markrmil...@gmail.com], could you check the pull request I posted? > Leader candidate cannot become leader if replica responds 500 to PeerSync > - > > Key: SOLR-11431 > URL: https://issues.apache.org/jira/browse/SOLR-11431 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Affects Versions: 7.0 >Reporter: Mano Kovacs >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > When the leader candidate does PeerSync to all replicas, to download any missing > updates, it is tolerant to failures. It uses the {{cantReachIsSuccess=true}} > switch, which handles connection issues, 404 and 503 as success, since replicas > being DOWN should not affect the process. > However, if a replica has disk issues, the core initialization might fail, and > that results in {{500}} instead of {{503}}. A failing replica like that can > prevent any other replica from becoming the leader. > Proposing either: > * Accepting {{500}} as "cant reach" so the leader candidate can go on > or > * Changing {{SolrCoreInitializationException}} to return {{503}} instead of > {{500}} > * * this might be an API change, however
[jira] [Created] (LUCENE-8411) Remove fillFields from TopFieldCollector factory methods
Adrien Grand created LUCENE-8411: Summary: Remove fillFields from TopFieldCollector factory methods Key: LUCENE-8411 URL: https://issues.apache.org/jira/browse/LUCENE-8411 Project: Lucene - Core Issue Type: Task Reporter: Adrien Grand Adding sort values to FieldDoc instances is cheap, so there is no reason to disable it. It's also important, e.g., to merge top hits from multiple shards with TopDocs#merge.
[JENKINS] Lucene-Solr-BadApples-7.x-Linux (64bit/jdk-9.0.4) - Build # 66 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-7.x-Linux/66/ Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseG1GC 11 tests failed. FAILED: org.apache.solr.cloud.TestPullReplica.testCreateDelete {seed=[E569700A303A0A92:4A2F3DBAF208398E]} Error Message: Stack Trace: java.lang.NullPointerException at org.apache.solr.cloud.TestPullReplica.testCreateDelete(TestPullReplica.java:174) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:844) FAILED: org.apache.solr.cloud.api.collections.ShardSplitTest.testSplitMixedReplicaTypes Error Message: unexpected shard state expected: but was: Stack Trace: java.lang.AssertionError: unexpected shard state expected: but was: at __randomizedtesting.SeedInfo.seed([E569700A303A0A92:5DAA24AACCE1DFE7]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at