[jira] [Created] (LUCENE-6640) Remove dependency of lucene/suggest on oal.search.Filter
Adrien Grand created LUCENE-6640: Summary: Remove dependency of lucene/suggest on oal.search.Filter Key: LUCENE-6640 URL: https://issues.apache.org/jira/browse/LUCENE-6640 Project: Lucene - Core Issue Type: Task Reporter: Adrien Grand Assignee: Adrien Grand oal.search.Filter is on the way out, yet the suggest module is still using it in order to filter suggestions and it can't be replaced with queries given that true (out-of-order) random access is required. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-7736) Add a test for ZkController.publishAndWaitForDownStates
Shalin Shekhar Mangar created SOLR-7736: --- Summary: Add a test for ZkController.publishAndWaitForDownStates Key: SOLR-7736 URL: https://issues.apache.org/jira/browse/SOLR-7736 Project: Solr Issue Type: Test Components: SolrCloud, Tests Reporter: Shalin Shekhar Mangar Priority: Minor Fix For: 5.3, Trunk Add a test for ZkController.publishAndWaitForDownStates so that bugs like SOLR-6665 do not occur again. A test exists, but it is not correct and is currently disabled via AwaitsFix.
[jira] [Commented] (SOLR-7736) Add a test for ZkController.publishAndWaitForDownStates
[ https://issues.apache.org/jira/browse/SOLR-7736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14607980#comment-14607980 ] ASF subversion and git services commented on SOLR-7736: --- Commit 1688406 from sha...@apache.org in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1688406 ] SOLR-7736: Point to the correct jira issue for awaits fix Add a test for ZkController.publishAndWaitForDownStates --- Key: SOLR-7736 URL: https://issues.apache.org/jira/browse/SOLR-7736 Project: Solr Issue Type: Test Components: SolrCloud, Tests Reporter: Shalin Shekhar Mangar Priority: Minor Fix For: 5.3, Trunk Add a test for ZkController.publishAndWaitForDownStates so that bugs like SOLR-6665 do not occur again. A test exists, but it is not correct and is currently disabled via AwaitsFix.
[jira] [Commented] (LUCENE-6641) Idea CodeSyle should be enriched
[ https://issues.apache.org/jira/browse/LUCENE-6641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14607984#comment-14607984 ] Michael McCandless commented on LUCENE-6641: bq. We should define a complete standard Lucene uses Sun's Java coding conventions, apparently moved here: http://www.oracle.com/technetwork/java/codeconvtoc-136057.html With one exception: 2-space indent, not 4. Idea CodeSyle should be enriched Key: LUCENE-6641 URL: https://issues.apache.org/jira/browse/LUCENE-6641 Project: Lucene - Core Issue Type: Improvement Components: -tools Affects Versions: 5.2.1 Reporter: Alessandro Benedetti Labels: codestyle Attachments: Eclipse-Lucene-Codestyle.xml Currently the IDEA CodeStyle has been fixed for the latest IntelliJ IDEA version, but it is not complete. For example, it does not contain spacing settings (spaces within method parameters, spaces between operators, etc.) for the Java language (and maybe for the other languages involved as well). We should define a complete standard (as some inconsistencies exist in the committed code as well).
Re: [CI] Lucene 5x Linux 64 Test Only - Build # 53666 - Failure!
Aha! This is from a new assert I recently committed to try to understand a prior build failure. I cannot explain how current could possibly be null in this code. I'll open an issue... Mike McCandless On Tue, Jun 30, 2015 at 4:33 AM, bu...@elastic.co wrote: *BUILD FAILURE* Build URL http://build-eu-00.elastic.co/job/lucene_linux_java8_64_test_only/53666/ Project:lucene_linux_java8_64_test_only Randomization: JDKEA9,network,heap[1024m],-server +UseSerialGC +UseCompressedOops +AggressiveOpts,sec manager on Date of build:Tue, 30 Jun 2015 10:30:40 +0200 Build duration:2 min 29 sec *CHANGES* No Changes *BUILD ARTIFACTS* checkout/lucene/build/core/test/temp/junit4-J0-20150630_103101_128.events http://build-eu-00.elastic.co/job/lucene_linux_java8_64_test_only/53666/artifact/checkout/lucene/build/core/test/temp/junit4-J0-20150630_103101_128.events checkout/lucene/build/core/test/temp/junit4-J1-20150630_103101_128.events http://build-eu-00.elastic.co/job/lucene_linux_java8_64_test_only/53666/artifact/checkout/lucene/build/core/test/temp/junit4-J1-20150630_103101_128.events checkout/lucene/build/core/test/temp/junit4-J2-20150630_103101_128.events http://build-eu-00.elastic.co/job/lucene_linux_java8_64_test_only/53666/artifact/checkout/lucene/build/core/test/temp/junit4-J2-20150630_103101_128.events checkout/lucene/build/core/test/temp/junit4-J3-20150630_103101_128.events http://build-eu-00.elastic.co/job/lucene_linux_java8_64_test_only/53666/artifact/checkout/lucene/build/core/test/temp/junit4-J3-20150630_103101_128.events checkout/lucene/build/core/test/temp/junit4-J4-20150630_103101_128.events http://build-eu-00.elastic.co/job/lucene_linux_java8_64_test_only/53666/artifact/checkout/lucene/build/core/test/temp/junit4-J4-20150630_103101_128.events checkout/lucene/build/core/test/temp/junit4-J5-20150630_103101_128.events http://build-eu-00.elastic.co/job/lucene_linux_java8_64_test_only/53666/artifact/checkout/lucene/build/core/test/temp/junit4-J5-20150630_103101_128.events 
checkout/lucene/build/core/test/temp/junit4-J6-20150630_103101_128.events http://build-eu-00.elastic.co/job/lucene_linux_java8_64_test_only/53666/artifact/checkout/lucene/build/core/test/temp/junit4-J6-20150630_103101_128.events checkout/lucene/build/core/test/temp/junit4-J7-20150630_103101_128.events http://build-eu-00.elastic.co/job/lucene_linux_java8_64_test_only/53666/artifact/checkout/lucene/build/core/test/temp/junit4-J7-20150630_103101_128.events *FAILED JUNIT TESTS* Name: org.apache.lucene.index Failed: 1 test(s), Passed: 787 test(s), Skipped: 23 test(s), Total: 811 test(s) *Failed: org.apache.lucene.index.TestIndexWriterMerging.testForceMergeDeletes * *CONSOLE OUTPUT* [...truncated 1788 lines...] [junit4] Tests with failures: [junit4] - org.apache.lucene.index.TestIndexWriterMerging.testForceMergeDeletes [junit4] [junit4] [junit4] JVM J0: 1.31 .. 122.99 = 121.69s [junit4] JVM J1: 1.11 .. 127.08 = 125.97s [junit4] JVM J2: 1.13 .. 123.78 = 122.65s [junit4] JVM J3: 1.11 .. 122.06 = 120.95s [junit4] JVM J4: 1.12 .. 123.38 = 122.26s [junit4] JVM J5: 1.12 .. 122.72 = 121.60s [junit4] JVM J6: 1.12 .. 122.63 = 121.51s [junit4] JVM J7: 1.31 .. 
122.88 = 121.57s [junit4] Execution time total: 2 minutes 7 seconds [junit4] Tests summary: 402 suites, 3269 tests, 1 failure, 49 ignored (45 assumptions) BUILD FAILED /home/jenkins/workspace/lucene_linux_java8_64_test_only/checkout/lucene/build.xml:50: The following error occurred while executing this line: /home/jenkins/workspace/lucene_linux_java8_64_test_only/checkout/lucene/common-build.xml:1444: The following error occurred while executing this line: /home/jenkins/workspace/lucene_linux_java8_64_test_only/checkout/lucene/common-build.xml:999: There were test failures: 402 suites, 3269 tests, 1 failure, 49 ignored (45 assumptions) Total time: 2 minutes 15 seconds Build step 'Invoke Ant' marked build as failure Archiving artifacts Recording test results [description-setter] Description set: JDKEA9,network,heap[1024m],-server +UseSerialGC +UseCompressedOops +AggressiveOpts,sec manager on Email was triggered for: Failure - 1st Trigger Failure - Any was overridden by another trigger and will not send an email. Trigger Failure - Still was overridden by another trigger and will not send an email. Sending email for trigger: Failure - 1st
[jira] [Commented] (LUCENE-6514) 'ant idea' no longer includes code style settings
[ https://issues.apache.org/jira/browse/LUCENE-6514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14607932#comment-14607932 ] Alessandro Benedetti commented on LUCENE-6514: -- Thanks Steve, let me open the new issue! https://issues.apache.org/jira/browse/LUCENE-6641 Can we discuss the standard there? It is no problem for me to contribute it, but I don't currently know where to find the exhaustive standard. Cheers 'ant idea' no longer includes code style settings - Key: LUCENE-6514 URL: https://issues.apache.org/jira/browse/LUCENE-6514 Project: Lucene - Core Issue Type: Bug Components: -tools Reporter: Steve Rowe Assignee: Steve Rowe Priority: Minor Fix For: 5.3 Attachments: LUCENE-6514.patch Fairly recently, with modern versions of IntelliJ (14+?), code style settings are no longer included when you run {{ant idea}}, so you get the defaults (4 spaces per indent level rather than 2, etc.). The {{.idea/projectCodeStyle.xml}} file is still there, so something has changed.
[jira] [Commented] (SOLR-7673) Race condition in shard splitting waiting for sub-shard replicas to go into recovery
[ https://issues.apache.org/jira/browse/SOLR-7673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14607939#comment-14607939 ] ASF subversion and git services commented on SOLR-7673: --- Commit 1688396 from sha...@apache.org in branch 'dev/trunk' [ https://svn.apache.org/r1688396 ] SOLR-7673: Race condition in shard splitting can cause operation to hang indefinitely or sub-shards to never become active Race condition in shard splitting waiting for sub-shard replicas to go into recovery Key: SOLR-7673 URL: https://issues.apache.org/jira/browse/SOLR-7673 Project: Solr Issue Type: Bug Components: SolrCloud Affects Versions: 4.10.4, 5.2 Reporter: Shalin Shekhar Mangar Assignee: Shalin Shekhar Mangar Fix For: 5.3, Trunk Attachments: Lucene-Solr-Tests-trunk-Java8-69.log, SOLR-7673.patch I found this while digging into the failure on https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/69/ {code} [junit4] 2 390032 INFO (OverseerStateUpdate-93987739315601411-127.0.0.1:57926_-n_00) [n:127.0.0.1:57926_] o.a.s.c.o.ReplicaMutator Update state numShards=1 message={ [junit4] 2 core:testasynccollectioncreation_shard1_0_replica2, [junit4] 2 core_node_name:core_node5, [junit4] 2 roles:null, [junit4] 2 base_url:http://127.0.0.1:38702;, [junit4] 2 node_name:127.0.0.1:38702_, [junit4] 2 numShards:1, [junit4] 2 state:active, [junit4] 2 shard:shard1_0, [junit4] 2 collection:testasynccollectioncreation, [junit4] 2 operation:state} [junit4] 2 390033 INFO (OverseerStateUpdate-93987739315601411-127.0.0.1:57926_-n_00) [n:127.0.0.1:57926_] o.a.s.c.o.ZkStateWriter going to update_collection /collections/testasynccollectioncreation/state.json version: 18 [junit4] 2 390033 INFO (RecoveryThread-testasynccollectioncreation_shard1_0_replica2) [n:127.0.0.1:38702_ c:testasynccollectioncreation s:shard1_0 r:core_node5 x:testasynccollectioncreation_shard1_0_replica2] o.a.s.u.UpdateLog Took 2 ms to seed version buckets with highest version 1503803851028824064 
[junit4] 2 390033 INFO (zkCallback-167-thread-1-processing-n:127.0.0.1:57926_) [n:127.0.0.1:57926_ ] o.a.s.c.c.ZkStateReader A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/testasynccollectioncreation/state.json for collection testasynccollectioncreation has occurred - updating... (live nodes size: 2) [junit4] 2 390033 INFO (RecoveryThread-testasynccollectioncreation_shard1_0_replica2) [n:127.0.0.1:38702_ c:testasynccollectioncreation s:shard1_0 r:core_node5 x:testasynccollectioncreation_shard1_0_replica2] o.a.s.c.RecoveryStrategy Finished recovery process. [junit4] 2 390033 INFO (zkCallback-173-thread-1-processing-n:127.0.0.1:38702_) [n:127.0.0.1:38702_ ] o.a.s.c.c.ZkStateReader A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/testasynccollectioncreation/state.json for collection testasynccollectioncreation has occurred - updating... (live nodes size: 2) [junit4] 2 390034 INFO (zkCallback-167-thread-1-processing-n:127.0.0.1:57926_) [n:127.0.0.1:57926_ ] o.a.s.c.c.ZkStateReader Updating data for testasynccollectioncreation to ver 19 [junit4] 2 390034 INFO (zkCallback-173-thread-1-processing-n:127.0.0.1:38702_) [n:127.0.0.1:38702_ ] o.a.s.c.c.ZkStateReader Updating data for testasynccollectioncreation to ver 19 [junit4] 2 390298 INFO (qtp1859109869-1664) [n:127.0.0.1:38702_] o.a.s.h.a.CollectionsHandler Invoked Collection Action :requeststatus with paramsrequestid=1004action=REQUESTSTATUSwt=javabinversion=2 [junit4] 2 390299 INFO (qtp1859109869-1664) [n:127.0.0.1:38702_] o.a.s.s.SolrDispatchFilter [admin] webapp=null path=/admin/collections params={requestid=1004action=REQUESTSTATUSwt=javabinversion=2} status=0 QTime=1 [junit4] 2 390348 INFO (qtp1859109869-1665) [n:127.0.0.1:38702_] o.a.s.h.a.CoreAdminHandler Checking request status for : 10042599422118342586 [junit4] 2 390348 INFO (qtp1859109869-1665) [n:127.0.0.1:38702_] o.a.s.s.SolrDispatchFilter [admin] webapp=null 
path=/admin/cores params={qt=/admin/coresrequestid=10042599422118342586action=REQUESTSTATUSwt=javabinversion=2} status=0 QTime=1 [junit4] 2 390348 INFO (OverseerThreadFactory-891-thread-4-processing-n:127.0.0.1:57926_) [n:127.0.0.1:57926_] o.a.s.c.OverseerCollectionProcessor Asking sub shard leader to wait for: testasynccollectioncreation_shard1_0_replica2 to be alive on: 127.0.0.1:38702_ [junit4]
[jira] [Updated] (LUCENE-6641) Idea CodeSyle should be enriched
[ https://issues.apache.org/jira/browse/LUCENE-6641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alessandro Benedetti updated LUCENE-6641: - Attachment: Eclipse-Lucene-Codestyle.xml I found the Eclipse CodeStyle. Apparently, regarding spacing, it wants: - private int test(int a,int b) - no spaces in the declaration - expanded.equals(n) - no spaces in the invocation And I found a lot of cases that satisfy this scenario. If we agree on that, the reason it was not in codeStyle.xml is probably that IntelliJ IDEA provides it by default! Idea CodeSyle should be enriched Key: LUCENE-6641 URL: https://issues.apache.org/jira/browse/LUCENE-6641 Project: Lucene - Core Issue Type: Improvement Components: -tools Affects Versions: 5.2.1 Reporter: Alessandro Benedetti Labels: codestyle Attachments: Eclipse-Lucene-Codestyle.xml Currently the IDEA CodeStyle has been fixed for the latest IntelliJ IDEA version, but it is not complete. For example, it does not contain spacing settings (spaces within method parameters, spaces between operators, etc.) for the Java language (and maybe for the other languages involved as well). We should define a complete standard (as some inconsistencies exist in the committed code as well).
[jira] [Created] (LUCENE-6641) Idea CodeSyle should be enriched
Alessandro Benedetti created LUCENE-6641: Summary: Idea CodeSyle should be enriched Key: LUCENE-6641 URL: https://issues.apache.org/jira/browse/LUCENE-6641 Project: Lucene - Core Issue Type: Improvement Components: -tools Affects Versions: 5.2.1 Reporter: Alessandro Benedetti Currently the IDEA CodeStyle has been fixed for the latest IntelliJ IDEA version, but it is not complete. For example, it does not contain spacing settings (spaces within method parameters, spaces between operators, etc.) for the Java language (and maybe for the other languages involved as well). We should define a complete standard (as some inconsistencies exist in the committed code as well).
[jira] [Updated] (SOLR-7143) MoreLikeThis Query Parser does not handle multiple field names
[ https://issues.apache.org/jira/browse/SOLR-7143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anshum Gupta updated SOLR-7143: --- Attachment: SOLR-7143.patch MoreLikeThis Query Parser does not handle multiple field names -- Key: SOLR-7143 URL: https://issues.apache.org/jira/browse/SOLR-7143 Project: Solr Issue Type: Bug Components: query parsers Affects Versions: 5.0 Reporter: Jens Wille Assignee: Anshum Gupta Attachments: SOLR-7143.patch, SOLR-7143.patch, SOLR-7143.patch, SOLR-7143.patch, SOLR-7143.patch The newly introduced MoreLikeThis Query Parser (SOLR-6248) does not return any results when supplied with multiple fields in the {{qf}} parameter. To reproduce within the techproducts example, compare: {code} curl 'http://localhost:8983/solr/techproducts/select?q=%7B!mlt+qf=name%7DMA147LL/A' curl 'http://localhost:8983/solr/techproducts/select?q=%7B!mlt+qf=features%7DMA147LL/A' curl 'http://localhost:8983/solr/techproducts/select?q=%7B!mlt+qf=name,features%7DMA147LL/A' {code} The first two queries return 8 and 5 results, respectively. The third query doesn't return any results (not even the matched document). In contrast, the MoreLikeThis Handler works as expected (accounting for the default {{mintf}} and {{mindf}} values in SimpleMLTQParser): {code} curl 'http://localhost:8983/solr/techproducts/mlt?q=id:MA147LL/A&mlt.fl=name&mlt.mintf=1&mlt.mindf=1' curl 'http://localhost:8983/solr/techproducts/mlt?q=id:MA147LL/A&mlt.fl=features&mlt.mintf=1&mlt.mindf=1' curl 'http://localhost:8983/solr/techproducts/mlt?q=id:MA147LL/A&mlt.fl=name,features&mlt.mintf=1&mlt.mindf=1' {code} After adding the following line to {{example/techproducts/solr/techproducts/conf/solrconfig.xml}}: {code:language=XML} <requestHandler name="/mlt" class="solr.MoreLikeThisHandler" /> {code} The first two queries return 7 and 4 results, respectively (excluding the matched document). The third query returns 7 results, as one would expect.
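One plausible direction for the fix described above is to split the {{qf}} value into separate field names instead of treating "name,features" as a single field. The sketch below is an assumption for illustration only; the class and method names are invented and the exact splitting rules of the committed patch may differ:

```java
import java.util.Arrays;

// Hypothetical sketch: split a comma- or whitespace-separated qf parameter
// into individual field names, so {!mlt qf=name,features} can query both
// "name" and "features" rather than a nonexistent field "name,features".
public class QfFieldSplitting {
    static String[] parseQf(String qf) {
        // Split on commas and/or runs of whitespace after trimming.
        return qf.trim().split("[,\\s]+");
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(parseQf("name,features"))); // [name, features]
        System.out.println(Arrays.toString(parseQf("name")));          // [name]
    }
}
```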
[jira] [Updated] (LUCENE-6640) Remove dependency of lucene/suggest on oal.search.Filter
[ https://issues.apache.org/jira/browse/LUCENE-6640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adrien Grand updated LUCENE-6640: - Attachment: LUCENE-6640.patch Here is a proposal: add a new API called {{BitsProducer}} with the following definition: {code} /** A producer of {@link Bits} per segment. */ public abstract class BitsProducer { /** Return {@link Bits} for the given leaf. The returned instance must * be non-null and have a {@link Bits#length() length} equal to * {@link LeafReader#maxDoc() maxDoc}. */ public abstract Bits getBits(LeafReaderContext context) throws IOException; } {code} And use it in lieu of Filter. Tests pass and I think it has the benefit of making the API a bit less trappy in the sense that it's now obvious that random-access is required? Remove dependency of lucene/suggest on oal.search.Filter Key: LUCENE-6640 URL: https://issues.apache.org/jira/browse/LUCENE-6640 Project: Lucene - Core Issue Type: Task Reporter: Adrien Grand Assignee: Adrien Grand Attachments: LUCENE-6640.patch oal.search.Filter is on the way out, yet the suggest module is still using it in order to filter suggestions and it can't be replaced with queries given that true (out-of-order) random access is required.
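The BitsProducer proposal above can be illustrated with a simplified, self-contained analogue. The real API works with Lucene's Bits and LeafReaderContext; every class name below (SegmentBits, Segment, FixedBitsProducer, EvenDocsProducer) is invented for illustration and is not part of the patch:

```java
// Simplified, standalone analogue of the proposed per-segment BitsProducer
// pattern: a producer hands back random-access bits per segment, with
// length() == maxDoc, and no iterator/in-order contract is involved.
public class BitsProducerSketch {

    /** Minimal stand-in for Lucene's Bits: random access over maxDoc docs. */
    interface SegmentBits {
        boolean get(int docId);
        int length();
    }

    /** Stand-in for a per-segment context carrying only maxDoc. */
    static final class Segment {
        final int maxDoc;
        Segment(int maxDoc) { this.maxDoc = maxDoc; }
    }

    /** The shape of the proposed API: non-null bits per segment. */
    static abstract class FixedBitsProducer {
        abstract SegmentBits getBits(Segment context);
    }

    /** Example producer that accepts only even doc ids. */
    static final class EvenDocsProducer extends FixedBitsProducer {
        @Override
        SegmentBits getBits(Segment context) {
            return new SegmentBits() {
                public boolean get(int docId) { return docId % 2 == 0; }
                public int length() { return context.maxDoc; }
            };
        }
    }

    public static void main(String[] args) {
        SegmentBits bits = new EvenDocsProducer().getBits(new Segment(8));
        // Out-of-order random access is fine, which is exactly what the
        // suggest module needs and what Query-based filtering cannot offer.
        System.out.println(bits.get(6) + " " + bits.get(3) + " " + bits.length());
    }
}
```

The design point is that the contract (random access, non-null, length == maxDoc) is visible in the type itself, which is what makes it less trappy than Filter.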
[jira] [Updated] (SOLR-7673) Race condition in shard splitting can cause operation to hang indefinitely or sub-shards to never become active
[ https://issues.apache.org/jira/browse/SOLR-7673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shalin Shekhar Mangar updated SOLR-7673: Summary: Race condition in shard splitting can cause operation to hang indefinitely or sub-shards to never become active (was: Race condition in shard splitting waiting for sub-shard replicas to go into recovery) Race condition in shard splitting can cause operation to hang indefinitely or sub-shards to never become active --- Key: SOLR-7673 URL: https://issues.apache.org/jira/browse/SOLR-7673 Project: Solr Issue Type: Bug Components: SolrCloud Affects Versions: 4.10.4, 5.2 Reporter: Shalin Shekhar Mangar Assignee: Shalin Shekhar Mangar Fix For: 5.3, Trunk Attachments: Lucene-Solr-Tests-trunk-Java8-69.log, SOLR-7673.patch I found this while digging into the failure on https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/69/ {code} [junit4] 2 390032 INFO (OverseerStateUpdate-93987739315601411-127.0.0.1:57926_-n_00) [n:127.0.0.1:57926_] o.a.s.c.o.ReplicaMutator Update state numShards=1 message={ [junit4] 2 core:testasynccollectioncreation_shard1_0_replica2, [junit4] 2 core_node_name:core_node5, [junit4] 2 roles:null, [junit4] 2 base_url:http://127.0.0.1:38702;, [junit4] 2 node_name:127.0.0.1:38702_, [junit4] 2 numShards:1, [junit4] 2 state:active, [junit4] 2 shard:shard1_0, [junit4] 2 collection:testasynccollectioncreation, [junit4] 2 operation:state} [junit4] 2 390033 INFO (OverseerStateUpdate-93987739315601411-127.0.0.1:57926_-n_00) [n:127.0.0.1:57926_] o.a.s.c.o.ZkStateWriter going to update_collection /collections/testasynccollectioncreation/state.json version: 18 [junit4] 2 390033 INFO (RecoveryThread-testasynccollectioncreation_shard1_0_replica2) [n:127.0.0.1:38702_ c:testasynccollectioncreation s:shard1_0 r:core_node5 x:testasynccollectioncreation_shard1_0_replica2] o.a.s.u.UpdateLog Took 2 ms to seed version buckets with highest version 1503803851028824064 [junit4] 2 390033 INFO 
(zkCallback-167-thread-1-processing-n:127.0.0.1:57926_) [n:127.0.0.1:57926_ ] o.a.s.c.c.ZkStateReader A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/testasynccollectioncreation/state.json for collection testasynccollectioncreation has occurred - updating... (live nodes size: 2) [junit4] 2 390033 INFO (RecoveryThread-testasynccollectioncreation_shard1_0_replica2) [n:127.0.0.1:38702_ c:testasynccollectioncreation s:shard1_0 r:core_node5 x:testasynccollectioncreation_shard1_0_replica2] o.a.s.c.RecoveryStrategy Finished recovery process. [junit4] 2 390033 INFO (zkCallback-173-thread-1-processing-n:127.0.0.1:38702_) [n:127.0.0.1:38702_ ] o.a.s.c.c.ZkStateReader A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/testasynccollectioncreation/state.json for collection testasynccollectioncreation has occurred - updating... (live nodes size: 2) [junit4] 2 390034 INFO (zkCallback-167-thread-1-processing-n:127.0.0.1:57926_) [n:127.0.0.1:57926_ ] o.a.s.c.c.ZkStateReader Updating data for testasynccollectioncreation to ver 19 [junit4] 2 390034 INFO (zkCallback-173-thread-1-processing-n:127.0.0.1:38702_) [n:127.0.0.1:38702_ ] o.a.s.c.c.ZkStateReader Updating data for testasynccollectioncreation to ver 19 [junit4] 2 390298 INFO (qtp1859109869-1664) [n:127.0.0.1:38702_] o.a.s.h.a.CollectionsHandler Invoked Collection Action :requeststatus with paramsrequestid=1004action=REQUESTSTATUSwt=javabinversion=2 [junit4] 2 390299 INFO (qtp1859109869-1664) [n:127.0.0.1:38702_] o.a.s.s.SolrDispatchFilter [admin] webapp=null path=/admin/collections params={requestid=1004action=REQUESTSTATUSwt=javabinversion=2} status=0 QTime=1 [junit4] 2 390348 INFO (qtp1859109869-1665) [n:127.0.0.1:38702_] o.a.s.h.a.CoreAdminHandler Checking request status for : 10042599422118342586 [junit4] 2 390348 INFO (qtp1859109869-1665) [n:127.0.0.1:38702_] o.a.s.s.SolrDispatchFilter [admin] webapp=null path=/admin/cores 
params={qt=/admin/coresrequestid=10042599422118342586action=REQUESTSTATUSwt=javabinversion=2} status=0 QTime=1 [junit4] 2 390348 INFO (OverseerThreadFactory-891-thread-4-processing-n:127.0.0.1:57926_) [n:127.0.0.1:57926_] o.a.s.c.OverseerCollectionProcessor Asking sub shard leader to wait for: testasynccollectioncreation_shard1_0_replica2 to be alive on: 127.0.0.1:38702_ [junit4] 2 390349 INFO
[jira] [Closed] (SOLR-6665) ZkController.publishAndWaitForDownStates should not use core name
[ https://issues.apache.org/jira/browse/SOLR-6665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shalin Shekhar Mangar closed SOLR-6665. --- ZkController.publishAndWaitForDownStates should not use core name - Key: SOLR-6665 URL: https://issues.apache.org/jira/browse/SOLR-6665 Project: Solr Issue Type: Bug Components: SolrCloud Affects Versions: 4.10.1, 4.10.4, 5.1 Reporter: Shalin Shekhar Mangar Assignee: Shalin Shekhar Mangar Priority: Minor Fix For: 5.2, Trunk Attachments: SOLR-6665.patch ZkController.publishAndWaitForDownStates uses a List<String> to keep track of all core names that have been published as down. It should use a set of coreNodeNames instead of core names for correctness.
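The correctness problem described in SOLR-6665 can be sketched in a few lines (this is not Solr's actual code; the helper and the example names are invented): core names may repeat across nodes, while coreNodeNames are unique per replica, so a tracker keyed by core name silently conflates distinct replicas.

```java
import java.util.Collection;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative sketch of why tracking published-down replicas by core name
// is wrong: two replicas on different nodes can share a core name, and a
// name-keyed set collapses them into one entry, losing a replica.
public class DownStateTracking {
    /** Track replicas still publishing "down", keyed by the given ids. */
    static Set<String> pendingDown(Collection<String> ids) {
        return new HashSet<>(ids);
    }

    public static void main(String[] args) {
        // Same core name on two nodes collapses to one tracked entry:
        Set<String> byCoreName = pendingDown(
            List.of("collection1_shard1_replica1", "collection1_shard1_replica1"));
        System.out.println(byCoreName.size()); // 1: one replica "disappeared"

        // coreNodeNames stay distinct, so both replicas are tracked:
        Set<String> byCoreNodeName = pendingDown(List.of("core_node1", "core_node2"));
        System.out.println(byCoreNodeName.size()); // 2
    }
}
```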
[jira] [Created] (SOLR-7735) Look for solr.xml in Zookeeper by default
Jan Høydahl created SOLR-7735: - Summary: Look for solr.xml in Zookeeper by default Key: SOLR-7735 URL: https://issues.apache.org/jira/browse/SOLR-7735 Project: Solr Issue Type: Improvement Components: SolrCloud Affects Versions: 5.2.1 Reporter: Jan Høydahl Fix For: 5.3 In SOLR-4718 support was added for reading {{solr.xml}} from ZK by providing the system property {{solr.solrxml.location=zookeeper}}. If this property is not specified, solr.xml is read from SOLR_HOME. This issue is a spinoff from SOLR-7727, where the start scripts do not support solr.xml in ZK. Instead of adding this to the start scripts, I suggest simplifying the whole logic: * If not in cloud mode, require {{solr.xml}} in SOLR_HOME as today * If in cloud mode, first look for {{solr.xml}} in ZK, but if not found, load from SOLR_HOME as today * Remove the need for the {{solr.solrxml.location}} property
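The proposed lookup order can be sketched as a tiny decision function (a hedged illustration only; the method, enum, and parameter names are invented and do not reflect Solr's API):

```java
// Sketch of the resolution order proposed in SOLR-7735:
// cloud mode + solr.xml present in ZK -> read from ZK;
// every other case -> read from SOLR_HOME, as today.
public class SolrXmlLookup {
    enum Source { ZOOKEEPER, SOLR_HOME }

    static Source resolveSolrXml(boolean cloudMode, boolean existsInZk) {
        if (cloudMode && existsInZk) {
            return Source.ZOOKEEPER;  // cloud mode: prefer solr.xml from ZK
        }
        return Source.SOLR_HOME;      // standalone, or ZK has no solr.xml
    }

    public static void main(String[] args) {
        System.out.println(resolveSolrXml(false, true));  // SOLR_HOME
        System.out.println(resolveSolrXml(true, false));  // SOLR_HOME
        System.out.println(resolveSolrXml(true, true));   // ZOOKEEPER
    }
}
```

Note how this removes the need for the {{solr.solrxml.location}} property entirely: the mode and the presence of the file decide.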
[jira] [Commented] (SOLR-7727) bin/solr script incorrectly requires solr.xml in $SOLR_HOME
[ https://issues.apache.org/jira/browse/SOLR-7727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14607798#comment-14607798 ] Jan Høydahl commented on SOLR-7727: --- Created spinoff issue SOLR-7735 for this bin/solr script incorrectly requires solr.xml in $SOLR_HOME --- Key: SOLR-7727 URL: https://issues.apache.org/jira/browse/SOLR-7727 Project: Solr Issue Type: Bug Components: scripts and tools Affects Versions: 5.2.1 Reporter: Crawdaddy Assignee: Jan Høydahl Fix For: 5.3, Trunk Attachments: SOLR-7727.patch The bin/solr script currently requires solr.xml to exist in $SOLR_HOME, going against SOLR-4718, which allows it to be hosted in Zookeeper. Line 1219: if [ ! -e "$SOLR_HOME/solr.xml" ]; then echo -e "\nSolr home directory $SOLR_HOME must contain a solr.xml file!\n" exit 1 fi Commenting this out, and specifying the following in one's solr.in.sh, allows solr.xml to once again be hosted in ZK: SOLR_OPTS="$SOLR_OPTS -Dsolr.solrxml.location=zookeeper"
[jira] [Commented] (LUCENE-6639) LRUQueryCache.CachingWrapperWeight not calling policy.onUse() if the first scorer is skipped
[ https://issues.apache.org/jira/browse/LUCENE-6639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14607868#comment-14607868 ] Adrien Grand commented on LUCENE-6639: -- I think there are two natural places where we can call policy.onUse(): Query.createWeight and Weight.scorer. Neither of them is perfect: if we put it in Query.createWeight, we will over-count how many times the query has been used if the weight is created yet never used, or under-count if the weight is used several times. And in Weight.scorer we under-count if no scorer is pulled on the first leaf, as you noticed. Out of these two options, Weight.scorer is my favourite because it can never over-count, which I think is better given that caching too much is probably more harmful than not caching enough? We could try to do something more sophisticated but I'm not sure it would be worth the complexity? LRUQueryCache.CachingWrapperWeight not calling policy.onUse() if the first scorer is skipped Key: LUCENE-6639 URL: https://issues.apache.org/jira/browse/LUCENE-6639 Project: Lucene - Core Issue Type: Bug Affects Versions: 5.3 Reporter: Terry Smith Priority: Minor Attachments: LUCENE-6639.patch The method {{org.apache.lucene.search.LRUQueryCache.CachingWrapperWeight.scorer(LeafReaderContext)}} starts with {code} if (context.ord == 0) { policy.onUse(getQuery()); } {code} which can result in a missed call for queries that return a null scorer for the first segment.
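The miscounting described in this issue can be modeled with a toy sketch (this is not Lucene's code; both helper methods below are invented purely to contrast the two behaviors): if usage is recorded only when the first segment's scorer (ord == 0) is requested, a query whose first segment is skipped is never counted, whereas recording on the first scorer() call, whatever its ord, counts it exactly once.

```java
import java.util.List;

// Toy model of LRUQueryCache usage counting. requestedOrds simulates the
// sequence of segment ordinals for which a scorer is actually pulled.
public class OnUseCounting {
    /** The pattern quoted in the issue: count only when ord == 0 is scored. */
    static int countBuggy(List<Integer> requestedOrds) {
        int onUseCalls = 0;
        for (int ord : requestedOrds) {
            if (ord == 0) onUseCalls++;
        }
        return onUseCalls;
    }

    /** Count once on the first scorer() call, regardless of its ord. */
    static int countFixed(List<Integer> requestedOrds) {
        return requestedOrds.isEmpty() ? 0 : 1;
    }

    public static void main(String[] args) {
        List<Integer> skipFirst = List.of(1, 2, 3); // segment 0 never scored
        System.out.println(countBuggy(skipFirst));  // 0 -> usage missed
        System.out.println(countFixed(skipFirst));  // 1 -> usage recorded
    }
}
```

Note the fixed variant can still only under-count (never over-count), which is the property argued for in the comment above.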
[jira] [Updated] (SOLR-7673) Race condition in shard splitting waiting for sub-shard replicas to go into recovery
[ https://issues.apache.org/jira/browse/SOLR-7673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shalin Shekhar Mangar updated SOLR-7673: Attachment: SOLR-7673.patch Here's a patch which fixes both the race conditions. This patch creates the replicas in cluster state first (by submitting an addreplica command to the overseer queue), then it calls updateshardstate to set the sub-slices to 'recovery' state and then it invokes addreplica action to actually create the replica cores. A new flag called skipCreateReplicaInClusterState is added to the addreplica action so that OCP doesn't try to create the replica in cluster state again. This is a hack to avoid duplicating most of the logic of addreplica inside splitshard and I've put in comments to this effect. I did not want to perform potentially dangerous refactoring as part of this bug fix but once this is committed, we can open a separate issue just to refactor OCP. I've been beasting this test quite a bit and haven't been able to reproduce the failures. 
Race condition in shard splitting waiting for sub-shard replicas to go into recovery Key: SOLR-7673 URL: https://issues.apache.org/jira/browse/SOLR-7673 Project: Solr Issue Type: Bug Components: SolrCloud Affects Versions: 4.10.4, 5.2 Reporter: Shalin Shekhar Mangar Assignee: Shalin Shekhar Mangar Fix For: 5.3, Trunk Attachments: Lucene-Solr-Tests-trunk-Java8-69.log, SOLR-7673.patch I found this while digging into the failure on https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/69/
{code}
[junit4] 2> 390032 INFO (OverseerStateUpdate-93987739315601411-127.0.0.1:57926_-n_00) [n:127.0.0.1:57926_] o.a.s.c.o.ReplicaMutator Update state numShards=1 message={
[junit4] 2>   core:testasynccollectioncreation_shard1_0_replica2,
[junit4] 2>   core_node_name:core_node5,
[junit4] 2>   roles:null,
[junit4] 2>   base_url:http://127.0.0.1:38702,
[junit4] 2>   node_name:127.0.0.1:38702_,
[junit4] 2>   numShards:1,
[junit4] 2>   state:active,
[junit4] 2>   shard:shard1_0,
[junit4] 2>   collection:testasynccollectioncreation,
[junit4] 2>   operation:state}
[junit4] 2> 390033 INFO (OverseerStateUpdate-93987739315601411-127.0.0.1:57926_-n_00) [n:127.0.0.1:57926_] o.a.s.c.o.ZkStateWriter going to update_collection /collections/testasynccollectioncreation/state.json version: 18
[junit4] 2> 390033 INFO (RecoveryThread-testasynccollectioncreation_shard1_0_replica2) [n:127.0.0.1:38702_ c:testasynccollectioncreation s:shard1_0 r:core_node5 x:testasynccollectioncreation_shard1_0_replica2] o.a.s.u.UpdateLog Took 2 ms to seed version buckets with highest version 1503803851028824064
[junit4] 2> 390033 INFO (zkCallback-167-thread-1-processing-n:127.0.0.1:57926_) [n:127.0.0.1:57926_ ] o.a.s.c.c.ZkStateReader A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/testasynccollectioncreation/state.json for collection testasynccollectioncreation has occurred - updating... (live nodes size: 2)
[junit4] 2> 390033 INFO (RecoveryThread-testasynccollectioncreation_shard1_0_replica2) [n:127.0.0.1:38702_ c:testasynccollectioncreation s:shard1_0 r:core_node5 x:testasynccollectioncreation_shard1_0_replica2] o.a.s.c.RecoveryStrategy Finished recovery process.
[junit4] 2> 390033 INFO (zkCallback-173-thread-1-processing-n:127.0.0.1:38702_) [n:127.0.0.1:38702_ ] o.a.s.c.c.ZkStateReader A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/testasynccollectioncreation/state.json for collection testasynccollectioncreation has occurred - updating... (live nodes size: 2)
[junit4] 2> 390034 INFO (zkCallback-167-thread-1-processing-n:127.0.0.1:57926_) [n:127.0.0.1:57926_ ] o.a.s.c.c.ZkStateReader Updating data for testasynccollectioncreation to ver 19
[junit4] 2> 390034 INFO (zkCallback-173-thread-1-processing-n:127.0.0.1:38702_) [n:127.0.0.1:38702_ ] o.a.s.c.c.ZkStateReader Updating data for testasynccollectioncreation to ver 19
[junit4] 2> 390298 INFO (qtp1859109869-1664) [n:127.0.0.1:38702_] o.a.s.h.a.CollectionsHandler Invoked Collection Action :requeststatus with params requestid=1004&action=REQUESTSTATUS&wt=javabin&version=2
[junit4] 2> 390299 INFO (qtp1859109869-1664) [n:127.0.0.1:38702_] o.a.s.s.SolrDispatchFilter [admin] webapp=null path=/admin/collections params={requestid=1004&action=REQUESTSTATUS&wt=javabin&version=2} status=0 QTime=1
[junit4] 2> 390348 INFO (qtp1859109869-1665) [n:127.0.0.1:38702_] o.a.s.h.a.CoreAdminHandler
[jira] [Resolved] (LUCENE-6636) Filtered Search Fails against toParentBlockJoinQuery
[ https://issues.apache.org/jira/browse/LUCENE-6636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adrien Grand resolved LUCENE-6636. -- Resolution: Invalid This error is expected: you have child documents in the second block whose parent does not match the parent filter (it matches TestSchemeToo, but not TestScheme), so they look like orphans to Lucene. Instead, you should set parent:yes as a parent filter, and then create a boolean query that intersects your ToParentBlockJoinQuery with a term query on codingSchemeName:TestScheme. Filtered Search Fails against toParentBlockJoinQuery Key: LUCENE-6636 URL: https://issues.apache.org/jira/browse/LUCENE-6636 Project: Lucene - Core Issue Type: Bug Affects Versions: 5.1, 5.2.1 Environment: OSX 10.9.5 Reporter: Scott Bauer I'm able to get this to fail against a slightly larger test case than what is run in your test suite. I noticed it first in 5.1 against a much larger index and created a more concise test case to reproduce it in both 5.1 and 5.2. I have similar code running in 4.10.4 that does not fail. I'm assuming this is a bug, but haven't identified the underlying issue in Lucene. I thought at one point this was related to the SOLR issue 7606 but this example seems to eliminate that possibility. 
Stack Trace:
Exception in thread "main" java.lang.IllegalStateException: child query must only match non-parent docs, but parent docID=2147483647 matched childScorer=class org.apache.lucene.search.TermScorer
    at org.apache.lucene.search.join.ToParentBlockJoinQuery$BlockJoinScorer.nextDoc(ToParentBlockJoinQuery.java:334)
    at org.apache.lucene.search.join.ToParentBlockJoinIndexSearcher.search(ToParentBlockJoinIndexSearcher.java:63)
    at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:485)
    at org.lexevs.lucene.prototype.BlockJoinTestQuery.run(BlockJoinTestQuery.java:53)
    at org.lexevs.lucene.prototype.BlockJoinTestQuery.main(BlockJoinTestQuery.java:71)

Indexing class:

package org.lexevs.lucene.prototype;

import java.io.IOException;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.util.CharArraySet;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.MMapDirectory;

public class SmallTestIndexBuilder {

    public enum Code {
        C1234, C23432, C4234, C2308, C8958;
    }

    public SmallTestIndexBuilder() {
        // TODO Auto-generated constructor stub
    }

    public void init() {
        try {
            LuceneContentBuilder builder = new LuceneContentBuilder();
            Path path = Paths.get("/Users/m029206/Desktop/index");
            Directory dir = new MMapDirectory(path);
            Analyzer analyzer = new StandardAnalyzer(new CharArraySet(0, true));
            IndexWriterConfig iwc = new IndexWriterConfig(analyzer);
            IndexWriter writer = new IndexWriter(dir, iwc);
            createCodingSchemeIndex(builder, writer);
            writer.commit();
            writer.close();
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }

    private void createCodingSchemeIndex(LuceneContentBuilder builder, IndexWriter writer) throws IOException {
        for (Code c : Code.values()) {
            List<Document> list = createBlockJoin(c.name());
            writer.addDocuments(list);
            list = createBlockJoin2(c.name());
            writer.addDocuments(list);
        }
    }

    private List<Document> createBlockJoin(String code) {
        List<Document> list = new ArrayList<Document>();
        Document doc = new Document();
        doc.add(new org.apache.lucene.document.TextField("propertyValue", "Blood", Field.Store.YES));
        list.add(doc);
        doc = new Document();
        doc.add(new org.apache.lucene.document.TextField("propertyValue", "Mud", Field.Store.YES));
        list.add(doc);
        doc = new Document();
        doc.add(new org.apache.lucene.document.TextField("propertyValue", "Suds", Field.Store.YES));
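The IllegalStateException above is Lucene enforcing the block-join contract: within an indexed block, child documents come first and the block's final (parent) document must match the parent filter; when the parent matches a different term (TestSchemeToo instead of TestScheme), the children look like orphans. A self-contained model of that contract, in plain Java — this is our own simplification for illustration, not Lucene's scorer code:

```java
import java.util.List;

// Simplified model of the ToParentBlockJoinQuery contract: a block is a
// contiguous run of documents whose LAST doc is the parent. The query is
// only well-formed if the parent filter matches exactly that last doc.
public class BlockJoinInvariant {

    // matchesParentFilter.get(i) == true means doc i of the block matches
    // the parent filter.
    static boolean isValidBlock(List<Boolean> matchesParentFilter) {
        if (matchesParentFilter.isEmpty()) {
            return false;
        }
        int last = matchesParentFilter.size() - 1;
        for (int i = 0; i < last; i++) {
            // A parent-filter match before the end means the following
            // children have no enclosing parent -> the "child query must
            // only match non-parent docs" error.
            if (matchesParentFilter.get(i)) {
                return false;
            }
        }
        // If the block's own parent does not match the filter (e.g. it is a
        // TestSchemeToo parent but the filter asks for TestScheme), every
        // child in the block is an orphan.
        return matchesParentFilter.get(last);
    }
}
```

In query form, Adrien's suggested fix amounts to keeping parent:yes as the parent filter (so it matches every parent) and intersecting the ToParentBlockJoinQuery with a term query on codingSchemeName:TestScheme in a boolean query, rather than using the scheme name as the parent filter itself.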
[jira] [Commented] (SOLR-7673) Race condition in shard splitting can cause operation to hang indefinitely or sub-shards to never become active
[ https://issues.apache.org/jira/browse/SOLR-7673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14607967#comment-14607967 ] ASF subversion and git services commented on SOLR-7673: --- Commit 1688404 from sha...@apache.org in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1688404 ] SOLR-7673: Race condition in shard splitting can cause operation to hang indefinitely or sub-shards to never become active Race condition in shard splitting can cause operation to hang indefinitely or sub-shards to never become active --- Key: SOLR-7673 URL: https://issues.apache.org/jira/browse/SOLR-7673 Project: Solr Issue Type: Bug Components: SolrCloud Affects Versions: 4.10.4, 5.2 Reporter: Shalin Shekhar Mangar Assignee: Shalin Shekhar Mangar Fix For: 5.3, Trunk Attachments: Lucene-Solr-Tests-trunk-Java8-69.log, SOLR-7673.patch I found this while digging into the failure on https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/69/
[jira] [Resolved] (SOLR-7673) Race condition in shard splitting can cause operation to hang indefinitely or sub-shards to never become active
[ https://issues.apache.org/jira/browse/SOLR-7673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shalin Shekhar Mangar resolved SOLR-7673. - Resolution: Fixed Race condition in shard splitting can cause operation to hang indefinitely or sub-shards to never become active --- Key: SOLR-7673 URL: https://issues.apache.org/jira/browse/SOLR-7673 Project: Solr Issue Type: Bug Components: SolrCloud Affects Versions: 4.10.4, 5.2 Reporter: Shalin Shekhar Mangar Assignee: Shalin Shekhar Mangar Fix For: 5.3, Trunk Attachments: Lucene-Solr-Tests-trunk-Java8-69.log, SOLR-7673.patch I found this while digging into the failure on https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/69/
[jira] [Resolved] (SOLR-6665) ZkController.publishAndWaitForDownStates should not use core name
[ https://issues.apache.org/jira/browse/SOLR-6665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shalin Shekhar Mangar resolved SOLR-6665. - Resolution: Fixed This fix has already been released so I opened SOLR-7736 to fix the test. ZkController.publishAndWaitForDownStates should not use core name - Key: SOLR-6665 URL: https://issues.apache.org/jira/browse/SOLR-6665 Project: Solr Issue Type: Bug Components: SolrCloud Affects Versions: 4.10.1, 4.10.4, 5.1 Reporter: Shalin Shekhar Mangar Assignee: Shalin Shekhar Mangar Priority: Minor Fix For: Trunk, 5.2 Attachments: SOLR-6665.patch ZkController.publishAndWaitForDownStates uses a List<String> to keep track of all core names that have been published as down. It should use a set of coreNodeNames instead of core names for correctness. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
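The distinction matters because core names need not be unique across a cluster (two replicas on different nodes can carry the same core name), while a coreNodeName identifies exactly one replica. A minimal illustration in plain Java — the replica data is made up, and this is not the ZkController code:

```java
import java.util.HashSet;
import java.util.Set;

// Demonstrates why tracking published-down replicas by core name is lossy:
// two replicas can collapse into one core-name entry, while their
// coreNodeNames stay distinct.
public class DownStateTracking {

    // Each replica is {coreName, coreNodeName}.
    static int distinctByCoreName(String[][] replicas) {
        Set<String> seen = new HashSet<>();
        for (String[] r : replicas) seen.add(r[0]);
        return seen.size();
    }

    static int distinctByCoreNodeName(String[][] replicas) {
        Set<String> seen = new HashSet<>();
        for (String[] r : replicas) seen.add(r[1]);
        return seen.size();
    }
}
```

With core names, the down-state publications of two same-named replicas are indistinguishable, so the wait logic can match the wrong replica or conclude too early; coreNodeNames keep each replica's state separate.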
[jira] [Commented] (SOLR-7736) Add a test for ZkController.publishAndWaitForDownStates
[ https://issues.apache.org/jira/browse/SOLR-7736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14607973#comment-14607973 ] ASF subversion and git services commented on SOLR-7736: --- Commit 1688405 from sha...@apache.org in branch 'dev/trunk' [ https://svn.apache.org/r1688405 ] SOLR-7736: Point to the correct jira issue for awaits fix Add a test for ZkController.publishAndWaitForDownStates --- Key: SOLR-7736 URL: https://issues.apache.org/jira/browse/SOLR-7736 Project: Solr Issue Type: Test Components: SolrCloud, Tests Reporter: Shalin Shekhar Mangar Priority: Minor Fix For: 5.3, Trunk Add a test for ZkController.publishAndWaitForDownStates so that bugs like SOLR-6665 do not occur again. A test exists but it is not correct and currently disabled via AwaitsFix. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[CI] Lucene 5x Linux 64 Test Only - Build # 53666 - Failure!
BUILD FAILURE
Build URL: http://build-eu-00.elastic.co/job/lucene_linux_java8_64_test_only/53666/
Project: lucene_linux_java8_64_test_only
Randomization: JDKEA9,network,heap[1024m],-server +UseSerialGC +UseCompressedOops +AggressiveOpts,sec manager on
Date of build: Tue, 30 Jun 2015 10:30:40 +0200
Build duration: 2 min 29 sec
CHANGES No Changes
BUILD ARTIFACTS
checkout/lucene/build/core/test/temp/junit4-J0-20150630_103101_128.events
checkout/lucene/build/core/test/temp/junit4-J1-20150630_103101_128.events
checkout/lucene/build/core/test/temp/junit4-J2-20150630_103101_128.events
checkout/lucene/build/core/test/temp/junit4-J3-20150630_103101_128.events
checkout/lucene/build/core/test/temp/junit4-J4-20150630_103101_128.events
checkout/lucene/build/core/test/temp/junit4-J5-20150630_103101_128.events
checkout/lucene/build/core/test/temp/junit4-J6-20150630_103101_128.events
checkout/lucene/build/core/test/temp/junit4-J7-20150630_103101_128.events
FAILED JUNIT TESTS
Name: org.apache.lucene.index Failed: 1 test(s), Passed: 787 test(s), Skipped: 23 test(s), Total: 811 test(s)
Failed: org.apache.lucene.index.TestIndexWriterMerging.testForceMergeDeletes
CONSOLE OUTPUT [...truncated 1788 lines...]
[junit4] Tests with failures:
[junit4] - org.apache.lucene.index.TestIndexWriterMerging.testForceMergeDeletes
[junit4]
[junit4] JVM J0: 1.31 .. 122.99 = 121.69s
[junit4] JVM J1: 1.11 .. 127.08 = 125.97s
[junit4] JVM J2: 1.13 .. 123.78 = 122.65s
[junit4] JVM J3: 1.11 .. 122.06 = 120.95s
[junit4] JVM J4: 1.12 .. 123.38 = 122.26s
[junit4] JVM J5: 1.12 .. 122.72 = 121.60s
[junit4] JVM J6: 1.12 .. 122.63 = 121.51s
[junit4] JVM J7: 1.31 .. 122.88 = 121.57s
[junit4] Execution time total: 2 minutes 7 seconds
[junit4] Tests summary: 402 suites, 3269 tests, 1 failure, 49 ignored (45 assumptions)
BUILD FAILED
/home/jenkins/workspace/lucene_linux_java8_64_test_only/checkout/lucene/build.xml:50: The following error occurred while executing this line:
/home/jenkins/workspace/lucene_linux_java8_64_test_only/checkout/lucene/common-build.xml:1444: The following error occurred while executing this line:
/home/jenkins/workspace/lucene_linux_java8_64_test_only/checkout/lucene/common-build.xml:999: There were test failures: 402 suites, 3269 tests, 1 failure, 49 ignored (45 assumptions)
Total time: 2 minutes 15 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
[description-setter] Description set: JDKEA9,network,heap[1024m],-server +UseSerialGC +UseCompressedOops +AggressiveOpts,sec manager on
Email was triggered for: Failure - 1st
Trigger Failure - Any was overridden by another trigger and will not send an email.
Trigger Failure - Still was overridden by another trigger and will not send an email.
Sending email for trigger: Failure - 1st
- To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6542) FSDirectory throws AccessControlException unless you grant write access to the index
[ https://issues.apache.org/jira/browse/LUCENE-6542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609324#comment-14609324 ] ASF subversion and git services commented on LUCENE-6542: - Commit 1688541 from [~thetaphi] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1688541 ] Merged revision(s) 1688537 from lucene/dev/trunk: LUCENE-6542: FSDirectory's ctor now works with security policies or file systems that restrict write access FSDirectory throws AccessControlException unless you grant write access to the index Key: LUCENE-6542 URL: https://issues.apache.org/jira/browse/LUCENE-6542 Project: Lucene - Core Issue Type: Bug Components: core/store Affects Versions: 5.1 Reporter: Trejkaz Assignee: Uwe Schindler Labels: regression Attachments: LUCENE-6542.patch, LUCENE-6542.patch, LUCENE-6542.patch, LUCENE-6542.patch, LUCENE-6542.patch, LUCENE-6542.patch, patch.txt Hit this during my attempted upgrade to Lucene 5.1.0. (Yeah, I know 5.2.0 is out, and we'll be using that in production anyway, but the merge takes time.) Various tests of ours test Directory stuff against methods which the security policy won't allow tests to write to. Changes in FSDirectory mean that it now demands write access to the directory. 4.10.4 permitted read-only access. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (LUCENE-6542) FSDirectory throws AccessControlException unless you grant write access to the index
[ https://issues.apache.org/jira/browse/LUCENE-6542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler resolved LUCENE-6542. --- Resolution: Fixed Fix Version/s: Trunk 5.3 Committed and backported the lambdas to Java 7 in branch_5x. FSDirectory throws AccessControlException unless you grant write access to the index Key: LUCENE-6542 URL: https://issues.apache.org/jira/browse/LUCENE-6542 Project: Lucene - Core Issue Type: Bug Components: core/store Affects Versions: 5.1 Reporter: Trejkaz Assignee: Uwe Schindler Labels: regression Fix For: 5.3, Trunk Attachments: LUCENE-6542.patch, LUCENE-6542.patch, LUCENE-6542.patch, LUCENE-6542.patch, LUCENE-6542.patch, LUCENE-6542.patch, patch.txt Hit this during my attempted upgrade to Lucene 5.1.0. (Yeah, I know 5.2.0 is out, and we'll be using that in production anyway, but the merge takes time.) Various tests of ours test Directory stuff against methods which the security policy won't allow tests to write to. Changes in FSDirectory mean that it now demands write access to the directory. 4.10.4 permitted read-only access. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-7742) Support for Immutable ConfigSets
[ https://issues.apache.org/jira/browse/SOLR-7742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gregory Chanan updated SOLR-7742: - Attachment: SOLR-7742 Here's a patch that implements and tests Immutable ConfigSets. This seems useful by itself (in the case where you don't want the config to be modified), but could have more uses in the future with config templates and/or a ConfigSet API (that e.g. lets you copy / instantiate [copy and unset immutable] ConfigSets). Note: One part I am unsure about is the best way to mark a ConfigSet as being immutable. The solution I chose was to have a file in the configDir named configSet.immutable. I chose that for three reasons:
1) It exists outside of the solrconfig / schema, which maps well to what the ConfigSet represents
2) It can be easily skipped as part of an instantiate operation as described above (where we copy a ConfigSet but turn off immutable) via a ConfigSet API call
3) It works for both local filesystem and ZK configurations (my original idea was to write some properties in the base directory for ZK data, but that wouldn't work for the local filesystem).
Maybe this would be better: We have an optional file named configSet.properties that lets you specify (json?) properties that apply to the ConfigSet. Today, immutable would be the only one that has any meaning, but we could extend this in the future for other properties (e.g. you could have authorization information relevant to a ConfigSet API, or specifications like shareable [can more than one collection use this as a base?]). The properties file also passes the three motivations listed above. Thoughts? 
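As a concrete sketch, the proposed configSet.properties could look like the fragment below. This is purely hypothetical: neither the file name nor the property names are committed API, and "shareable" is only the speculative example from the comment above.

```json
{
  "immutable": true,
  "shareable": false
}
```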
Support for Immutable ConfigSets Key: SOLR-7742 URL: https://issues.apache.org/jira/browse/SOLR-7742 Project: Solr Issue Type: New Feature Reporter: Gregory Chanan Assignee: Gregory Chanan Attachments: SOLR-7742 See the discussion in SOLR-5955; to properly support collection templates that can be specified as the starting point for a collection configuration, we need support for ConfigSets that are immutable. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-5955) Add config templates to SolrCloud.
[ https://issues.apache.org/jira/browse/SOLR-5955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609427#comment-14609427 ] Gregory Chanan commented on SOLR-5955: -- I posted an initial patch for the immutable part of config sets in SOLR-7742. I think this jira is still relevant as a separate jira because it is tackling related issues (e.g. creating a collection in one step from a config template). Add config templates to SolrCloud. -- Key: SOLR-5955 URL: https://issues.apache.org/jira/browse/SOLR-5955 Project: Solr Issue Type: New Feature Reporter: Mark Miller Attachments: SOLR-5955.patch You should be able to upload config sets to a templates location and then specify a template as your starting config when creating new collections via REST API. We can have a default template that we ship with. This will let you create collections from scratch via REST API, and then you can use things like the schema REST API to customize the template config to your needs. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6578) Geo3d: arcDistanceToShape() method may be useful
[ https://issues.apache.org/jira/browse/LUCENE-6578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609456#comment-14609456 ] ASF subversion and git services commented on LUCENE-6578: - Commit 1688546 from [~dsmiley] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1688546 ] LUCENE-6578: Geo3D: compute the distance from a point to a shape. Geo3d: arcDistanceToShape() method may be useful Key: LUCENE-6578 URL: https://issues.apache.org/jira/browse/LUCENE-6578 Project: Lucene - Core Issue Type: Bug Components: modules/spatial Reporter: Karl Wright Attachments: LUCENE-6578-dws.patch, LUCENE-6578.patch, LUCENE-6578.patch, LUCENE-6578.patch I've got an application that seems like it may need the ability to compute a new kind of arc distance, from a GeoPoint to the nearest edge/point of a GeoShape. Adding this method to the interface, and corresponding implementations, would increase the utility of the package for ranking purposes. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6607) Move geo3d to Lucene's sandbox module
[ https://issues.apache.org/jira/browse/LUCENE-6607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609503#comment-14609503 ] Karl Wright commented on LUCENE-6607: - As I understand it, the idea is to construct a field type that represents a geo3d point. In order to be able to use BKD to search for objects in geo3d shapes, you'd obviously need geo3d math. Nicholas has gone to some lengths to, in essence, create a minimal 2-D geo implementation sufficient to support GeoPoint field types and BKD. Mike has built the support intended for this. It seemed reasonable to do the same thing with geo3d as well. So either: (1) we build true geo types into core, or (2) we add geo field types to modules/spatial. I don't know if (2) is in fact a reasonable thing to do, however. Move geo3d to Lucene's sandbox module - Key: LUCENE-6607 URL: https://issues.apache.org/jira/browse/LUCENE-6607 Project: Lucene - Core Issue Type: Improvement Reporter: Michael McCandless Assignee: Michael McCandless Fix For: 5.3, Trunk Geo3d is a powerful low-level geo API, recording places on the earth's surface in the index in three dimensions (as 3 separate numbers) and offering fast shape intersection/distance testing at search time. [~daddywri] originally contributed this in LUCENE-6196, and we put it in spatial module, but I think a more natural place for it, for now anyway, is Lucene's sandbox module: it's very new, its APIs/abstractions are very much in flux (and the higher standards for abstractions in the spatial module cause disagreements: LUCENE-6578), [~daddywri] and others could iterate faster on changes in sandbox, etc. This would also un-block issues like LUCENE-6480, allowing GeoPointField and BKD trees to also use geo3d. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-7742) Support for Immutable ConfigSets
Gregory Chanan created SOLR-7742: Summary: Support for Immutable ConfigSets Key: SOLR-7742 URL: https://issues.apache.org/jira/browse/SOLR-7742 Project: Solr Issue Type: New Feature Reporter: Gregory Chanan Assignee: Gregory Chanan See the discussion in SOLR-5955; to properly support collection templates that can be specified as the starting point for a collection configuration, we need support for ConfigSets that are immutable. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (LUCENE-6578) Geo3d: arcDistanceToShape() method may be useful
[ https://issues.apache.org/jira/browse/LUCENE-6578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Smiley resolved LUCENE-6578. -- Resolution: Fixed Assignee: David Smiley Fix Version/s: 5.3 FYI during the 5x backport I had to add the isWithin(Vector) implementation to a couple of the classes which had depended on the Java 8 default method. No big deal. Thanks for this feature Karl! Geo3d: arcDistanceToShape() method may be useful Key: LUCENE-6578 URL: https://issues.apache.org/jira/browse/LUCENE-6578 Project: Lucene - Core Issue Type: Bug Components: modules/spatial Reporter: Karl Wright Assignee: David Smiley Fix For: 5.3 Attachments: LUCENE-6578-dws.patch, LUCENE-6578.patch, LUCENE-6578.patch, LUCENE-6578.patch I've got an application that seems like it may need the ability to compute a new kind of arc distance, from a GeoPoint to the nearest edge/point of a GeoShape. Adding this method to the interface, and corresponding implementations, would increase the utility of the package for ranking purposes. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6632) Geo3d: More accurate way of computing circle planes
[ https://issues.apache.org/jira/browse/LUCENE-6632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609474#comment-14609474 ] David Smiley commented on LUCENE-6632: -- FYI I have special alerts for any issue posted to modules/spatial, so your issues do not escape my attention :-) I looked at the patch. I'm curious; wouldn't upperPoint and lowerPoint as a pair always be farther away than either of them with center (for GeoCircle) or point (for GeoPath)? Geo3d: More accurate way of computing circle planes --- Key: LUCENE-6632 URL: https://issues.apache.org/jira/browse/LUCENE-6632 Project: Lucene - Core Issue Type: Improvement Components: modules/spatial Reporter: Karl Wright Attachments: LUCENE-6632.patch The Geo3d code that computes circle planes in GeoPath and GeoCircle is less accurate the smaller the circle. There's a better way of computing this. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6607) Move geo3d to Lucene's sandbox module
[ https://issues.apache.org/jira/browse/LUCENE-6607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609516#comment-14609516 ] Karl Wright commented on LUCENE-6607: - It was never clear to me *why* a spatial feature would automatically be targeted at core. I have been assuming that to add field types with full query support, it's necessary to be in core. If this is not in fact true, then maybe there should be more careful consideration of where *all* of these spatial-type fields live. Move geo3d to Lucene's sandbox module - Key: LUCENE-6607 URL: https://issues.apache.org/jira/browse/LUCENE-6607 Project: Lucene - Core Issue Type: Improvement Reporter: Michael McCandless Assignee: Michael McCandless Fix For: 5.3, Trunk Geo3d is a powerful low-level geo API, recording places on the earth's surface in the index in three dimensions (as 3 separate numbers) and offering fast shape intersection/distance testing at search time. [~daddywri] originally contributed this in LUCENE-6196, and we put it in spatial module, but I think a more natural place for it, for now anyway, is Lucene's sandbox module: it's very new, its APIs/abstractions are very much in flux (and the higher standards for abstractions in the spatial module cause disagreements: LUCENE-6578), [~daddywri] and others could iterate faster on changes in sandbox, etc. This would also un-block issues like LUCENE-6480, allowing GeoPointField and BKD trees to also use geo3d.
[jira] [Commented] (LUCENE-6607) Move geo3d to Lucene's sandbox module
[ https://issues.apache.org/jira/browse/LUCENE-6607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609482#comment-14609482 ] Nicholas Knize commented on LUCENE-6607: Except it's intended that BKD and GeoPointField will (eventually) graduate to core to provide a dependency free geo option. Won't introducing a spatial4j dependency pose a problem at that point?
[jira] [Commented] (LUCENE-6607) Move geo3d to Lucene's sandbox module
[ https://issues.apache.org/jira/browse/LUCENE-6607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609511#comment-14609511 ] Nicholas Knize commented on LUCENE-6607: Wait. Core is where GeoPointField started, and it is its intended target: to provide a native GeoPointField in the core/*.lucene.document package. It only moved to sandbox because of its immaturity and volatile API. If you look at the field type and supported queries, it's no different than Numerics and NumericRange queries.
[jira] [Commented] (LUCENE-6542) FSDirectory throws AccessControlException unless you grant write access to the index
[ https://issues.apache.org/jira/browse/LUCENE-6542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609304#comment-14609304 ] ASF subversion and git services commented on LUCENE-6542: - Commit 1688537 from [~thetaphi] in branch 'dev/trunk' [ https://svn.apache.org/r1688537 ] LUCENE-6542: FSDirectory's ctor now works with security policies or file systems that restrict write access FSDirectory throws AccessControlException unless you grant write access to the index Key: LUCENE-6542 URL: https://issues.apache.org/jira/browse/LUCENE-6542 Project: Lucene - Core Issue Type: Bug Components: core/store Affects Versions: 5.1 Reporter: Trejkaz Assignee: Uwe Schindler Labels: regression Attachments: LUCENE-6542.patch, LUCENE-6542.patch, LUCENE-6542.patch, LUCENE-6542.patch, LUCENE-6542.patch, LUCENE-6542.patch, patch.txt Hit this during my attempted upgrade to Lucene 5.1.0. (Yeah, I know 5.2.0 is out, and we'll be using that in production anyway, but the merge takes time.) Various tests of ours test Directory stuff against methods which the security policy won't allow tests to write to. Changes in FSDirectory mean that it now demands write access to the directory. 4.10.4 permitted read-only access.
[jira] [Commented] (LUCENE-6578) Geo3d: arcDistanceToShape() method may be useful
[ https://issues.apache.org/jira/browse/LUCENE-6578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609447#comment-14609447 ] ASF subversion and git services commented on LUCENE-6578: - Commit 1688545 from [~dsmiley] in branch 'dev/trunk' [ https://svn.apache.org/r1688545 ] LUCENE-6578: Geo3D: compute the distance from a point to a shape.
[jira] [Commented] (LUCENE-6607) Move geo3d to Lucene's sandbox module
[ https://issues.apache.org/jira/browse/LUCENE-6607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609469#comment-14609469 ] David Smiley commented on LUCENE-6607: -- Oh I definitely welcome BKD / GeoPointField using Geo3D. No qualms there! I don't believe there's an issue with module inter-dependencies -- at least Mike said so and I don't have a problem with that either from a policy perspective. Another possible option is to add a Spatial4j dependency on sandbox. If the sandbox is to get code from potentially any module, it stands to reason it should inherit their dependencies too. If users want to use only a piece of sandbox, it's on them to figure out what dependencies they _actually_ need.
[jira] [Commented] (LUCENE-6607) Move geo3d to Lucene's sandbox module
[ https://issues.apache.org/jira/browse/LUCENE-6607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609496#comment-14609496 ] David Smiley commented on LUCENE-6607: -- I'm not a fan of putting spatial in Lucene-core any more than I am a fan of putting a highlighter or suggester there (for example). I'd -1 that; sorry.
[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.9.0-ea-b60) - Build # 13099 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/13099/ Java: 32bit/jdk1.9.0-ea-b60 -client -XX:+UseSerialGC 1 tests failed. FAILED: org.apache.lucene.util.automaton.TestAutomaton.testRandomFinite Error Message: Determinizing automaton would result in more than 1 states. Stack Trace: org.apache.lucene.util.automaton.TooComplexToDeterminizeException: Determinizing automaton would result in more than 1 states. at __randomizedtesting.SeedInfo.seed([8AC5BB922C58A944:CD73DDAF93C8CC65]:0) at org.apache.lucene.util.automaton.Operations.determinize(Operations.java:743) at org.apache.lucene.util.automaton.Operations.complement(Operations.java:291) at org.apache.lucene.util.automaton.Operations.minus(Operations.java:315) at org.apache.lucene.util.automaton.TestAutomaton.testRandomFinite(TestAutomaton.java:813) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:502) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at java.lang.Thread.run(Thread.java:745) Build Log: [...truncated 1842 lines...] [junit4] Suite: org.apache.lucene.util.automaton.TestAutomaton [junit4] 2 NOTE: reproduce with: ant test -Dtestcase=TestAutomaton -Dtests.method=testRandomFinite -Dtests.seed=8AC5BB922C58A944 -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=sk_SK -Dtests.timezone=Asia/Dili -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [junit4] ERROR
[jira] [Comment Edited] (LUCENE-6607) Move geo3d to Lucene's sandbox module
[ https://issues.apache.org/jira/browse/LUCENE-6607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609503#comment-14609503 ] Karl Wright edited comment on LUCENE-6607 at 7/1/15 3:26 AM: - As I understand it, the idea is to construct a field type that represents a geo3d point. In order to be able to use BKD to search for objects in geo3d shapes, you'd obviously need geo3d math. Nicholas has gone to some lengths to, in essence, create a minimal 2-D geo implementation sufficient to support GeoPoint field types and BKD. Mike has built the support intended for this. It seemed reasonable to do the same thing with geo3d as well. So either: (1) we build true geo types into core, and accept modest dependencies to achieve that, or (2) we add geo field types to modules/spatial. I don't know if (2) is in fact a reasonable thing to do, however. was (Author: kwri...@metacarta.com): As I understand it, the idea is to construct a field type that represents a geo3d point. In order to be able to use BKD to search for objects in geo3d shapes, you'd obviously need geo3d math. Nicholas has gone to some lengths to, in essence, create a minimal 2-D geo implementation sufficient to support GeoPoint field types and BKD. Mike has built the support intended for this. It seemed reasonable to do the same thing with geo3d as well. So either: (1) we build true geo types into core, or (2) we add geo field types to modules/spatial. I don't know if (2) is in fact a reasonable thing to do, however.
[jira] [Comment Edited] (LUCENE-6607) Move geo3d to Lucene's sandbox module
[ https://issues.apache.org/jira/browse/LUCENE-6607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609503#comment-14609503 ] Karl Wright edited comment on LUCENE-6607 at 7/1/15 3:26 AM: - As I understand it, the idea is to construct a field type that represents a geo3d point. In order to be able to use BKD to search for objects in geo3d shapes, you'd obviously need geo3d math. Nicholas has gone to some lengths to, in essence, create a minimal 2-D geo implementation sufficient to support GeoPoint field types and BKD. Mike has built the support intended for this. It seemed reasonable to do the same thing with geo3d as well. So either: (1) we build true geo types into core, and accept modest new core dependencies to achieve that, or (2) we add geo field types to modules/spatial. I don't know if (2) is in fact a reasonable thing to do, however. was (Author: kwri...@metacarta.com): As I understand it, the idea is to construct a field type that represents a geo3d point. In order to be able to use BKD to search for objects in geo3d shapes, you'd obviously need geo3d math. Nicholas has gone to some lengths to, in essence, create a minimal 2-D geo implementation sufficient to support GeoPoint field types and BKD. Mike has built the support intended for this. It seemed reasonable to do the same thing with geo3d as well. So either: (1) we build true geo types into core, and accept modest dependencies to achieve that, or (2) we add geo field types to modules/spatial. I don't know if (2) is in fact a reasonable thing to do, however. 
[jira] [Commented] (LUCENE-6632) Geo3d: More accurate way of computing circle planes
[ https://issues.apache.org/jira/browse/LUCENE-6632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609494#comment-14609494 ] Karl Wright commented on LUCENE-6632: - We're interested in distance from the line going through the poles. upperPoint is just a certain arc distance from the center, going along a plane that represents a longitudinal slice. lowerPoint goes the opposite direction. You really can't tell which is farther from the line going through the poles until you check.
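Karl's point above can be made concrete with a small sketch (hypothetical names, not the Geo3d classes): for a point on the unit sphere, its distance from the line through the poles (the z axis) is simply the magnitude of its x/y projection, so two points the same arc distance from a circle's center, taken in opposite directions along a longitudinal slice, can each end up nearer to or farther from the axis depending on latitude.

```java
// Sketch only: distance of a unit-sphere point from the polar (z) axis.
public class PolarAxisDistance {
    static double distanceFromPolarAxis(double x, double y, double z) {
        // Length of the projection onto the x/y plane.
        return Math.sqrt(x * x + y * y);
    }

    public static void main(String[] args) {
        // Two points on the same longitudinal slice (y = 0) at different
        // latitudes: the higher-latitude point sits closer to the axis,
        // so which of a pair is farther must actually be checked.
        double higherLatitude = distanceFromPolarAxis(Math.cos(1.0), 0, Math.sin(1.0));
        double lowerLatitude = distanceFromPolarAxis(Math.cos(0.2), 0, Math.sin(0.2));
        System.out.println(higherLatitude < lowerLatitude);
    }
}
```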
[jira] [Commented] (LUCENE-6607) Move geo3d to Lucene's sandbox module
[ https://issues.apache.org/jira/browse/LUCENE-6607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609528#comment-14609528 ] Nicholas Knize commented on LUCENE-6607: Not sure I understand what you mean by automatically? Did you mean deliberately? It was intended to provide a dependency free, lightweight, GeoPointField type for the 90% use-case where people want simple geopoint search using nothing other than the Core library. More advanced GIS'ish capabilities? You'd bring on the spatial module, spatial4j, jts, ejml, sis, etc. it's really that simple. Not sure what the problem is having a simple geo API in core? Doesn't it just make core that much more applicable to different application use cases? It's not a highlighter, not a suggester, it's a data type. BKD is another simple type, but requires a different codec. I really want to make both of these 3D capable. Unless I'm mistaken core is to have no dependencies? If I'm wrong then this issue is moot anyway.
[jira] [Created] (LUCENE-6643) Remove dependency of lucene/grouping on oal.search.Filter
Adrien Grand created LUCENE-6643: Summary: Remove dependency of lucene/grouping on oal.search.Filter Key: LUCENE-6643 URL: https://issues.apache.org/jira/browse/LUCENE-6643 Project: Lucene - Core Issue Type: Task Reporter: Adrien Grand Fix For: 5.3 Given that Filter is on the way out, we should use the Query API instead of Filter in the grouping module.
[jira] [Updated] (SOLR-7707) Add StreamExpression Support to RollupStream
[ https://issues.apache.org/jira/browse/SOLR-7707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dennis Gove updated SOLR-7707: -- Attachment: SOLR-7707.patch Adds expression support to RollupStream. Note: I have added a ParallelRollupStream test but I cannot get it to pass. I feel as though I've forgotten a required change to make it work with ParallelStream. Add StreamExpression Support to RollupStream Key: SOLR-7707 URL: https://issues.apache.org/jira/browse/SOLR-7707 Project: Solr Issue Type: Improvement Components: SolrJ Reporter: Dennis Gove Priority: Minor Attachments: SOLR-7707.patch This ticket is to add Stream Expression support to the RollupStream as discussed in SOLR-7560. Proposed expression syntax for the RollupStream (copied from that ticket) {code} rollup( someStream(), over=fieldA, fieldB, fieldC, min(fieldA), max(fieldA), min(fieldB), mean(fieldD), sum(fieldC) ) {code} This requires making the *Metric types Expressible but I think that ends up as a good thing. Would make it real easy to support other options on metrics like excluding outliers, for example find the sum of values within 3 standard deviations from the mean could be {code} sum(fieldC, limit=standardDev(3)) {code} (note, how that particular calculation could be implemented is left as an exercise for the reader, I'm just using it as an example of adding additional options on a relatively simple metric). Another option example is what to do with null values. For example, in some cases a null should not impact a mean but in others it should. 
You could express those as {code} mean(fieldA, replace(null, 0)) // replace null values with 0 thus leading to an impact on the mean mean(fieldA, includeNull=true) // nulls are counted in the denominator but nothing added to numerator mean(fieldA, includeNull=false) // nulls neither counted in denominator nor added to numerator mean(fieldA, replace(null, fieldB), includeNull=true) // if fieldA is null replace it with fieldB, include null fieldB in mean {code} so on and so forth.
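The null-handling semantics proposed above can be sketched in plain Java. This is an illustration of the two includeNull behaviors only (hypothetical helper names, not the actual SolrJ Metric API): with includeNull=true a null counts in the denominator but adds nothing to the numerator; with includeNull=false it is ignored entirely.

```java
import java.util.Arrays;
import java.util.List;

// Sketch of the proposed includeNull semantics for a mean metric.
public class MeanNullSemantics {
    // includeNull=true: nulls count in the denominator, contribute 0 to the numerator.
    static double meanIncludeNull(List<Double> values) {
        double sum = 0;
        for (Double v : values) {
            if (v != null) sum += v;
        }
        return sum / values.size();
    }

    // includeNull=false: nulls are dropped from both numerator and denominator.
    static double meanExcludeNull(List<Double> values) {
        double sum = 0;
        int count = 0;
        for (Double v : values) {
            if (v != null) { sum += v; count++; }
        }
        return count == 0 ? 0 : sum / count;
    }

    public static void main(String[] args) {
        List<Double> values = Arrays.asList(2.0, null, 4.0);
        System.out.println(meanIncludeNull(values)); // 6 / 3 = 2.0
        System.out.println(meanExcludeNull(values)); // 6 / 2 = 3.0
    }
}
```

The replace(null, 0) variant from the expression syntax is equivalent to the includeNull=true case here, which is why both options change the result of the same data.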
[jira] [Updated] (SOLR-7737) Query form not highlighting/displaying error responses
[ https://issues.apache.org/jira/browse/SOLR-7737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Upayavira updated SOLR-7737: Fix Version/s: (was: 5.2.1) 5.3 Query form not highlighting/displaying error responses -- Key: SOLR-7737 URL: https://issues.apache.org/jira/browse/SOLR-7737 Project: Solr Issue Type: Bug Reporter: Theis Assignee: Upayavira Priority: Trivial Fix For: 5.3 Can be reproduced by searching in a field that does not exist, e.g. q=nonexistingfield:foo - Current UI: the error-response is not highlighted, it's all black. - New UI (index.html): the error-response is not displayed at all. Also: index.html does not execute the query when hitting Enter on the keyboard.
[jira] [Updated] (LUCENE-6643) Remove dependency of lucene/grouping on oal.search.Filter
[ https://issues.apache.org/jira/browse/LUCENE-6643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adrien Grand updated LUCENE-6643: - Attachment: LUCENE-6643.patch Here is a patch.
[JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 133 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/133/ 1 tests failed. REGRESSION: org.apache.solr.update.SoftAutoCommitTest.testSoftAndHardCommitMaxTimeDelete Error Message: Got a soft commit we weren't expecting Stack Trace: java.lang.AssertionError: Got a soft commit we weren't expecting at __randomizedtesting.SeedInfo.seed([B3689BE1A1F7FC89:7424237CBA5F3139]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNull(Assert.java:551) at org.apache.solr.update.SoftAutoCommitTest.testSoftAndHardCommitMaxTimeDelete(SoftAutoCommitTest.java:281) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at java.lang.Thread.run(Thread.java:745) Build Log: [...truncated 9498 lines...] [junit4] Suite: org.apache.solr.update.SoftAutoCommitTest [junit4] 2 Creating dataDir:
[jira] [Comment Edited] (LUCENE-6641) Idea CodeSyle should be enriched
[ https://issues.apache.org/jira/browse/LUCENE-6641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14607971#comment-14607971 ] Alessandro Benedetti edited comment on LUCENE-6641 at 6/30/15 9:21 AM: --- I found the Eclipse CodeStyle. Regarding spacing, it apparently wants: - private int test(int a,int b) - no spaces in the declaration - expanded.equals(n) - no spaces in the invocation I find a lot of cases that satisfy this scenario; that actually seems to be the standard. If we agree on that, the reason it was not in codeStyle.xml is probably that IntelliJ IDEA applies it by default! My assumption that something was wrong came from the class in SolrJ I have been working with, which does not follow the standard, hence my confusion :) was (Author: alessandro.benedetti): I found the Eclipse CodeStyle. Apparently spacing related ti wants : - private int test(int a,int b) - no spaces in the declaration - expanded.equals(n) - no spaces in invocation And I find a lot of cases that satisfy this scenario. If we agree that, probably the reason it was not in the codeStyle.xml is that Intellij Idea put it by default ! Idea CodeSyle should be enriched Key: LUCENE-6641 URL: https://issues.apache.org/jira/browse/LUCENE-6641 Project: Lucene - Core Issue Type: Improvement Components: -tools Affects Versions: 5.2.1 Reporter: Alessandro Benedetti Labels: codestyle Attachments: Eclipse-Lucene-Codestyle.xml Currently the Idea CodeStyle has been fixed for the latest IntelliJ IDEA version, but it is not complete. For example, it does not contain spaces management (space within method params, space between operators, etc.) for the Java language (and maybe for the other languages involved as well). We should define a complete standard (as some inconsistencies are in the committed code as well). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6633) Remove DuplicateFilter
[ https://issues.apache.org/jira/browse/LUCENE-6633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14608010#comment-14608010 ] ASF subversion and git services commented on LUCENE-6633: - Commit 1688410 from [~jpountz] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1688410 ] LUCENE-6633: Deprecate DuplicateFilter. Remove DuplicateFilter -- Key: LUCENE-6633 URL: https://issues.apache.org/jira/browse/LUCENE-6633 Project: Lucene - Core Issue Type: Task Reporter: Adrien Grand Assignee: Adrien Grand Priority: Minor Fix For: 5.3 Attachments: LUCENE-6633.patch DuplicateFilter is a filter which aims at only exposing a single document per unique field value. It has some flaws, for instance it only works if you have a single leaf and we now have a better way to handle this use-case with DiversifiedTopDocsCollector. So I suggest that we just remove it and recommend users to use DiversifiedTopDocsCollector instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (LUCENE-6633) Remove DuplicateFilter
[ https://issues.apache.org/jira/browse/LUCENE-6633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adrien Grand resolved LUCENE-6633. -- Resolution: Fixed Remove DuplicateFilter -- Key: LUCENE-6633 URL: https://issues.apache.org/jira/browse/LUCENE-6633 Project: Lucene - Core Issue Type: Task Reporter: Adrien Grand Assignee: Adrien Grand Priority: Minor Fix For: 5.3 Attachments: LUCENE-6633.patch DuplicateFilter is a filter which aims at only exposing a single document per unique field value. It has some flaws, for instance it only works if you have a single leaf and we now have a better way to handle this use-case with DiversifiedTopDocsCollector. So I suggest that we just remove it and recommend users to use DiversifiedTopDocsCollector instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (LUCENE-6617) Reduce FST's ram usage
[ https://issues.apache.org/jira/browse/LUCENE-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael McCandless resolved LUCENE-6617. Resolution: Fixed Reduce FST's ram usage -- Key: LUCENE-6617 URL: https://issues.apache.org/jira/browse/LUCENE-6617 Project: Lucene - Core Issue Type: Improvement Reporter: Michael McCandless Assignee: Michael McCandless Fix For: 5.3, Trunk Attachments: LUCENE-6617.patch Spinoff from LUCENE-6199, pulling out just the FST RAM reduction changes. The FST data structure tries to be a RAM efficient representation of a sorted map, but there are a few things I think we can do to trim it even more: * Don't store arc and node count: this is available from the Builder if you really want to do something with it. * Don't use the paged byte store unless the FST is huge; just use a single byte[] * Some members like lastFrozenNode, reusedBytesPerArc, allowArrayArcs are only used during building, so we should move them to the Builder * We don't need to cache NO_OUTPUT: we can ask the Outputs impl for it -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
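One of the bullets above proposes using a single byte[] instead of the paged byte store unless the FST is huge. A minimal, self-contained sketch of that trade-off follows; this is a toy model only, not Lucene's BytesStore — the class name, API, and page-size threshold are illustrative assumptions.

```python
# Toy model of "use a single byte[] unless the FST is huge". This is NOT
# Lucene's BytesStore; the class name, API and threshold are assumptions.

class ToyBytesStore:
    """Append-only byte store that keeps one flat buffer while small and
    switches to fixed-size pages once it grows past one page."""

    def __init__(self, page_size=32768):
        self.page_size = page_size
        self.flat = bytearray()  # single buffer used while the data is small
        self.pages = None        # list of pages once the data grows large

    def _spill_to_pages(self):
        # Grew past one page: re-slice the flat buffer into pages.
        data = bytes(self.flat)
        self.pages = [bytearray(data[i:i + self.page_size])
                      for i in range(0, len(data), self.page_size)] or [bytearray()]
        self.flat = None

    def append(self, data):
        if self.pages is None:
            if len(self.flat) + len(data) <= self.page_size:
                self.flat += data  # small case: one allocation, no page overhead
                return
            self._spill_to_pages()
        for b in data:
            if len(self.pages[-1]) == self.page_size:
                self.pages.append(bytearray())
            self.pages[-1].append(b)

    def is_paged(self):
        return self.pages is not None

    def to_bytes(self):
        return bytes(self.flat) if self.pages is None else b"".join(self.pages)
```

The point of the bullet is the small case: most FSTs fit in one buffer, so the per-page object overhead is pure waste for them.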
[jira] [Commented] (LUCENE-6617) Reduce FST's ram usage
[ https://issues.apache.org/jira/browse/LUCENE-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14608067#comment-14608067 ] ASF subversion and git services commented on LUCENE-6617: - Commit 1688419 from [~mikemccand] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1688419 ] LUCENE-6617: reduce heap usage for small FSTs Reduce FST's ram usage -- Key: LUCENE-6617 URL: https://issues.apache.org/jira/browse/LUCENE-6617 Project: Lucene - Core Issue Type: Improvement Reporter: Michael McCandless Assignee: Michael McCandless Fix For: 5.3, Trunk Attachments: LUCENE-6617.patch Spinoff from LUCENE-6199, pulling out just the FST RAM reduction changes. The FST data structure tries to be a RAM efficient representation of a sorted map, but there are a few things I think we can do to trim it even more: * Don't store arc and node count: this is available from the Builder if you really want to do something with it. * Don't use the paged byte store unless the FST is huge; just use a single byte[] * Some members like lastFrozenNode, reusedBytesPerArc, allowArrayArcs are only used during building, so we should move them to the Builder * We don't need to cache NO_OUTPUT: we can ask the Outputs impl for it -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-7737) Query form not highlighting/displaying error responses
[ https://issues.apache.org/jira/browse/SOLR-7737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Theis updated SOLR-7737: Description: Can be reproduced by searching in a field that does not exist, e.g. q=nonexistingfield:foo - Current UI: the error response is not highlighted; it is all black. - New UI (index.html): the error response is not displayed at all. Also: index.html does not execute the query when hitting Enter on the keyboard. was: Can be reproduced by searching in a field that does not exist, e.g. q=nonexistingfield:foo - Current UI; The error-response is not highlighted, its all black. - New UI (index.html): The error-response is not displayed at all. Query form not highlighting/displaying error responses -- Key: SOLR-7737 URL: https://issues.apache.org/jira/browse/SOLR-7737 Project: Solr Issue Type: Bug Reporter: Theis Priority: Trivial Fix For: 5.2.1 Can be reproduced by searching in a field that does not exist, e.g. q=nonexistingfield:foo - Current UI: the error response is not highlighted; it is all black. - New UI (index.html): the error response is not displayed at all. Also: index.html does not execute the query when hitting Enter on the keyboard. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6639) LRUQueryCache.CachingWrapperWeight not calling policy.onUse() if the first scorer is skipped
[ https://issues.apache.org/jira/browse/LUCENE-6639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14608281#comment-14608281 ] Terry Smith commented on LUCENE-6639: - This doesn't seem pressing but irked me enough to submit a ticket. It feels that we should be able to be more correct but the current API isn't very supportive of that work flow. I slightly prefer calling onUse() from createWeight() as it does make this edge case of the first segment go away which I feel is harder to reason about than someone creating a weight and not using it. The improved multi-threaded search code in IndexSearcher is a great example of this misbehaving where there is no guarantee that the first segment's Weight.scorer() will be called before the other segments. However I'm not familiar with use cases that use Query.createWeight() without executing some kind of search or explain to know if they are more of an issue. Is adding bookend methods to more correctly detect the begin/end of the search phase seen as too messy and special casey? At the end of the day I also wonder if it's worth the complexity but wanted to open this ticket to bootstrap the discussion as this could be a hard problem to diagnose in the future (someone wants to know why their query isn't getting cached and it's due to some obscure detail like this). LRUQueryCache.CachingWrapperWeight not calling policy.onUse() if the first scorer is skipped Key: LUCENE-6639 URL: https://issues.apache.org/jira/browse/LUCENE-6639 Project: Lucene - Core Issue Type: Bug Affects Versions: 5.3 Reporter: Terry Smith Priority: Minor Attachments: LUCENE-6639.patch The method {{org.apache.lucene.search.LRUQueryCache.CachingWrapperWeight.scorer(LeafReaderContext)}} starts with {code} if (context.ord == 0) { policy.onUse(getQuery()); } {code} which can result in a missed call for queries that return a null scorer for the first segment. 
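The reported snippet can be modeled with a small, self-contained simulation showing why gating onUse() on segment ordinal 0 under-counts. These classes are NOT Lucene's LRUQueryCache or QueryCachingPolicy; they are a toy model, and all names below are illustrative.

```python
# Toy simulation of the bookkeeping issue: firing onUse() only when segment
# ord 0 is scored misses queries whose first segment never gets a scorer call.

class UsagePolicy:
    def __init__(self):
        self.counts = {}

    def on_use(self, query):
        self.counts[query] = self.counts.get(query, 0) + 1

def search_counting_in_scorer(query, scored_ords, policy):
    # Mirrors the reported snippet: onUse() fires from scorer() only for
    # ord == 0, so a query whose first segment is skipped is never counted.
    for ord_ in scored_ords:
        if ord_ == 0:
            policy.on_use(query)

def search_counting_at_weight_creation(query, scored_ords, policy):
    # The alternative discussed in the comment: count once when the caching
    # weight is created, independent of which segments are scored.
    policy.on_use(query)
    for ord_ in scored_ords:
        pass  # scoring proceeds; no per-segment counting needed
```

With scored_ords = [1, 2, 3] (segment 0 skipped), the first variant never counts the query, while the second counts it exactly once.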
[jira] [Commented] (SOLR-7719) Suggester Component results parsing
[ https://issues.apache.org/jira/browse/SOLR-7719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14608284#comment-14608284 ] Alessandro Benedetti commented on SOLR-7719: I will provide a clean patch today, taking care of the style. Cheers Suggester Component results parsing --- Key: SOLR-7719 URL: https://issues.apache.org/jira/browse/SOLR-7719 Project: Solr Issue Type: Improvement Components: SolrJ Affects Versions: 5.2.1 Reporter: Alessandro Benedetti Assignee: Tommaso Teofili Priority: Minor Labels: queryResponse, suggester, suggestions Attachments: SOLR-7719.patch Original Estimate: 24h Remaining Estimate: 24h Currently SolrJ org.apache.solr.client.solrj.response.QueryResponse does not manage suggestions coming from the Suggest Component. It would be nice to have the suggestions properly managed and returned with simple getter methods. Current response: <lst name="suggest"> <lst name="dictionary1"> <lst name="queryTerm"> <int name="numFound">2</int> <arr name="suggestions"> <lst> <str name="term">suggestion1</str>.. <str name="term">suggestion2</str>… </lst> </arr> </lst> </lst>.. This will be parsed accordingly, producing an easy-to-use Java Map: dictionary → suggestions. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
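A hedged sketch of the kind of parsing this issue asks SolrJ to add, written here against the JSON form of a Suggest Component response (wt=json) rather than SolrJ's NamedList. The exact response shape is an assumption based on the structure quoted in the issue.

```python
import json

# Sketch only: flattens a suggest response into
# {dictionary: {query_term: [suggestion, ...]}}. The response shape
# ("suggest" -> dictionary -> term -> {"numFound", "suggestions"}) is assumed.

def parse_suggest_response(raw_json):
    body = json.loads(raw_json)
    result = {}
    for dictionary, terms in body.get("suggest", {}).items():
        result[dictionary] = {
            term: [s["term"] for s in payload.get("suggestions", [])]
            for term, payload in terms.items()
        }
    return result
```

A client would then look up suggestions per dictionary and query term with plain dict access, which is the "easy to use Java Map" the issue asks SolrJ to provide.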
[jira] [Commented] (LUCENE-6617) Reduce FST's ram usage
[ https://issues.apache.org/jira/browse/LUCENE-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14608037#comment-14608037 ] ASF subversion and git services commented on LUCENE-6617: - Commit 1688412 from [~mikemccand] in branch 'dev/trunk' [ https://svn.apache.org/r1688412 ] LUCENE-6617: reduce heap usage for small FSTs Reduce FST's ram usage -- Key: LUCENE-6617 URL: https://issues.apache.org/jira/browse/LUCENE-6617 Project: Lucene - Core Issue Type: Improvement Reporter: Michael McCandless Assignee: Michael McCandless Fix For: 5.3, Trunk Attachments: LUCENE-6617.patch Spinoff from LUCENE-6199, pulling out just the FST RAM reduction changes. The FST data structure tries to be a RAM efficient representation of a sorted map, but there are a few things I think we can do to trim it even more: * Don't store arc and node count: this is available from the Builder if you really want to do something with it. * Don't use the paged byte store unless the FST is huge; just use a single byte[] * Some members like lastFrozenNode, reusedBytesPerArc, allowArrayArcs are only used during building, so we should move them to the Builder * We don't need to cache NO_OUTPUT: we can ask the Outputs impl for it -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-7737) Query form not highlighting/displaying error responses
Theis created SOLR-7737: --- Summary: Query form not highlighting/displaying error responses Key: SOLR-7737 URL: https://issues.apache.org/jira/browse/SOLR-7737 Project: Solr Issue Type: Bug Reporter: Theis Priority: Trivial Fix For: 5.2.1 Can be reproduced by searching in a field that does not exist, e.g. q=nonexistingfield:foo - Current UI: the error response is not highlighted; it is all black. - New UI (index.html): the error response is not displayed at all. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-6631) Lucene Document Classification
[ https://issues.apache.org/jira/browse/LUCENE-6631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alessandro Benedetti updated LUCENE-6631: - Attachment: LUCENE-6631.patch Introduced : - field boosting for knnClassifier ( text) - module for Document Classification including -knnDocumentClassifier -SimpleNaivesDocumentClassifier Lucene Document Classification -- Key: LUCENE-6631 URL: https://issues.apache.org/jira/browse/LUCENE-6631 Project: Lucene - Core Issue Type: Improvement Components: modules/classification Affects Versions: 5.2.1 Reporter: Alessandro Benedetti Labels: classification Attachments: LUCENE-6631.patch Currently the Lucene Classification module supports the classification for an input text using the Lucene index as a trained model. This improvement is adding to the module a set of components to provide Document classification ( where the Document is a Lucene document ). All selected fields from the Document will have their part in the classification ( including the use of the proper Analyzer per field). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7707) Add StreamExpression Support to RollupStream
[ https://issues.apache.org/jira/browse/SOLR-7707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14608288#comment-14608288 ] Joel Bernstein commented on SOLR-7707: -- Ok, thanks Dennis. I'll take a look at the ParallelStream test. Add StreamExpression Support to RollupStream Key: SOLR-7707 URL: https://issues.apache.org/jira/browse/SOLR-7707 Project: Solr Issue Type: Improvement Components: SolrJ Reporter: Dennis Gove Priority: Minor Attachments: SOLR-7707.patch This ticket is to add Stream Expression support to the RollupStream as discussed in SOLR-7560. Proposed expression syntax for the RollupStream (copied from that ticket) {code} rollup( someStream(), over=fieldA, fieldB, fieldC, min(fieldA), max(fieldA), min(fieldB), mean(fieldD), sum(fieldC) ) {code} This requires making the *Metric types Expressible but I think that ends up as a good thing. Would make it real easy to support other options on metrics like excluding outliers, for example find the sum of values within 3 standard deviations from the mean could be {code} sum(fieldC, limit=standardDev(3)) {code} (note, how that particular calculation could be implemented is left as an exercise for the reader, I'm just using it as an example of adding additional options on a relatively simple metric). Another option example is what to do with null values. For example, in some cases a null should not impact a mean but in others it should. You could express those as {code} mean(fieldA, replace(null, 0)) // replace null values with 0 thus leading to an impact on the mean mean(fieldA, includeNull=true) // nulls are counted in the denominator but nothing added to numerator mean(fieldA, includeNull=false) // nulls neither counted in denominator nor added to numerator mean(fieldA, replace(null, fieldB), includeNull=true) // if fieldA is null replace it with fieldB, include null fieldB in mean {code} so on and so forth. 
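The proposed rollup semantics above (a stream sorted by the "over" fields, with metrics reduced per group) can be sketched in a few lines. This is a toy model, not the SolrJ RollupStream API; the function and metric names are illustrative.

```python
from itertools import groupby

# Toy rollup mirroring rollup(someStream(), over=..., min(...), sum(...), ...).
# The input stream must already be sorted by the `over` fields, as the
# proposal assumes. `metrics` maps an output name to (field, reducer).

def rollup(stream, over, metrics):
    key = lambda doc: tuple(doc[f] for f in over)
    rows = []
    for group_key, docs in groupby(stream, key=key):
        docs = list(docs)
        row = dict(zip(over, group_key))
        for name, (field, reduce_fn) in metrics.items():
            row[name] = reduce_fn([d[field] for d in docs])
        rows.append(row)
    return rows

def mean_skip_null(values):
    # One way to realize the includeNull=false option discussed above:
    # nulls count neither in the numerator nor the denominator.
    vals = [v for v in values if v is not None]
    return sum(vals) / len(vals) if vals else None
```

For example, rollup(stream, over=["fieldA"], metrics={"sum(fieldC)": ("fieldC", sum)}) corresponds to rollup(someStream(), over=fieldA, sum(fieldC)); null-handling options become alternative reducers.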
[jira] [Updated] (SOLR-7719) Suggester Component results parsing
[ https://issues.apache.org/jira/browse/SOLR-7719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alessandro Benedetti updated SOLR-7719: --- Attachment: SOLR-7719.patch Code Style fixed, patch ready Suggester Component results parsing --- Key: SOLR-7719 URL: https://issues.apache.org/jira/browse/SOLR-7719 Project: Solr Issue Type: Improvement Components: SolrJ Affects Versions: 5.2.1 Reporter: Alessandro Benedetti Assignee: Tommaso Teofili Priority: Minor Labels: queryResponse, suggester, suggestions Attachments: SOLR-7719.patch, SOLR-7719.patch Original Estimate: 24h Remaining Estimate: 24h Currently SolrJ org.apache.solr.client.solrj.response.QueryResponse is not managing suggestions coming from the Suggest Component . It would be nice to have the suggestions properly managed and returned with simply getter methods. Current Json : lst name=suggest lst name=dictionary1 lst name=queryTerm int name=numFound2/int arr name=suggestions lst str name=termsuggestion1/str.. str name=termsuggestion2/str… /lst /arr /lst /lst.. This will be parsed accordingly . Producing an easy to use Java Map. Dictionary2suggestions -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-2242) Get distinct count of names for a facet field
[ https://issues.apache.org/jira/browse/SOLR-2242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shalin Shekhar Mangar resolved SOLR-2242. - Resolution: Duplicate Marking as duplicate of SOLR-6968 Get distinct count of names for a facet field - Key: SOLR-2242 URL: https://issues.apache.org/jira/browse/SOLR-2242 Project: Solr Issue Type: New Feature Components: Response Writers Affects Versions: 4.0-ALPHA Reporter: Bill Bell Priority: Minor Fix For: Trunk, 4.9 Attachments: SOLR-2242-3x.patch, SOLR-2242-3x_5_tests.patch, SOLR-2242-solr40-3.patch, SOLR-2242.patch, SOLR-2242.patch, SOLR-2242.patch, SOLR-2242.shard.withtests.patch, SOLR-2242.solr3.1-fix.patch, SOLR-2242.solr3.1.patch, SOLR.2242.solr3.1.patch When returning facet.field=name of field you will get a list of matches for distinct values. This is normal behavior. This patch tells you how many distinct values you have (# of rows). Use with limit=-1 and mincount=1. The feature is called namedistinct. Here is an example: Parameters: facet.numTerms or f.field.facet.numTerms = true (default is false) - turn on distinct counting of terms facet.field - the field to count the terms It creates a new section in the facet section... http://localhost:8983/solr/select?shards=localhost:8983/solr,localhost:7574/solr&indent=true&q=*:*&facet=true&facet.mincount=1&facet.numTerms=true&facet.limit=-1&facet.field=price http://localhost:8983/solr/select?shards=localhost:8983/solr,localhost:7574/solr&indent=true&q=*:*&facet=true&facet.mincount=1&facet.numTerms=false&facet.limit=-1&facet.field=price http://localhost:8983/solr/select?shards=localhost:8983/solr,localhost:7574/solr&indent=true&q=*:*&facet=true&facet.mincount=1&facet.numTerms=true&facet.limit=-1&facet.field=price This currently only works on facet.field.
{code}
<lst name="facet_counts">
  <lst name="facet_queries"/>
  <lst name="facet_fields">...</lst>
  <lst name="facet_numTerms">
    <lst name="localhost:8983/solr/">
      <int name="price">14</int>
    </lst>
    <lst name="localhost:8080/solr/">
      <int name="price">14</int>
    </lst>
  </lst>
  <lst name="facet_dates"/>
  <lst name="facet_ranges"/>
</lst>

OR with no sharding -

<lst name="facet_numTerms">
  <int name="price">14</int>
</lst>
{code}
Several people use this to get the group.field count (the # of groups). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
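The sample response above reports numTerms per shard, which hints at why a simple aggregation is not enough: distinct counts are not additive across shards, since shards can share values. A hedged toy illustration (data shapes and names are illustrative, not Solr's):

```python
# Toy model: per-shard distinct counts vs. a correct global distinct count.
# Summing per-shard counts over-counts values present on multiple shards;
# a correct global count needs the union of the values themselves.

def distinct_per_shard(shards, field):
    return {name: len({doc[field] for doc in docs}) for name, docs in shards.items()}

def global_distinct(shards, field):
    values = set()
    for docs in shards.values():
        values.update(doc[field] for doc in docs)
    return len(values)
```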
[jira] [Commented] (LUCENE-6212) Remove IndexWriter's per-document analyzer add/updateDocument APIs
[ https://issues.apache.org/jira/browse/LUCENE-6212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14608407#comment-14608407 ] Sanne Grinovero commented on LUCENE-6212: - Hi Adrien, thanks for replying! Yes I agree with you that _in general_ this could be abused and I understand the caveats, still I would like to do it. Since Lucene is a library for developers and it's not an end user product I would prefer it could give me a bit more flexibility. Remove IndexWriter's per-document analyzer add/updateDocument APIs -- Key: LUCENE-6212 URL: https://issues.apache.org/jira/browse/LUCENE-6212 Project: Lucene - Core Issue Type: Improvement Reporter: Michael McCandless Assignee: Michael McCandless Fix For: 5.0, 5.1, Trunk Attachments: LUCENE-6212.patch IndexWriter already takes an analyzer up-front (via IndexWriterConfig), but it also allows you to specify a different one for each add/updateDocument. I think this is quite dangerous/trappy since it means you can easily index tokens for that document that don't match at search-time based on the search-time analyzer. I think we should remove this trap in 5.0. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7724) Clustering Component results parsing
[ https://issues.apache.org/jira/browse/SOLR-7724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14608337#comment-14608337 ] Dawid Weiss commented on SOLR-7724: --- I reviewed the patch and I looked at the affected classes. I understand your confusion with regard to formatting -- those existing classes are terribly formatted. We should fix this incrementally though, so at least new code should adhere to the standard (as Mike very eloquently explained). I'll commit it in, thanks Alessandro. Clustering Component results parsing Key: SOLR-7724 URL: https://issues.apache.org/jira/browse/SOLR-7724 Project: Solr Issue Type: Improvement Components: SolrJ Affects Versions: 5.2.1 Reporter: Alessandro Benedetti Assignee: Dawid Weiss Priority: Minor Labels: clustering, clusters, queryResponse Attachments: SOLR-7724.patch, SOLR-7724.patch, SOLR-7724.patch Original Estimate: 24h Remaining Estimate: 24h Currently SolrJ org.apache.solr.client.solrj.response.QueryResponse is not managing clusters coming from the Clustering Component . It would be nice to have the clusters properly managed and returned with simply getter methods. Current Json : clusters:[{ labels:[label1], score:14.067292538482793, docs:[id1, id2, id3] }, { labels:[label2], score:16.932201244715046, docs:[id1, id2, id3]}]} This will be parsed accordingly . Producing an easy to use Java List. Clusters -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
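A hedged sketch of the parsing this issue asks for, based on the clusters JSON quoted in the description. The Cluster shape below is an assumption for illustration; SolrJ's actual classes may differ.

```python
import json
from dataclasses import dataclass

# Sketch only: turn the clustering component's "clusters" JSON into simple
# objects with getter-style access. The field names mirror the quoted JSON.

@dataclass
class Cluster:
    labels: list
    score: float
    docs: list

def parse_clusters(raw_json):
    body = json.loads(raw_json)
    return [Cluster(c["labels"], c["score"], c["docs"])
            for c in body.get("clusters", [])]
```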
[jira] [Created] (LUCENE-6644) Symlinked checkout folder causes access denied exceptions
Dawid Weiss created LUCENE-6644: --- Summary: Symlinked checkout folder causes access denied exceptions Key: LUCENE-6644 URL: https://issues.apache.org/jira/browse/LUCENE-6644 Project: Lucene - Core Issue Type: Improvement Components: general/test Reporter: Dawid Weiss Assignee: Dawid Weiss Priority: Minor Had to make some space on my drive and moved certain repositories to a different volume (via windows junction). This causes exceptions from the security manager if the tests are run from the original location (which resolves to a different path). Don't have any thoughts about this (whether it should be fixed or how), just wanted to make note it's the case. {code} [junit4] Throwable #1: java.security.AccessControlException: access denied (java.io.FilePermission D:\Work\lu cene\trunk\solr\build\solr-solrj\test\J1\temp\solr.common.util.TestJavaBinCodec_25A5FF5CB51DB333-001 write) [junit4]at __randomizedtesting.SeedInfo.seed([25A5FF5CB51DB333]:0) [junit4]at java.security.AccessControlContext.checkPermission(AccessControlContext.java:457) [junit4]at java.security.AccessController.checkPermission(AccessController.java:884) [junit4]at java.lang.SecurityManager.checkPermission(SecurityManager.java:549) [junit4]at java.lang.SecurityManager.checkWrite(SecurityManager.java:979) [junit4]at sun.nio.fs.WindowsPath.checkWrite(WindowsPath.java:799) [junit4]at sun.nio.fs.WindowsFileSystemProvider.createDirectory(WindowsFileSystemProvider.java:491) [junit4]at org.apache.lucene.mockfile.FilterFileSystemProvider.createDirectory(FilterFileSystemProvider. java:133) [junit4]at org.apache.lucene.mockfile.FilterFileSystemProvider.createDirectory(FilterFileSystemProvider. java:133) [junit4]at org.apache.lucene.mockfile.FilterFileSystemProvider.createDirectory(FilterFileSystemProvider. java:133) [junit4]at org.apache.lucene.mockfile.FilterFileSystemProvider.createDirectory(FilterFileSystemProvider. 
java:133) [junit4]at java.nio.file.Files.createDirectory(Files.java:674) [junit4]at org.apache.lucene.util.LuceneTestCase.createTempDir(LuceneTestCase.java:2584) [junit4]at org.apache.solr.SolrTestCaseJ4.beforeClass(SolrTestCaseJ4.java:201) [junit4]at java.lang.Thread.run(Thread.java:745) {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7724) Clustering Component results parsing
[ https://issues.apache.org/jira/browse/SOLR-7724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14608371#comment-14608371 ] ASF subversion and git services commented on SOLR-7724: --- Commit 1688455 from [~dawidweiss] in branch 'dev/trunk' [ https://svn.apache.org/r1688455 ] SOLR-7724: SolrJ now supports parsing the output of the clustering component. (Alessandro Benedetti via Dawid Weiss) Clustering Component results parsing Key: SOLR-7724 URL: https://issues.apache.org/jira/browse/SOLR-7724 Project: Solr Issue Type: Improvement Components: SolrJ Affects Versions: 5.2.1 Reporter: Alessandro Benedetti Assignee: Dawid Weiss Priority: Minor Labels: clustering, clusters, queryResponse Attachments: SOLR-7724.patch, SOLR-7724.patch, SOLR-7724.patch Original Estimate: 24h Remaining Estimate: 24h Currently SolrJ org.apache.solr.client.solrj.response.QueryResponse is not managing clusters coming from the Clustering Component . It would be nice to have the clusters properly managed and returned with simply getter methods. Current Json : clusters:[{ labels:[label1], score:14.067292538482793, docs:[id1, id2, id3] }, { labels:[label2], score:16.932201244715046, docs:[id1, id2, id3]}]} This will be parsed accordingly . Producing an easy to use Java List. Clusters -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7724) Clustering Component results parsing
[ https://issues.apache.org/jira/browse/SOLR-7724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14608352#comment-14608352 ] Alessandro Benedetti commented on SOLR-7724: Thanks Dawid, the current official CodeStyle is now much clearer to me. As discussed in the related Lucene issue, I think my confusion derived from the classes I have been working with. For my contributions I am now using the official Idea code style and formatting only my changed code, so the patches produced are much cleaner :) Thanks for the commit! Feel free to close the issue as soon as you do. I have a parallel issue, which I think Tommaso is taking care of: https://issues.apache.org/jira/browse/SOLR-7719 It is pretty much the same but for the Suggester component! Clustering Component results parsing Key: SOLR-7724 URL: https://issues.apache.org/jira/browse/SOLR-7724 Project: Solr Issue Type: Improvement Components: SolrJ Affects Versions: 5.2.1 Reporter: Alessandro Benedetti Assignee: Dawid Weiss Priority: Minor Labels: clustering, clusters, queryResponse Attachments: SOLR-7724.patch, SOLR-7724.patch, SOLR-7724.patch Original Estimate: 24h Remaining Estimate: 24h Currently SolrJ org.apache.solr.client.solrj.response.QueryResponse does not manage clusters coming from the Clustering Component. It would be nice to have the clusters properly managed and returned with simple getter methods. Current Json: clusters:[{ labels:[label1], score:14.067292538482793, docs:[id1, id2, id3] }, { labels:[label2], score:16.932201244715046, docs:[id1, id2, id3]}]} This will be parsed accordingly, producing an easy-to-use Java List: Clusters. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6212) Remove IndexWriter's per-document analyzer add/updateDocument APIs
[ https://issues.apache.org/jira/browse/LUCENE-6212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14608354#comment-14608354 ] Adrien Grand commented on LUCENE-6212: -- bq. There are perfectly valid use cases to use a different Analyzer at query time rather than indexing time This change doesn't force you to use the same analyzer at index time and search time, just to always use the same analyzer at index time. bq. it's also possible to have text of different sources which has been pre-processed in different ways, so needs to be tokenized differently to get a consistent output One way that this feature was misused was to handle multi-lingual content, but this would break term statistics as different words could be filtered to the same stem and a single word could be filtered to two different stems depending on the language. In general, if different analysis chains are required, it's better to just use different fields or even different indices. Remove IndexWriter's per-document analyzer add/updateDocument APIs -- Key: LUCENE-6212 URL: https://issues.apache.org/jira/browse/LUCENE-6212 Project: Lucene - Core Issue Type: Improvement Reporter: Michael McCandless Assignee: Michael McCandless Fix For: 5.0, 5.1, Trunk Attachments: LUCENE-6212.patch IndexWriter already takes an analyzer up-front (via IndexWriterConfig), but it also allows you to specify a different one for each add/updateDocument. I think this is quite dangerous/trappy since it means you can easily index tokens for that document that don't match at search-time based on the search-time analyzer. I think we should remove this trap in 5.0. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
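The term-statistics point above can be made concrete with a toy model: if documents in one field go through different analysis chains, the same surface word can be indexed as two different terms, splitting its document frequency. The "analyzers" below are trivial stand-ins, not Lucene analyzers.

```python
# Toy illustration of why mixing analysis chains in one field breaks term
# statistics. Both "analyzers" are crude stand-ins for real analysis chains.

def stemming_analyzer(text):
    # crude stemmer: drop a trailing "ing"
    return [w[:-3] if w.endswith("ing") else w for w in text.lower().split()]

def whitespace_analyzer(text):
    return text.lower().split()

def doc_freq(docs_with_analyzers):
    """Count, per term, how many documents contain it after analysis."""
    df = {}
    for text, analyze in docs_with_analyzers:
        for term in set(analyze(text)):
            df[term] = df.get(term, 0) + 1
    return df
```

With mixed chains, "running" indexes as "runn" in one document and "running" in another, so each term has document frequency 1 instead of the single term with frequency 2 that a consistent chain produces — exactly the broken-statistics scenario described in the comment.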
[jira] [Resolved] (SOLR-7724) Clustering Component results parsing
[ https://issues.apache.org/jira/browse/SOLR-7724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dawid Weiss resolved SOLR-7724. --- Resolution: Fixed Fix Version/s: Trunk 5.3 Clustering Component results parsing Key: SOLR-7724 URL: https://issues.apache.org/jira/browse/SOLR-7724 Project: Solr Issue Type: Improvement Components: SolrJ Affects Versions: 5.2.1 Reporter: Alessandro Benedetti Assignee: Dawid Weiss Priority: Minor Labels: clustering, clusters, queryResponse Fix For: 5.3, Trunk Attachments: SOLR-7724.patch, SOLR-7724.patch, SOLR-7724.patch Original Estimate: 24h Remaining Estimate: 24h Currently SolrJ org.apache.solr.client.solrj.response.QueryResponse is not managing clusters coming from the Clustering Component . It would be nice to have the clusters properly managed and returned with simply getter methods. Current Json : clusters:[{ labels:[label1], score:14.067292538482793, docs:[id1, id2, id3] }, { labels:[label2], score:16.932201244715046, docs:[id1, id2, id3]}]} This will be parsed accordingly . Producing an easy to use Java List. Clusters -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
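The JSON shape quoted in the issue (labels, score, docs per cluster) maps naturally onto a small value class. The sketch below is a hypothetical POJO mirroring that shape; the actual classes added to SolrJ by this patch may differ in names and API.

```java
import java.util.List;

/** Hypothetical POJO mirroring the per-cluster JSON from the issue:
 *  { labels:[...], score:..., docs:[...] }. Illustrative only. */
public class Cluster {
    private final List<String> labels;
    private final double score;
    private final List<String> docIds;

    public Cluster(List<String> labels, double score, List<String> docIds) {
        this.labels = labels;
        this.score = score;
        this.docIds = docIds;
    }

    public List<String> getLabels() { return labels; }
    public double getScore() { return score; }
    public List<String> getDocIds() { return docIds; }
}
```

With such a class, the clustering section of a response becomes a plain `List<Cluster>` that callers can iterate with simple getters instead of digging through a raw NamedList.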
[jira] [Commented] (SOLR-247) Allow facet.field=* to facet on all fields (without knowing what they are)
[ https://issues.apache.org/jira/browse/SOLR-247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14608334#comment-14608334 ] Philip Willoughby commented on SOLR-247: We have a concrete use-case for which this facility is required. We have a requirement to add arbitrary tags in arbitrary groups to products, and to be able to filter by those tags in the same way as you can filter our documents by more-structured attributes (e.g. price, discount, size, designer, etc). The semantics we want are to ignore the filter on property X when faceting property X. With our known-in-advance fields this is easy: taking the example of designers we add an fq={!tag=did}designer_id:## for filtering and add facet.field={!ex=did}designer_id when looking for designer facets. With these unknown-in-advance fields it is hard: what we had hoped to do was use facet.field=arbitrary_tag_* to generate the tag group facets and then if someone filters to group X=Y we'd add fq={!tag=atX}arbitrary_tag_X:Y for the filter and pass facet.field={!ex=atX}arbitrary_tag_X to get the facets. Of course in this case we would also want to pass facet.field=arbitrary_tag_* to get the facets over the other tags which means faceting arbitrary_tag_X twice, and creates a precedence problem. We want, I think, facet.field=arbitrary_tag_* to work, but to be disregarded for any field it would otherwise match which is explicitly named as a facet.field The other model we have considered is to combine every group and tag into a string like group\u001Ftag, put them all into a field named tags and facet over that. But this means that we can't disregard the filters over group X when faceting group X while respecting them while faceting group Y etc without making multiple queries. 
Allow facet.field=* to facet on all fields (without knowing what they are) -- Key: SOLR-247 URL: https://issues.apache.org/jira/browse/SOLR-247 Project: Solr Issue Type: Improvement Reporter: Ryan McKinley Priority: Minor Labels: beginners, newdev Attachments: SOLR-247-FacetAllFields.patch, SOLR-247.patch, SOLR-247.patch, SOLR-247.patch I don't know if this is a good idea to include -- it is potentially a bad idea to use it, but that can be ok. This came out of trying to use faceting for the LukeRequestHandler top term collecting. http://www.nabble.com/Luke-request-handler-issue-tf3762155.html -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
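The tag/exclude pattern Philip describes — `fq={!tag=did}designer_id:##` paired with `facet.field={!ex=did}designer_id` — can be built programmatically as plain request parameter values. A minimal sketch (the field and tag names are illustrative, not from any real schema):

```java
/** Builds the fq / facet.field value pair for filter-excluded faceting,
 *  e.g. fq={!tag=did}designer_id:42 with facet.field={!ex=did}designer_id. */
public class FacetExclusion {
    /** The filter query, tagged so facets can exclude it. */
    public static String taggedFilter(String tag, String field, String value) {
        return "{!tag=" + tag + "}" + field + ":" + value;
    }

    /** The facet.field value that ignores the tagged filter. */
    public static String excludedFacetField(String tag, String field) {
        return "{!ex=" + tag + "}" + field;
    }
}
```

For the unknown-in-advance fields, one could first fetch the actual field list (as Erick suggests later in this thread) and then generate one such pair per matching `arbitrary_tag_*` field.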
[jira] [Commented] (SOLR-7724) Clustering Component results parsing
[ https://issues.apache.org/jira/browse/SOLR-7724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14608383#comment-14608383 ] ASF subversion and git services commented on SOLR-7724: --- Commit 1688458 from [~dawidweiss] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1688458 ] SOLR-7724: SolrJ now supports parsing the output of the clustering component. (Alessandro Benedetti via Dawid Weiss) Clustering Component results parsing Key: SOLR-7724 URL: https://issues.apache.org/jira/browse/SOLR-7724 Project: Solr Issue Type: Improvement Components: SolrJ Affects Versions: 5.2.1 Reporter: Alessandro Benedetti Assignee: Dawid Weiss Priority: Minor Labels: clustering, clusters, queryResponse Attachments: SOLR-7724.patch, SOLR-7724.patch, SOLR-7724.patch Original Estimate: 24h Remaining Estimate: 24h Currently SolrJ org.apache.solr.client.solrj.response.QueryResponse is not managing clusters coming from the Clustering Component . It would be nice to have the clusters properly managed and returned with simply getter methods. Current Json : clusters:[{ labels:[label1], score:14.067292538482793, docs:[id1, id2, id3] }, { labels:[label2], score:16.932201244715046, docs:[id1, id2, id3]}]} This will be parsed accordingly . Producing an easy to use Java List. Clusters -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-7441) Improve overall robustness of the Streaming stack: Streaming API, Streaming Expressions, Parallel SQL
[ https://issues.apache.org/jira/browse/SOLR-7441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14608428#comment-14608428 ] Joel Bernstein edited comment on SOLR-7441 at 6/30/15 3:03 PM: --- Added a patch that adds an Exception handling framework. The new ExceptionStream wraps a TupleStream and catches any exceptions from the underlying Stream. It then logs the exception and returns a Tuple with the exception message included. The SolrStream has some code added to look for Tuples with an Exception message and then to throw an Exception passing along the exception message from the Tuple. This effectively allows exceptions to be passed through any number of distributed tiers. The nice thing about this design is that the SolrStream throws and the ExceptionStream catches. All other Streams in between can ignore exception handling entirely. No tests yet to prove this concept actually works, but it looks promising. was (Author: joel.bernstein): Added a patch that adds an Exception handling framework. The new ExceptionStream wraps a TupleStream and catches any exceptions from the underlying Stream. It then logs the exception and returns a Tuple with the exception message included. The SolrStream has some code added to look for Tuples with an Exception message and then to throw an Exception passing along the exception message from the Tuple. This effectively allow exceptions to be passed through any number of distributed tiers. The nice thing about this design is that the SolrStream throws and the ExceptionStream catches. All other Streams in between can ignore exception handling entirely. No tests yet to prove this concept actually works, but it looks promising. 
Improve overall robustness of the Streaming stack: Streaming API, Streaming Expressions, Parallel SQL - Key: SOLR-7441 URL: https://issues.apache.org/jira/browse/SOLR-7441 Project: Solr Issue Type: Improvement Affects Versions: 5.1 Reporter: Erick Erickson Assignee: Joel Bernstein Priority: Minor Attachments: SOLR-7441.patch, SOLR-7441.patch, SOLR-7441.patch It's harder than it could be to figure out what the error is when using Streaming Aggregation. For instance if you specify an fl parameter for a field that doesn't exist it's hard to figure out that's the cause. This is true even if you look in the Solr logs. I'm not quite sure whether it'd be possible to report this at the client level or not, but it seems at least we could report something more helpful in the Solr logs. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-7441) Improve overall robustness of the Streaming stack: Streaming API, Streaming Expressions, Parallel SQL
[ https://issues.apache.org/jira/browse/SOLR-7441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14608428#comment-14608428 ] Joel Bernstein edited comment on SOLR-7441 at 6/30/15 3:03 PM: --- Added a patch that adds an Exception handling framework to the Streaming API. The new ExceptionStream wraps a TupleStream and catches any exceptions from the underlying Stream. It then logs the exception and returns a Tuple with the exception message included. The SolrStream has some code added to look for Tuples with an Exception message and then to throw an Exception passing along the exception message from the Tuple. This effectively allows exceptions to be passed through any number of distributed tiers. The nice thing about this design is that the SolrStream throws and the ExceptionStream catches. All other Streams in between can ignore exception handling entirely. No tests yet to prove this concept actually works, but it looks promising. was (Author: joel.bernstein): Added a patch that adds an Exception handling framework. The new ExceptionStream wraps a TupleStream and catches any exceptions from the underlying Stream. It then logs the exception and returns a Tuple with the exception message included. The SolrStream has some code added to look for Tuples with an Exception message and then to throw an Exception passing along the exception message from the Tuple. This effectively allows exceptions to be passed through any number of distributed tiers. The nice thing about this design is that the SolrStream throws and the ExceptionStream catches. All other Streams in between can ignore exception handling entirely. No tests yet to prove this concept actually works, but it looks promising. 
Improve overall robustness of the Streaming stack: Streaming API, Streaming Expressions, Parallel SQL - Key: SOLR-7441 URL: https://issues.apache.org/jira/browse/SOLR-7441 Project: Solr Issue Type: Improvement Affects Versions: 5.1 Reporter: Erick Erickson Assignee: Joel Bernstein Priority: Minor Attachments: SOLR-7441.patch, SOLR-7441.patch, SOLR-7441.patch It's harder than it could be to figure out what the error is when using Streaming Aggregation. For instance if you specify an fl parameter for a field that doesn't exist it's hard to figure out that's the cause. This is true even if you look in the Solr logs. I'm not quite sure whether it'd be possible to report this at the client level or not, but it seems at least we could report something more helpful in the Solr logs. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-7441) Improve overall robustness of the Streaming stack: Streaming API, Streaming Expressions, Parallel SQL
[ https://issues.apache.org/jira/browse/SOLR-7441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-7441: - Attachment: SOLR-7441.patch Added a patch that adds an Exception handling framework. The new ExceptionStream wraps a TupleStream and catches any exceptions from the underlying Stream. It then logs the exception and returns a Tuple with the exception message included. The SolrStream has some code added to look for Tuples with an Exception message and then to throw an Exception passing along the exception message from the Tuple. This effectively allows exceptions to be passed through any number of distributed tiers. The nice thing about this design is that the SolrStream throws and the ExceptionStream catches. All other Streams in between can ignore exception handling entirely. No tests yet to prove this concept actually works, but it looks promising. Improve overall robustness of the Streaming stack: Streaming API, Streaming Expressions, Parallel SQL - Key: SOLR-7441 URL: https://issues.apache.org/jira/browse/SOLR-7441 Project: Solr Issue Type: Improvement Affects Versions: 5.1 Reporter: Erick Erickson Assignee: Joel Bernstein Priority: Minor Attachments: SOLR-7441.patch, SOLR-7441.patch, SOLR-7441.patch It's harder than it could be to figure out what the error is when using Streaming Aggregation. For instance if you specify an fl parameter for a field that doesn't exist it's hard to figure out that's the cause. This is true even if you look in the Solr logs. I'm not quite sure whether it'd be possible to report this at the client level or not, but it seems at least we could report something more helpful in the Solr logs. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
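The design Joel describes — a wrapper that catches exceptions from the underlying stream and converts them into a tuple carrying the message, so only the outermost catcher and the innermost thrower need to care — can be sketched with a stripped-down stream interface. The real TupleStream/Tuple APIs are richer; this is a shape illustration only, with invented names.

```java
import java.util.HashMap;
import java.util.Map;

public class ExceptionStreamSketch {
    /** Minimal stand-in for TupleStream: yields map-backed "tuples" or throws. */
    interface Stream { Map<String, Object> read() throws Exception; }

    /** Wraps a stream; any exception becomes a tuple carrying the message,
     *  which intermediate tiers can pass along untouched. */
    static Stream wrap(Stream inner) {
        return () -> {
            try {
                return inner.read();
            } catch (Exception e) {
                Map<String, Object> err = new HashMap<>();
                err.put("EXCEPTION", e.getMessage()); // downstream detects and rethrows
                err.put("EOF", true);
                return err;
            }
        };
    }

    /** Demo: a failing stream surfaces its message instead of propagating. */
    static String demo() {
        Stream failing = () -> { throw new IllegalStateException("boom"); };
        try {
            return (String) wrap(failing).read().get("EXCEPTION");
        } catch (Exception e) {
            return null; // unreachable: wrap() swallows exceptions
        }
    }
}
```

The client-side counterpart (SolrStream in the patch) would simply check each tuple for the EXCEPTION key and rethrow, which is what lets the pattern cross any number of distributed tiers.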
solr 4.5 replicated slaves don't delete old files in /index directory
Hi, I hope someone is aware of this bug and could point me to either an old Jira ticket or advice on how it can be safely fixed. In our production environment, with Solr 4.5, we often see a slave replicating data down from upstream into the /index directory but NOT deleting the older segment files. We have an enormous index, 1T+, which adds to our problem, because we sometimes end up with a 2T /index directory when a fully optimized index is coming down. The problem usually resolves itself in the next replication (or two), but once we run out of disk, it gets into a vicious cycle. Replicating this issue in dev/staging is very tricky, so I hope someone knows this issue already. Possible solutions: 1. manually deleting old segment files (we'd need to write some code, and we don't know if it's safe) 2. would a restart or issuing commits trigger deletion of the old files? Thanks, much appreciated!!!
[jira] [Commented] (SOLR-7723) Add book to website - ASESS 3rd ed
[ https://issues.apache.org/jira/browse/SOLR-7723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14608697#comment-14608697 ] ASF subversion and git services commented on SOLR-7723: --- Commit 1688487 from [~dsmiley] in branch 'cms/trunk' [ https://svn.apache.org/r1688487 ] SOLR-7723: Book- Apache Solr Enterprise Search Server, Third Edition Add book to website - ASESS 3rd ed -- Key: SOLR-7723 URL: https://issues.apache.org/jira/browse/SOLR-7723 Project: Solr Issue Type: Task Components: website Reporter: David Smiley Assignee: David Smiley Attachments: SOLR-7723.patch, book_asess_3ed.jpg This is a patch to update Solr's resources page for the book Apache Solr Enterprise Search Server, Third Edition -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7723) Add book to website - ASESS 3rd ed
[ https://issues.apache.org/jira/browse/SOLR-7723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14608712#comment-14608712 ] ASF subversion and git services commented on SOLR-7723: --- Commit 1688488 from [~dsmiley] in branch 'cms/trunk' [ https://svn.apache.org/r1688488 ] SOLR-7723: link-ified the website reference Add book to website - ASESS 3rd ed -- Key: SOLR-7723 URL: https://issues.apache.org/jira/browse/SOLR-7723 Project: Solr Issue Type: Task Components: website Reporter: David Smiley Assignee: David Smiley Attachments: SOLR-7723.patch, book_asess_3ed.jpg This is a patch to update Solr's resources page for the book Apache Solr Enterprise Search Server, Third Edition -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (LUCENE-6645) BKD tree queries should use BitDocIdSet.Builder
Michael McCandless created LUCENE-6645: -- Summary: BKD tree queries should use BitDocIdSet.Builder Key: LUCENE-6645 URL: https://issues.apache.org/jira/browse/LUCENE-6645 Project: Lucene - Core Issue Type: Improvement Reporter: Michael McCandless When I was iterating on BKD tree originally I remember trying to use this builder (which makes a sparse bit set at first and then upgrades to dense if enough bits get set) and being disappointed with its performance. I wound up just making a FixedBitSet every time, but this is obviously wasteful for small queries. It could be the perf was poor because I was always .or'ing in DISIs that had 512 - 1024 hits each time (the size of each leaf cell in the BKD tree)? I also had to make my own DISI wrapper around each leaf cell... maybe that was the source of the slowness, not sure. I also sort of wondered whether the SmallDocSet in spatial module (backed by a SentinelIntSet) might be faster ... though it'd need to be sorted in the end, after building, before returning to Lucene. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-6578) Geo3d: arcDistanceToShape() method may be useful
[ https://issues.apache.org/jira/browse/LUCENE-6578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Smiley updated LUCENE-6578: - Attachment: LUCENE-6578-dws.patch This is the patch I posted to ReviewBoard, and Karl accepted it. It's mostly about reducing code duplication around duplicate isWithin or some other methods that are overloaded with GeoPoint or Vector. I moved common implementations to base classes, and I added a Java 8 default method. To make back-porting easier I don't leverage the existence of that method; we might change that at a later point. I also eliminated redundant javadocs where I saw them. I also found/fixed some equals/hashcode problems -- notably SidedPlane was missing them. Suggested CHANGES.txt: {noformat} * LUCENE-6578: Geo3D can now compute the distance from a point to a shape, both inner distance and to an outside edge. Multiple distance algorithms are available. (Karl Wright, David Smiley) {noformat} I'll commit this later today unless there are objections. Geo3d: arcDistanceToShape() method may be useful Key: LUCENE-6578 URL: https://issues.apache.org/jira/browse/LUCENE-6578 Project: Lucene - Core Issue Type: Bug Components: modules/spatial Reporter: Karl Wright Attachments: LUCENE-6578-dws.patch, LUCENE-6578.patch, LUCENE-6578.patch, LUCENE-6578.patch I've got an application that seems like it may need the ability to compute a new kind of arc distance, from a GeoPoint to the nearest edge/point of a GeoShape. Adding this method to the interface, and corresponding implementations, would increase the utility of the package for ranking purposes. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-247) Allow facet.field=* to facet on all fields (without knowing what they are)
[ https://issues.apache.org/jira/browse/SOLR-247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14608630#comment-14608630 ] Erick Erickson commented on SOLR-247: - _If_ you had all the arbitrary_tag_* fields, could you construct the proper query programmatically? Because you can get the list of all fields that are actually used by any indexed document as opposed to the fields defined in the schema. That's what allows the admin/schema browser to display its drop-down. It's probably unlikely that this functionality will be incorporated in Solr as per this JIRA based on the fact that no real action has happened on it for 6 years. Allow facet.field=* to facet on all fields (without knowing what they are) -- Key: SOLR-247 URL: https://issues.apache.org/jira/browse/SOLR-247 Project: Solr Issue Type: Improvement Reporter: Ryan McKinley Priority: Minor Labels: beginners, newdev Attachments: SOLR-247-FacetAllFields.patch, SOLR-247.patch, SOLR-247.patch, SOLR-247.patch I don't know if this is a good idea to include -- it is potentially a bad idea to use it, but that can be ok. This came out of trying to use faceting for the LukeRequestHandler top term collecting. http://www.nabble.com/Luke-request-handler-issue-tf3762155.html -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6645) BKD tree queries should use BitDocIdSet.Builder
[ https://issues.apache.org/jira/browse/LUCENE-6645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14608723#comment-14608723 ] David Smiley commented on LUCENE-6645: -- FYI I originally used FixedBitSet in the RPT based spatial filters (technically OpenBitSet _originally_) because at that time there was no Sparse impl with a builder. That's also why SmallDocSet exists. I'm curious to see what you find here regarding the performance; I'd like to use the bitset builder and remove SmallDocSet. BKD tree queries should use BitDocIdSet.Builder --- Key: LUCENE-6645 URL: https://issues.apache.org/jira/browse/LUCENE-6645 Project: Lucene - Core Issue Type: Improvement Reporter: Michael McCandless When I was iterating on BKD tree originally I remember trying to use this builder (which makes a sparse bit set at first and then upgrades to dense if enough bits get set) and being disappointed with its performance. I wound up just making a FixedBitSet every time, but this is obviously wasteful for small queries. It could be the perf was poor because I was always .or'ing in DISIs that had 512 - 1024 hits each time (the size of each leaf cell in the BKD tree)? I also had to make my own DISI wrapper around each leaf cell... maybe that was the source of the slowness, not sure. I also sort of wondered whether the SmallDocSet in spatial module (backed by a SentinelIntSet) might be faster ... though it'd need to be sorted in the end, after building, before returning to Lucene. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
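The sparse-then-dense strategy under discussion — start with a small set, upgrade to a dense bitset once enough bits are set — can be sketched with stdlib types. The upgrade threshold and data structures below are illustrative; the real BitDocIdSet.Builder uses Lucene's own sparse/fixed bitset implementations and a different density heuristic.

```java
import java.util.BitSet;
import java.util.HashSet;
import java.util.Set;

// Sketch of a builder that stays sparse for small result sets and
// upgrades to a dense BitSet once a density threshold is crossed.
public class UpgradingBitSetBuilder {
    private final int maxDoc;
    private final int threshold;
    private Set<Integer> sparse = new HashSet<>();
    private BitSet dense; // null until we upgrade

    public UpgradingBitSetBuilder(int maxDoc) {
        this.maxDoc = maxDoc;
        this.threshold = Math.max(1, maxDoc >>> 7); // upgrade past ~1/128 density
    }

    public void add(int docId) {
        if (dense != null) { dense.set(docId); return; }
        sparse.add(docId);
        if (sparse.size() > threshold) { // sparse -> dense upgrade
            dense = new BitSet(maxDoc);
            for (int d : sparse) dense.set(d);
            sparse = null;
        }
    }

    public boolean isDense() { return dense != null; }

    public int cardinality() {
        return dense != null ? dense.cardinality() : sparse.size();
    }
}
```

The performance concern in the issue is plausible with this shape: bulk-or'ing 512-1024 hits per leaf cell into the sparse phase pays per-element overhead that a FixedBitSet avoids, which is one reason a dense set can win even for modest result sizes.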
[jira] [Updated] (SOLR-7735) Look for solr.xml in Zookeeper by default
[ https://issues.apache.org/jira/browse/SOLR-7735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jan Høydahl updated SOLR-7735: -- Attachment: SOLR-7735.patch Attaching patch. Tests and precommit SUCCESS. All mention of {{solr.solrxml.location}} is gone, added a {{WARN}} log if someone tries to use it. The sysprop is not documented in the refguide as of now, but still I mentioned the change in the Upgrading from Solr 5.2 section of CHANGES. Last chance for veto before committing in a few days. Look for solr.xml in Zookeeper by default - Key: SOLR-7735 URL: https://issues.apache.org/jira/browse/SOLR-7735 Project: Solr Issue Type: Improvement Components: SolrCloud Affects Versions: 5.2.1 Reporter: Jan Høydahl Fix For: 5.3 Attachments: SOLR-7735.patch In SOLR-4718 support was added for reading {{solr.xml}} from ZK by providing system property {{solr.solrxml.location=zookeeper}}. If this property is not specified, solr.xml is read from SOLR_HOME. This issue is a spinoff from SOLR-7727 where start scripts do not support solr.xml in ZK. Instead of adding this to start scripts, suggest to simplify the whole logic: * If not in cloud mode, require {{solr.xml}} in SOLR_HOME as today * If in cloud mode, first look for {{solr.xml}} in ZK, but if not found, load from SOLR_HOME as today * Remove the need for {{solr.solrxml.location}} property -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
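The proposed lookup order in the issue — cloud mode tries ZooKeeper first and falls back to SOLR_HOME, standalone mode reads SOLR_HOME only — can be sketched as a small decision function. This is not the actual Solr code; the ZK client interface, node path, and return values are invented for illustration.

```java
/** Sketch of the proposed solr.xml lookup order (illustrative only):
 *  cloud mode -> try ZK first, then SOLR_HOME; standalone -> SOLR_HOME. */
public class SolrXmlLocator {
    /** Hypothetical minimal ZK client: can only check node existence. */
    interface ZkClient { boolean exists(String path); }

    public static String locate(boolean cloudMode, ZkClient zk, String solrHome) {
        if (cloudMode && zk.exists("/solr.xml")) {
            return "zookeeper:/solr.xml";      // found in ZK, use it
        }
        return solrHome + "/solr.xml";          // fall back to local solr.xml
    }
}
```

Centralizing the decision this way is what lets the `solr.solrxml.location` system property be removed: the mode alone determines the lookup order, with no extra configuration knob.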
[jira] [Commented] (LUCENE-6632) Geo3d: More accurate way of computing circle planes
[ https://issues.apache.org/jira/browse/LUCENE-6632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14608636#comment-14608636 ] Karl Wright commented on LUCENE-6632: - [~dsmiley]: Here's the next one to consider. Geo3d: More accurate way of computing circle planes --- Key: LUCENE-6632 URL: https://issues.apache.org/jira/browse/LUCENE-6632 Project: Lucene - Core Issue Type: Improvement Components: modules/spatial Reporter: Karl Wright Attachments: LUCENE-6632.patch The Geo3d code that computes circle planes in GeoPath and GeoCircle is less accurate the smaller the circle. There's a better way of computing this. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7733) remove/rename optimize references in the UI.
[ https://issues.apache.org/jira/browse/SOLR-7733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14608661#comment-14608661 ] Hoss Man commented on SOLR-7733: Linking related SOLR-3141 where there was much discussion about terminology and backcompat and educating users and sensible default behavior. (i have no comment or opinion about anything expressed in this issue so far -- i have not read most details in this issue so far, i'm just linking the issues so all the relevant details are connected for folks who do have opinions and are willing to follow through on them) remove/rename optimize references in the UI. -- Key: SOLR-7733 URL: https://issues.apache.org/jira/browse/SOLR-7733 Project: Solr Issue Type: Improvement Components: UI Affects Versions: 5.3, Trunk Reporter: Erick Erickson Priority: Minor Attachments: SOLR-7733.patch Since optimizing indexes is kind of a special circumstance thing, what do we think about removing (or renaming) optimize-related stuff on the core admin and core overview pages? The optimize button is already gone from the core admin screen (was this intentional?). My personal feeling is that we should remove this entirely as it's too easy to think Of course I want my index optimized and look, this screen says my index isn't optimized, that must mean I should optimize it. The core admin screen and the core overview page both have an optimized checkmark, I propose just removing it from the overview page and on the core admin page changing it to Segment Count #. NOTE: the overview page already has a Segment Count entry. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Closed] (SOLR-7723) Add book to website - ASESS 3rd ed
[ https://issues.apache.org/jira/browse/SOLR-7723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Smiley closed SOLR-7723. -- Resolution: Fixed Add book to website - ASESS 3rd ed -- Key: SOLR-7723 URL: https://issues.apache.org/jira/browse/SOLR-7723 Project: Solr Issue Type: Task Components: website Reporter: David Smiley Assignee: David Smiley Attachments: SOLR-7723.patch, book_asess_3ed.jpg This is a patch to update Solr's resources page for the book Apache Solr Enterprise Search Server, Third Edition -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.7.0_80) - Build # 13093 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/13093/ Java: 64bit/jdk1.7.0_80 -XX:+UseCompressedOops -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.search.TestSearcherReuse.test Error Message: expected same:Searcher@26aca40c[collection1] main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(5.3.0):c2) Uninverting(_1(5.3.0):c2)))} was not:Searcher@4b06683d[collection1] main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(5.3.0):c2) Uninverting(_1(5.3.0):c2)))} Stack Trace: java.lang.AssertionError: expected same:Searcher@26aca40c[collection1] main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(5.3.0):c2) Uninverting(_1(5.3.0):c2)))} was not:Searcher@4b06683d[collection1] main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(5.3.0):c2) Uninverting(_1(5.3.0):c2)))} at __randomizedtesting.SeedInfo.seed([4AC567CE3D88EBB5:C29158149374864D]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotSame(Assert.java:641) at org.junit.Assert.assertSame(Assert.java:580) at org.junit.Assert.assertSame(Assert.java:593) at org.apache.solr.search.TestSearcherReuse.assertSearcherHasNotChanged(TestSearcherReuse.java:247) at org.apache.solr.search.TestSearcherReuse.test(TestSearcherReuse.java:117) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at
[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 890 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/890/ 1 tests failed. REGRESSION: org.apache.solr.TestDistributedSearch.test Error Message: Captured an uncaught exception in thread: Thread[id=55518, name=Thread-51232, state=RUNNABLE, group=TGRP-TestDistributedSearch] Stack Trace: com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=55518, name=Thread-51232, state=RUNNABLE, group=TGRP-TestDistributedSearch] at __randomizedtesting.SeedInfo.seed([75CE79592F10FF3D:FD9A468381EC92C5]:0) Caused by: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:43311/pllz/yk/collection1: java.lang.NullPointerException at org.apache.solr.search.grouping.distributed.responseprocessor.StoredFieldsShardResponseProcessor.process(StoredFieldsShardResponseProcessor.java:42) at org.apache.solr.handler.component.QueryComponent.handleGroupedResponses(QueryComponent.java:744) at org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:727) at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:410) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143) at org.apache.solr.core.SolrCore.execute(SolrCore.java:2058) at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:648) at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:452) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) at org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:106) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) at org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:83) 
at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:364) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) at org.eclipse.jetty.server.Server.handle(Server.java:499) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257) at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635) at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555) at java.lang.Thread.run(Thread.java:745) at __randomizedtesting.SeedInfo.seed([75CE79592F10FF3D]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:560) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135) at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943) at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958) at org.apache.solr.TestDistributedSearch$1.run(TestDistributedSearch.java:1110) Build Log: [...truncated 11365 lines...] 
[junit4] Suite: org.apache.solr.TestDistributedSearch [junit4] 2> Creating dataDir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/build/solr-core/test/J0/temp/solr.TestDistributedSearch_75CE79592F10FF3D-001/init-core-data-001 [junit4] 2> 1780382 INFO (SUITE-TestDistributedSearch-seed#[75CE79592F10FF3D]-worker) [] o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) [junit4] 2> 1780382 INFO (SUITE-TestDistributedSearch-seed#[75CE79592F10FF3D]-worker) [] o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /pllz/yk [junit4] 2> 1780751 INFO
[GitHub] lucene-solr pull request: [incomplete] SOLR-5730: make SortingMerg...
GitHub user cpoerschke opened a pull request: https://github.com/apache/lucene-solr/pull/176 [incomplete] SOLR-5730: make SortingMergePolicy/EarlyTerminatingSortingCollector configurable incomplete. for https://issues.apache.org/jira/i#browse/SOLR-5730 ticket. You can merge this pull request into a Git repository by running: $ git pull https://github.com/bloomberg/lucene-solr trunk-sort-outline Alternatively you can review and apply these changes as the patch at: https://github.com/apache/lucene-solr/pull/176.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #176 commit 5984692c660b32ddec0260e5a096b7b7eea5ea8d Author: Christine Poerschke cpoersc...@bloomberg.net Date: 2015-06-29T15:02:44Z LUCENE-: support SortingMergePolicy-free use of EarlyTerminatingSortingCollector motivation: * SOLR-5730 to make Lucene's SortingMergePolicy and EarlyTerminatingSortingCollector configurable in Solr. * outline of draft SOLR-5730 changes: + SolrIndexWriter constructor calls SolrIndexConfig.toIndexWriterConfig (passing the result to its lucene.IndexWriter super class) + SolrIndexConfig.toIndexWriterConfig(SolrCore core) calls SolrIndexConfig.buildMergePolicy + SolrIndexConfig.buildMergePolicy(IndexSchema schema) calls the SortingMergePolicy constructor (using the IndexSchema's mergeSortSpec) + SolrIndexSearcher.buildAndRunCollectorChain calls the EarlyTerminatingSortingCollector constructor (using the IndexSchema's mergeSortSpec) summary of changes: * added static isSorted variant to SortingMergePolicy * added SortingMergePolicy-free EarlyTerminatingSortingCollector constructor variant * added SortingMergePolicy-free EarlyTerminatingSortingCollector.canEarlyTerminate variant * corresponding changes to TestEarlyTerminatingSortingCollector (randomly choose between constructor and canEarlyTerminate variants) commit 78de6620a9f50ab632c1fc240a336137acf9199f Author: Christine Poerschke 
cpoersc...@bloomberg.net Date: 2015-06-29T14:22:29Z [incomplete] SOLR-5730: make Lucene's SortingMergePolicy and EarlyTerminatingSortingCollector configurable in Solr summary of changes so far: * schema.xml: added optional mergeSortSpec field to [Managed]IndexSchema * solrconfig.xml: SolrIndexConfig.buildMergePolicy to construct a SortingMergePolicy if the useSortingMergePolicy flag is set * optional segmentTerminateEarly parameter added to CommonParams and QueryComponent (defaulted to existing behaviour) * SolrIndexSearcher: constructs EarlyTerminatingSortingCollector if segmentTerminateEarly flag is set (and requested sort is compatible with the mergeSortSpec) still incomplete: * IndexSchema: convert/parse 'String mergeSortSpecString;' into 'SortSpec mergeSortSpec;' * queryResult caching to consider the segmentTerminateEarly flag (a segmentTerminateEarly=false search should not get cached results from a segmentTerminateEarly=true search) * documentation/example for the segmentTerminateEarly query parameter (and the new optional schema.xml/solrconfig.xml elements) * test cases --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. --- - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-5730) make Lucene's SortingMergePolicy and EarlyTerminatingSortingCollector configurable in Solr
[ https://issues.apache.org/jira/browse/SOLR-5730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=1460#comment-1460 ] ASF GitHub Bot commented on SOLR-5730: -- GitHub user cpoerschke opened a pull request: https://github.com/apache/lucene-solr/pull/176 [incomplete] SOLR-5730: make SortingMergePolicy/EarlyTerminatingSortingCollector configurable incomplete. for https://issues.apache.org/jira/i#browse/SOLR-5730 ticket. You can merge this pull request into a Git repository by running: $ git pull https://github.com/bloomberg/lucene-solr trunk-sort-outline Alternatively you can review and apply these changes as the patch at: https://github.com/apache/lucene-solr/pull/176.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #176 commit 5984692c660b32ddec0260e5a096b7b7eea5ea8d Author: Christine Poerschke cpoersc...@bloomberg.net Date: 2015-06-29T15:02:44Z LUCENE-: support SortingMergePolicy-free use of EarlyTerminatingSortingCollector motivation: * SOLR-5730 to make Lucene's SortingMergePolicy and EarlyTerminatingSortingCollector configurable in Solr. 
* outline of draft SOLR-5730 changes: + SolrIndexWriter constructor calls SolrIndexConfig.toIndexWriterConfig (passing the result to its lucene.IndexWriter super class) + SolrIndexConfig.toIndexWriterConfig(SolrCore core) calls SolrIndexConfig.buildMergePolicy + SolrIndexConfig.buildMergePolicy(IndexSchema schema) calls the SortingMergePolicy constructor (using the IndexSchema's mergeSortSpec) + SolrIndexSearcher.buildAndRunCollectorChain calls the EarlyTerminatingSortingCollector constructor (using the IndexSchema's mergeSortSpec) summary of changes: * added static isSorted variant to SortingMergePolicy * added SortingMergePolicy-free EarlyTerminatingSortingCollector constructor variant * added SortingMergePolicy-free EarlyTerminatingSortingCollector.canEarlyTerminate variant * corresponding changes to TestEarlyTerminatingSortingCollector (randomly choose between constructor and canEarlyTerminate variants) commit 78de6620a9f50ab632c1fc240a336137acf9199f Author: Christine Poerschke cpoersc...@bloomberg.net Date: 2015-06-29T14:22:29Z [incomplete] SOLR-5730: make Lucene's SortingMergePolicy and EarlyTerminatingSortingCollector configurable in Solr summary of changes so far: * schema.xml: added optional mergeSortSpec field to [Managed]IndexSchema * solrconfig.xml: SolrIndexConfig.buildMergePolicy to construct a SortingMergePolicy if the useSortingMergePolicy flag is set * optional segmentTerminateEarly parameter added to CommonParams and QueryComponent (defaulted to existing behaviour) * SolrIndexSearcher: constructs EarlyTerminatingSortingCollector if segmentTerminateEarly flag is set (and requested sort is compatible with the mergeSortSpec) still incomplete: * IndexSchema: convert/parse 'String mergeSortSpecString;' into 'SortSpec mergeSortSpec;' * queryResult caching to consider the segmentTerminateEarly flag (a segmentTerminateEarly=false search should not get cached results from a segmentTerminateEarly=true search) * documentation/example for the
segmentTerminateEarly query parameter (and the new optional schema.xml/solrconfig.xml elements) * test cases make Lucene's SortingMergePolicy and EarlyTerminatingSortingCollector configurable in Solr -- Key: SOLR-5730 URL: https://issues.apache.org/jira/browse/SOLR-5730 Project: Solr Issue Type: New Feature Reporter: Christine Poerschke Priority: Minor Example configuration: solrconfig.xml {noformat} <mergeSorter class="org.apache.solr.update.DefaultMergeSorterFactory"/> {noformat} schema.xml {noformat} <mergeSorterKey class="org.apache.solr.schema.SingleFieldSorterFactory"> <str name="fieldName">timestamp</str> <bool name="ascending">false</bool> </mergeSorterKey> {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b60) - Build # 13272 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/13272/ Java: 64bit/jdk1.9.0-ea-b60 -XX:-UseCompressedOops -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.cloud.BasicDistributedZkTest.test Error Message: commitWithin did not work on node: http://127.0.0.1:48815/collection1 expected:<68> but was:<67> Stack Trace: java.lang.AssertionError: commitWithin did not work on node: http://127.0.0.1:48815/collection1 expected:<68> but was:<67> at __randomizedtesting.SeedInfo.seed([EB42BF33E64AC3B0:631680E948B6AE48]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at org.apache.solr.cloud.BasicDistributedZkTest.test(BasicDistributedZkTest.java:332) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:502) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at
Re:Solr Config MergePolicy/MergeScheduler Naming Bug?
Hi, Follow-on question, also re: SolrIndexConfig's toMap method. The constructor reads /mergedSegmentWarmer and /infoStream and /infoStream/@file elements but the toMap method does not write them. Would that be intended or is it maybe an unintended omission? If intended, it would be great to have a comment in toMap to clarify why/which elements are skipped, so that if/when a new element is added it's easier to determine if toMap should write the new element also. Happy to open a JIRA with a patch, or to add to Mike's JIRA/patch. Christine From: dev@lucene.apache.org At: Jun 30 2015 21:03:19 To: dev@lucene.apache.org Subject: Re:Solr Config MergePolicy/MergeScheduler Naming Bug? Concur, that looks like a copy/paste bug. The JIRA(s) associated with the commit are no longer open, so a new JIRA with the one-line patch would seem the next logical step. This appears to affect trunk but not branch_5x or branch_4x. Well spotted! From: dev@lucene.apache.org At: Jun 30 2015 20:34:48 To: dev@lucene.apache.org Subject: Re:Solr Config MergePolicy/MergeScheduler Naming Bug? I was looking through code for unrelated reasons and this line stuck out to me: https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/java/org/apache/solr/update/SolrIndexConfig.java#L180 if (mergeSchedulerInfo != null) m.put("mergeScheduler", mergeSchedulerInfo.toMap()); if (mergePolicyInfo != null) m.put("mergeScheduler", mergePolicyInfo.toMap()); Are they both supposed to be using the "mergeScheduler" key? If not, happy to open a JIRA and provide the one-line patch. Mike
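To make the suspected copy/paste bug concrete, here is a minimal, self-contained sketch. It is not the actual SolrIndexConfig source: the helper class is hypothetical and the info objects are simplified to plain Maps rather than the .toMap() results in the quoted snippet. It shows the presumable one-line fix, with the second put using a distinct "mergePolicy" key:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hedged sketch, not the real SolrIndexConfig.toMap(): parameters are
// simplified to plain Maps standing in for the plugin-info maps.
class ToMapSketch {
    static Map<String, Object> toMap(Map<String, Object> mergeSchedulerInfo,
                                     Map<String, Object> mergePolicyInfo) {
        Map<String, Object> m = new LinkedHashMap<>();
        if (mergeSchedulerInfo != null) m.put("mergeScheduler", mergeSchedulerInfo);
        // The reported bug reused the "mergeScheduler" key on this line, so the
        // merge-policy entry silently overwrote the merge-scheduler entry:
        if (mergePolicyInfo != null) m.put("mergePolicy", mergePolicyInfo);
        return m;
    }
}
```

With the key corrected, both entries survive in the resulting map instead of the second put clobbering the first.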
[jira] [Commented] (SOLR-7233) rename zkcli.sh script it clashes with zkCli.sh from ZooKeeper on Mac when both are in $PATH
[ https://issues.apache.org/jira/browse/SOLR-7233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609041#comment-14609041 ] Upayavira commented on SOLR-7233: - Dunno if this works - if we named it bin/zookeeper, we could (eventually) add a -cmd start -id 1 -port 2181 option that would actually start a zookeeper instance using the classes already in the Solr codebase. rename zkcli.sh script it clashes with zkCli.sh from ZooKeeper on Mac when both are in $PATH Key: SOLR-7233 URL: https://issues.apache.org/jira/browse/SOLR-7233 Project: Solr Issue Type: Task Components: scripts and tools Affects Versions: 4.10 Reporter: Hari Sekhon Priority: Trivial Mac is case insensitive on CLI search so zkcli.sh clashes with zkCli.sh from ZooKeeper when both are in the $PATH, ruining commands for one or the other unless the script path is qualified.
[jira] [Commented] (SOLR-7740) tweak TestConfigOverlay.testPaths - auto[Soft]Commit(./)max(Docs|Time)
[ https://issues.apache.org/jira/browse/SOLR-7740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609051#comment-14609051 ] ASF GitHub Bot commented on SOLR-7740: -- GitHub user cpoerschke opened a pull request: https://github.com/apache/lucene-solr/pull/177 SOLR-7740: tweak TestConfigOverlay.testPaths for https://issues.apache.org/jira/i#browse/SOLR-7740 You can merge this pull request into a Git repository by running: $ git pull https://github.com/bloomberg/lucene-solr trunk-test-config-overlay Alternatively you can review and apply these changes as the patch at: https://github.com/apache/lucene-solr/pull/177.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #177 commit 6f18820e5394bcc4b79be92b8db4d2dde10f386b Author: Christine Poerschke cpoersc...@bloomberg.net Date: 2015-06-30T20:15:51Z SOLR-: tweak TestConfigOverlay.testPaths - auto[Soft]Commit(./)max(Docs|Time) The autoCommit(./)max(Docs|Time) asserts appear twice and from a quick look at ConfigOverlay the second autoCommit could have been intended to be autoSoftCommit instead. tweak TestConfigOverlay.testPaths - auto[Soft]Commit(./)max(Docs|Time) -- Key: SOLR-7740 URL: https://issues.apache.org/jira/browse/SOLR-7740 Project: Solr Issue Type: Test Reporter: Christine Poerschke Priority: Minor github pull request to follow
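The duplicated-assert slip described in that commit message can be illustrated with a small Solr-free sketch. The path strings and helper class here are hypothetical, not the actual TestConfigOverlay or ConfigOverlay code: listing the same autoCommit paths twice leaves autoSoftCommit untested, and a simple duplicate check makes the slip visible.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hedged illustration with hypothetical editable-property paths; the second
// pair repeats autoCommit where autoSoftCommit was presumably intended, so a
// test asserting each path would silently re-check the same properties.
class DuplicatePathCheck {
    static List<String> findDuplicates(List<String> paths) {
        Set<String> seen = new HashSet<>();
        List<String> dups = new ArrayList<>();
        for (String p : paths) {
            if (!seen.add(p)) dups.add(p);  // Set.add returns false on repeats
        }
        return dups;
    }
}
```

Running such a check over a test's asserted paths (e.g. two autoCommit entries listed twice) reports each repeated path once, pointing straight at the assert pair that should have said autoSoftCommit.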
[jira] [Commented] (SOLR-5730) make Lucene's SortingMergePolicy and EarlyTerminatingSortingCollector configurable in Solr
[ https://issues.apache.org/jira/browse/SOLR-5730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14608899#comment-14608899 ] Christine Poerschke commented on SOLR-5730: --- Uploaded [incomplete rebased-against-current-trunk changes|https://github.com/apache/lucene-solr/pull/176] and updated example configuration to match. The proposed solr changes require the changes proposed in LUCENE-6646 because at the point that solr needs to construct an {{EarlyTerminatingSortingCollector}} object it doesn't have access to the {{SortingMergePolicy}} in use; the commit message above has further details. make Lucene's SortingMergePolicy and EarlyTerminatingSortingCollector configurable in Solr -- Key: SOLR-5730 URL: https://issues.apache.org/jira/browse/SOLR-5730 Project: Solr Issue Type: New Feature Reporter: Christine Poerschke Priority: Minor Example configuration: solrconfig.xml {noformat} <useSortingMergePolicy>true</useSortingMergePolicy> {noformat} schema.xml {noformat} <mergeSortSpec>timestamp desc</mergeSortSpec> {noformat}
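The early-termination idea behind that constraint can be shown with a Lucene-free sketch; the interface and class names below are simplified stand-ins, not Lucene's actual collector API. Once a segment is known to be stored in the query's sort order, a wrapping collector can stop after the first numDocsToCollect hits by throwing a termination signal that the per-segment search loop catches.

```java
// Hedged, Lucene-free sketch of early termination on a sorted segment;
// the types here are simplified stand-ins for Lucene's collector API.
class EarlyTermination {
    // Signals that the current segment needs no further collection.
    static class CollectionTerminatedException extends RuntimeException {}

    interface LeafCollector { void collect(int doc); }

    // Wraps a collector so it stops after numDocsToCollect hits; only valid
    // when the segment's stored order already matches the query sort.
    static LeafCollector earlyTerminating(LeafCollector in, int numDocsToCollect) {
        return new LeafCollector() {
            int collected;
            public void collect(int doc) {
                in.collect(doc);
                if (++collected >= numDocsToCollect) {
                    throw new CollectionTerminatedException();
                }
            }
        };
    }
}
```

A search loop would catch the exception per segment and move on to the next one, so a top-10-by-timestamp query over timestamp-sorted segments touches only the first 10 docs of each segment instead of every match.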
[jira] [Commented] (SOLR-7233) rename zkcli.sh script it clashes with zkCli.sh from ZooKeeper on Mac when both are in $PATH
[ https://issues.apache.org/jira/browse/SOLR-7233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609011#comment-14609011 ] Jan Høydahl commented on SOLR-7233: --- +1 Let us rename solr's {{zkCli.sh}} to something unique, such as {{solrZkTool.sh}}. Could we move it to {{bin/}} as well? rename zkcli.sh script it clashes with zkCli.sh from ZooKeeper on Mac when both are in $PATH Key: SOLR-7233 URL: https://issues.apache.org/jira/browse/SOLR-7233 Project: Solr Issue Type: Task Components: scripts and tools Affects Versions: 4.10 Reporter: Hari Sekhon Priority: Trivial Mac is case insensitive on CLI search so zkcli.sh clashes with zkCli.sh from ZooKeeper when both are in the $PATH, ruining commands for one or the other unless the script path is qualified.
[GitHub] lucene-solr pull request: support SortingMergePolicy-free use of E...
GitHub user cpoerschke opened a pull request: https://github.com/apache/lucene-solr/pull/175 support SortingMergePolicy-free use of EarlyTerminatingSortingCollector for https://issues.apache.org/jira/i#browse/LUCENE-6646 You can merge this pull request into a Git repository by running: $ git pull https://github.com/bloomberg/lucene-solr trunk-sort-outline-lucene Alternatively you can review and apply these changes as the patch at: https://github.com/apache/lucene-solr/pull/175.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #175 commit 5984692c660b32ddec0260e5a096b7b7eea5ea8d Author: Christine Poerschke cpoersc...@bloomberg.net Date: 2015-06-29T15:02:44Z LUCENE-: support SortingMergePolicy-free use of EarlyTerminatingSortingCollector motivation: * SOLR-5730 to make Lucene's SortingMergePolicy and EarlyTerminatingSortingCollector configurable in Solr. * outline of draft SOLR-5730 changes: + SolrIndexWriter constructor calls SolrIndexConfig.toIndexWriterConfig (passing the result to its lucene.IndexWriter super class) + SolrIndexConfig.toIndexWriterConfig(SolrCore core) calls SolrIndexConfig.buildMergePolicy + SolrIndexConfig.buildMergePolicy(IndexSchema schema) calls the SortingMergePolicy constructor (using the IndexSchema's mergeSortSpec) + SolrIndexSearcher.buildAndRunCollectorChain calls the EarlyTerminatingSortingCollector constructor (using the IndexSchema's mergeSortSpec) summary of changes: * added static isSorted variant to SortingMergePolicy * added SortingMergePolicy-free EarlyTerminatingSortingCollector constructor variant * added SortingMergePolicy-free EarlyTerminatingSortingCollector.canEarlyTerminate variant * corresponding changes to TestEarlyTerminatingSortingCollector (randomly choose between constructor and canEarlyTerminate variants) --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. 
[jira] [Created] (LUCENE-6646) support SortingMergePolicy-free use of EarlyTerminatingSortingCollector
Christine Poerschke created LUCENE-6646: --- Summary: support SortingMergePolicy-free use of EarlyTerminatingSortingCollector Key: LUCENE-6646 URL: https://issues.apache.org/jira/browse/LUCENE-6646 Project: Lucene - Core Issue Type: Wish Reporter: Christine Poerschke Priority: Minor motivation and summary of proposed changes to follow via github pull request
[jira] [Commented] (LUCENE-6646) support SortingMergePolicy-free use of EarlyTerminatingSortingCollector
[ https://issues.apache.org/jira/browse/LUCENE-6646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14608882#comment-14608882 ] ASF GitHub Bot commented on LUCENE-6646: GitHub user cpoerschke opened a pull request: https://github.com/apache/lucene-solr/pull/175 support SortingMergePolicy-free use of EarlyTerminatingSortingCollector for https://issues.apache.org/jira/i#browse/LUCENE-6646 You can merge this pull request into a Git repository by running: $ git pull https://github.com/bloomberg/lucene-solr trunk-sort-outline-lucene Alternatively you can review and apply these changes as the patch at: https://github.com/apache/lucene-solr/pull/175.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #175 commit 5984692c660b32ddec0260e5a096b7b7eea5ea8d Author: Christine Poerschke cpoersc...@bloomberg.net Date: 2015-06-29T15:02:44Z LUCENE-: support SortingMergePolicy-free use of EarlyTerminatingSortingCollector motivation: * SOLR-5730 to make Lucene's SortingMergePolicy and EarlyTerminatingSortingCollector configurable in Solr. 
* outline of draft SOLR-5730 changes: + SolrIndexWriter constructor calls SolrIndexConfig.toIndexWriterConfig (passing the result to its lucene.IndexWriter super class) + SolrIndexConfig.toIndexWriterConfig(SolrCore core) calls SolrIndexConfig.buildMergePolicy + SolrIndexConfig.buildMergePolicy(IndexSchema schema) calls the SortingMergePolicy constructor (using the IndexSchema's mergeSortSpec) + SolrIndexSearcher.buildAndRunCollectorChain calls the EarlyTerminatingSortingCollector constructor (using the IndexSchema's mergeSortSpec) summary of changes: * added static isSorted variant to SortingMergePolicy * added SortingMergePolicy-free EarlyTerminatingSortingCollector constructor variant * added SortingMergePolicy-free EarlyTerminatingSortingCollector.canEarlyTerminate variant * corresponding changes to TestEarlyTerminatingSortingCollector (randomly choose between constructor and canEarlyTerminate variants) support SortingMergePolicy-free use of EarlyTerminatingSortingCollector --- Key: LUCENE-6646 URL: https://issues.apache.org/jira/browse/LUCENE-6646 Project: Lucene - Core Issue Type: Wish Reporter: Christine Poerschke Priority: Minor motivation and summary of proposed changes to follow via github pull request
[jira] [Updated] (SOLR-5730) make Lucene's SortingMergePolicy and EarlyTerminatingSortingCollector configurable in Solr
[ https://issues.apache.org/jira/browse/SOLR-5730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke updated SOLR-5730: -- Description: Example configuration: solrconfig.xml {noformat} <useSortingMergePolicy>true</useSortingMergePolicy> {noformat} schema.xml {noformat} <mergeSortSpec>timestamp desc</mergeSortSpec> {noformat} was: Example configuration: solrconfig.xml {noformat} <mergeSorter class="org.apache.solr.update.DefaultMergeSorterFactory"/> {noformat} schema.xml {noformat} <mergeSorterKey class="org.apache.solr.schema.SingleFieldSorterFactory"> <str name="fieldName">timestamp</str> <bool name="ascending">false</bool> </mergeSorterKey> {noformat} make Lucene's SortingMergePolicy and EarlyTerminatingSortingCollector configurable in Solr -- Key: SOLR-5730 URL: https://issues.apache.org/jira/browse/SOLR-5730 Project: Solr Issue Type: New Feature Reporter: Christine Poerschke Priority: Minor Example configuration: solrconfig.xml {noformat} <useSortingMergePolicy>true</useSortingMergePolicy> {noformat} schema.xml {noformat} <mergeSortSpec>timestamp desc</mergeSortSpec> {noformat}