[JENKINS-EA] Lucene-Solr-master-Windows (64bit/jdk-12-ea+12) - Build # 7581 - Still Unstable!

2018-10-23 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7581/
Java: 64bit/jdk-12-ea+12 -XX:-UseCompressedOops -XX:+UseParallelGC

13 tests failed.
FAILED:  
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslAndNoClientAuth

Error Message:
Could not find collection:second_collection

Stack Trace:
java.lang.AssertionError: Could not find collection:second_collection
at 
__randomizedtesting.SeedInfo.seed([88C692CEC1F4D61A:747C46FA39D467D0]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkCreateCollection(TestMiniSolrCloudClusterSSL.java:263)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterWithCollectionCreations(TestMiniSolrCloudClusterSSL.java:249)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterWithNodeReplacement(TestMiniSolrCloudClusterSSL.java:157)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslAndNoClientAuth(TestMiniSolrCloudClusterSSL.java:121)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[jira] [Commented] (LUCENE-8374) Reduce reads for sparse DocValues

2018-10-23 Thread Tim Underwood (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661728#comment-16661728
 ] 

Tim Underwood commented on LUCENE-8374:
---

[~toke] You are correct, I have not seen any noticeable performance differences 
on my index with 8M docs.  However, I've been looking into converting it into 
parent/child docs, which would expand it to ~300 million total docs, so I suspect 
this patch would help performance in that case.

I've included some updated numbers from re-running my tests and making use of 
the lucene8374 url parameter to enable/disable the caches.

I've been running my informal benchmarking on my laptop, which has 32GB of RAM.  
Activity Monitor reports 7.5GB of "Cached Files" and I see very little disk 
activity when running my tests, so I suspect everything needed for faceting is 
already in memory.
h2. Test 1 - Faceting on data for a single vehicle (same test I previously ran)

Index Size: 35.41 GB

Total Documents: 84,159,576

Documents matching query: 3,447

 
||lucene8374 Caches||Requests per second||
|All Disabled|88/second|
|*All Enabled*|*266/second*|

Note: I'm using Apache Bench (ab) for this test with a static query, 
concurrency of 10, and 5,000 total requests.  This is really just testing the 
performance of calculating the facets since the set of matching documents 
should be cached by Solr.
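
For reference, here is a minimal SolrJ sketch of the kind of request being 
benchmarked. Only the lucene8374 parameter name comes from the test description 
above; the collection name, query, facet field and the parameter value are 
assumptions for illustration, not the actual test setup.
{code:java}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class Lucene8374FacetBench {
  public static void main(String[] args) throws Exception {
    // Hypothetical collection URL, query and facet field -- placeholders only.
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/vehicles").build()) {
      SolrQuery q = new SolrQuery("make:honda");   // static query, as in the ab run
      q.setRows(0);                                // faceting only, no stored docs
      q.setFacet(true);
      q.addFacetField("part_type");                // hypothetical facet field
      q.set("lucene8374", "true");                 // cache toggle; the real value syntax may differ
      QueryResponse rsp = client.query(q);
      System.out.println("QTime: " + rsp.getQTime() + " ms");
    }
  }
}
{code}
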
h2. Test 2 - Faceting on most of the index

Index Size: 35.41 GB

Total Documents: 84,159,576

Documents Matching Query: 73,241,182

 
||lucene8374 Caches||Time for Single Request||
|All Disabled|~117 seconds|
|All Enabled|~117 seconds|

 

Note: This test is so slow that I only run one query at a time.  This is NOT an 
actual use case for me.  I was just curious if there was any performance 
difference.

 

> Reduce reads for sparse DocValues
> -
>
> Key: LUCENE-8374
> URL: https://issues.apache.org/jira/browse/LUCENE-8374
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Affects Versions: 7.5, master (8.0)
>Reporter: Toke Eskildsen
>Priority: Major
>  Labels: performance
> Attachments: LUCENE-8374.patch, LUCENE-8374.patch, LUCENE-8374.patch, 
> LUCENE-8374.patch, LUCENE-8374.patch, LUCENE-8374_branch_7_3.patch, 
> LUCENE-8374_branch_7_3.patch.20181005, LUCENE-8374_branch_7_4.patch, 
> LUCENE-8374_branch_7_5.patch
>
>
> The {{Lucene70DocValuesProducer}} has the internal classes 
> {{SparseNumericDocValues}} and {{BaseSortedSetDocValues}} (sparse code path), 
> which in turn use {{IndexedDISI}} to handle the docID -> value-ordinal lookup. 
> The value-ordinal is the index of the docID assuming an abstract tightly 
> packed monotonically increasing list of docIDs: If the docIDs with 
> corresponding values are {{[0, 4, 1432]}}, their value-ordinals will be {{[0, 
> 1, 2]}}.
> h2. Outer blocks
> The lookup structure of {{IndexedDISI}} consists of blocks of 2^16 values 
> (65536), where each block can be either {{ALL}}, {{DENSE}} (2^12 to 2^16 
> values) or {{SPARSE}} (< 2^12 values ~= 6%). Consequently blocks vary quite a 
> lot in size and ordinal resolving strategy.
> When a sparse Numeric DocValue is needed, the code first locates the block 
> containing the wanted docID flag. It does so by iterating blocks one-by-one 
> until it reaches the needed one, where each iteration requires a lookup in 
> the underlying {{IndexSlice}}. For a common memory mapped index, this 
> translates to either a cached request or a read operation. If a segment has 
> 6M documents, the worst case is 91 lookups. In our web archive, our segments have 
> ~300M values: a worst case of 4577 lookups!
> One obvious solution is to use a lookup-table for blocks: A long[]-array with 
> an entry for each block. For 6M documents, that is < 1KB and would allow for 
> direct jumping (a single lookup) in all instances. Unfortunately this 
> lookup-table cannot be generated upfront when the writing of values is purely 
> streaming. It can be appended to the end of the stream before it is closed, 
> but without knowing the position of the lookup-table the reader cannot seek 
> to it.
> One strategy for creating such a lookup-table would be to generate it during 
> reads and cache it for next lookup. This does not fit directly into how 
> {{IndexedDISI}} currently works (it is created anew for each invocation), but 
> could probably be added with a little work. An advantage to this is that this 
> does not change the underlying format and thus could be used with existing 
> indexes.
> h2. The lookup structure inside each block
> If {{ALL}} of the 2^16 values are defined, the structure is empty and the 
> ordinal is simply the requested docID with some modulo and multiply math. 
> Nothing to improve there.
> If the block is {{DENSE}} (2^12 to 2^16 values are defined), a 

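With 2^16 docIDs per block, a 6M-doc segment spans roughly 92 blocks and a 
300M-doc segment roughly 4578, which is where the 91- and 4577-lookup worst 
cases quoted above come from. The following is a minimal sketch of the 
per-block offset table described in the issue; the class and method names are 
invented for illustration, and this is not the actual LUCENE-8374 patch.
{code:java}
// Sketch of the lookup-table idea: remember where each 65536-doc block starts
// in the underlying slice so a reader can jump straight to the block that
// contains a target docID instead of scanning blocks one by one.
final class DisiBlockOffsets {
  private final long[] blockOffset;           // offset of block i, or -1 if not seen yet

  DisiBlockOffsets(int maxDoc) {
    int blocks = (maxDoc >>> 16) + 1;         // 2^16 docIDs per block
    blockOffset = new long[blocks];
    java.util.Arrays.fill(blockOffset, -1L);
  }

  /** Called the first time a block is read, caching its position for later lookups. */
  void record(int blockIndex, long offsetInSlice) {
    blockOffset[blockIndex] = offsetInSlice;
  }

  /** Single array lookup instead of an O(#blocks) linear scan. */
  long offsetFor(int docID) {
    return blockOffset[docID >>> 16];
  }
}
{code}
Because such a table can be filled lazily during reads, it does not change the 
underlying index format and could be used with existing indexes, as the issue notes.
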
[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 190 - Still Unstable

2018-10-23 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/190/

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.search.QueryEqualityTest

Error Message:
testParserCoverage was run w/o any other method explicitly testing qparser: 
min_hash

Stack Trace:
java.lang.AssertionError: testParserCoverage was run w/o any other method 
explicitly testing qparser: min_hash
at __randomizedtesting.SeedInfo.seed([7AE2C0271ACC0C42]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.search.QueryEqualityTest.afterClassParserCoverageTest(QueryEqualityTest.java:59)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:898)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 14845 lines...]
   [junit4] Suite: org.apache.solr.search.QueryEqualityTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/build/solr-core/test/J0/temp/solr.search.QueryEqualityTest_7AE2C0271ACC0C42-001/init-core-data-001
   [junit4]   2> 5758717 WARN  
(SUITE-QueryEqualityTest-seed#[7AE2C0271ACC0C42]-worker) [] 
o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=43 numCloses=43
   [junit4]   2> 5758717 INFO  
(SUITE-QueryEqualityTest-seed#[7AE2C0271ACC0C42]-worker) [] 
o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=true
   [junit4]   2> 5758719 INFO  
(SUITE-QueryEqualityTest-seed#[7AE2C0271ACC0C42]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN)
   [junit4]   2> 5758719 INFO  
(SUITE-QueryEqualityTest-seed#[7AE2C0271ACC0C42]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> 5758731 INFO  
(SUITE-QueryEqualityTest-seed#[7AE2C0271ACC0C42]-worker) [] 
o.a.s.SolrTestCaseJ4 initCore
   [junit4]   2> 5758732 INFO  
(SUITE-QueryEqualityTest-seed#[7AE2C0271ACC0C42]-worker) [] 
o.a.s.c.SolrResourceLoader [null] Added 2 libs to classloader, from paths: 
[/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/core/src/test-files/solr/collection1/lib,
 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/core/src/test-files/solr/collection1/lib/classes]
   [junit4]   2> 5759067 INFO  
(SUITE-QueryEqualityTest-seed#[7AE2C0271ACC0C42]-worker) [] 
o.a.s.c.SolrConfig Using Lucene MatchVersion: 8.0.0
   [junit4]   2> 5759169 INFO  
(SUITE-QueryEqualityTest-seed#[7AE2C0271ACC0C42]-worker) [] 

[jira] [Commented] (SOLR-12895) SurroundQParserPlugin support for UnifiedHighlighter

2018-10-23 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661683#comment-16661683
 ] 

Lucene/Solr QA commented on SOLR-12895:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
48s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  1m 37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  1m 37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  1m 37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 47m 55s{color} 
| {color:red} core in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | solr.search.QueryEqualityTest |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-12895 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12945256/SOLR-12895.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene1-us-west 4.4.0-137-generic #163~14.04.1-Ubuntu SMP Mon 
Sep 24 17:14:57 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / e083b15 |
| ant | version: Apache Ant(TM) version 1.9.3 compiled on July 24 2018 |
| Default Java | 1.8.0_172 |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/210/artifact/out/patch-unit-solr_core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/210/testReport/ |
| modules | C: solr/core U: solr/core |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/210/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> SurroundQParserPlugin support for UnifiedHighlighter
> 
>
> Key: SOLR-12895
> URL: https://issues.apache.org/jira/browse/SOLR-12895
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Attachments: SOLR-12895.patch, SOLR-12895.patch
>
>
> The "surround" QParser doesn't work with the UnifiedHighlighter -- 
> LUCENE-8492.  However, I think we can overcome this by having Solr's QParser 
> extend getHighlightQuery and rewrite itself.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-12868) Request forwarding for v2 API is broken

2018-10-23 Thread Noble Paul (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul reassigned SOLR-12868:
-

Assignee: Noble Paul

> Request forwarding for v2 API is broken
> ---
>
> Key: SOLR-12868
> URL: https://issues.apache.org/jira/browse/SOLR-12868
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud, v2 API
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
>Priority: Major
> Fix For: 7.6, master (8.0)
>
>
> I was working with Noble Paul to investigate test failures seen in SOLR-12806 
> where we found this issue. Due to a bug, replicas of a collection weren't 
> spread evenly so there were some nodes which did not have any replicas at 
> all. In such cases, when a v2 API call hits an empty node, it is not 
> forwarded to the right path on the remote node causing test failures.
> e.g. a call to {{/c/collection/_introspect}} is forwarded as 
> {{http://127.0.0.1:63326/solr/collection1/_introspect?method=POST=javabin=2=}}
>  and {{/c/collection1/abccdef}} is forwarded as 
> {{http://127.0.0.1:63326/solr/collection1/abccdef}}
> In summary, a remote query for v2 API from an empty node is converted to a v1 
> style call which may not be a valid path. We should forward v2 API calls 
> as-is without changing the paths.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-7.x - Build # 970 - Still Unstable

2018-10-23 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/970/

2 tests failed.
FAILED:  
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica

Error Message:
Expected new active leader null Live Nodes: [127.0.0.1:34607_solr, 
127.0.0.1:35609_solr, 127.0.0.1:38255_solr] Last available state: 
DocCollection(raceDeleteReplica_false//collections/raceDeleteReplica_false/state.json/13)={
   "pullReplicas":"0",   "replicationFactor":"2",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node3":{   "core":"raceDeleteReplica_false_shard1_replica_n1",
   "base_url":"http://127.0.0.1:45490/solr;,   
"node_name":"127.0.0.1:45490_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false",   "leader":"true"},  
   "core_node6":{   
"core":"raceDeleteReplica_false_shard1_replica_n5",   
"base_url":"http://127.0.0.1:45490/solr;,   
"node_name":"127.0.0.1:45490_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Expected new active leader
null
Live Nodes: [127.0.0.1:34607_solr, 127.0.0.1:35609_solr, 127.0.0.1:38255_solr]
Last available state: 
DocCollection(raceDeleteReplica_false//collections/raceDeleteReplica_false/state.json/13)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node3":{
  "core":"raceDeleteReplica_false_shard1_replica_n1",
  "base_url":"http://127.0.0.1:45490/solr;,
  "node_name":"127.0.0.1:45490_solr",
  "state":"down",
  "type":"NRT",
  "force_set_state":"false",
  "leader":"true"},
"core_node6":{
  "core":"raceDeleteReplica_false_shard1_replica_n5",
  "base_url":"http://127.0.0.1:45490/solr;,
  "node_name":"127.0.0.1:45490_solr",
  "state":"down",
  "type":"NRT",
  "force_set_state":"false",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([7F9063EAEFBD3359:1586023A874F7993]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:280)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:334)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:230)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_172) - Build # 23083 - Still Unstable!

2018-10-23 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23083/
Java: 64bit/jdk1.8.0_172 -XX:-UseCompressedOops -XX:+UseParallelGC

4 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.search.QueryEqualityTest

Error Message:
testParserCoverage was run w/o any other method explicitly testing qparser: 
min_hash

Stack Trace:
java.lang.AssertionError: testParserCoverage was run w/o any other method 
explicitly testing qparser: min_hash
at __randomizedtesting.SeedInfo.seed([71955E0AEC113643]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.search.QueryEqualityTest.afterClassParserCoverageTest(QueryEqualityTest.java:59)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:898)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  junit.framework.TestSuite.org.apache.solr.search.QueryEqualityTest

Error Message:
testParserCoverage was run w/o any other method explicitly testing qparser: 
min_hash

Stack Trace:
java.lang.AssertionError: testParserCoverage was run w/o any other method 
explicitly testing qparser: min_hash
at __randomizedtesting.SeedInfo.seed([71955E0AEC113643]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.search.QueryEqualityTest.afterClassParserCoverageTest(QueryEqualityTest.java:59)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:898)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Resolved] (SOLR-12754) Solr UnifiedHighlighter support flag WEIGHT_MATCHES

2018-10-23 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-12754.
-
   Resolution: Fixed
 Assignee: David Smiley
Fix Version/s: master (8.0)
   7.6

> Solr UnifiedHighlighter support flag WEIGHT_MATCHES
> ---
>
> Key: SOLR-12754
> URL: https://issues.apache.org/jira/browse/SOLR-12754
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: highlighter
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.6, master (8.0)
>
> Attachments: SOLR-12754.patch, SOLR-12754.patch
>
>
> Solr should support the WEIGHT_MATCHES flag of the UnifiedHighlighter.  It 
> supports best/perfect highlighting accuracy and nicer phrase snippets.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12754) Solr UnifiedHighlighter support flag WEIGHT_MATCHES

2018-10-23 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661601#comment-16661601
 ] 

ASF subversion and git services commented on SOLR-12754:


Commit 3e89b7a771639aacaed6c21406624a2b27231dd7 in lucene-solr's branch 
refs/heads/jira/http2 from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3e89b7a ]

SOLR-12754: New hl.weightMatches for UnifiedHighlighter WEIGHT_MATCHES
(defaults to true in master/8)
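
For illustration, a minimal SolrJ sketch of a request that exercises the new 
parameter; the collection, field and query text are placeholders, while 
hl.method=unified is the standard way to select the UnifiedHighlighter and 
hl.weightMatches is the parameter added by this commit.
{code:java}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class WeightMatchesExample {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/techproducts").build()) {
      SolrQuery q = new SolrQuery("features:\"wireless charging\"");  // hypothetical query
      q.setHighlight(true);
      q.set("hl.fl", "features");           // hypothetical highlight field
      q.set("hl.method", "unified");        // use the UnifiedHighlighter
      q.set("hl.weightMatches", "true");    // explicit here; defaults to true on master/8
      System.out.println(client.query(q).getHighlighting());
    }
  }
}
{code}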


> Solr UnifiedHighlighter support flag WEIGHT_MATCHES
> ---
>
> Key: SOLR-12754
> URL: https://issues.apache.org/jira/browse/SOLR-12754
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: highlighter
>Reporter: David Smiley
>Priority: Major
> Attachments: SOLR-12754.patch, SOLR-12754.patch
>
>
> Solr should support the WEIGHT_MATCHES flag of the UnifiedHighlighter.  It 
> supports best/perfect highlighting accuracy and nicer phrase snippets.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8541) Fix ant beast to not overwrite junit xml results for each beast.iters iteration.

2018-10-23 Thread Mark Miller (JIRA)
Mark Miller created LUCENE-8541:
---

 Summary: Fix ant beast to not overwrite junit xml results for each 
beast.iters iteration.
 Key: LUCENE-8541
 URL: https://issues.apache.org/jira/browse/LUCENE-8541
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Mark Miller


We should write the XML output files to different per-iteration subdirectories 
or something similar, so that all the results are available after the run.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5004) Allow a shard to be split into 'n' sub-shards

2018-10-23 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-5004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661596#comment-16661596
 ] 

ASF subversion and git services commented on SOLR-5004:
---

Commit d799fd53c7cd3a83442d6010dc48802d2fd8c7fb in lucene-solr's branch 
refs/heads/jira/http2 from [~anshum]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d799fd5 ]

SOLR-5004: Allow a shard to be split into 'n' sub-shards using the collections 
API


> Allow a shard to be split into 'n' sub-shards
> -
>
> Key: SOLR-5004
> URL: https://issues.apache.org/jira/browse/SOLR-5004
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.3, 4.3.1
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Major
> Fix For: 7.6, master (8.0)
>
> Attachments: SOLR-5004.01.patch, SOLR-5004.02.patch, 
> SOLR-5004.03.patch, SOLR-5004.04.patch, SOLR-5004.patch, SOLR-5004.patch
>
>
> As of now, a SPLITSHARD call is hardcoded to create 2 sub-shards from the 
> parent one. Accept a parameter to split into n sub-shards.
> Default it to 2 and perhaps also have an upper bound to it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11812) Remove backward compatibility of old LIR implementation in 8.0

2018-10-23 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661598#comment-16661598
 ] 

ASF subversion and git services commented on SOLR-11812:


Commit 7512cd9425319fb620c1992053a5d4be7cd9229d in lucene-solr's branch 
refs/heads/jira/http2 from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7512cd9 ]

SOLR-11812: Remove LIROnShardRestartTest since the transition from old lir to 
new lir is no longer supported


> Remove backward compatibility of old LIR implementation in 8.0
> --
>
> Key: SOLR-11812
> URL: https://issues.apache.org/jira/browse/SOLR-11812
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Blocker
> Fix For: master (8.0)
>
> Attachments: SOLR-11812.patch
>
>
> My plan is to commit SOLR-11702 in the next 7.x release. We have to support both 
> the old and the new design so users can do rolling updates. 
> This makes the code base very complex; in 8.0 we do not have to support rolling 
> updates, so this issue is created to remind us to remove all of the old LIR 
> implementation in 8.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12638) Support atomic updates of nested/child documents for nested-enabled schema

2018-10-23 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661594#comment-16661594
 ] 

David Smiley commented on SOLR-12638:
-

Hmm; I rather like this idea -- make it mandatory that all child doc IDs start 
with a root doc ID, then an exclamation mark, then whatever.

> Support atomic updates of nested/child documents for nested-enabled schema
> --
>
> Key: SOLR-12638
> URL: https://issues.apache.org/jira/browse/SOLR-12638
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Priority: Major
> Attachments: SOLR-12638-delete-old-block-no-commit.patch, 
> SOLR-12638-nocommit.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> I have been toying with the thought of using this transformer in conjunction 
> with NestedUpdateProcessor and AtomicUpdate to allow SOLR to completely 
> re-index the entire nested structure. This is just a thought, I am still 
> thinking about implementation details. Hopefully I will be able to post a 
> more concrete proposal soon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12879) Query Parser for MinHash/LSH

2018-10-23 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661599#comment-16661599
 ] 

ASF subversion and git services commented on SOLR-12879:


Commit 9df96d2530ed7098549cbd8bda2b347f8c26042b in lucene-solr's branch 
refs/heads/jira/http2 from [~teofili]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9df96d2 ]

SOLR-12879 - added missing attribution in CHANGES.txt


> Query Parser for MinHash/LSH
> 
>
> Key: SOLR-12879
> URL: https://issues.apache.org/jira/browse/SOLR-12879
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: master (8.0)
>Reporter: Andy Hind
>Assignee: Tommaso Teofili
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: minhash.filter.adoc.fragment, minhash.patch
>
>
> Following on from https://issues.apache.org/jira/browse/LUCENE-6968, provide 
> a query parser that builds queries that provide a measure of Jaccard 
> similarity. The initial patch includes banded queries that were also proposed 
> on the original issue.
>  
> I have one outstanding question:
>  * Should the score from the overall query be normalised?
> Note that the band count is currently approximate and may be one less than 
> in practice.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12879) Query Parser for MinHash/LSH

2018-10-23 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661600#comment-16661600
 ] 

ASF subversion and git services commented on SOLR-12879:


Commit 2e757f6c257687ab713f88b6a07cf4a355e4cf66 in lucene-solr's branch 
refs/heads/jira/http2 from [~teofili]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2e757f6 ]

SOLR-12879 - registered MinHashQParserPlugin to QParserPlugin as min_hash
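
Registering a new qparser name is what the QueryEqualityTest failures elsewhere 
in this digest ("testParserCoverage was run w/o any other method explicitly 
testing qparser: min_hash") complain about: every registered parser needs a test 
method that exercises it, typically via assertQueryEquals. A hedged sketch of 
such a test follows; the local-param name, field name and query text are 
assumptions, not necessarily what the real fix looks like.
{code:java}
// Sketch only: calling assertQueryEquals with defType "min_hash" is what marks
// the parser as covered for QueryEqualityTest's parser-coverage check.
public void testQueryMinHash() throws Exception {
  assertQueryEquals("min_hash",
      "{!min_hash field=min_hash_field}the quick brown fox",
      "{!min_hash field=min_hash_field v='the quick brown fox'}");
}
{code}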


> Query Parser for MinHash/LSH
> 
>
> Key: SOLR-12879
> URL: https://issues.apache.org/jira/browse/SOLR-12879
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: master (8.0)
>Reporter: Andy Hind
>Assignee: Tommaso Teofili
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: minhash.filter.adoc.fragment, minhash.patch
>
>
> Following on from https://issues.apache.org/jira/browse/LUCENE-6968, provide 
> a query parser that builds queries that provide a measure of Jaccard 
> similarity. The initial patch includes banded queries that were also proposed 
> on the original issue.
>  
> I have one outstanding question:
>  * Should the score from the overall query be normalised?
> Note that the band count is currently approximate and may be one less than 
> in practice.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12829) Add plist (parallel list) Streaming Expression

2018-10-23 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661595#comment-16661595
 ] 

ASF subversion and git services commented on SOLR-12829:


Commit fcaea07f3c8cba34906ca02f40fb1d2c40badc08 in lucene-solr's branch 
refs/heads/jira/http2 from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fcaea07 ]

SOLR-12829: Add plist (parallel list) Streaming Expression


> Add plist (parallel list) Streaming Expression
> --
>
> Key: SOLR-12829
> URL: https://issues.apache.org/jira/browse/SOLR-12829
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-12829.patch, SOLR-12829.patch
>
>
> The *plist* Streaming Expression wraps any number of streaming expressions 
> and opens them in parallel. The results of each of the streams are then 
> iterated in the order they appear in the list. Since many streams, like the 
> FacetStream, perform heavy pushed-down operations when opened, this will 
> result in the parallelization of these operations. For example, plist could 
> wrap several facet() expressions and open them each in parallel, which would 
> cause the facets to be run in parallel on different replicas in the cluster. 
> Here is sample syntax:
> {code:java}
> plist(tuple(facet1=facet(...)), 
>   tuple(facet2=facet(...)),
>   tuple(facet3=facet(...))) {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11522) Suggestions/recommendations to rebalance replicas

2018-10-23 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661597#comment-16661597
 ] 

ASF subversion and git services commented on SOLR-11522:


Commit 576d28f643a89de832b59a783ce729402d70fb9f in lucene-solr's branch 
refs/heads/jira/http2 from noble
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=576d28f ]

SOLR-11522: Moved the _get methods to a separate interface and keep MapWriter 
clean


> Suggestions/recommendations to rebalance replicas
> -
>
> Key: SOLR-11522
> URL: https://issues.apache.org/jira/browse/SOLR-11522
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Noble Paul
>Priority: Major
>
> It is possible that a cluster is unbalanced even if it is not breaking any of 
> the policy rules. Some nodes may have very little load while some others may 
> be heavily loaded. So, it is possible to move replicas around so that the 
> load is more evenly distributed. This is going to be driven by preferences. 
> The way we arrive at these suggestions is going to be as follows
>  # Sort the nodes according to the given preferences
>  # Choose a replica from the most loaded node ({{source-node}}) 
>  # try adding them to the least loaded node ({{target-node}})
>  # See if it breaks any policy rules. If yes, try another {{target-node}} 
> (go to #3)
>  # If no policy rules are being broken, present this as a {{suggestion}}. 
> The suggestion contains the following information:
>  #* The {{source-node}} and {{target-node}} names
>  #* The actual v2 command that can be run to effect the operation
>  # Go to step #1
>  # Do this until no more replicas can be moved without making the 
> {{target-node}} more loaded than the {{source-node}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12638) Support atomic updates of nested/child documents for nested-enabled schema

2018-10-23 Thread Yonik Seeley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661581#comment-16661581
 ] 

Yonik Seeley commented on SOLR-12638:
-

Somewhat related: perhaps it should be best practice to include the parent 
document id in the child document id (with a "!" separator).  Things should 
then just work for anyone following this convention with the default 
compositeRouter.  For example, "id:mybook!myreview".  The ability to specify 
_route_ explicitly should always be there, of course.
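
A small SolrJ sketch of what that convention looks like at indexing time; the 
collection name and field names are made up, and only the "mybook!myreview" id 
shape comes from the comment above.
{code:java}
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class NestedIdConvention {
  public static void main(String[] args) throws Exception {
    // Parent id "mybook"; child id is parent id + "!" + child suffix, so the
    // default compositeId routing keeps the whole block on one shard.
    SolrInputDocument book = new SolrInputDocument();
    book.addField("id", "mybook");
    book.addField("title_t", "My Book");            // hypothetical field

    SolrInputDocument review = new SolrInputDocument();
    review.addField("id", "mybook!myreview");
    review.addField("comment_t", "Great read");     // hypothetical field
    book.addChildDocument(review);

    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/books").build()) {
      client.add(book);
      client.commit();
    }
  }
}
{code}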
 

> Support atomic updates of nested/child documents for nested-enabled schema
> --
>
> Key: SOLR-12638
> URL: https://issues.apache.org/jira/browse/SOLR-12638
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Priority: Major
> Attachments: SOLR-12638-delete-old-block-no-commit.patch, 
> SOLR-12638-nocommit.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> I have been toying with the thought of using this transformer in conjunction 
> with NestedUpdateProcessor and AtomicUpdate to allow SOLR to completely 
> re-index the entire nested structure. This is just a thought, I am still 
> thinking about implementation details. Hopefully I will be able to post a 
> more concrete proposal soon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12908) Add a default set of cluster preferences

2018-10-23 Thread Shalin Shekhar Mangar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661542#comment-16661542
 ] 

Shalin Shekhar Mangar commented on SOLR-12908:
--

We already have them? See SOLR-11051.

From Policy.java:
{code}
 public static final List<Preference> DEFAULT_PREFERENCES = Collections.unmodifiableList(
      Arrays.asList(
          // NOTE - if you change this, make sure to update the solrcloud-autoscaling-overview.adoc which
          // lists the default preferences
          new Preference((Map<String, Object>) Utils.fromJSONString("{minimize : cores, precision:1}")),
          new Preference((Map<String, Object>) Utils.fromJSONString("{maximize : freedisk}"))));
{code}

> Add a default set of cluster preferences
> 
>
> Key: SOLR-12908
> URL: https://issues.apache.org/jira/browse/SOLR-12908
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Varun Thacker
>Priority: Major
>
> Similar to SOLR-12845 where we want to add a set of default cluster policies, 
> we should add some default cluster preferences as well
>  
> We should always be trying to minimize cores and maximize freedisk, for example.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12638) Support atomic updates of nested/child documents for nested-enabled schema

2018-10-23 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661540#comment-16661540
 ] 

David Smiley commented on SOLR-12638:
-

The UpdateLog and TransactionLog probably need no changes for this feature.  
The feature interacts with those classes indirectly via RealTimeGetComponent 
which is modified in this patch.  I'm not sure the patch itself would 
necessarily get at my question; my question is more architectural / conceptual.

> Support atomic updates of nested/child documents for nested-enabled schema
> --
>
> Key: SOLR-12638
> URL: https://issues.apache.org/jira/browse/SOLR-12638
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Priority: Major
> Attachments: SOLR-12638-delete-old-block-no-commit.patch, 
> SOLR-12638-nocommit.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> I have been toying with the thought of using this transformer in conjunction 
> with NestedUpdateProcessor and AtomicUpdate to allow SOLR to completely 
> re-index the entire nested structure. This is just a thought, I am still 
> thinking about implementation details. Hopefully I will be able to post a 
> more concrete proposal soon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12259) Robustly upgrade indexes

2018-10-23 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661538#comment-16661538
 ] 

Erick Erickson commented on SOLR-12259:
---

I was thinking about this on the way back from Activate. One of the issues 
we'll have is that it'd be a mess to support arbitrary upgrade paths, for all 
the reasons in LUCENE-7976. Doing "whatever it can" is so fraught.

THIS IS A STRAW MAN PROPOSAL. Feel free to shoot holes in it.

In essence, this could be thought of as using custom merge policies to "do the 
right thing", where the "right thing" varies (and will continue to vary going 
forward).

Here's what I came up with as design goals:
 > transform _all_ segments in a core.
 > can upgrade collections/cores individually even if collections shared a 
 > configset
 > extensible in future
 > can deal with "safe" X+2 upgrades if there ever are any.
 > should not require restarting Solr
 > should not require special solrconfig.xml changes
 > _may_ require enabling the new end-point. Possibly a new config API? Maybe 
 > require a config API call to enable/disable?

What I have in mind is a new request handler that 

> locks the index for updates until done. I'm not horribly comfortable with 
> this but it would circumvent a world of problems.
 > applies a (possibly custom) merge policy that would implement what's desired 
 > _on all segments without merging._ This is essentially a "singleton merge" 
 > on each segment regardless of its state. We could, of course, skip segments 
 > that didn't require the transformation. This is somewhat along the lines of 
 > UninvertDocValuesMergePolicyFactory
> A prime candidate we _would_ supply would be upgrading to docValues fields.

Note that the merge policy in effect at the client would not be changed at all. 
 Say I was running TMP. This would come in completely around the end and not 
change that at all. We'd probably have to supply a new merge policy to be used 
by this end-point.

The reason I don't want to merge segments is that it would get weird having to 
make the different merge policies do the right thing; NoMergePolicy is 
particularly problematic ;)
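
To make the "singleton merge" idea concrete, here is a rough sketch of the 
merge specification it would produce; this is not the proposed handler or a 
working merge policy (the real MergePolicy hook signatures vary by Lucene 
version and are omitted), just the shape of the per-segment rewrite.
{code:java}
import java.util.Collections;
import org.apache.lucene.index.MergePolicy.MergeSpecification;
import org.apache.lucene.index.MergePolicy.OneMerge;
import org.apache.lucene.index.SegmentCommitInfo;
import org.apache.lucene.index.SegmentInfos;

final class SingletonMergeSketch {
  /** One OneMerge per segment: each segment is rewritten by itself, never combined. */
  static MergeSpecification rewriteAllSegments(SegmentInfos infos) {
    MergeSpecification spec = new MergeSpecification();
    for (SegmentCommitInfo sci : infos) {
      // A real implementation would skip segments already in the desired format.
      spec.add(new OneMerge(Collections.singletonList(sci)));
    }
    return spec;
  }
}
{code}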

The simple case here would require a full rewrite of all segments for each 
transformation, which is a drawback. OTOH, until we have examples of multiple 
transformations we want to happen, maybe we can go with it for now.

Conceivably this could be used for special-purpose upgrades with limited scope 
that could do the X->X+2 upgrade. I have no concrete examples of what would be 
safe and I _certainly_ don't want to distribute any such thing as part of Solr. 
Having the mechanism in place could allow users to make their own (at their own 
risk). Or call Uwe...

Comments?

> Robustly upgrade indexes
> 
>
> Key: SOLR-12259
> URL: https://issues.apache.org/jira/browse/SOLR-12259
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> The general problem statement is that the current upgrade path is trappy and 
> cumbersome.  It would be a great help "in the field" to make the upgrade 
> process less painful.
> Additionally one of the most common things users want to do is enable 
> docValues, but currently they often have to re-index.
> Issues:
> 1> if I upgrade from 5x to 6x and then 7x, there's no guarantee that when I go 
> to 7x all the segments have been rewritten in 6x format. Say I have a segment 
> at max size that has no deletions. It'll never be rewritten until it has 
> deleted docs. And perhaps 50% deleted docs currently.
> 2> IndexUpgraderTool explicitly does a forcemerge to 1 segment, which is bad.
> 3> in a large distributed system, running IndexUpgraderTool on all the nodes 
> is cumbersome even if <2> is acceptable.
> 4> Users who realize specifying docValues on a field would be A Good Thing 
> have to re-index. We have UninvertDocValuesMergePolicyFactory. Wouldn't it be 
> nice to be able to have this done all at once without forceMerging to one 
> segment.
> Proposal:
> Somehow avoid the above. Currently LUCENE-7976 is a start in that direction. 
> It will make TMP respect max segments size so can avoid forceMerges that 
> result in one segment. What it does _not_ do is rewrite segments with zero 
> (or a small percentage) deleted documents.
> So it  doesn't seem like a huge stretch to be able to specify to TMP the 
> option to rewrite segments that have no deleted documents. Perhaps a new 
> parameter to optimize?
> This would likely require another change to TMP or whatever.
> So upgrading to a new solr would look like
> 1> install the new Solr
> 2> execute 
> "http://node:port/solr/collection_or_core/update?optimize=true=true;
> What's not clear 

Test Harness behaviour on a package run

2018-10-23 Thread Varun Thacker
I wanted to run all tests within one package, so I ran it like this:

ant clean test "-Dtests.class=org.apache.solr.search.facet.*"

The test run fails because the harness is trying to run DebugAgg, since it's a
public class, even though it's not really a test class.

   [junit4] Tests with failures [seed: EB7B560286FA14D0]:
   [junit4]   - org.apache.solr.search.facet.DebugAgg.initializationError
   [junit4]   - org.apache.solr.search.facet.DebugAgg.initializationError


Is there a way to avoid this?


[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1161 - Failure

2018-10-23 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1161/

No tests ran.

Build Log:
[...truncated 23268 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2435 links (1987 relative) to 3184 anchors in 247 files
 [echo] Validated Links & Anchors via: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-8.0.0.tgz
 into 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

[...repeated resolve / ivy-availability-check / ivy-fail / ivy-configure blocks truncated...]


[jira] [Updated] (SOLR-12902) Block Expensive Queries custom Solr component

2018-10-23 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12902:
-
Description: 
Added a Block Expensive Queries custom Solr component 
([https://github.com/apache/lucene-solr/pull/477]):
 * This search component can be plugged into your SearchHandler if you would 
like to block some well-known expensive queries.
 * The queries that the component currently blocks and fails are deep 
pagination queries, as they are known to consume a lot of memory and CPU. These 
are:
 ** queries with a start offset greater than the configured maxStartOffset 
config parameter value
 ** queries with a rows param value greater than the configured maxRowsFetch 
config parameter value

  was:
Added a Block Expensive Queries custom Solr component 
([https://github.com/apache/lucene-solr/pull/477)] :
 * This search component can be plugged into your SearchHandler if you would 
like to block some well known expensive queries.
 * The queries that are blocked and failed by component currently are deep 
pagination queries as they are known to consume lot of memory and CPU. These 
are 

 * 
 ** queries with a start offset which is greater than the configured 
maxStartOffset config parameter value
 ** queries with a row param value which is greater than the configured 
maxRowsFetch config parameter value


> Block Expensive Queries custom Solr component
> -
>
> Key: SOLR-12902
> URL: https://issues.apache.org/jira/browse/SOLR-12902
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tirth Rajen Mehta
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Added a Block Expensive Queries custom Solr component 
> ([https://github.com/apache/lucene-solr/pull/477]):
>  * This search component can be plugged into your SearchHandler if you would 
> like to block some well-known expensive queries.
>  * The queries that the component currently blocks and fails are deep 
> pagination queries, as they are known to consume a lot of memory and CPU. These 
> are:
>  ** queries with a start offset greater than the configured maxStartOffset 
> config parameter value
>  ** queries with a rows param value greater than the configured maxRowsFetch 
> config parameter value
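For readers who have not looked at the pull request, a rough sketch of what such a 
component could look like follows. The class name, default values, and config 
handling are assumptions made here for illustration only, not the code from the 
PR; only the Solr classes used (SearchComponent, ResponseBuilder, SolrParams, 
CommonParams, SolrException) are the real APIs.

{code:java}
package org.apache.solr.handler.component;

import org.apache.solr.common.SolrException;
import org.apache.solr.common.params.CommonParams;
import org.apache.solr.common.params.SolrParams;

// Hypothetical sketch of a "block expensive queries" component as described above.
public class BlockExpensiveQueriesComponent extends SearchComponent {

  // Assumed defaults; in a real component these would come from solrconfig.xml init args.
  private int maxStartOffset = 10000;
  private int maxRowsFetch = 1000;

  @Override
  public void prepare(ResponseBuilder rb) {
    SolrParams params = rb.req.getParams();
    int start = params.getInt(CommonParams.START, 0);
    int rows = params.getInt(CommonParams.ROWS, 10);
    // Fail deep-pagination requests up front instead of letting them consume memory and CPU.
    if (start > maxStartOffset || rows > maxRowsFetch) {
      throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
          "Deep pagination blocked: start=" + start + ", rows=" + rows);
    }
  }

  @Override
  public void process(ResponseBuilder rb) {
    // Nothing to do here; the check happens in prepare().
  }

  @Override
  public String getDescription() {
    return "Blocks well-known expensive (deep pagination) queries";
  }
}
{code}

Such a component would then be registered in solrconfig.xml and listed among the 
SearchHandler's components, as with any other SearchComponent.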



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 2120 - Still Unstable!

2018-10-23 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/2120/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

5 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.search.QueryEqualityTest

Error Message:
testParserCoverage was run w/o any other method explicitly testing qparser: 
min_hash

Stack Trace:
java.lang.AssertionError: testParserCoverage was run w/o any other method 
explicitly testing qparser: min_hash
at __randomizedtesting.SeedInfo.seed([DC71D39D151F655D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.search.QueryEqualityTest.afterClassParserCoverageTest(QueryEqualityTest.java:59)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:898)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  junit.framework.TestSuite.org.apache.solr.search.QueryEqualityTest

Error Message:
testParserCoverage was run w/o any other method explicitly testing qparser: 
min_hash

Stack Trace:
java.lang.AssertionError: testParserCoverage was run w/o any other method 
explicitly testing qparser: min_hash
at __randomizedtesting.SeedInfo.seed([DC71D39D151F655D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.search.QueryEqualityTest.afterClassParserCoverageTest(QueryEqualityTest.java:59)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:898)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[JENKINS] Lucene-Solr-Tests-master - Build # 2895 - Still Unstable

2018-10-23 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2895/

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.search.QueryEqualityTest

Error Message:
testParserCoverage was run w/o any other method explicitly testing qparser: 
min_hash

Stack Trace:
java.lang.AssertionError: testParserCoverage was run w/o any other method 
explicitly testing qparser: min_hash
at __randomizedtesting.SeedInfo.seed([40E6483843AE2CD1]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.search.QueryEqualityTest.afterClassParserCoverageTest(QueryEqualityTest.java:59)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:898)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 12492 lines...]
   [junit4] Suite: org.apache.solr.search.QueryEqualityTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build/solr-core/test/J1/temp/solr.search.QueryEqualityTest_40E6483843AE2CD1-001/init-core-data-001
   [junit4]   2> 341836 WARN  
(SUITE-QueryEqualityTest-seed#[40E6483843AE2CD1]-worker) [] 
o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=1 numCloses=1
   [junit4]   2> 341836 INFO  
(SUITE-QueryEqualityTest-seed#[40E6483843AE2CD1]-worker) [] 
o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=false
   [junit4]   2> 341838 INFO  
(SUITE-QueryEqualityTest-seed#[40E6483843AE2CD1]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (true) via: 
@org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN)
   [junit4]   2> 341855 INFO  
(SUITE-QueryEqualityTest-seed#[40E6483843AE2CD1]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> 341855 INFO  
(SUITE-QueryEqualityTest-seed#[40E6483843AE2CD1]-worker) [] 
o.a.s.SolrTestCaseJ4 initCore
   [junit4]   2> 341856 INFO  
(SUITE-QueryEqualityTest-seed#[40E6483843AE2CD1]-worker) [] 
o.a.s.c.SolrResourceLoader [null] Added 2 libs to classloader, from paths: 
[/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/test-files/solr/collection1/lib,
 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/test-files/solr/collection1/lib/classes]
   [junit4]   2> 342226 INFO  
(SUITE-QueryEqualityTest-seed#[40E6483843AE2CD1]-worker) [] 
o.a.s.c.SolrConfig Using Lucene MatchVersion: 8.0.0
   [junit4]   2> 342606 INFO  
(SUITE-QueryEqualityTest-seed#[40E6483843AE2CD1]-worker) [] 
o.a.s.s.IndexSchema [null] Schema name=test
   

[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_172) - Build # 2967 - Still Unstable!

2018-10-23 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2967/
Java: 32bit/jdk1.8.0_172 -server -XX:+UseConcMarkSweepGC

5 tests failed.
FAILED:  
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica

Error Message:
Expected new active leader null Live Nodes: [127.0.0.1:38057_solr, 
127.0.0.1:41095_solr, 127.0.0.1:41859_solr] Last available state: 
DocCollection(raceDeleteReplica_false//collections/raceDeleteReplica_false/state.json/14)={
   "pullReplicas":"0",   "replicationFactor":"2",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node3":{   "core":"raceDeleteReplica_false_shard1_replica_n1",
   "base_url":"https://127.0.0.1:46109/solr;,   
"node_name":"127.0.0.1:46109_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false",   "leader":"true"},  
   "core_node6":{   
"core":"raceDeleteReplica_false_shard1_replica_n5",   
"base_url":"https://127.0.0.1:46109/solr;,   
"node_name":"127.0.0.1:46109_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Expected new active leader
null
Live Nodes: [127.0.0.1:38057_solr, 127.0.0.1:41095_solr, 127.0.0.1:41859_solr]
Last available state: 
DocCollection(raceDeleteReplica_false//collections/raceDeleteReplica_false/state.json/14)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node3":{
  "core":"raceDeleteReplica_false_shard1_replica_n1",
  "base_url":"https://127.0.0.1:46109/solr;,
  "node_name":"127.0.0.1:46109_solr",
  "state":"down",
  "type":"NRT",
  "force_set_state":"false",
  "leader":"true"},
"core_node6":{
  "core":"raceDeleteReplica_false_shard1_replica_n5",
  "base_url":"https://127.0.0.1:46109/solr;,
  "node_name":"127.0.0.1:46109_solr",
  "state":"down",
  "type":"NRT",
  "force_set_state":"false",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([BEF0DA2F99A231F8:D4E6BBFFF1507B32]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:280)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:334)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:230)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 

[jira] [Updated] (SOLR-12894) Solr documention for Java Vendors

2018-10-23 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12894:
-
Attachment: SOLR-12894.patch

> Solr documention for Java Vendors
> -
>
> Key: SOLR-12894
> URL: https://issues.apache.org/jira/browse/SOLR-12894
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Varun Thacker
>Priority: Major
> Attachments: SOLR-12894.patch, SOLR-12894.patch
>
>
> I was asked a question recently - "Is using OpenJDK safe with Solr 7.4?". To 
> which my answer was yes. This was after I checked with Steve on which 
> OpenJDK version runs on his jenkins.
> For reference it currently uses -
> {code:java}
> openjdk version "1.8.0_171"
> OpenJDK Runtime Environment (build 1.8.0_171-8u171-b11-1~bpo8+1-b11)
> OpenJDK 64-Bit Server VM (build 25.171-b11, mixed mode){code}
>  
> Solr's ref guide ( 
> [https://lucene.apache.org/solr/guide/installing-solr.html#got-java|https://lucene.apache.org/solr/guide/6_6/installing-solr.html#got-java]
>  ) mentions using Oracle 1.8 or higher.
>  
> We should mention that both Oracle JDKs and OpenJDKs are tested. Perhaps we 
> should even have a compatibility matrix.
>  
> Also we should note that Java 9 and 10 are short-term releases. Hence we 
> should replace the "Java 8+" wording with more specific versions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12894) Solr documention for Java Vendors

2018-10-23 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12894:
-
Attachment: SOLR-12894.patch

> Solr documention for Java Vendors
> -
>
> Key: SOLR-12894
> URL: https://issues.apache.org/jira/browse/SOLR-12894
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Varun Thacker
>Priority: Major
> Attachments: SOLR-12894.patch
>
>
> I was asked a question recently - "Is using OpenJDK safe with Solr 7.4?". To 
> which my answer was yes. This was after I checked with Steve on which 
> OpenJDK version runs on his jenkins.
> For reference it currently uses -
> {code:java}
> openjdk version "1.8.0_171"
> OpenJDK Runtime Environment (build 1.8.0_171-8u171-b11-1~bpo8+1-b11)
> OpenJDK 64-Bit Server VM (build 25.171-b11, mixed mode){code}
>  
> Solr's ref guide ( 
> [https://lucene.apache.org/solr/guide/installing-solr.html#got-java|https://lucene.apache.org/solr/guide/6_6/installing-solr.html#got-java]
>  ) mentions using Oracle 1.8 or higher.
>  
> We should mention that both Oracle JDKs and OpenJDKs are tested. Perhaps we 
> should even have a compatibility matrix.
>  
> Also we should note that Java 9 and 10 are short-term releases. Hence we 
> should replace the "Java 8+" wording with more specific versions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12894) Solr documention for Java Vendors

2018-10-23 Thread Varun Thacker (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661485#comment-16661485
 ] 

Varun Thacker commented on SOLR-12894:
--

How does this change look?

> Solr documention for Java Vendors
> -
>
> Key: SOLR-12894
> URL: https://issues.apache.org/jira/browse/SOLR-12894
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Varun Thacker
>Priority: Major
> Attachments: SOLR-12894.patch
>
>
> I was asked a question recently - "Is using OpenJDK safe with Solr 7.4?". To 
> which my answer was yes. This was after I checked with Steve on which 
> OpenJDK version runs on his jenkins.
> For reference it currently uses -
> {code:java}
> openjdk version "1.8.0_171"
> OpenJDK Runtime Environment (build 1.8.0_171-8u171-b11-1~bpo8+1-b11)
> OpenJDK 64-Bit Server VM (build 25.171-b11, mixed mode){code}
>  
> Solr's ref guide ( 
> [https://lucene.apache.org/solr/guide/installing-solr.html#got-java|https://lucene.apache.org/solr/guide/6_6/installing-solr.html#got-java]
>  ) mentions using Oracle 1.8 or higher.
>  
> We should mention that both Oracle JDKs and OpenJDKs are tested. Perhaps we 
> should even have a compatibility matrix.
>  
> Also we should note that Java 9 and 10 are short-term releases. Hence we 
> should replace the "Java 8+" wording with more specific versions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12801) Fix the tests, remove BadApples and AwaitsFix annotations, improve env for test development.

2018-10-23 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661480#comment-16661480
 ] 

Mark Miller commented on SOLR-12801:


I want to start having a place for unit tests and integration tests - maybe we 
can work on both at once?

> Fix the tests, remove BadApples and AwaitsFix annotations, improve env for 
> test development.
> 
>
> Key: SOLR-12801
> URL: https://issues.apache.org/jira/browse/SOLR-12801
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Critical
>
> A single issue to counteract the single issue adding tons of annotations, the 
> continued addition of new flakey tests, and the continued addition of 
> flakiness to existing tests.
> Lots more to come.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12909) Fix all tests in org.apache.solr.update and begin a defense of them.

2018-10-23 Thread Mark Miller (JIRA)
Mark Miller created SOLR-12909:
--

 Summary: Fix all tests in org.apache.solr.update and begin a 
defense of them.
 Key: SOLR-12909
 URL: https://issues.apache.org/jira/browse/SOLR-12909
 Project: Solr
  Issue Type: Sub-task
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Tests
Reporter: Mark Miller
Assignee: Mark Miller






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12638) Support atomic updates of nested/child documents for nested-enabled schema

2018-10-23 Thread Cao Manh Dat (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661479#comment-16661479
 ] 

Cao Manh Dat commented on SOLR-12638:
-

Hi guys, I don't know much about the join code, but skimming through the patch I 
don't see any changes for {{UpdateLog}} and {{TransactionLog}}? I need more 
time to review the solution and the patch to ensure their correctness.

> Support atomic updates of nested/child documents for nested-enabled schema
> --
>
> Key: SOLR-12638
> URL: https://issues.apache.org/jira/browse/SOLR-12638
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Priority: Major
> Attachments: SOLR-12638-delete-old-block-no-commit.patch, 
> SOLR-12638-nocommit.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> I have been toying with the thought of using this transformer in conjunction 
> with NestedUpdateProcessor and AtomicUpdate to allow SOLR to completely 
> re-index the entire nested structure. This is just a thought, I am still 
> thinking about implementation details. Hopefully I will be able to post a 
> more concrete proposal soon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12894) Solr documention for Java Vendors

2018-10-23 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661475#comment-16661475
 ] 

Shawn Heisey commented on SOLR-12894:
-

bq. we shouldn't be in the business of recommending one over the other. All we 
should say is both OpenJDK and Oracle JDK are well tested and both work fine.

Sounds good.  Add something like "You'll want to be sure that the license for 
the Java version you choose will meet your needs" ... to be re-worded as 
necessary so it flows well.  So we draw attention to licensing without an 
explicit recommendation.


> Solr documention for Java Vendors
> -
>
> Key: SOLR-12894
> URL: https://issues.apache.org/jira/browse/SOLR-12894
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Varun Thacker
>Priority: Major
>
> I was asked a question recently - "Is using OpenJDK safe with Solr 7.4?". To 
> which my answer was yes. This was after I checked with Steve on which 
> OpenJDK version runs on his jenkins.
> For reference it currently uses -
> {code:java}
> openjdk version "1.8.0_171"
> OpenJDK Runtime Environment (build 1.8.0_171-8u171-b11-1~bpo8+1-b11)
> OpenJDK 64-Bit Server VM (build 25.171-b11, mixed mode){code}
>  
> Solr's ref guide ( 
> [https://lucene.apache.org/solr/guide/installing-solr.html#got-java|https://lucene.apache.org/solr/guide/6_6/installing-solr.html#got-java]
>  ) mentions using Oracle 1.8 or higher.
>  
> We should mention that both Oracle JDKs and OpenJDKs are tested. Perhaps we 
> should even have a compatibility matrix.
>  
> Also we should note that Java 9 and 10 are short-term releases. Hence we 
> should replace the "Java 8+" wording with more specific versions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12801) Fix the tests, remove BadApples and AwaitsFix annotations, improve env for test development.

2018-10-23 Thread Varun Thacker (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661473#comment-16661473
 ] 

Varun Thacker commented on SOLR-12801:
--

{quote}I'm going to start addressing tests by package (search package is first).
{quote}
I think today a lot of tests get lumped into the search package or the cloud 
package when they really deserve their own package. We could address that in a 
separate Jira as well (SOLR-12793 is one such example).

> Fix the tests, remove BadApples and AwaitsFix annotations, improve env for 
> test development.
> 
>
> Key: SOLR-12801
> URL: https://issues.apache.org/jira/browse/SOLR-12801
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Critical
>
> A single issue to counteract the single issue adding tons of annotations, the 
> continued addition of new flakey tests, and the continued addition of 
> flakiness to existing tests.
> Lots more to come.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12894) Solr documention for Java Vendors

2018-10-23 Thread Varun Thacker (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661469#comment-16661469
 ] 

Varun Thacker commented on SOLR-12894:
--

So it turns out that 
[https://lucene.apache.org/solr/guide/6_6/installing-solr.html#got-java] was 
updated at some point, and today we have 
[https://lucene.apache.org/solr/guide/7_5/solr-system-requirements.html]

If we are to recommend that users use OpenJDK, maybe we should have a link so that 
users can download it on Windows, Linux and Mac? 
[https://openjdk.java.net/install/] only has packages for Linux.

 

The other way to look at it is, we shouldn't be in the business of 
recommending one over the other. All we should say is that both OpenJDK and Oracle 
JDK are well tested and both work fine. Users can make their choice based on 
that information.

> Solr documention for Java Vendors
> -
>
> Key: SOLR-12894
> URL: https://issues.apache.org/jira/browse/SOLR-12894
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Varun Thacker
>Priority: Major
>
> I was asked a question recently - "Is using OpenJDK safe with Solr 7.4?". To 
> which my answer was yes. This was after I checked with Steve on which 
> OpenJDK version runs on his jenkins.
> For reference it currently uses -
> {code:java}
> openjdk version "1.8.0_171"
> OpenJDK Runtime Environment (build 1.8.0_171-8u171-b11-1~bpo8+1-b11)
> OpenJDK 64-Bit Server VM (build 25.171-b11, mixed mode){code}
>  
> Solr's ref guide ( 
> [https://lucene.apache.org/solr/guide/installing-solr.html#got-java|https://lucene.apache.org/solr/guide/6_6/installing-solr.html#got-java]
>  ) mentions using Oracle 1.8 or higher.
>  
> We should mention that both Oracle JDKs and OpenJDKs are tested. Perhaps we 
> should even have a compatibility matrix.
>  
> Also we should note that Java 9 and 10 are short-term releases. Hence we 
> should replace the "Java 8+" wording with more specific versions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12801) Fix the tests, remove BadApples and AwaitsFix annotations, improve env for test development.

2018-10-23 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661463#comment-16661463
 ] 

Mark Miller commented on SOLR-12801:


I'm going to start addressing tests by package (search package is first). Once 
I have some more tools and info and code to share, hopefully some others can 
join. Regardless, I may be trying to call people in for specific tests.

There is a good chance beasting test reports will be back eventually and more 
useful than ever as well.

I'll post my first patch (mainly focused on making the Overseer queue mockable) 
very soon.

> Fix the tests, remove BadApples and AwaitsFix annotations, improve env for 
> test development.
> 
>
> Key: SOLR-12801
> URL: https://issues.apache.org/jira/browse/SOLR-12801
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Critical
>
> A single issue to counteract the single issue adding tons of annotations, the 
> continued addition of new flakey tests, and the continued addition of 
> flakiness to existing tests.
> Lots more to come.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk-9) - Build # 896 - Still Unstable!

2018-10-23 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/896/
Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseParallelGC

5 tests failed.
FAILED:  org.apache.solr.cloud.TestTlogReplica.testRecovery

Error Message:
Can not find doc 7 in https://127.0.0.1:60349/solr

Stack Trace:
java.lang.AssertionError: Can not find doc 7 in https://127.0.0.1:60349/solr
at 
__randomizedtesting.SeedInfo.seed([FB59DEAF28878BD1:3AA9A70305D74176]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.TestTlogReplica.checkRTG(TestTlogReplica.java:902)
at 
org.apache.solr.cloud.TestTlogReplica.testRecovery(TestTlogReplica.java:567)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.cloud.TestTlogReplica.testRecovery

Error Message:
Can not find doc 7 in https://127.0.0.1:60664/solr

Stack Trace:

[jira] [Updated] (SOLR-12908) Add a default set of cluster preferences

2018-10-23 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12908:
-
Component/s: AutoScaling

> Add a default set of cluster preferences
> 
>
> Key: SOLR-12908
> URL: https://issues.apache.org/jira/browse/SOLR-12908
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Varun Thacker
>Priority: Major
>
> Similar to SOLR-12845, where we want to add a set of default cluster policies, 
> we should add some default cluster preferences as well.
>  
> We should always be trying to minimize cores and maximize freedisk, for example.
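As a concrete, purely illustrative sketch of what that could look like in the 
existing cluster-preferences syntax (assuming the /solr/admin/autoscaling write 
API; the actual defaults are still to be decided on this issue):

{code}
POST /solr/admin/autoscaling
{
  "set-cluster-preferences": [
    {"minimize": "cores", "precision": 1},
    {"maximize": "freedisk"}
  ]
}
{code}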



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12845) Add a default cluster policy

2018-10-23 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12845:
-
Component/s: AutoScaling

> Add a default cluster policy
> 
>
> Key: SOLR-12845
> URL: https://issues.apache.org/jira/browse/SOLR-12845
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Shalin Shekhar Mangar
>Priority: Major
> Attachments: SOLR-12845.patch
>
>
> [~varunthacker] commented on SOLR-12739:
> bq. We should also ship with some default policies - "Don't allow more than 
> one replica of a shard on the same JVM" , "Distribute cores across the 
> cluster evenly" , "Distribute replicas per collection across the nodes"
> This issue is about adding these defaults. I propose the following as default 
> cluster policy:
> {code}
> # Each shard cannot have more than one replica on the same node if possible
> {"replica": "<2", "shard": "#EACH", "node": "#ANY", "strict":false}
> # Each collections replicas should be equally distributed amongst nodes
> {"replica": "#EQUAL", "node": "#ANY", "strict":false} 
> # All cores should be equally distributed amongst nodes
> {"cores": "#EQUAL", "node": "#ANY", "strict":false}
> {code}
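If such defaults are agreed on, they could presumably be installed with the 
existing autoscaling write API. A hedged sketch follows: the endpoint and the 
set-cluster-policy command are the documented autoscaling API, while shipping 
these rules as built-in defaults is exactly what this issue still has to decide.

{code}
POST /solr/admin/autoscaling
{
  "set-cluster-policy": [
    {"replica": "<2", "shard": "#EACH", "node": "#ANY", "strict": false},
    {"replica": "#EQUAL", "node": "#ANY", "strict": false},
    {"cores": "#EQUAL", "node": "#ANY", "strict": false}
  ]
}
{code}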



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12908) Add a default set of cluster preferences

2018-10-23 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12908:
-
Description: 
Similar to SOLR-12845, where we want to add a set of default cluster policies, 
we should add some default cluster preferences as well.

 

We should always be trying to minimize cores and maximize freedisk, for example.

  was:
Similar to SOLR-12845 where we want to add a set of default cluster policies, 
we should add some default cluster preferences as well

 

We should always be truing to minimze cores , maximize freedisk for example.


> Add a default set of cluster preferences
> 
>
> Key: SOLR-12908
> URL: https://issues.apache.org/jira/browse/SOLR-12908
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Major
>
> Similar to SOLR-12845, where we want to add a set of default cluster policies, 
> we should add some default cluster preferences as well.
>  
> We should always be trying to minimize cores and maximize freedisk, for example.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12826) Add a default policy to equally distribute replicas of a shard

2018-10-23 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker resolved SOLR-12826.
--
Resolution: Won't Fix

This work will be superseded by SOLR-12845.

> Add a default policy to equally distribute replicas of a shard
> --
>
> Key: SOLR-12826
> URL: https://issues.apache.org/jira/browse/SOLR-12826
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Varun Thacker
>Priority: Major
>
> We should have a policy to the effect of "maxShardsPerHost=1" by default.
> I created a 4-node cluster, then created lots of collections on node1 and node2.
> Then I used the suggestions to move replicas. I ended up with a scenario where 
> both replicas of a collection were on one node.
> So we should create a policy such that this can never happen, as if 
> maxShardsPerNode=1 were the default:
> {code:java}
> {"replica": "<2", "shard" : "#EACH" , "node" :"#ANY", "strict" : "false" 
> }{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12908) Add a default set of cluster preferences

2018-10-23 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12908:
-
Summary: Add a default set of cluster preferences  (was: Add a default set 
of cluster preseferences)

> Add a default set of cluster preferences
> 
>
> Key: SOLR-12908
> URL: https://issues.apache.org/jira/browse/SOLR-12908
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Major
>
> Similar to SOLR-12845, where we want to add a set of default cluster policies, 
> we should add some default cluster preferences as well.
>  
> We should always be trying to minimize cores and maximize freedisk, for example.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12908) Add a default set of cluster preseferences

2018-10-23 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-12908:


 Summary: Add a default set of cluster preseferences
 Key: SOLR-12908
 URL: https://issues.apache.org/jira/browse/SOLR-12908
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Varun Thacker


Similar to SOLR-12845, where we want to add a set of default cluster policies, 
we should add some default cluster preferences as well.

 

We should always be trying to minimize cores and maximize freedisk, for example.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12845) Add a default cluster policy

2018-10-23 Thread Varun Thacker (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661325#comment-16661325
 ] 

Varun Thacker edited comment on SOLR-12845 at 10/23/18 9:35 PM:


I can think of two more defaults ( both should be strict:false ) that we could 
add to the autoscaling.json file  :
 * Don't have more than two replicas on the same physical host :
{code:java}
{"replica": "<2", "host": “#ANY”}{code}

 * Define a well known system property called "rack" in Solr ( SOLR-12907 )  -
{code:java}
{"replica": "#EQUAL", "shard": "#EACH", "rack": “#EACH”}{code}


was (Author: varunthacker):
I can think of two more defaults ( both should be strict:false ) that we could 
add to the autoscaling.json file  :
 * Don't have more than two replicas on the same physical host : {{{"replica": 
"<2", "host": “#ANY”}}}
 * Define a well known system property called "rack" in Solr ( SOLR-12907 )  - 
{{{"replica": "#EQUAL", "shard": "#EACH", "rack": “#EACH”}}}

> Add a default cluster policy
> 
>
> Key: SOLR-12845
> URL: https://issues.apache.org/jira/browse/SOLR-12845
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Shalin Shekhar Mangar
>Priority: Major
> Attachments: SOLR-12845.patch
>
>
> [~varunthacker] commented on SOLR-12739:
> bq. We should also ship with some default policies - "Don't allow more than 
> one replica of a shard on the same JVM" , "Distribute cores across the 
> cluster evenly" , "Distribute replicas per collection across the nodes"
> This issue is about adding these defaults. I propose the following as default 
> cluster policy:
> {code}
> # Each shard cannot have more than one replica on the same node if possible
> {"replica": "<2", "shard": "#EACH", "node": "#ANY", "strict":false}
> # Each collections replicas should be equally distributed amongst nodes
> {"replica": "#EQUAL", "node": "#ANY", "strict":false} 
> # All cores should be equally distributed amongst nodes
> {"cores": "#EQUAL", "node": "#ANY", "strict":false}
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12845) Add a default cluster policy

2018-10-23 Thread Varun Thacker (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661325#comment-16661325
 ] 

Varun Thacker commented on SOLR-12845:
--

I can think of two more defaults ( both should be strict:false ) that we could 
add to the autoscaling.json file  :
 * Don't have more than two replicas on the same physical host : {{{"replica": 
"<2", "host": "#ANY"}}}
 * Define a well known system property called "rack" in Solr ( SOLR-12907 )  - 
{{{"replica": "#EQUAL", "shard": "#EACH", "rack": "#EACH"}}}

> Add a default cluster policy
> 
>
> Key: SOLR-12845
> URL: https://issues.apache.org/jira/browse/SOLR-12845
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Shalin Shekhar Mangar
>Priority: Major
> Attachments: SOLR-12845.patch
>
>
> [~varunthacker] commented on SOLR-12739:
> bq. We should also ship with some default policies - "Don't allow more than 
> one replica of a shard on the same JVM" , "Distribute cores across the 
> cluster evenly" , "Distribute replicas per collection across the nodes"
> This issue is about adding these defaults. I propose the following as default 
> cluster policy:
> {code}
> # Each shard cannot have more than one replica on the same node if possible
> {"replica": "<2", "shard": "#EACH", "node": "#ANY", "strict":false}
> # Each collections replicas should be equally distributed amongst nodes
> {"replica": "#EQUAL", "node": "#ANY", "strict":false} 
> # All cores should be equally distributed amongst nodes
> {"cores": "#EQUAL", "node": "#ANY", "strict":false}
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12907) Define a well known system property called rack for autoscaling policies

2018-10-23 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-12907:


 Summary: Define a well known system property called rack for 
autoscaling policies
 Key: SOLR-12907
 URL: https://issues.apache.org/jira/browse/SOLR-12907
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Varun Thacker


I want to set up a rule to the effect of: each shard should have its 
replicas distributed equally amongst availability zones.

For achieving this today I can create a rule like this:
{code:java}
{"replica": "#EQUAL", "shard": "#EACH", "sysprop.az": "#EACH"}{code}
And then make sure that every Solr JVM starts up with a system property called 
"az".

Another user might call the same property "availability_zone", and for some it's 
just a different "rack".

All of them want to achieve the same goal of redundancy.

So if we had a well-known property called "rack", it would help standardize 
documentation, examples given out during talks, etc.
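For reference, the start-up side of the example above could look like this 
(assuming the stock bin/solr script's -a option for passing extra JVM arguments; 
the zone value is a placeholder):
{code}
bin/solr start -c -a "-Daz=us-east-1a"
{code}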



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12906) Fix all tests in org.apache.solr.search and begin a defense of them.

2018-10-23 Thread Mark Miller (JIRA)
Mark Miller created SOLR-12906:
--

 Summary: Fix all tests in org.apache.solr.search and begin a 
defense of them.
 Key: SOLR-12906
 URL: https://issues.apache.org/jira/browse/SOLR-12906
 Project: Solr
  Issue Type: Sub-task
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Tests
Reporter: Mark Miller
Assignee: Mark Miller






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8540) Geo3d quantization test failure for MAX/MIN encoding values

2018-10-23 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661266#comment-16661266
 ] 

Lucene/Solr QA commented on LUCENE-8540:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
23s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  1m 16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  1m 16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  1m 16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
16s{color} | {color:green} spatial3d in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}  8m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | LUCENE-8540 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12945219/LUCENE-8540.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene2-us-west.apache.org 4.4.0-112-generic #135-Ubuntu SMP 
Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-LUCENE-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 3e89b7a |
| ant | version: Apache Ant(TM) version 1.9.6 compiled on July 20 2018 |
| Default Java | 1.8.0_172 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/110/testReport/ |
| modules | C: lucene/spatial3d U: lucene/spatial3d |
| Console output | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/110/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Geo3d quantization test failure for MAX/MIN encoding values
> ---
>
> Key: LUCENE-8540
> URL: https://issues.apache.org/jira/browse/LUCENE-8540
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Ignacio Vera
>Priority: Major
> Attachments: LUCENE-8540.patch
>
>
> Here is a reproducible error:
> {code:java}
> 08:45:21[junit4] Suite: org.apache.lucene.spatial3d.TestGeo3DPoint
> 08:45:21[junit4] IGNOR/A 0.01s J1 | TestGeo3DPoint.testRandomBig
> 08:45:21[junit4]> Assumption #1: 'nightly' test group is disabled 
> (@Nightly())
> 08:45:21[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestGeo3DPoint -Dtests.method=testQuantization 
> -Dtests.seed=4CB20CF248F6211 -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=ga-IE -Dtests.timezone=America/Bogota -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
> 08:45:21[junit4] ERROR   0.20s J1 | TestGeo3DPoint.testQuantization <<<
> 08:45:21[junit4]> Throwable #1: java.lang.IllegalArgumentException: 
> value=-1.0011188543037526 is out-of-bounds (less than than WGS84's 
> -planetMax=-1.0011188539924791)
> 08:45:21[junit4]> at 
> __randomizedtesting.SeedInfo.seed([4CB20CF248F6211:32220FD9326E7F33]:0)
> 08:45:21[junit4]> at 
> org.apache.lucene.spatial3d.Geo3DUtil.encodeValue(Geo3DUtil.java:56)
> 08:45:21[junit4]> at 
> org.apache.lucene.spatial3d.TestGeo3DPoint.testQuantization(TestGeo3DPoint.java:1228)
> 08:45:21[junit4]> at java.lang.Thread.run(Thread.java:748)
> 08:45:21[junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): 
> {id=PostingsFormat(name=LuceneVarGapDocFreqInterval)}, 
> docValues:{id=DocValuesFormat(name=Asserting), 
> point=DocValuesFormat(name=Lucene70)}, maxPointsInLeafNode=659, 
> maxMBSortInHeap=6.225981846119071, sim=RandomSimilarity(queryNorm=false): {}, 
> locale=ga-IE, timezone=America/Bogota
> 08:45:21[junit4]   

[jira] [Updated] (SOLR-12885) BinaryResponseWriter (javabin format) should directly copy from Bytesref to output

2018-10-23 Thread Noble Paul (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-12885:
--
Description: 
The format in which bytes are stored in {{BytesRef}} and the javabin string 
format are the same. We don't need to convert string/text fields from 
{{BytesRef}} to String and back to UTF-8.

Currently a String/Text field is read and written out as follows:

{{lucene index (UTF-8 bytes) --> UTF-16 (char[]) --> new String() (a copy of the 
UTF-16 char[]) --> UTF-8 bytes (javabin format)}}

This does not add a new type to javabin. The value is encoded as a String in the 
serialized data; when it is deserialized, you get a String back.
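
For illustration only (this is not the attached patch, and the class/method 
names are made up), the sketch below contrasts the round trip described above 
with copying the {{BytesRef}}'s UTF-8 bytes straight to the output. A real 
javabin writer would still emit the type tag and length prefix before the bytes.

{code:java}
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

import org.apache.lucene.util.BytesRef;

public class JavabinCopySketch {

  // Current path: BytesRef (UTF-8) -> String (UTF-16 char[] copy) -> UTF-8 bytes again.
  static byte[] viaString(BytesRef ref) {
    String s = ref.utf8ToString();                 // decode + copy into UTF-16
    return s.getBytes(StandardCharsets.UTF_8);     // re-encode + copy back to UTF-8
  }

  // Proposed idea: copy the UTF-8 bytes held by the BytesRef directly to the output.
  static void direct(BytesRef ref, ByteArrayOutputStream out) {
    out.write(ref.bytes, ref.offset, ref.length);  // no intermediate String
  }
}
{code}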

  was:
The format format in which bytes are stored in {{BytesRef}} and the javabin 
string format are both the same. We don't need to convert the string/text 
fields from {{BytesRef}} to String and back to UTF8 

{{Now a String/Text field is read and written out as follows}}

{{luceneindex(UTF8 bytes) --> UTF16 (char[]) --> new String() a copy of UTF16 
char[] -->  UTF8bytes(javabin format)}}


> BinaryResponseWriter (javabin format) should directly copy from Bytesref to 
> output
> --
>
> Key: SOLR-12885
> URL: https://issues.apache.org/jira/browse/SOLR-12885
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Priority: Major
> Attachments: SOLR-12885.patch
>
>
> The format in which bytes are stored in {{BytesRef}} and the javabin 
> string format are both the same. We don't need to convert the string/text 
> fields from {{BytesRef}} to String and back to UTF8 
> {{Now a String/Text field is read and written out as follows}}
> {{luceneindex(UTF8 bytes) --> UTF16 (char[]) --> new String() a copy of UTF16 
> char[] -->  UTF8bytes(javabin format)}}
> This does not add a new type to javabin. It's encoded as String in the 
> serialized data. When it is deserialized, you get a String back






[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk-9) - Build # 4889 - Still Unstable!

2018-10-23 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4889/
Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

5 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.search.QueryEqualityTest

Error Message:
testParserCoverage was run w/o any other method explicitly testing qparser: 
min_hash

Stack Trace:
java.lang.AssertionError: testParserCoverage was run w/o any other method 
explicitly testing qparser: min_hash
at __randomizedtesting.SeedInfo.seed([6906DFBE0F2A6CD2]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.search.QueryEqualityTest.afterClassParserCoverageTest(QueryEqualityTest.java:59)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:898)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  junit.framework.TestSuite.org.apache.solr.search.QueryEqualityTest

Error Message:
testParserCoverage was run w/o any other method explicitly testing qparser: 
min_hash

Stack Trace:
java.lang.AssertionError: testParserCoverage was run w/o any other method 
explicitly testing qparser: min_hash
at __randomizedtesting.SeedInfo.seed([6906DFBE0F2A6CD2]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.search.QueryEqualityTest.afterClassParserCoverageTest(QueryEqualityTest.java:59)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:898)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Comment Edited] (LUCENE-8531) QueryBuilder hard-codes inOrder=true for generated sloppy span near queries

2018-10-23 Thread Michael Gibney (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661087#comment-16661087
 ] 

Michael Gibney edited comment on LUCENE-8531 at 10/23/18 8:34 PM:
--

> I think we should keep the default behavior as is. You can still override 
> QueryBuilder#analyzeGraphPhrase to apply a different logic on your side if 
> you want.

Certainly agreed the default behavior should be left as-is. I'm content with 
the flexibility to override, but my suggestion was based on a sense that the 
desire to support {{inOrder=true}} could be a pretty common use case.

The API does specify "phrase", but with a lower-case "p", does this necessarily 
imply that exclusively {{PhraseQuery}} semantics _should_ be supported? It's 
the de facto case that {{PhraseQuery}} semantics _have been_ supported, so it 
definitely makes sense for that to continue to be the default – but I don't 
think it'd be unreasonable to add configurable stock support for 
{{inOrder=true}}. If such support were to be added, {{QueryBuilder}} would seem 
like a logical place to do it, and since the logic necessary to implement is 
already here (in {{analyzeGraphPhrase}}), it should be a trivial addition.

I'm thinking something along the lines of splitting the {{SpanNearQuery}} part 
of {{analyzeGraphPhrase()}} (everything after the "{{if (phraseSlop > 0)}}" 
shortcircuit) into its own method. Even if split into a protected method, this 
would allow any override of {{analyzeGraphPhrase()}} to more cleanly leverage 
the existing logic for building {{SpanNearQuery}}.

I'm just explaining my thinking here; I guess the decision ultimately depends 
on how general a use case folks consider {{inOrder=true}} to be.


was (Author: mgibney):
> I think we should keep the default behavior as is. You can still override 
> QueryBuilder#analyzeGraphPhrase to apply a different logic on your side if 
> you want.

Certainly agreed the default behavior should be left as-is. I'm content with 
the flexibility to override, but my suggestion was based on a sense that the 
desire to support {{inOrder=true}} could be a pretty common use case.

The API does specify "phrase", but with a lower-case "p", does this necessarily 
imply that exclusively {{PhraseQuery}} semantics _should_ be supported? It's 
the de facto case that {{PhraseQuery}} semantics _have been_ supported, so it 
definitely makes sense for that to continue to be the default – but I don't 
think it'd be unreasonable to add configurable stock support for 
{{inOrder=true}}. If such support were to be added, {{QueryBuilder}} would seem 
like a logical place to do it, and since the logic necessary to implement is 
already here (in {{analyzeGraphPhrase}}), it should be a trivial addition.

I'm thinking something along the lines of splitting the {{SpanNearQuery}} part 
of {{analyzeGraphPhrase (}}everything after the "{{if (phraseSlop > 0)}}" 
shortcircuit) into its own method. Even if split into a protected method, this 
would allow any override of {{analyzeGraphPhrase}} to more cleanly leverage the 
existing logic for building {{SpanNearQuery}}.

I'm just explaining my thinking here; I guess the decision ultimately depends 
on how general a use case folks consider {{inOrder=true}} to be.

> QueryBuilder hard-codes inOrder=true for generated sloppy span near queries
> ---
>
> Key: LUCENE-8531
> URL: https://issues.apache.org/jira/browse/LUCENE-8531
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Major
> Fix For: 7.6, master (8.0)
>
> Attachments: LUCENE-8531.patch
>
>
> QueryBuilder.analyzeGraphPhrase() generates SpanNearQuery-s with passed-in 
> phraseSlop, but hard-codes inOrder ctor param as true.
> Before multi-term synonym support and graph token streams introduced the 
> possibility of generating SpanNearQuery-s, QueryBuilder generated 
> (Multi)PhraseQuery-s, which always interpret slop as allowing reordering 
> edits.  Solr's eDismax query parser generates phrase queries when its 
> pf/pf2/pf3 params are specified, and when multi-term synonyms are used with a 
> graph-aware synonym filter, SpanNearQuery-s are generated that require 
> clauses to be in order; unlike with (Multi)PhraseQuery-s, reordering edits 
> are not allowed, so this is a kind of regression.  See SOLR-12243 for edismax 
> pf/pf2/pf3 context.  (Note that the patch on SOLR-12243 also addresses 
> another problem that blocks eDismax from generating queries *at all* under 
> the above-described circumstances.)
> I propose adding a new analyzeGraphPhrase() method that allows configuration 
> of inOrder, which would allow eDismax to specify inOrder=false.  

[JENKINS] Lucene-Solr-BadApples-Tests-7.x - Build # 195 - Still Unstable

2018-10-23 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/195/

4 tests failed.
FAILED:  org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:41483/forceleader_test_collection

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:41483/forceleader_test_collection
at 
__randomizedtesting.SeedInfo.seed([9698093204B43D8C:700F3DF23D36C4ED]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1260)
at 
org.apache.solr.cloud.HttpPartitionTest.realTimeGetDocId(HttpPartitionTest.java:623)
at 
org.apache.solr.cloud.HttpPartitionTest.assertDocExists(HttpPartitionTest.java:608)
at 
org.apache.solr.cloud.HttpPartitionTest.assertDocsExistInAllReplicas(HttpPartitionTest.java:555)
at 
org.apache.solr.cloud.ForceLeaderTest.bringBackOldLeaderAndSendDoc(ForceLeaderTest.java:408)
at 
org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader(ForceLeaderTest.java:288)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1010)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_172) - Build # 2966 - Unstable!

2018-10-23 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2966/
Java: 64bit/jdk1.8.0_172 -XX:-UseCompressedOops -XX:+UseG1GC

13 tests failed.
FAILED:  org.apache.solr.cloud.api.collections.TestHdfsCloudBackupRestore.test

Error Message:
Node 127.0.0.1:33117_solr has 7 replicas. Expected num replicas : 6. state:  
DocCollection(hdfsbackuprestore_restored//collections/hdfsbackuprestore_restored/state.json/42)={
   "pullReplicas":1,   "replicationFactor":2,   "shards":{ "shard2":{   
"range":"0-7fff",   "state":"active",   "replicas":{ 
"core_node122":{   
"core":"hdfsbackuprestore_restored_shard2_replica_n121",   
"base_url":"http://127.0.0.1:33117/solr;,   
"node_name":"127.0.0.1:33117_solr",   "state":"active",   
"type":"NRT",   "force_set_state":"false",   "leader":"true"},  
   "core_node128":{   
"core":"hdfsbackuprestore_restored_shard2_replica_n127",   
"base_url":"http://127.0.0.1:33117/solr;,   
"node_name":"127.0.0.1:33117_solr",   "state":"active",   
"type":"NRT",   "force_set_state":"false"}, "core_node130":{
   "core":"hdfsbackuprestore_restored_shard2_replica_t129",   
"base_url":"http://127.0.0.1:33117/solr;,   
"node_name":"127.0.0.1:33117_solr",   "state":"active",   
"type":"TLOG",   "force_set_state":"false"}, "core_node132":{   
"core":"hdfsbackuprestore_restored_shard2_replica_p131",   
"base_url":"http://127.0.0.1:41205/solr;,   
"node_name":"127.0.0.1:41205_solr",   "state":"active",   
"type":"PULL",   "force_set_state":"false"}},   
"stateTimestamp":"1540323164928001263"}, "shard1_1":{   
"range":"c000-",   "state":"active",   "replicas":{ 
"core_node124":{   
"core":"hdfsbackuprestore_restored_shard1_1_replica_n123",   
"base_url":"http://127.0.0.1:41205/solr;,   
"node_name":"127.0.0.1:41205_solr",   "state":"active",   
"type":"NRT",   "force_set_state":"false",   "leader":"true"},  
   "core_node134":{   
"core":"hdfsbackuprestore_restored_shard1_1_replica_n133",   
"base_url":"http://127.0.0.1:33117/solr;,   
"node_name":"127.0.0.1:33117_solr",   "state":"active",   
"type":"NRT",   "force_set_state":"false"}, "core_node136":{
   "core":"hdfsbackuprestore_restored_shard1_1_replica_t135",   
"base_url":"http://127.0.0.1:33117/solr;,   
"node_name":"127.0.0.1:33117_solr",   "state":"active",   
"type":"TLOG",   "force_set_state":"false"}, "core_node138":{   
"core":"hdfsbackuprestore_restored_shard1_1_replica_p137",   
"base_url":"http://127.0.0.1:41205/solr;,   
"node_name":"127.0.0.1:41205_solr",   "state":"active",   
"type":"PULL",   "force_set_state":"false"}},   
"stateTimestamp":"1540323164928040466"}, "shard1_0":{   
"range":"8000-bfff",   "state":"active",   "replicas":{ 
"core_node126":{   
"core":"hdfsbackuprestore_restored_shard1_0_replica_n125",   
"base_url":"http://127.0.0.1:41205/solr;,   
"node_name":"127.0.0.1:41205_solr",   "state":"active",   
"type":"NRT",   "force_set_state":"false",   "leader":"true"},  
   "core_node140":{   
"core":"hdfsbackuprestore_restored_shard1_0_replica_n139",   
"base_url":"http://127.0.0.1:33117/solr;,   
"node_name":"127.0.0.1:33117_solr",   "state":"active",   
"type":"NRT",   "force_set_state":"false"}, "core_node142":{
   "core":"hdfsbackuprestore_restored_shard1_0_replica_t141",   
"base_url":"http://127.0.0.1:33117/solr;,   
"node_name":"127.0.0.1:33117_solr",   "state":"active",   
"type":"TLOG",   "force_set_state":"false"}, "core_node144":{   
"core":"hdfsbackuprestore_restored_shard1_0_replica_p143",   
"base_url":"http://127.0.0.1:41205/solr;,   
"node_name":"127.0.0.1:41205_solr",   "state":"active",   
"type":"PULL",   "force_set_state":"false"}},   
"stateTimestamp":"1540323164928067157"}},   "router":{"name":"compositeId"},   
"maxShardsPerNode":"-1",   "autoAddReplicas":"true",   "nrtReplicas":2,   
"tlogReplicas":1}

Stack Trace:
java.lang.AssertionError: Node 127.0.0.1:33117_solr has 7 replicas. Expected 
num replicas : 6. state: 
DocCollection(hdfsbackuprestore_restored//collections/hdfsbackuprestore_restored/state.json/42)={
  "pullReplicas":1,
  "replicationFactor":2,
  "shards":{
"shard2":{
  "range":"0-7fff",
  "state":"active",
  "replicas":{
"core_node122":{
  

[jira] [Assigned] (SOLR-12497) Add ref guide docs for Hadoop Credential Provider based SSL/TLS store password source.

2018-10-23 Thread Cassandra Targett (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett reassigned SOLR-12497:


Assignee: Cassandra Targett

> Add ref guide docs for Hadoop Credential Provider based SSL/TLS store 
> password source.
> --
>
> Key: SOLR-12497
> URL: https://issues.apache.org/jira/browse/SOLR-12497
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 7.4
>Reporter: Mano Kovacs
>Assignee: Cassandra Targett
>Priority: Minor
> Attachments: SOLR-12497.patch
>
>
> Document configuration added in SOLR-10783.






[JENKINS-EA] Lucene-Solr-7.x-Windows (64bit/jdk-12-ea+12) - Build # 848 - Still Unstable!

2018-10-23 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/848/
Java: 64bit/jdk-12-ea+12 -XX:-UseCompressedOops -XX:+UseSerialGC

14 tests failed.
FAILED:  
org.apache.solr.ltr.store.rest.TestModelManagerPersistence.testFilePersistence

Error Message:
Software caused connection abort: recv failed

Stack Trace:
javax.net.ssl.SSLProtocolException: Software caused connection abort: recv 
failed
at 
__randomizedtesting.SeedInfo.seed([D2BDF8A6CAEB9BA4:F01C9CDB6F87FDE1]:0)
at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:126)
at 
java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:321)
at 
java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:264)
at 
java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:259)
at 
java.base/sun.security.ssl.SSLSocketImpl.handleException(SSLSocketImpl.java:1314)
at 
java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:839)
at 
org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
at 
org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:153)
at 
org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:282)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:138)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259)
at 
org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163)
at 
org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:165)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
at 
org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272)
at 
org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
at 
org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:111)
at 
org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at 
org.apache.solr.util.RestTestHarness.getResponse(RestTestHarness.java:215)
at org.apache.solr.util.RestTestHarness.query(RestTestHarness.java:107)
at org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:226)
at org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:192)
at 
org.apache.solr.ltr.store.rest.TestModelManagerPersistence.testFilePersistence(TestModelManagerPersistence.java:168)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 

[jira] [Updated] (SOLR-12905) reproducible MultiSolrCloudTestCaseTest test failure

2018-10-23 Thread Christine Poerschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-12905:
---
Attachment: SOLR-12905.patch

> reproducible MultiSolrCloudTestCaseTest test failure
> 
>
> Key: SOLR-12905
> URL: https://issues.apache.org/jira/browse/SOLR-12905
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-12905.patch
>
>
> We've seen a few of these in Jenkins via the dev list 
> https://lists.apache.org/list.html?dev@lucene.apache.org:lte=1y:%22duplicate%20clusterId%22
>  e.g.
> {code}
> FAILED: 
> junit.framework.TestSuite.org.apache.solr.cloud.MultiSolrCloudTestCaseTest  
> Error Message: duplicate clusterId cloud1  Stack Trace: 
> java.lang.AssertionError: duplicate clusterId cloud1 at 
> __randomizedtesting.SeedInfo.seed([6DADDAF691F08EF7]:0) at 
> org.junit.Assert.fail(Assert.java:93) at 
> org.junit.Assert.assertTrue(Assert.java:43) at 
> org.junit.Assert.assertFalse(Assert.java:68) at 
> org.apache.solr.cloud.MultiSolrCloudTestCase.doSetupClusters(MultiSolrCloudTestCase.java:93)
>  at 
> org.apache.solr.cloud.MultiSolrCloudTestCaseTest.setupClusters(MultiSolrCloudTestCaseTest.java:53)
> ...
> {code}
> With a bit of digging I was able to reliably reproduce it by using 
> {{-Dtests.dups=N}} (which normally runs in multiple JVMs in parallel) 
> together with the {{-Dtests.jvms=1}} constraint so that the tests actually run 
> sequentially in one JVM, i.e. altogether:
> {code}
> ant test -Dtests.dups=10 -Dtests.jvms=1 -Dtestcase=MultiSolrCloudTestCaseTest
> {code}
> The fix is simple, i.e. the static {{clusterId2collection}} variable needs to 
> be cleared in @AfterClass, as someone ([~janhoy]?) already mentioned 
> elsewhere, I think.






[jira] [Created] (SOLR-12905) reproducible MultiSolrCloudTestCaseTest test failure

2018-10-23 Thread Christine Poerschke (JIRA)
Christine Poerschke created SOLR-12905:
--

 Summary: reproducible MultiSolrCloudTestCaseTest test failure
 Key: SOLR-12905
 URL: https://issues.apache.org/jira/browse/SOLR-12905
 Project: Solr
  Issue Type: Test
Reporter: Christine Poerschke
Assignee: Christine Poerschke


We've seen a few of these in Jenkins via the dev list 
https://lists.apache.org/list.html?dev@lucene.apache.org:lte=1y:%22duplicate%20clusterId%22
 e.g.
{code}
FAILED: 
junit.framework.TestSuite.org.apache.solr.cloud.MultiSolrCloudTestCaseTest  
Error Message: duplicate clusterId cloud1  Stack Trace: 
java.lang.AssertionError: duplicate clusterId cloud1 at 
__randomizedtesting.SeedInfo.seed([6DADDAF691F08EF7]:0) at 
org.junit.Assert.fail(Assert.java:93) at 
org.junit.Assert.assertTrue(Assert.java:43) at 
org.junit.Assert.assertFalse(Assert.java:68) at 
org.apache.solr.cloud.MultiSolrCloudTestCase.doSetupClusters(MultiSolrCloudTestCase.java:93)
 at 
org.apache.solr.cloud.MultiSolrCloudTestCaseTest.setupClusters(MultiSolrCloudTestCaseTest.java:53)
...
{code}

With a bit of digging I was able to reliably reproduce it by using 
{{-Dtests.dups=N}} (which normally runs in multiple JVMs in parallel) together 
with the {{-Dtests.jvms=1}} constraint so that the tests actually run sequentially 
in one JVM, i.e. altogether:

{code}
ant test -Dtests.dups=10 -Dtests.jvms=1 -Dtestcase=MultiSolrCloudTestCaseTest
{code}

The fix is simple, i.e. the static {{clusterId2collection}} variable needs to be 
cleared in @AfterClass, as someone ([~janhoy]?) already mentioned elsewhere, I think.
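
For illustration, a minimal sketch of the fix described above, assuming 
{{clusterId2collection}} is a static {{Map}} (the exact field type and class 
layout in {{MultiSolrCloudTestCase}} may differ):

{code:java}
import java.util.HashMap;
import java.util.Map;

import org.junit.AfterClass;

public class MultiSolrCloudTestCaseSketch {

  // Assumption: a static registry like this is what accumulates the clusterId state.
  protected static Map<String, String> clusterId2collection = new HashMap<>();

  @AfterClass
  public static void clearClusterIds() {
    // Static state must not leak into the next test class run in the same JVM
    // (e.g. with -Dtests.dups=N -Dtests.jvms=1), or the "duplicate clusterId"
    // assertion fires.
    clusterId2collection.clear();
  }
}
{code}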







[jira] [Commented] (LUCENE-8531) QueryBuilder hard-codes inOrder=true for generated sloppy span near queries

2018-10-23 Thread Michael Gibney (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661087#comment-16661087
 ] 

Michael Gibney commented on LUCENE-8531:


> I think we should keep the default behavior as is. You can still override 
> QueryBuilder#analyzeGraphPhrase to apply a different logic on your side if 
> you want.

Certainly agreed the default behavior should be left as-is. I'm content with 
the flexibility to override, but my suggestion was based on a sense that the 
desire to support {{inOrder=true}} could be a pretty common use case.

The API does specify "phrase", but with a lower-case "p", does this necessarily 
imply that exclusively {{PhraseQuery}} semantics _should_ be supported? It's 
the de facto case that {{PhraseQuery}} semantics _have been_ supported, so it 
definitely makes sense for that to continue to be the default – but I don't 
think it'd be unreasonable to add configurable stock support for 
{{inOrder=true}}. If such support were to be added, {{QueryBuilder}} would seem 
like a logical place to do it, and since the logic necessary to implement is 
already here (in {{analyzeGraphPhrase}}), it should be a trivial addition.

I'm thinking something along the lines of splitting the {{SpanNearQuery}} part 
of {{analyzeGraphPhrase()}} (everything after the "{{if (phraseSlop > 0)}}" 
shortcircuit) into its own method. Even if split into a protected method, this 
would allow any override of {{analyzeGraphPhrase}} to more cleanly leverage the 
existing logic for building {{SpanNearQuery}}.

I'm just explaining my thinking here; I guess the decision ultimately depends 
on how general a use case folks consider {{inOrder=true}} to be.
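
For readers following along, the {{inOrder}} knob being discussed is the second 
argument of {{SpanNearQuery.Builder}}. A toy sketch (not code from 
{{QueryBuilder}}; the terms are made up):

{code:java}
import org.apache.lucene.index.Term;
import org.apache.lucene.search.spans.SpanNearQuery;
import org.apache.lucene.search.spans.SpanTermQuery;

public class InOrderSketch {
  // Same clauses and slop; only the second Builder argument (inOrder) differs.
  public static SpanNearQuery build(String field, int slop, boolean inOrder) {
    return new SpanNearQuery.Builder(field, inOrder)
        .addClause(new SpanTermQuery(new Term(field, "quick")))
        .addClause(new SpanTermQuery(new Term(field, "fox")))
        .setSlop(slop)
        .build();
  }
}
{code}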

> QueryBuilder hard-codes inOrder=true for generated sloppy span near queries
> ---
>
> Key: LUCENE-8531
> URL: https://issues.apache.org/jira/browse/LUCENE-8531
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Major
> Fix For: 7.6, master (8.0)
>
> Attachments: LUCENE-8531.patch
>
>
> QueryBuilder.analyzeGraphPhrase() generates SpanNearQuery-s with passed-in 
> phraseSlop, but hard-codes inOrder ctor param as true.
> Before multi-term synonym support and graph token streams introduced the 
> possibility of generating SpanNearQuery-s, QueryBuilder generated 
> (Multi)PhraseQuery-s, which always interpret slop as allowing reordering 
> edits.  Solr's eDismax query parser generates phrase queries when its 
> pf/pf2/pf3 params are specified, and when multi-term synonyms are used with a 
> graph-aware synonym filter, SpanNearQuery-s are generated that require 
> clauses to be in order; unlike with (Multi)PhraseQuery-s, reordering edits 
> are not allowed, so this is a kind of regression.  See SOLR-12243 for edismax 
> pf/pf2/pf3 context.  (Note that the patch on SOLR-12243 also addresses 
> another problem that blocks eDismax from generating queries *at all* under 
> the above-described circumstances.)
> I propose adding a new analyzeGraphPhrase() method that allows configuration 
> of inOrder, which would allow eDismax to specify inOrder=false.  The existing 
> analyzeGraphPhrase() method would remain with its hard-coded inOrder=true, so 
> existing client behavior would remain unchanged.






[jira] [Updated] (SOLR-12895) SurroundQParserPlugin support for UnifiedHighlighter

2018-10-23 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-12895:

Attachment: SOLR-12895.patch

> SurroundQParserPlugin support for UnifiedHighlighter
> 
>
> Key: SOLR-12895
> URL: https://issues.apache.org/jira/browse/SOLR-12895
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Attachments: SOLR-12895.patch, SOLR-12895.patch
>
>
> The "surround" QParser doesn't work with the UnififedHighlighter -- 
> LUCENE-8492.  However I think we can overcome this by having Solr's QParser 
> extend getHighlightQuery and rewrite itself.






[jira] [Commented] (SOLR-12895) SurroundQParserPlugin support for UnifiedHighlighter

2018-10-23 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661076#comment-16661076
 ] 

David Smiley commented on SOLR-12895:
-

Patch attached.  Hmmm; I see TestSurroundQueryParser tests highlighting.  I 
should enhance the test there instead of adding ones to highlighter tests.  
Both places are reasonable but the parser's test makes more sense to me.
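
A rough sketch of the idea from the issue description ("extend getHighlightQuery 
and rewrite itself"). This is not the attached patch, the class name is made up, 
and the actual surround parsing is omitted; it only illustrates the hook involved:

{code:java}
import java.io.IOException;

import org.apache.lucene.search.Query;
import org.apache.solr.common.params.SolrParams;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.search.QParser;
import org.apache.solr.search.SyntaxError;

public class RewritingQParserSketch extends QParser {

  public RewritingQParserSketch(String qstr, SolrParams localParams,
                                SolrParams params, SolrQueryRequest req) {
    super(qstr, localParams, params, req);
  }

  @Override
  public Query parse() throws SyntaxError {
    // The real surround parsing is omitted from this sketch.
    throw new UnsupportedOperationException("sketch only");
  }

  @Override
  public Query getHighlightQuery() throws SyntaxError {
    try {
      // Rewrite so the UnifiedHighlighter sees concrete term/span queries
      // instead of the unrewritten surround query.
      return getQuery().rewrite(req.getSearcher().getIndexReader());
    } catch (IOException e) {
      throw new RuntimeException(e);
    }
  }
}
{code}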

> SurroundQParserPlugin support for UnifiedHighlighter
> 
>
> Key: SOLR-12895
> URL: https://issues.apache.org/jira/browse/SOLR-12895
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Attachments: SOLR-12895.patch
>
>
> The "surround" QParser doesn't work with the UnififedHighlighter -- 
> LUCENE-8492.  However I think we can overcome this by having Solr's QParser 
> extend getHighlightQuery and rewrite itself.






[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_172) - Build # 23081 - Still Unstable!

2018-10-23 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23081/
Java: 64bit/jdk1.8.0_172 -XX:-UseCompressedOops -XX:+UseParallelGC

8 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.DocValuesNotIndexedTest

Error Message:
Error from server at https://127.0.0.1:38513/solr: create the collection time 
out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:38513/solr: create the collection time out:180s
at __randomizedtesting.SeedInfo.seed([BEF8FF38D864EC98]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.DocValuesNotIndexedTest.createCluster(DocValuesNotIndexedTest.java:85)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestSimPolicyCloud.testCreateCollectionAddReplica

Error Message:
Timeout waiting for collection to become active Live Nodes: 
[127.0.0.1:10001_solr, 127.0.0.1:10004_solr, 127.0.0.1:1_solr, 
127.0.0.1:10002_solr, 127.0.0.1:10003_solr] Last available state: 
DocCollection(testCreateCollectionAddReplica//clusterstate.json/50)={   
"replicationFactor":"1",   "pullReplicas":"0",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false",   "nrtReplicas":"1",   "tlogReplicas":"0",   
"autoCreated":"true",   "policy":"c1",   "shards":{"shard1":{   
"replicas":{"core_node1":{   
"core":"testCreateCollectionAddReplica_shard1_replica_n1",   
"SEARCHER.searcher.maxDoc":0,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":10240,   

[jira] [Updated] (SOLR-12895) SurroundQParserPlugin support for UnifiedHighlighter

2018-10-23 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-12895:

Attachment: SOLR-12895.patch

> SurroundQParserPlugin support for UnifiedHighlighter
> 
>
> Key: SOLR-12895
> URL: https://issues.apache.org/jira/browse/SOLR-12895
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Attachments: SOLR-12895.patch
>
>
> The "surround" QParser doesn't work with the UnififedHighlighter -- 
> LUCENE-8492.  However I think we can overcome this by having Solr's QParser 
> extend getHighlightQuery and rewrite itself.






[jira] [Commented] (LUCENE-8531) QueryBuilder hard-codes inOrder=true for generated sloppy span near queries

2018-10-23 Thread Steve Rowe (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661054#comment-16661054
 ] 

Steve Rowe commented on LUCENE-8531:


Thanks for the explanation [~jim.ferenczi].


> QueryBuilder hard-codes inOrder=true for generated sloppy span near queries
> ---
>
> Key: LUCENE-8531
> URL: https://issues.apache.org/jira/browse/LUCENE-8531
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Major
> Fix For: 7.6, master (8.0)
>
> Attachments: LUCENE-8531.patch
>
>
> QueryBuilder.analyzeGraphPhrase() generates SpanNearQuery-s with passed-in 
> phraseSlop, but hard-codes inOrder ctor param as true.
> Before multi-term synonym support and graph token streams introduced the 
> possibility of generating SpanNearQuery-s, QueryBuilder generated 
> (Multi)PhraseQuery-s, which always interpret slop as allowing reordering 
> edits.  Solr's eDismax query parser generates phrase queries when its 
> pf/pf2/pf3 params are specified, and when multi-term synonyms are used with a 
> graph-aware synonym filter, SpanNearQuery-s are generated that require 
> clauses to be in order; unlike with (Multi)PhraseQuery-s, reordering edits 
> are not allowed, so this is a kind of regression.  See SOLR-12243 for edismax 
> pf/pf2/pf3 context.  (Note that the patch on SOLR-12243 also addresses 
> another problem that blocks eDismax from generating queries *at all* under 
> the above-described circumstances.)
> I propose adding a new analyzeGraphPhrase() method that allows configuration 
> of inOrder, which would allow eDismax to specify inOrder=false.  The existing 
> analyzeGraphPhrase() method would remain with its hard-coded inOrder=true, so 
> existing client behavior would remain unchanged.






[jira] [Commented] (SOLR-12754) Solr UnifiedHighlighter support flag WEIGHT_MATCHES

2018-10-23 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661023#comment-16661023
 ] 

ASF subversion and git services commented on SOLR-12754:


Commit ffabbaf1f2a34a29dd9416cfd84fbfe93b7ad227 in lucene-solr's branch 
refs/heads/branch_7x from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ffabbaf ]

SOLR-12754: hl.weightMatches should default to false in 7x.


> Solr UnifiedHighlighter support flag WEIGHT_MATCHES
> ---
>
> Key: SOLR-12754
> URL: https://issues.apache.org/jira/browse/SOLR-12754
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: highlighter
>Reporter: David Smiley
>Priority: Major
> Attachments: SOLR-12754.patch, SOLR-12754.patch
>
>
> Solr should support the WEIGHT_MATCHES flag of the UnifiedHighlighter.  It 
> supports best/perfect highlighting accuracy, and nicer phrase snippets.
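
For anyone trying the new parameter from SolrJ, a minimal sketch (the collection 
and field names are made up; per the commits above the parameter defaults to true 
on master/8.x and to false on 7.x, so 7.x clients opt in explicitly):

{code:java}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class WeightMatchesSketch {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr/techproducts").build()) {
      SolrQuery q = new SolrQuery("\"quick fox\"");
      q.setHighlight(true);
      q.addHighlightField("name");
      q.set("hl.method", "unified");
      q.set("hl.weightMatches", true);  // explicit opt-in on 7.x; already the default on master/8.x
      client.query(q);
    }
  }
}
{code}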






[jira] [Commented] (SOLR-12754) Solr UnifiedHighlighter support flag WEIGHT_MATCHES

2018-10-23 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661022#comment-16661022
 ] 

ASF subversion and git services commented on SOLR-12754:


Commit 1dd6ee520b48600aabc2b6dfaab5639c5d7db84d in lucene-solr's branch 
refs/heads/branch_7x from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1dd6ee5 ]

SOLR-12754: New hl.weightMatches for UnifiedHighlighter WEIGHT_MATCHES
(defaults to true in master/8)

(cherry picked from commit 3e89b7a771639aacaed6c21406624a2b27231dd7)

# Conflicts:
#   solr/CHANGES.txt


> Solr UnifiedHighlighter support flag WEIGHT_MATCHES
> ---
>
> Key: SOLR-12754
> URL: https://issues.apache.org/jira/browse/SOLR-12754
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: highlighter
>Reporter: David Smiley
>Priority: Major
> Attachments: SOLR-12754.patch, SOLR-12754.patch
>
>
> Solr should support the WEIGHT_MATCHES flag of the UnifiedHighlighter.  It 
> supports best/perfect highlighting accuracy, and nicer phrase snippets.






[jira] [Commented] (LUCENE-8531) QueryBuilder hard-codes inOrder=true for generated sloppy span near queries

2018-10-23 Thread Jim Ferenczi (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661015#comment-16661015
 ] 

Jim Ferenczi commented on LUCENE-8531:
--

> Can you explain, or point to docs that explain what you mean?

I am referring to the javadoc of PhraseQuery#getSlop where it is explained how 
unordered terms could match:
{noformat}
* The slop is an edit distance between respective positions of terms as
* defined in this {@link PhraseQuery} and the positions of terms in a
* document.
*
* For instance, when searching for {@code "quick fox"}, it is expected that
* the difference between the positions of {@code fox} and {@code quick} is 1.
* So {@code "a quick brown fox"} would be at an edit distance of 1 since the
* difference of the positions of {@code fox} and {@code quick} is 2.
* Similarly, {@code "the fox is quick"} would be at an edit distance of 3
* since the difference of the positions of {@code fox} and {@code quick} is -2.
* The slop defines the maximum edit distance for a document to match.
*
* More exact matches are scored higher than sloppier matches, thus search
* results are sorted by exactness.
*/{noformat}
This is different from an unordered span near query, which does not take the 
order of the terms in the query into account.

This is also what is explained in the description of the issue:
{noformat}
unlike with (Multi)PhraseQuery-s, reordering edits are not allowed, so this is 
a kind of regression. {noformat}
 

> That said, there surely are potential use cases for the {{inOrder=true}} 
> behavior, which is supported by {{SpanNearQuery}} but not by 
> ({{Multi)PhraseQuery}}. Would it be worth opening a new issue to consider 
> introducing the ability to specifically request construction of 
> {{SpanNearQuery}} and/or {{inOrder=true}}behavior? The work that went into 
> building {{SpanNearQuery}} for phrases (commit 
> [96e8f0a0afe|https://github.com/apache/lucene-solr/commit/96e8f0a0afeb68e2d07ec1dda362894f0b94333d])
>  is still useful and relevant, even if the result isn't backward-compatible 
> for the case where {{slop > 0}}.

 

I think it's something specific that can be handled in a custom QueryBuilder. 
The API specifically mentions that it builds a phrase so the default 
implementation should follow the semantics of a PhraseQuery. If we can optimize 
with a SpanNearQuery instead, we need to ensure that it matches the same 
documents as the multi-phrase-query approach. That's not the case when slop 
is greater than 0, so I think we should keep the default behavior as is. You can 
still override QueryBuilder#analyzeGraphPhrase to apply a different logic on 
your side if you want.
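
As a purely illustrative example of such an override, the sketch below assumes 
the protected {{QueryBuilder#analyzeGraphPhrase(TokenStream, String, int)}} hook 
mentioned above and simply rebuilds any {{SpanNearQuery}} the default logic 
produces as an unordered one:

{code:java}
import java.io.IOException;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.spans.SpanNearQuery;
import org.apache.lucene.search.spans.SpanQuery;
import org.apache.lucene.util.QueryBuilder;

public class UnorderedGraphPhraseQueryBuilder extends QueryBuilder {

  public UnorderedGraphPhraseQueryBuilder(Analyzer analyzer) {
    super(analyzer);
  }

  @Override
  protected Query analyzeGraphPhrase(TokenStream source, String field, int phraseSlop)
      throws IOException {
    Query q = super.analyzeGraphPhrase(source, field, phraseSlop);
    if (q instanceof SpanNearQuery) {
      // Rebuild with the same clauses and slop, but inOrder=false.
      SpanNearQuery near = (SpanNearQuery) q;
      SpanNearQuery.Builder builder = new SpanNearQuery.Builder(field, false);
      for (SpanQuery clause : near.getClauses()) {
        builder.addClause(clause);
      }
      return builder.setSlop(near.getSlop()).build();
    }
    return q;
  }
}
{code}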

 

> QueryBuilder hard-codes inOrder=true for generated sloppy span near queries
> ---
>
> Key: LUCENE-8531
> URL: https://issues.apache.org/jira/browse/LUCENE-8531
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Major
> Fix For: 7.6, master (8.0)
>
> Attachments: LUCENE-8531.patch
>
>
> QueryBuilder.analyzeGraphPhrase() generates SpanNearQuery-s with passed-in 
> phraseSlop, but hard-codes inOrder ctor param as true.
> Before multi-term synonym support and graph token streams introduced the 
> possibility of generating SpanNearQuery-s, QueryBuilder generated 
> (Multi)PhraseQuery-s, which always interpret slop as allowing reordering 
> edits.  Solr's eDismax query parser generates phrase queries when its 
> pf/pf2/pf3 params are specified, and when multi-term synonyms are used with a 
> graph-aware synonym filter, SpanNearQuery-s are generated that require 
> clauses to be in order; unlike with (Multi)PhraseQuery-s, reordering edits 
> are not allowed, so this is a kind of regression.  See SOLR-12243 for edismax 
> pf/pf2/pf3 context.  (Note that the patch on SOLR-12243 also addresses 
> another problem that blocks eDismax from generating queries *at all* under 
> the above-described circumstances.)
> I propose adding a new analyzeGraphPhrase() method that allows configuration 
> of inOrder, which would allow eDismax to specify inOrder=false.  The existing 
> analyzeGraphPhrase() method would remain with its hard-coded inOrder=true, so 
> existing client behavior would remain unchanged.






[jira] [Commented] (SOLR-12754) Solr UnifiedHighlighter support flag WEIGHT_MATCHES

2018-10-23 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661011#comment-16661011
 ] 

ASF subversion and git services commented on SOLR-12754:


Commit 3e89b7a771639aacaed6c21406624a2b27231dd7 in lucene-solr's branch 
refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3e89b7a ]

SOLR-12754: New hl.weightMatches for UnifiedHighlighter WEIGHT_MATCHES
(defaults to true in master/8)


> Solr UnifiedHighlighter support flag WEIGHT_MATCHES
> ---
>
> Key: SOLR-12754
> URL: https://issues.apache.org/jira/browse/SOLR-12754
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: highlighter
>Reporter: David Smiley
>Priority: Major
> Attachments: SOLR-12754.patch, SOLR-12754.patch
>
>
> Solr should support the WEIGHT_MATCHES flag of the UnifiedHighlighter.  It 
> supports best/perfect highlighting accuracy, and nicer phrase snippets.






[jira] [Commented] (SOLR-12661) Request with fl=[elevated] returns NullPointerException

2018-10-23 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16660990#comment-16660990
 ] 

David Smiley commented on SOLR-12661:
-

Judging from the line number, it appears "obj" was null, and obj is the 
uniqueKey field value.  Thus the document had no uniqueKey.  The 
QueryElevationComponent requires a uniqueKey.

> Request with fl=[elevated] returns NullPointerException
> ---
>
> Key: SOLR-12661
> URL: https://issues.apache.org/jira/browse/SOLR-12661
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.4
>Reporter: Georgy Khotyan
>Priority: Major
>
> Request with fl=[elevated] returns NullPointerException when Solr 7.4 used. 
> It works with all older versions.
> Example: 
> [http://localhost:8983/solr/my-core/select?q=*:*=true=1,2,3=true=[elevated]
> Is it a bug of 7.4 version?
> Exception: 
> { "error":\\{ "trace":"java.lang.NullPointerException\n\tat 
> org.apache.solr.response.transform.BaseEditorialTransformer.getKey(BaseEditorialTransformer.java:72)\n\tat
>  
> org.apache.solr.response.transform.BaseEditorialTransformer.transform(BaseEditorialTransformer.java:52)\n\tat
>  org.apache.solr.response.DocsStreamer.next(DocsStreamer.java:120)\n\tat 
> org.apache.solr.response.DocsStreamer.next(DocsStreamer.java:57)\n\tat 
> org.apache.solr.response.TextResponseWriter.writeDocuments(TextResponseWriter.java:275)\n\tat
>  
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:161)\n\tat
>  
> org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:209)\n\tat
>  
> org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:325)\n\tat
>  
> org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:120)\n\tat
>  
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:71)\n\tat
>  
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)\n\tat
>  
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:806)\n\tat
>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:535)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)\n\tat
>  
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)\n\tat
>  org.eclipse.jetty.server.Server.handle(Server.java:534)\n\tat 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)\n\tat 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)\n\tat
>  
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)\n\tat
>  org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)\n\tat 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)\n\tat
>  
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)\n\tat
>  
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)\n\tat
>  
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)\n\tat
>  
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)\n\tat
>  
> 

[jira] [Commented] (SOLR-11770) NPE in tvrh if no field is specified and document doesn't contain any fields with term vectors

2018-10-23 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16660984#comment-16660984
 ] 

David Smiley commented on SOLR-11770:
-

Looking at this code now in BaseEditorialTransformer.  Shouldn't the String 
case be similar to IndexableField in using {{ft.readableToIndexed}}?  The field 
type dictates the transformation.

> NPE in tvrh if no field is specified and document doesn't contain any fields 
> with term vectors
> --
>
> Key: SOLR-11770
> URL: https://issues.apache.org/jira/browse/SOLR-11770
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SearchComponents - other
>Affects Versions: 6.6.2
>Reporter: Nikolay Martynov
>Assignee: Erick Erickson
>Priority: Major
> Fix For: 7.5, master (8.0)
>
> Attachments: SOLR-11770.patch, SOLR-11770.patch
>
>
> It looks like if a {{tvrh}} request doesn't contain an {{fl}} parameter and the 
> document doesn't have any fields with term vectors, then Solr returns an NPE.
> Request: 
> {{tvrh?shards.qt=/tvrh=field%3Avalue=json=id%3A123=true}}.
> On our 'old' schema we had some fields with {{termVectors}} and even more 
> fields with position data. In our new schema we tried to remove unused data, 
> so we dropped a lot of position data and some term vectors.
> Our documents are 'sparsely' populated - not all documents contain all fields.
> The above request returned fine with our 'old' schema and returns 500 with our 
> 'new' schema - on exactly the same Solr version (6.6.2).
> Stack trace:
> {code}
> 2017-12-18 01:15:00.958 ERROR (qtp255041198-46697) [c:test s:shard3 
> r:core_node11 x:test_shard3_replica1] o.a.s.h.RequestHandlerBase 
> java.lang.NullPointerException
>at 
> org.apache.solr.handler.component.TermVectorComponent.process(TermVectorComponent.java:324)
>at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:296)
>at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)
>at org.apache.solr.core.SolrCore.execute(SolrCore.java:2482)
>at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:723)
>at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:529)
>at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
>at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>at org.eclipse.jetty.server.Server.handle(Server.java:534)
>at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
>at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>at 
> 

[JENKINS-EA] Lucene-Solr-master-Windows (64bit/jdk-12-ea+12) - Build # 7580 - Still Unstable!

2018-10-23 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7580/
Java: 64bit/jdk-12-ea+12 -XX:-UseCompressedOops -XX:+UseParallelGC

17 tests failed.
FAILED:  
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslWithInvalidPeerName

Error Message:
Could not find collection:second_collection

Stack Trace:
java.lang.AssertionError: Could not find collection:second_collection
at 
__randomizedtesting.SeedInfo.seed([14A2EAC978D07928:4313AF72B82C8639]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkCreateCollection(TestMiniSolrCloudClusterSSL.java:263)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterWithCollectionCreations(TestMiniSolrCloudClusterSSL.java:249)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslWithInvalidPeerName(TestMiniSolrCloudClusterSSL.java:185)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[GitHub] lucene-solr issue #483: Log Delete Query Processor custom solr component

2018-10-23 Thread tirthmehta1994
Github user tirthmehta1994 commented on the issue:

https://github.com/apache/lucene-solr/pull/483
  
Hi @vthacker, this is the jira for Log Delete Query Processor: 
https://issues.apache.org/jira/browse/SOLR-12904


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12904) Log Delete Query Processor custom Solr component

2018-10-23 Thread Tirth Rajen Mehta (JIRA)
Tirth Rajen Mehta created SOLR-12904:


 Summary: Log Delete Query Processor custom Solr component
 Key: SOLR-12904
 URL: https://issues.apache.org/jira/browse/SOLR-12904
 Project: Solr
  Issue Type: Task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Tirth Rajen Mehta


Added a Log Delete Query Processor custom Solr component 
([https://github.com/apache/lucene-solr/pull/483]):
 * It is mainly used to identify delete queries and log them separately, so 
the deletes that were performed can be identified.
 * It helps to keep track of any deletes going on in Solr (a minimal sketch 
follows below).
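
A minimal sketch of this kind of processor, built on Solr's standard 
UpdateRequestProcessor extension point; the class names are hypothetical and 
the actual component in the PR may differ:

{code:java}
import java.io.IOException;
import java.lang.invoke.MethodHandles;

import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.response.SolrQueryResponse;
import org.apache.solr.update.DeleteUpdateCommand;
import org.apache.solr.update.processor.UpdateRequestProcessor;
import org.apache.solr.update.processor.UpdateRequestProcessorFactory;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/** Hypothetical factory: logs every delete (by id or by query) before passing it on. */
public class LogDeleteQueryProcessorFactory extends UpdateRequestProcessorFactory {

  @Override
  public UpdateRequestProcessor getInstance(SolrQueryRequest req, SolrQueryResponse rsp,
                                            UpdateRequestProcessor next) {
    return new LogDeleteQueryProcessor(next);
  }

  static class LogDeleteQueryProcessor extends UpdateRequestProcessor {
    private static final Logger log =
        LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());

    LogDeleteQueryProcessor(UpdateRequestProcessor next) {
      super(next);
    }

    @Override
    public void processDelete(DeleteUpdateCommand cmd) throws IOException {
      // Log deletes separately so they are easy to track.
      if (cmd.isDeleteById()) {
        log.info("delete by id: {}", cmd.getId());
      } else {
        log.info("delete by query: {}", cmd.getQuery());
      }
      super.processDelete(cmd); // hand off to the rest of the update chain
    }
  }
}
{code}

Such a factory would be wired into an updateRequestProcessorChain in 
solrconfig.xml.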



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #483: Log Delete Query Processor custom solr compon...

2018-10-23 Thread tirthmehta1994
GitHub user tirthmehta1994 opened a pull request:

https://github.com/apache/lucene-solr/pull/483

Log Delete Query Processor custom solr component

Log Delete Query Processor custom solr component

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/walmartlabs/lucene-solr 
logdeletequeryprocessor2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/483.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #483


commit 20720b7ffdc2deac9d31cd38afdec21cba3bc7fe
Author: tirthmehta1994 
Date:   2018-10-23T16:53:34Z

Log Delete Query Processor custom solr component




---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-7.x - Build # 969 - Still Unstable

2018-10-23 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/969/

2 tests failed.
FAILED:  
org.apache.solr.cloud.api.collections.ShardSplitTest.testSplitShardWithRuleLink

Error Message:
Error from server at https://127.0.0.1:44380: Could not find collection : 
shardSplitWithRule_link

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:44380: Could not find collection : 
shardSplitWithRule_link
at 
__randomizedtesting.SeedInfo.seed([CD1384F34453AF15:C70F3168CECFC9B0]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.api.collections.ShardSplitTest.doSplitShardWithRule(ShardSplitTest.java:633)
at 
org.apache.solr.cloud.api.collections.ShardSplitTest.testSplitShardWithRuleLink(ShardSplitTest.java:612)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1010)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[GitHub] lucene-solr pull request #479: Log Delete Query Processor custom component

2018-10-23 Thread tirthmehta1994
Github user tirthmehta1994 closed the pull request at:

https://github.com/apache/lucene-solr/pull/479


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr issue #478: Query Source Tracker custom component

2018-10-23 Thread tirthmehta1994
Github user tirthmehta1994 commented on the issue:

https://github.com/apache/lucene-solr/pull/478
  
Hi @vthacker , I have created the issue here: 
https://issues.apache.org/jira/browse/SOLR-12903


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12903) Query Source Tracker custom Solr component

2018-10-23 Thread Tirth Rajen Mehta (JIRA)
Tirth Rajen Mehta created SOLR-12903:


 Summary: Query Source Tracker custom Solr component
 Key: SOLR-12903
 URL: https://issues.apache.org/jira/browse/SOLR-12903
 Project: Solr
  Issue Type: Task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Tirth Rajen Mehta


Added a Query Source Tracker custom Solr component 
(https://github.com/apache/lucene-solr/pull/478):
 * This component can be configured on a RequestHandler for query requests.
 * It requires clients to pass a "qi" request parameter with a valid value; the 
valid values are configured in the SearchComponent definition in the 
solrconfig.xml file.
 * It fails the query if the "qi" parameter is missing or if the value passed 
in is invalid. This behavior of failing queries can be controlled by the 
failQueries config parameter.
 * It also collects a rate-per-second metric per unique "qi" value (see the 
sketch below).
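
As an illustration of the "qi" check, a minimal SearchComponent sketch; the 
class name, the allowed values, and the failQueries default are assumptions, 
and the per-"qi" rate metric collection is omitted:

{code:java}
import java.io.IOException;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import org.apache.solr.common.SolrException;
import org.apache.solr.common.params.SolrParams;
import org.apache.solr.handler.component.ResponseBuilder;
import org.apache.solr.handler.component.SearchComponent;

/** Hypothetical sketch: validates a "qi" (query identifier) request parameter. */
public class QuerySourceTrackerComponent extends SearchComponent {

  // Assumed configuration; in the real component these would come from solrconfig.xml.
  private final Set<String> allowedQueryIds = new HashSet<>(Arrays.asList("webapp", "batch"));
  private final boolean failQueries = true;

  @Override
  public void prepare(ResponseBuilder rb) throws IOException {
    SolrParams params = rb.req.getParams();
    String qi = params.get("qi");
    if ((qi == null || !allowedQueryIds.contains(qi)) && failQueries) {
      throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
          "Missing or invalid 'qi' request parameter: " + qi);
    }
  }

  @Override
  public void process(ResponseBuilder rb) throws IOException {
    // No-op: validation happens in prepare(); metric collection is omitted here.
  }

  @Override
  public String getDescription() {
    return "Validates and tracks the query source via the 'qi' parameter";
  }
}
{code}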



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12902) Block Expensive Queries custom Solr component

2018-10-23 Thread Tirth Rajen Mehta (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tirth Rajen Mehta updated SOLR-12902:
-
Description: 
Added a Block Expensive Queries custom Solr component 
([https://github.com/apache/lucene-solr/pull/477]):
 * This search component can be plugged into your SearchHandler if you would 
like to block some well-known expensive queries.
 * The queries currently blocked and failed by the component are deep 
pagination queries, as they are known to consume a lot of memory and CPU. 
These are:
 ** queries with a start offset greater than the configured maxStartOffset 
config parameter value
 ** queries with a rows param value greater than the configured maxRowsFetch 
config parameter value

  was:
Added a Block Expensive Queries custom Solr component:
 * This search component can be plugged into your SearchHandler if you would 
like to block some well known expensive queries.
 * The queries that are blocked and failed by component currently are deep 
pagination queries as they are known to consume lot of memory and CPU. These 
are 

 ** queries with a start offset which is greater than the configured 
maxStartOffset config parameter value
 ** queries with a row param value which is greater than the configured 
maxRowsFetch config parameter value


> Block Expensive Queries custom Solr component
> -
>
> Key: SOLR-12902
> URL: https://issues.apache.org/jira/browse/SOLR-12902
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tirth Rajen Mehta
>Priority: Minor
>
> Added a Block Expensive Queries custom Solr component 
> ([https://github.com/apache/lucene-solr/pull/477]):
>  * This search component can be plugged into your SearchHandler if you would 
> like to block some well-known expensive queries.
>  * The queries currently blocked and failed by the component are deep 
> pagination queries, as they are known to consume a lot of memory and CPU. 
> These are:
>  ** queries with a start offset greater than the configured maxStartOffset 
> config parameter value
>  ** queries with a rows param value greater than the configured maxRowsFetch 
> config parameter value
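
Following the same SearchComponent pattern as the SOLR-12903 sketch above, a 
minimal sketch of the deep-pagination checks described here; the class name 
and the default limits are assumptions, and in the real component the limits 
would come from its configuration:

{code:java}
import java.io.IOException;

import org.apache.solr.common.SolrException;
import org.apache.solr.common.params.CommonParams;
import org.apache.solr.common.params.SolrParams;
import org.apache.solr.handler.component.ResponseBuilder;
import org.apache.solr.handler.component.SearchComponent;

/** Hypothetical sketch: rejects deep-pagination requests before they execute. */
public class BlockExpensiveQueriesComponent extends SearchComponent {

  // Assumed defaults; maxStartOffset/maxRowsFetch would be read from config.
  private final int maxStartOffset = 10_000;
  private final int maxRowsFetch = 1_000;

  @Override
  public void prepare(ResponseBuilder rb) throws IOException {
    SolrParams params = rb.req.getParams();
    int start = params.getInt(CommonParams.START, 0);
    int rows = params.getInt(CommonParams.ROWS, 10);
    if (start > maxStartOffset) {
      throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
          "start=" + start + " exceeds maxStartOffset=" + maxStartOffset);
    }
    if (rows > maxRowsFetch) {
      throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
          "rows=" + rows + " exceeds maxRowsFetch=" + maxRowsFetch);
    }
  }

  @Override
  public void process(ResponseBuilder rb) throws IOException {
    // No-op: the checks in prepare() either pass or fail the request.
  }

  @Override
  public String getDescription() {
    return "Blocks well-known expensive (deep pagination) queries";
  }
}
{code}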



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr issue #477: Block Expensive Queries custom component

2018-10-23 Thread tirthmehta1994
Github user tirthmehta1994 commented on the issue:

https://github.com/apache/lucene-solr/pull/477
  
Sure @vthacker:
https://issues.apache.org/jira/browse/SOLR-12902


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12902) Block Expensive Queries custom Solr component

2018-10-23 Thread Tirth Rajen Mehta (JIRA)
Tirth Rajen Mehta created SOLR-12902:


 Summary: Block Expensive Queries custom Solr component
 Key: SOLR-12902
 URL: https://issues.apache.org/jira/browse/SOLR-12902
 Project: Solr
  Issue Type: Task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Tirth Rajen Mehta


Added a Block Expensive Queries custom Solr component:
 * This search component can be plugged into your SearchHandler if you would 
like to block some well-known expensive queries.
 * The queries currently blocked and failed by the component are deep 
pagination queries, as they are known to consume a lot of memory and CPU. 
These are:
 ** queries with a start offset greater than the configured maxStartOffset 
config parameter value
 ** queries with a rows param value greater than the configured maxRowsFetch 
config parameter value



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #482: LUCENE-8539: fix some typos and improve style...

2018-10-23 Thread alessandrobenedetti
Github user alessandrobenedetti commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/482#discussion_r227471350
  
--- Diff: 
lucene/core/src/test/org/apache/lucene/analysis/TestStopFilter.java ---
@@ -47,58 +48,65 @@ public void testStopFilt() throws IOException {
 assertTokenStreamContents(stream, new String[] { "Now", "The" });
   }
 
+
+  private void logStopwords(String name, List stopwords){
+// helper method: converts a list
+log(String.format("stopword list \"%s:\"", name));
+for (int i = 0; i < stopwords.size(); i++) {
+  log(String.format("stopword (%d): %s ", i, stopwords.get(i)));
+}
+log("--");
+  }
   /**
* Test Position increments applied by StopFilter with and without 
enabling this option.
*/
-  public void testStopPositons() throws IOException {
+  public void testStopPositions() throws IOException {
+final int NUMBER_OF_TOKENS = 20;
 StringBuilder sb = new StringBuilder();
-ArrayList a = new ArrayList<>();
-for (int i=0; i<20; i++) {
-  String w = English.intToEnglish(i).trim();
-  sb.append(w).append(" ");
-  if (i%3 != 0) a.add(w);
+List stopwords = new ArrayList<>(NUMBER_OF_TOKENS);
+for (int i = 0; i < NUMBER_OF_TOKENS; i++) {
+  String token = English.intToEnglish(i).trim();
+  sb.append(token).append(' ');
+  if (i%3 != 0) stopwords.add(token);
 }
 log(sb.toString());
-String stopWords[] = a.toArray(new String[0]);
-for (int i=0; i a0 = new ArrayList<>();
-ArrayList a1 = new ArrayList<>();
-for (int i=0; i evenStopwords = new ArrayList<>(stopwords.size());
+List oddStopwords = new ArrayList<>(stopwords.size());
+for (int i=0; i < stopwords.size(); i++) {
+  if (i%2 == 0) {
+evenStopwords.add(stopwords.get(i));
   } else {
-a1.add(a.get(i));
+oddStopwords.add(stopwords.get(i));
   }
 }
-String stopWords0[] =  a0.toArray(new String[0]);
-for (int i=0; i

[GitHub] lucene-solr pull request #482: LUCENE-8539: fix some typos and improve style...

2018-10-23 Thread alessandrobenedetti
Github user alessandrobenedetti commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/482#discussion_r227469439
  
--- Diff: 
lucene/core/src/test/org/apache/lucene/analysis/TestStopFilter.java ---
@@ -47,58 +48,65 @@ public void testStopFilt() throws IOException {
 assertTokenStreamContents(stream, new String[] { "Now", "The" });
   }
 
+
+  private void logStopwords(String name, List stopwords){
+// helper method: converts a list
+log(String.format("stopword list \"%s:\"", name));
+for (int i = 0; i < stopwords.size(); i++) {
+  log(String.format("stopword (%d): %s ", i, stopwords.get(i)));
+}
+log("--");
+  }
   /**
* Test Position increments applied by StopFilter with and without 
enabling this option.
*/
-  public void testStopPositons() throws IOException {
+  public void testStopPositions() throws IOException {
+final int NUMBER_OF_TOKENS = 20;
 StringBuilder sb = new StringBuilder();
-ArrayList a = new ArrayList<>();
-for (int i=0; i<20; i++) {
-  String w = English.intToEnglish(i).trim();
-  sb.append(w).append(" ");
-  if (i%3 != 0) a.add(w);
+List stopwords = new ArrayList<>(NUMBER_OF_TOKENS);
+for (int i = 0; i < NUMBER_OF_TOKENS; i++) {
+  String token = English.intToEnglish(i).trim();
+  sb.append(token).append(' ');
+  if (i%3 != 0) stopwords.add(token);
 }
 log(sb.toString());
-String stopWords[] = a.toArray(new String[0]);
-for (int i=0; i a0 = new ArrayList<>();
-ArrayList a1 = new ArrayList<>();
-for (int i=0; i evenStopwords = new ArrayList<>(stopwords.size());
+List oddStopwords = new ArrayList<>(stopwords.size());
+for (int i=0; i < stopwords.size(); i++) {
+  if (i%2 == 0) {
+evenStopwords.add(stopwords.get(i));
   } else {
-a1.add(a.get(i));
+oddStopwords.add(stopwords.get(i));
   }
 }
-String stopWords0[] =  a0.toArray(new String[0]);
-for (int i=0; i

[jira] [Commented] (LUCENE-8531) QueryBuilder hard-codes inOrder=true for generated sloppy span near queries

2018-10-23 Thread Steve Rowe (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16660913#comment-16660913
 ] 

Steve Rowe commented on LUCENE-8531:


+1, thanks [~jim.ferenczi].

bq. (Multi)PhraseQuery-s allows some reordering but the semantic is different 
from an unordered span near query.

Can you explain, or point to docs that explain what you mean?

> QueryBuilder hard-codes inOrder=true for generated sloppy span near queries
> ---
>
> Key: LUCENE-8531
> URL: https://issues.apache.org/jira/browse/LUCENE-8531
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Major
> Fix For: 7.6, master (8.0)
>
> Attachments: LUCENE-8531.patch
>
>
> QueryBuilder.analyzeGraphPhrase() generates SpanNearQuery-s with passed-in 
> phraseSlop, but hard-codes inOrder ctor param as true.
> Before multi-term synonym support and graph token streams introduced the 
> possibility of generating SpanNearQuery-s, QueryBuilder generated 
> (Multi)PhraseQuery-s, which always interpret slop as allowing reordering 
> edits.  Solr's eDismax query parser generates phrase queries when its 
> pf/pf2/pf3 params are specified, and when multi-term synonyms are used with a 
> graph-aware synonym filter, SpanNearQuery-s are generated that require 
> clauses to be in order; unlike with (Multi)PhraseQuery-s, reordering edits 
> are not allowed, so this is a kind of regression.  See SOLR-12243 for edismax 
> pf/pf2/pf3 context.  (Note that the patch on SOLR-12243 also addresses 
> another problem that blocks eDismax from generating queries *at all* under 
> the above-described circumstances.)
> I propose adding a new analyzeGraphPhrase() method that allows configuration 
> of inOrder, which would allow eDismax to specify inOrder=false.  The existing 
> analyzeGraphPhrase() method would remain with its hard-coded inOrder=true, so 
> existing client behavior would remain unchanged.
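
To make the inOrder distinction concrete, a small self-contained Lucene 
sketch (the field name and terms are placeholders). Note that, per the quoted 
remark above, unordered SpanNearQuery slop semantics still differ from 
(Multi)PhraseQuery slop:

{code:java}
import org.apache.lucene.index.Term;
import org.apache.lucene.search.spans.SpanNearQuery;
import org.apache.lucene.search.spans.SpanQuery;
import org.apache.lucene.search.spans.SpanTermQuery;

public class SpanNearInOrderDemo {
  public static void main(String[] args) {
    SpanQuery[] clauses = new SpanQuery[] {
        new SpanTermQuery(new Term("body", "quick")),
        new SpanTermQuery(new Term("body", "fox"))
    };
    int slop = 2;
    // inOrder=true: "quick" must occur before "fox" (what analyzeGraphPhrase hard-codes today).
    SpanNearQuery ordered = new SpanNearQuery(clauses, slop, true);
    // inOrder=false: the clauses may match in either order within the slop window.
    SpanNearQuery unordered = new SpanNearQuery(clauses, slop, false);
    System.out.println(ordered);
    System.out.println(unordered);
  }
}
{code}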



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #482: LUCENE-8539: fix some typos and improve style...

2018-10-23 Thread alessandrobenedetti
Github user alessandrobenedetti commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/482#discussion_r227464824
  
--- Diff: 
lucene/core/src/test/org/apache/lucene/analysis/TestStopFilter.java ---
@@ -111,20 +119,24 @@ public void testEndStopword() throws Exception {
   null);
   }
 
-  private void doTestStopPositons(StopFilter stpf) throws IOException {
-CharTermAttribute termAtt = stpf.getAttribute(CharTermAttribute.class);
-PositionIncrementAttribute posIncrAtt = 
stpf.getAttribute(PositionIncrementAttribute.class);
-stpf.reset();
-for (int i=0; i<20; i+=3) {
-  assertTrue(stpf.incrementToken());
-  log("Token "+i+": "+stpf);
-  String w = English.intToEnglish(i).trim();
-  assertEquals("expecting token "+i+" to be "+w,w,termAtt.toString());
-  assertEquals("all but first token must have position increment of 
3",i==0?1:3,posIncrAtt.getPositionIncrement());
+  private void doTestStopwordsPositions(StopFilter stopfilter) throws 
IOException {
+final int NUMBER_OF_TOKENS = 20;
+final int DELTA = 3;
--- End diff --

Given that this was in the original code and you just refactored it, I don't 
much like that DELTA=3 here is not a method parameter.
In fact, I doubt this method is really usable if the stop filter you pass in 
is not precisely an "all but divisible by 3" stop filter.
I would make this method parametric, to make it clearly tied to the numerical 
nature of the stop filter passed in.


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8531) QueryBuilder hard-codes inOrder=true for generated sloppy span near queries

2018-10-23 Thread Michael Gibney (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16660880#comment-16660880
 ] 

Michael Gibney commented on LUCENE-8531:


I recognize that this was a bug (in that using {{SpanNearQuery}} with 
{{inOrder=true}} and {{slop > 0}} changed the behavior, rather than simply the 
implementation, of the built query).

That said, there surely are potential use cases for the {{inOrder=true}} 
behavior, which is supported by {{SpanNearQuery}} but not by 
{{(Multi)PhraseQuery}}. Would it be worth opening a new issue to consider 
introducing the ability to specifically request construction of 
{{SpanNearQuery}} and/or {{inOrder=true}} behavior? The work that went into 
building {{SpanNearQuery}} for phrases (commit 
[96e8f0a0afe|https://github.com/apache/lucene-solr/commit/96e8f0a0afeb68e2d07ec1dda362894f0b94333d])
 is still useful and relevant, even if the result isn't backward-compatible for 
the case where {{slop > 0}}.

> QueryBuilder hard-codes inOrder=true for generated sloppy span near queries
> ---
>
> Key: LUCENE-8531
> URL: https://issues.apache.org/jira/browse/LUCENE-8531
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Major
> Fix For: 7.6, master (8.0)
>
> Attachments: LUCENE-8531.patch
>
>
> QueryBuilder.analyzeGraphPhrase() generates SpanNearQuery-s with passed-in 
> phraseSlop, but hard-codes inOrder ctor param as true.
> Before multi-term synonym support and graph token streams introduced the 
> possibility of generating SpanNearQuery-s, QueryBuilder generated 
> (Multi)PhraseQuery-s, which always interpret slop as allowing reordering 
> edits.  Solr's eDismax query parser generates phrase queries when its 
> pf/pf2/pf3 params are specified, and when multi-term synonyms are used with a 
> graph-aware synonym filter, SpanNearQuery-s are generated that require 
> clauses to be in order; unlike with (Multi)PhraseQuery-s, reordering edits 
> are not allowed, so this is a kind of regression.  See SOLR-12243 for edismax 
> pf/pf2/pf3 context.  (Note that the patch on SOLR-12243 also addresses 
> another problem that blocks eDismax from generating queries *at all* under 
> the above-described circumstances.)
> I propose adding a new analyzeGraphPhrase() method that allows configuration 
> of inOrder, which would allow eDismax to specify inOrder=false.  The existing 
> analyzeGraphPhrase() method would remain with its hard-coded inOrder=true, so 
> existing client behavior would remain unchanged.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12901) Make UnifiedHighlighter the default in 8.0

2018-10-23 Thread David Smiley (JIRA)
David Smiley created SOLR-12901:
---

 Summary: Make UnifiedHighlighter the default in 8.0
 Key: SOLR-12901
 URL: https://issues.apache.org/jira/browse/SOLR-12901
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: David Smiley
 Fix For: master (8.0)


I think the UnifiedHighlighter should be the default in 8.0.  It's faster and 
more accurate than alternatives.

The original highlighter, however, has some benefits:
* Different passage/snippet delineation options; somewhat more flexible, 
though there is no i18n BreakIterator-based one.
* Seems to handle some "special" Queries and/or QueryParsers better by 
default, namely SurroundQParser.  Though SOLR-12895 will address this UH issue.
* Considers boosts in the query when computing a passage score.
* hl.alternateField, hl.maxAlternateFieldLength, and hl.highlightAlternate 
options.  Instead, the UH has an hl.defaultSummary boolean.

See 
https://builds.apache.org/job/Solr-reference-guide-master/javadoc/highlighting.html




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #482: LUCENE-8539: fix some typos and improve style...

2018-10-23 Thread alessandrobenedetti
Github user alessandrobenedetti commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/482#discussion_r227462125
  
--- Diff: 
lucene/core/src/test/org/apache/lucene/analysis/TestStopFilter.java ---
@@ -47,58 +48,65 @@ public void testStopFilt() throws IOException {
 assertTokenStreamContents(stream, new String[] { "Now", "The" });
   }
 
+
+  private void logStopwords(String name, List stopwords){
+// helper method: converts a list
+log(String.format("stopword list \"%s:\"", name));
+for (int i = 0; i < stopwords.size(); i++) {
+  log(String.format("stopword (%d): %s ", i, stopwords.get(i)));
+}
+log("--");
+  }
   /**
* Test Position increments applied by StopFilter with and without 
enabling this option.
*/
-  public void testStopPositons() throws IOException {
+  public void testStopPositions() throws IOException {
+final int NUMBER_OF_TOKENS = 20;
 StringBuilder sb = new StringBuilder();
--- End diff --

Maybe renaming sb -> inputText ?


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #482: LUCENE-8539: fix some typos and improve style...

2018-10-23 Thread alessandrobenedetti
Github user alessandrobenedetti commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/482#discussion_r227461971
  
--- Diff: 
lucene/core/src/test/org/apache/lucene/analysis/TestStopFilter.java ---
@@ -47,58 +48,65 @@ public void testStopFilt() throws IOException {
 assertTokenStreamContents(stream, new String[] { "Now", "The" });
   }
 
+
+  private void logStopwords(String name, List stopwords){
+// helper method: converts a list
+log(String.format("stopword list \"%s:\"", name));
+for (int i = 0; i < stopwords.size(); i++) {
+  log(String.format("stopword (%d): %s ", i, stopwords.get(i)));
+}
+log("--");
+  }
   /**
* Test Position increments applied by StopFilter with and without 
enabling this option.
*/
-  public void testStopPositons() throws IOException {
+  public void testStopPositions() throws IOException {
+final int NUMBER_OF_TOKENS = 20;
 StringBuilder sb = new StringBuilder();
-ArrayList a = new ArrayList<>();
-for (int i=0; i<20; i++) {
-  String w = English.intToEnglish(i).trim();
-  sb.append(w).append(" ");
-  if (i%3 != 0) a.add(w);
+List stopwords = new ArrayList<>(NUMBER_OF_TOKENS);
--- End diff --

Maybe renaming sb -> inputText ?


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 1759 - Unstable

2018-10-23 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1759/

[...truncated 35 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-7.x/968/consoleText

[repro] Revision: 97f6e23ff26e43b2f5b9412c4a01629737a92e43

[repro] Repro line:  ant test  -Dtestcase=MoveReplicaTest -Dtests.method=test 
-Dtests.seed=A336EF25B0857371 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=mk-MK -Dtests.timezone=Africa/Dakar -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=ShardSplitTest 
-Dtests.method=testSplitShardWithRule -Dtests.seed=A336EF25B0857371 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=ms 
-Dtests.timezone=Africa/Freetown -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
2e757f6c257687ab713f88b6a07cf4a355e4cf66
[repro] git fetch
[repro] git checkout 97f6e23ff26e43b2f5b9412c4a01629737a92e43

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   MoveReplicaTest
[repro]   ShardSplitTest
[repro] ant compile-test

[...truncated 3436 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.MoveReplicaTest|*.ShardSplitTest" -Dtests.showOutput=onerror  
-Dtests.seed=A336EF25B0857371 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=mk-MK -Dtests.timezone=Africa/Dakar -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 85074 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.MoveReplicaTest
[repro]   1/5 failed: org.apache.solr.cloud.api.collections.ShardSplitTest
[repro] git checkout 2e757f6c257687ab713f88b6a07cf4a355e4cf66

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-12853) Add ability to set CreateNodeList.shuffle parameter in Create collection requests

2018-10-23 Thread Benedict (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16660806#comment-16660806
 ] 

Benedict commented on SOLR-12853:
-

Any chance someone could take a look at this?

> Add ability to set CreateNodeList.shuffle parameter in Create collection 
> requests
> -
>
> Key: SOLR-12853
> URL: https://issues.apache.org/jira/browse/SOLR-12853
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Reporter: Benedict
>Priority: Trivial
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> SolrJ lacks the ability to set the CreateNodeList.shuffle parameter in Create 
> collection requests, even though Solr's API supports this functionality. This 
> parameter is already supported in the Restore collection request, so the fix 
> is simple.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10841) TestTlogReplica.testRecovery sometimes fails

2018-10-23 Thread Christine Poerschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-10841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-10841:
---
Attachment: SOLR-10841.patch

> TestTlogReplica.testRecovery sometimes fails
> 
>
> Key: SOLR-10841
> URL: https://issues.apache.org/jira/browse/SOLR-10841
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Tomás Fernández Löbbe
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: 4053.consoleFull, SOLR-10841.patch, SOLR-10841.patch
>
>
> I wasn't able to reproduce this locally, but I've seen it in Jenkins
> {noformat}
> Stack Trace:
> java.lang.AssertionError: Can not find doc 8 in https://127.0.0.1:65454/solr
> at 
> __randomizedtesting.SeedInfo.seed([9D2C5FBED6C5A94C:5CDC2612FB9563EB]:0)
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at org.junit.Assert.assertNotNull(Assert.java:526)
> at 
> org.apache.solr.cloud.TestTlogReplica.checkRTG(TestTlogReplica.java:868)
> at 
> org.apache.solr.cloud.TestTlogReplica.testRecovery(TestTlogReplica.java:589)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
> at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
> at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
> at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
> at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
> at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> 

[jira] [Commented] (SOLR-10841) TestTlogReplica.testRecovery sometimes fails

2018-10-23 Thread Christine Poerschke (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-10841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16660766#comment-16660766
 ] 

Christine Poerschke commented on SOLR-10841:


We are still seeing this now and again in Jenkins, e.g. 
https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23080/testReport/junit/org.apache.solr.cloud/TestTlogReplica/testRecovery/
 -- wondering if the number of retries could be increased, e.g. from 3 to (say) 
5, and also whether to add an explicit "replication factor achieved" check 
before the {{checkRTG}}? Will attach a proposed patch.

> TestTlogReplica.testRecovery sometimes fails
> 
>
> Key: SOLR-10841
> URL: https://issues.apache.org/jira/browse/SOLR-10841
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Tomás Fernández Löbbe
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: 4053.consoleFull, SOLR-10841.patch
>
>
> I wasn't able to reproduce this locally, but I've seen it in Jenkins
> {noformat}
> Stack Trace:
> java.lang.AssertionError: Can not find doc 8 in https://127.0.0.1:65454/solr
> at 
> __randomizedtesting.SeedInfo.seed([9D2C5FBED6C5A94C:5CDC2612FB9563EB]:0)
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at org.junit.Assert.assertNotNull(Assert.java:526)
> at 
> org.apache.solr.cloud.TestTlogReplica.checkRTG(TestTlogReplica.java:868)
> at 
> org.apache.solr.cloud.TestTlogReplica.testRecovery(TestTlogReplica.java:589)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
> at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
> at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
> at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
> at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
> at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
> at 
> 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-10.0.1) - Build # 23080 - Still Unstable!

2018-10-23 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23080/
Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseSerialGC

22 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates

Error Message:


Stack Trace:
java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([A2ABF2ACF474BF68]:0)
at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.createMiniSolrCloudCluster(TestStressCloudBlindAtomicUpdates.java:138)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates

Error Message:


Stack Trace:
java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([A2ABF2ACF474BF68]:0)
at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.afterClass(TestStressCloudBlindAtomicUpdates.java:158)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:898)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-4919) Allow setting ResponseParser and RequestWriter on LBHttpSolrServer

2018-10-23 Thread Jason Gerlowski (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-4919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16660692#comment-16660692
 ] 

Jason Gerlowski commented on SOLR-4919:
---

Modern LBHttpSolrClient has {{setParser}} and {{setRequestWriter}} methods 
that achieve this, unless I'm missing the point of this JIRA.  I imagine it can be 
closed; I will close it in a few days unless anyone corrects me.
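
For reference, a minimal SolrJ sketch of the two setters mentioned above (assuming 
the 7.x-era {{LBHttpSolrClient}} builder API; the base URLs are placeholders):

{code:java}
import org.apache.solr.client.solrj.impl.BinaryRequestWriter;
import org.apache.solr.client.solrj.impl.BinaryResponseParser;
import org.apache.solr.client.solrj.impl.LBHttpSolrClient;

public class LbClientSetup {
  public static void main(String[] args) throws Exception {
    // Build a load-balancing client over two placeholder base URLs.
    LBHttpSolrClient lbClient = new LBHttpSolrClient.Builder()
        .withBaseSolrUrls("http://solr1:8983/solr", "http://solr2:8983/solr")
        .build();

    // The setters in question: switch both requests and responses to the javabin codec.
    lbClient.setParser(new BinaryResponseParser());
    lbClient.setRequestWriter(new BinaryRequestWriter());

    // ... issue requests through lbClient here ...

    lbClient.close();
  }
}
{code}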

> Allow setting ResponseParser and RequestWriter on LBHttpSolrServer
> --
>
> Key: SOLR-4919
> URL: https://issues.apache.org/jira/browse/SOLR-4919
> Project: Solr
>  Issue Type: Improvement
>  Components: clients - java
>Affects Versions: 4.3
>Reporter: Shawn Heisey
>Assignee: Shawn Heisey
>Priority: Minor
> Fix For: 4.9, 6.0
>
> Attachments: SOLR-4919.patch, SOLR-4919.patch, SOLR-4919.patch, 
> SOLR-4919.patch, SOLR-4919.patch, SOLR-4919.patch, SOLR-4919.patch, 
> SOLR-4919.patch, SOLR-4919.patch, SOLR-4919.patch, 
> SolrExampleJettyTest-testfail.txt, 
> SolrExampleStreamingTest-failure-linux.txt, 
> TestReplicationHandler-testfail.txt
>
>
> Patch to allow setting parser/writer on LBHttpSolrServer.  Will only work if 
> no server objects exist within.  Part of larger issue SOLR-4715.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12840) Add pairSort Stream Evaluator

2018-10-23 Thread Joel Bernstein (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16660689#comment-16660689
 ] 

Joel Bernstein commented on SOLR-12840:
---

The commits have not appeared on this ticket, so I'll include them below:

 

https://github.com/apache/lucene-solr/commit/6a702ee16bf1b3bf2fda9509956c609b751b2c35

> Add pairSort Stream Evaluator
> -
>
> Key: SOLR-12840
> URL: https://issues.apache.org/jira/browse/SOLR-12840
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-12840.patch
>
>
> The *pairSort* Stream evaluator takes two paired arrays of numeric data and 
> sorts both arrays based on the natural sort order of the first array. The 
> pairSort function returns a matrix with two rows containing the pair-sorted 
> arrays. 
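
For clarity, a hypothetical plain-Java illustration of the pair-sort semantics 
described above (an illustration only, not the actual Solr evaluator implementation):

{code:java}
import java.util.Arrays;
import java.util.Comparator;

public class PairSortSketch {

  // Sort the first array ascending and carry the second array along with it,
  // returning a 2 x n matrix: row 0 = sorted x, row 1 = y reordered to match.
  static double[][] pairSort(double[] x, double[] y) {
    Integer[] order = new Integer[x.length];
    for (int i = 0; i < x.length; i++) {
      order[i] = i;
    }
    Arrays.sort(order, Comparator.comparingDouble(i -> x[i]));

    double[][] matrix = new double[2][x.length];
    for (int i = 0; i < x.length; i++) {
      matrix[0][i] = x[order[i]];
      matrix[1][i] = y[order[i]];
    }
    return matrix;
  }

  public static void main(String[] args) {
    double[][] m = pairSort(new double[]{3, 1, 2}, new double[]{30, 10, 20});
    System.out.println(Arrays.deepToString(m)); // [[1.0, 2.0, 3.0], [10.0, 20.0, 30.0]]
  }
}
{code}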



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12879) Query Parser for MinHash/LSH

2018-10-23 Thread Andy Hind (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy Hind updated SOLR-12879:
-
Attachment: minhash.filter.adoc.fragment

> Query Parser for MinHash/LSH
> 
>
> Key: SOLR-12879
> URL: https://issues.apache.org/jira/browse/SOLR-12879
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: master (8.0)
>Reporter: Andy Hind
>Assignee: Tommaso Teofili
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: minhash.filter.adoc.fragment, minhash.patch
>
>
> Following on from https://issues.apache.org/jira/browse/LUCENE-6968, provide 
> a query parser that builds queries that provide a measure of Jaccard 
> similarity. The initial patch includes banded queries that were also proposed 
> on the original issue.
>  
> I have one outstanding question:
>  * Should the score from the overall query be normalised?
> Note that the band count is currently approximate and may be one less than 
> in practice.
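
For orientation, a generic textbook-style Java sketch of how MinHash signatures 
estimate Jaccard similarity and how banding selects candidate pairs; this 
illustrates the concepts only and is not the query parser in the attached patch:

{code:java}
public class MinHashBandingSketch {

  // Estimated Jaccard similarity: the fraction of signature positions on which
  // the two MinHash signatures agree.
  static double estimatedJaccard(long[] sigA, long[] sigB) {
    int matches = 0;
    for (int i = 0; i < sigA.length; i++) {
      if (sigA[i] == sigB[i]) {
        matches++;
      }
    }
    return (double) matches / sigA.length;
  }

  // LSH banding: split the signature into bands of rowsPerBand positions; two
  // documents are candidates if they agree on every position of at least one band.
  static boolean isCandidatePair(long[] sigA, long[] sigB, int rowsPerBand) {
    for (int start = 0; start < sigA.length; start += rowsPerBand) {
      boolean bandMatches = true;
      for (int i = start; i < Math.min(start + rowsPerBand, sigA.length); i++) {
        if (sigA[i] != sigB[i]) {
          bandMatches = false;
          break;
        }
      }
      if (bandMatches) {
        return true;
      }
    }
    return false;
  }
}
{code}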



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12879) Query Parser for MinHash/LSH

2018-10-23 Thread Andy Hind (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16660681#comment-16660681
 ] 

Andy Hind commented on SOLR-12879:
--

MinHash Filter doc ...

 

{quote}

== MinHash Filter

Generates a fixed number of repeatably-random hash tokens from all of the input 
tokens in the stream.
To do this, it first consumes all of the input tokens from its source.
This filter would normally be preceded by a Shingle Filter, as shown in the 
example below.

Each input token is hashed. It is subsequently "rehashed" `hashCount` times by 
combining it with a set of precomputed hashes.
For each of the resulting hashes, the hash space is divided into `bucketCount` 
buckets. The lowest set of `hashSetSize` hashes (usually a set of one)
is generated for each bucket.

This filter generates one type of signature or sketch for the input tokens and 
can be used to compute Jaccard similarity between documents.


*Arguments:*

`hashCount`:: (integer) the number of hashes to use. The default is 1.

`bucketCount`:: (integer) the number of buckets to use. The default is 512.

`hashSetSize`:: (integer) the size of the set for the lowest hashes from each 
bucket. The default is 1.

`withRotation`:: (boolean) if a hash bucket is empty, generate a hash value 
from the first previous bucket that has a value.
 The default is true if the bucket count is greater than 1 and false otherwise.


The number of hashes generated depends on the options above. With the default 
settings for `withRotation`, the number of hashes generated is
`hashCount` x `bucketCount` x `hashSetSize`, i.e. 1 x 512 x 1 = 512 by default.

*Example:*

[source,xml] block stripped by the mail archive (see the attached 
minhash.filter.adoc.fragment)

*In:* "woof woof woof woof woof"

*Tokenizer to Filter:* "woof woof woof woof woof"

*Out:* "℁팽徭聙↝ꇁ홱杯", "℁팽徭聙↝ꇁ홱杯", "℁팽徭聙↝ꇁ홱杯",  a total of 512 times

{quote}
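
The stripped XML example above described an analyzer chain of a tokenizer, a 
Shingle Filter, and the MinHash filter. As a rough equivalent, a hypothetical 
Lucene {{CustomAnalyzer}} sketch (the whitespace tokenizer and shingle options 
are assumptions; the MinHash option names are the ones quoted above):

{code:java}
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.WhitespaceTokenizerFactory;
import org.apache.lucene.analysis.custom.CustomAnalyzer;
import org.apache.lucene.analysis.minhash.MinHashFilterFactory;
import org.apache.lucene.analysis.shingle.ShingleFilterFactory;

public class MinHashChainSketch {
  public static void main(String[] args) throws Exception {
    // Shingle the input into 5-word shingles, then MinHash with the defaults
    // described above (hashCount=1, bucketCount=512, hashSetSize=1).
    Analyzer analyzer = CustomAnalyzer.builder()
        .withTokenizer(WhitespaceTokenizerFactory.class)
        .addTokenFilter(ShingleFilterFactory.class,
            "minShingleSize", "5", "maxShingleSize", "5",
            "outputUnigrams", "false", "outputUnigramsIfNoShingles", "false",
            "tokenSeparator", " ")
        .addTokenFilter(MinHashFilterFactory.class,
            "hashCount", "1", "bucketCount", "512", "hashSetSize", "1")
        .build();

    // Count the hash tokens emitted for the example input from the doc above.
    int count = 0;
    try (TokenStream ts = analyzer.tokenStream("field", "woof woof woof woof woof")) {
      ts.reset();
      while (ts.incrementToken()) {
        count++;
      }
      ts.end();
    }
    System.out.println(count); // 512 hash tokens, since withRotation defaults to true
  }
}
{code}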

 

 

> Query Parser for MinHash/LSH
> 
>
> Key: SOLR-12879
> URL: https://issues.apache.org/jira/browse/SOLR-12879
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: master (8.0)
>Reporter: Andy Hind
>Assignee: Tommaso Teofili
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: minhash.patch
>
>
> Following on from https://issues.apache.org/jira/browse/LUCENE-6968, provide 
> a query parser that builds queries that provide a measure of Jaccard 
> similarity. The initial patch includes banded queries that were also proposed 
> on the original issue.
>  
> I have one outstanding question:
>  * Should the score from the overall query be normalised?
> Note that the band count is currently approximate and may be one less than 
> in practice.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12754) Solr UnifiedHighlighter support flag WEIGHT_MATCHES

2018-10-23 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-12754:

Attachment: SOLR-12754.patch

> Solr UnifiedHighlighter support flag WEIGHT_MATCHES
> ---
>
> Key: SOLR-12754
> URL: https://issues.apache.org/jira/browse/SOLR-12754
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: highlighter
>Reporter: David Smiley
>Priority: Major
> Attachments: SOLR-12754.patch, SOLR-12754.patch
>
>
> Solr should support the WEIGHT_MATCHES flag of the UnifiedHighlighter.  It 
> provides the best/perfect highlighting accuracy and nicer phrase snippets.
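
For context, the flag itself lives in Lucene's {{UnifiedHighlighter}}; a 
hypothetical sketch of enabling it there by overriding {{getFlags}} (assuming 
Lucene 7.4+, where {{HighlightFlag.WEIGHT_MATCHES}} exists; the "body" field 
name is a placeholder):

{code:java}
import java.util.EnumSet;
import java.util.Set;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.search.uhighlight.UnifiedHighlighter;

public class WeightMatchesSketch {

  // Build a UnifiedHighlighter whose flag set includes WEIGHT_MATCHES in
  // addition to the defaults for the field.
  static UnifiedHighlighter weightMatchesHighlighter(IndexSearcher searcher, Analyzer analyzer) {
    return new UnifiedHighlighter(searcher, analyzer) {
      @Override
      protected Set<HighlightFlag> getFlags(String field) {
        Set<HighlightFlag> flags = EnumSet.noneOf(HighlightFlag.class);
        flags.addAll(super.getFlags(field));
        flags.add(HighlightFlag.WEIGHT_MATCHES);
        return flags;
      }
    };
  }

  // Highlight the placeholder "body" field for the given query and results.
  static String[] highlightBody(IndexSearcher searcher, Analyzer analyzer,
                                Query query, TopDocs topDocs) throws Exception {
    return weightMatchesHighlighter(searcher, analyzer).highlight("body", query, topDocs);
  }
}
{code}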



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12900) NPE in OneComparatorFieldValueHitQueue.lessThan while indexing

2018-10-23 Thread Tim Lebedkov (JIRA)
Tim Lebedkov created SOLR-12900:
---

 Summary: NPE in OneComparatorFieldValueHitQueue.lessThan while 
indexing
 Key: SOLR-12900
 URL: https://issues.apache.org/jira/browse/SOLR-12900
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: search
Affects Versions: 6.6.1
Reporter: Tim Lebedkov


I occasionally get this NPE while searching, and then the Solr process exits.

 

23-oct-2018 15:04:50 INFO: [Products] webapp=/solr path=/select 
params=\{q=*:*=1=0=true=ID,LAST_MODIFIED,INDEXED=230=1=ID+asc=10=index}
 hits=4598058 status=0 QTime=6958
23-oct-2018 15:04:56 SEVERE: java.lang.NullPointerException
 at 
org.apache.lucene.search.FieldValueHitQueue$OneComparatorFieldValueHitQueue.lessThan(FieldValueHitQueue.java:56)
 at org.apache.lucene.util.PriorityQueue.downHeap(PriorityQueue.java:284)
 at org.apache.lucene.util.PriorityQueue.pop(PriorityQueue.java:184)
 at 
org.apache.lucene.search.TopFieldCollector.populateResults(TopFieldCollector.java:425)
 at org.apache.lucene.search.TopDocsCollector.topDocs(TopDocsCollector.java:156)
 at 
org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1585)
 at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1399)
 at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:566)
 at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:545)
 at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:296)
 at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:2477)
 at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:723)
 at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:529)
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
 at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
 at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
 at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
 at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
 at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
 at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
 at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
 at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
 at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
 at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
 at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
 at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
 at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
 at org.eclipse.jetty.server.Server.handle(Server.java:534)
 at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
 at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
 at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
 at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
 at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
 at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


