[jira] [Commented] (SOLR-9657) Create a new TemplateUpdateRequestProcessorFactory

2016-10-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15587738#comment-15587738
 ] 

ASF subversion and git services commented on SOLR-9657:
---

Commit c2e031add3d5db2c4e89a5a92afd7bb8cc1f481f in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c2e031a ]

SOLR-9657: New TemplateUpdateProcessorFactory added


> Create a new TemplateUpdateRequestProcessorFactory
> --
>
> Key: SOLR-9657
> URL: https://issues.apache.org/jira/browse/SOLR-9657
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-9657.patch
>
>
> Unlike other URPs, this one operates on request parameters.
> Example:
> {code}
> processor=Template&Template.field=fname:${somefield}some_string${someotherfield}
> {code}
> The actual name of the class is {{TemplateUpdateProcessorFactory}}; it is 
> possible to optionally drop the {{UpdateProcessorFactory}} part. The 
> {{Template.field}} parameter specifies a field name as well as a template. 
> It is multivalued, so it is possible to add multiple fields or a multivalued 
> field with the same name.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9657) Create a new TemplateUpdateRequestProcessorFactory

2016-10-18 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-9657:
-
Summary: Create a new TemplateUpdateRequestProcessorFactory  (was: Create a 
new TemplateUpdateProcessorFactory)




[jira] [Commented] (SOLR-9657) Create a new TemplateUpdateProcessorFactory

2016-10-18 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15587696#comment-15587696
 ] 

Noble Paul commented on SOLR-9657:
--

The plan is to have automatic request parameter support for all URPs (wherever 
possible) 




[jira] [Commented] (SOLR-9657) Create a new TemplateUpdateProcessorFactory

2016-10-18 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15587655#comment-15587655
 ] 

Ishan Chattopadhyaya commented on SOLR-9657:


+1 to this URP (since it takes request parameters for fields).




[jira] [Comment Edited] (SOLR-9657) Create a new TemplateUpdateProcessorFactory

2016-10-18 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15587518#comment-15587518
 ] 

David Smiley edited comment on SOLR-9657 at 10/19/16 3:22 AM:
--

This is nice; it'd come in handy for "lat,lon".  Can you add at least a 
one-liner javadoc to the class?  And I like that this can work off of Solr 
request parameters but why doesn't it _also_ work like all the other ones work 
-- by predefined configuration in solrconfig.xml?  I wonder if it's feasible 
for the URP processing subsystem to be refactored such that *all* URPs could 
operate in both modes, similarly to how request handlers can be.  It'd be great 
to not have this inconsistency.


was (Author: dsmiley):
This is nice; it'd come in handle for "lat,lon".  Can you add at least a 
one-liner javadoc to the class?  And I like that this can work off of Solr 
request parameters but why doesn't it _also_ work like all the other ones work 
-- by predefined configuration in solrconfig.xml?  I wonder if it's feasible 
for the URP processing subsystem to be refactored such that *all* URPs could 
operate in both modes, similarly to how request handlers can be.  It'd be great 
to not have this inconsistency.




[jira] [Commented] (SOLR-9657) Create a new TemplateUpdateProcessorFactory

2016-10-18 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15587525#comment-15587525
 ] 

Noble Paul commented on SOLR-9657:
--

bq. I wonder if it's feasible for the URP processing subsystem to be refactored 
such that all URPs could operate in both modes, similarly to how request 
handlers can be

That's the plan




[jira] [Commented] (SOLR-9657) Create a new TemplateUpdateProcessorFactory

2016-10-18 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15587518#comment-15587518
 ] 

David Smiley commented on SOLR-9657:


This is nice; it'd come in handy for "lat,lon".  Can you add at least a 
one-liner javadoc to the class?  And I like that this can work off of Solr 
request parameters but why doesn't it _also_ work like all the other ones work 
-- by predefined configuration in solrconfig.xml?  I wonder if it's feasible 
for the URP processing subsystem to be refactored such that *all* URPs could 
operate in both modes, similarly to how request handlers can be.  It'd be great 
to not have this inconsistency.




[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_102) - Build # 1983 - Still Unstable!

2016-10-18 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1983/
Java: 64bit/jdk1.8.0_102 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CloudExitableDirectoryReaderTest.test

Error Message:
No live SolrServers available to handle this request

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request
at 
__randomizedtesting.SeedInfo.seed([3AE0FEECAF9B0993:B2B4C1360167646B]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:412)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1292)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1062)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1004)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at 
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.assertPartialResults(CloudExitableDirectoryReaderTest.java:106)
at 
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.doTimeoutTests(CloudExitableDirectoryReaderTest.java:78)
at 
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.test(CloudExitableDirectoryReaderTest.java:56)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Commented] (SOLR-8542) Integrate Learning to Rank into Solr

2016-10-18 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15587430#comment-15587430
 ] 

Christine Poerschke commented on SOLR-8542:
---

Another quick note for the log: I have snapshotted the 
[updated pull request|https://github.com/apache/lucene-solr/pull/40] to the 
https://github.com/apache/lucene-solr/tree/jira/solr-8542-v2 branch and updated 
the LEGAL-276 ticket regarding the changed understanding of any potential 
patent concerns.

> Integrate Learning to Rank into Solr
> 
>
> Key: SOLR-8542
> URL: https://issues.apache.org/jira/browse/SOLR-8542
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joshua Pantony
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8542-branch_5x.patch, SOLR-8542-trunk.patch
>
>
> This is a ticket to integrate learning to rank machine learning models into 
> Solr. Solr Learning to Rank (LTR) provides a way for you to extract features 
> directly inside Solr for use in training a machine learned model. You can 
> then deploy that model to Solr and use it to rerank your top X search 
> results. This concept was previously [presented by the authors at Lucene/Solr 
> Revolution 
> 2015|http://www.slideshare.net/lucidworks/learning-to-rank-in-solr-presented-by-michael-nilsson-diego-ceccarelli-bloomberg-lp].
> [Read through the 
> README|https://github.com/bloomberg/lucene-solr/tree/master-ltr-plugin-release/solr/contrib/ltr]
>  for a tutorial on using the plugin, in addition to how to train your own 
> external model.






[jira] [Commented] (SOLR-8016) CloudSolrClient has extremely verbose error logging

2016-10-18 Thread Greg Pendlebury (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15587251#comment-15587251
 ] 

Greg Pendlebury commented on SOLR-8016:
---

Not that I am aware of. I can still see the problem in our newest server 
(5.5.3). I like [~markrmil...@gmail.com]'s suggestion of lowering the log level 
to info: it is simple, and we can filter it out via logging config. The deeper 
issue of whether the retry should even be attempted sounds interesting to me, 
but I'd be happy to just not see the log entries.

> CloudSolrClient has extremely verbose error logging
> ---
>
> Key: SOLR-8016
> URL: https://issues.apache.org/jira/browse/SOLR-8016
> Project: Solr
>  Issue Type: Improvement
>  Components: clients - java
>Affects Versions: 5.2.1, 6.0
>Reporter: Greg Pendlebury
>Priority: Minor
>  Labels: easyfix
>
> CloudSolrClient has this error logging line which is fairly annoying:
> {code}
>   log.error("Request to collection {} failed due to ("+errorCode+
>   ") {}, retry? "+retryCount, collection, rootCause.toString());
> {code}
> Given that this is a client library that gets embedded into other 
> applications, this line is very problematic to handle gracefully. In the 
> example I was looking at today, every failed search logged over 100 lines, 
> including the full HTML response from the responding node in the cluster.
> The resulting SolrServerException that comes out to our application is 
> handled appropriately, but we can't stop this class from complaining in the 
> logs without suppressing the entire ERROR channel, which we don't want to do. 
> This is the only line writing directly to the log that I could find in the 
> client, so we _could_ suppress errors, but that feels dirty, and fragile for 
> the future.
> From looking at the code I am fairly certain it is not as simple as throwing 
> an exception instead of logging... it is right in the middle of the method. I 
> suspect the simplest answer is adding a marker 
> (http://www.slf4j.org/api/org/slf4j/Marker.html) to the logging call.
> Then solrj users can choose what to do with these log entries. I don't know 
> if there is a broader strategy for handling this that I am ignorant of; 
> apologies if that is the case.
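The marker idea proposed above can be illustrated with an analogous mechanism from Python's standard logging module, where a filter attached to a handler lets the embedding application drop specific records. This is a sketch of the concept only, not Solr or SLF4J code; the attribute name `retry_noise` and the logger name are invented for the example.

```python
import logging

class DropRetryNoise(logging.Filter):
    """Suppress records tagged with a marker-like attribute.

    Python's logging has no SLF4J markers; an 'extra' attribute on the
    record plays the analogous role here. An application that embeds the
    client attaches this filter to its own handlers, keeping all other
    ERROR-level records visible.
    """
    def filter(self, record: logging.LogRecord) -> bool:
        # Return False to drop the record, True to let it through.
        return not getattr(record, "retry_noise", False)

logger = logging.getLogger("cloud_client_demo")
handler = logging.StreamHandler()
handler.addFilter(DropRetryNoise())
logger.addHandler(handler)

# The tagged record is dropped by the handler; the untagged one passes.
logger.error("Request to collection %s failed, retrying", "c1",
             extra={"retry_noise": True})
logger.error("genuinely fatal problem")
```

The design point mirrors the SLF4J Marker suggestion: the library still emits the record at ERROR, but it carries a tag that downstream logging configuration can match, so consumers choose per-tag rather than silencing the whole channel.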






[JENKINS] Lucene-Solr-SmokeRelease-6.x - Build # 162 - Failure

2016-10-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-6.x/162/

No tests ran.

Build Log:
[...truncated 40545 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (17.8 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-6.3.0-src.tgz...
   [smoker] 30.1 MB in 0.03 sec (967.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.3.0.tgz...
   [smoker] 64.7 MB in 0.05 sec (1202.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.3.0.zip...
   [smoker] 75.4 MB in 0.06 sec (1195.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-6.3.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6106 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.3.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6106 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.3.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 227 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.00 sec (44.3 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-6.3.0-src.tgz...
   [smoker] 39.5 MB in 0.04 sec (1107.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-6.3.0.tgz...
   [smoker] 139.1 MB in 0.13 sec (1108.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-6.3.0.zip...
   [smoker] 148.1 MB in 0.13 sec (1164.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-6.3.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-6.3.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.3.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.3.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.3.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.3.0-java8
   [smoker] Creating Solr home directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.3.0-java8/example/techproducts/solr
   [smoker] 
   [smoker] Starting up Solr on port 8983 using command:
   [smoker] bin/solr start -p 8983 -s "example/techproducts/solr"
   [smoker] 
   [smoker] Waiting up to 30 seconds to see Solr running on port 8983
   [smoker] Started Solr server on port 

[jira] [Commented] (LUCENE-7462) Faster search APIs for doc values

2016-10-18 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15587192#comment-15587192
 ] 

Yonik Seeley commented on LUCENE-7462:
--

bq. Wouldn't this mean we'd need 2X the search-time code [...]

If there were a utility to always get you a random access API?  Perhaps not.
It does seem like a majority of consumers would want the random access API 
only... things like grouping, sorting, and faceting are all driven off of 
document ids.   For each ID, we check the docvalues.  We don't actually do 
skipping/leapfrogging like a filter would do since we still need to do work for 
each document, even if the DV doesn't exist for that document.

I haven't thought about what this means for code further down the stack, but it 
does seem worth exploring in general.

> Faster search APIs for doc values
> -
>
> Key: LUCENE-7462
> URL: https://issues.apache.org/jira/browse/LUCENE-7462
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: master (7.0)
>Reporter: Adrien Grand
>Priority: Minor
>
> While the iterator API helps deal with sparse doc values more efficiently, it 
> also makes search-time operations more costly. For instance, the old 
> random-access API allowed computing facets on a given segment without any 
> conditionals, by just incrementing the counter at index {{ordinal+1}}, while 
> the new API requires advancing the iterator if necessary and then checking 
> whether it is exactly on the right document or not.
> Since it is very common for fields to exist across most documents, I suspect 
> codecs will keep an internal structure that is similar to the current codec 
> in the dense case, by having a dense representation of the data and just 
> making the iterator skip over the minority of documents that do not have a 
> value.
> I suggest that we add APIs that make things cheaper at search time. For 
> instance in the case of SORTED doc values, it could look like 
> {{LegacySortedDocValues}} with the additional restriction that documents can 
> only be consumed in order. Codecs that can implement this API efficiently 
> would hide it behind a {{SortedDocValues}} adapter, and then at search time 
> facets and comparators (which liked the {{LegacySortedDocValues}} API better) 
> would either unwrap or hide the SortedDocValues they got behind a more 
> random-access API (which would only happen in the truly sparse case if the 
> codec optimizes the dense case).
> One challenge is that we already use the same idea for hiding single-valued 
> impls behind multi-valued impls, so we would need to enforce the order in 
> which the wrapping needs to happen. At first sight, it seems that it would be 
> best to do the single-value-behind-multi-value-API wrapping above the 
> random-access-behind-iterator-API wrapping. The complexity of 
> wrapping/unwrapping in the right order could be contained in the 
> {{DocValues}} helper class.
> I think this change would also simplify search-time consumption of doc 
> values, which currently needs to spend several lines of code positioning the 
> iterator every time it needs to do something interesting with doc values.
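The wrapping idea in the proposal can be sketched minimally, in Python rather than Lucene's Java, with all class names hypothetical: a forward-only iterator API is hidden behind an adapter that advances lazily, which stays cheap as long as callers ask for non-decreasing doc ids (the "consumed in order" restriction mentioned above).

```python
class IteratorDocValues:
    """Iterator-style API: only docs that have a value, visited in order."""
    NO_MORE_DOCS = 2**31  # sentinel, in the spirit of Lucene's NO_MORE_DOCS

    def __init__(self, values: dict):
        self._docs = sorted(values)   # doc ids that have a value
        self._values = values
        self._pos = -1
        self.doc = -1                 # current doc id, -1 before first advance

    def advance(self, target: int) -> int:
        """Move forward to the first doc >= target that has a value.
        Forward-only: callers must never pass a target behind self.doc."""
        while (self._pos + 1 < len(self._docs)
               and self._docs[self._pos + 1] < target):
            self._pos += 1
        self._pos += 1
        self.doc = (self._docs[self._pos] if self._pos < len(self._docs)
                    else self.NO_MORE_DOCS)
        return self.doc

    def value(self):
        return self._values[self.doc]

class RandomAccessAdapter:
    """Random-access view over the in-order iterator. Facets/comparators
    ask by doc id; the iterator is advanced only when needed, so the
    dense case costs roughly one comparison per lookup."""
    def __init__(self, it: IteratorDocValues):
        self._it = it

    def get(self, doc_id: int, default=None):
        if self._it.doc < doc_id:
            self._it.advance(doc_id)
        return self._it.value() if self._it.doc == doc_id else default

ra = RandomAccessAdapter(IteratorDocValues({2: "a", 5: "b"}))
print(ra.get(2), ra.get(3), ra.get(5))  # a None b
```

Under this sketch a codec with a dense representation could expose the random-access view directly, while the adapter covers the truly sparse case; the unwrap-or-wrap decision, as the description suggests, would live in a helper like {{DocValues}}.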






[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+140) - Build # 1982 - Unstable!

2016-10-18 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1982/
Java: 32bit/jdk-9-ea+140 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.handler.component.SpellCheckComponentTest.test

Error Message:
List size mismatch @ spellcheck/suggestions

Stack Trace:
java.lang.RuntimeException: List size mismatch @ spellcheck/suggestions
at 
__randomizedtesting.SeedInfo.seed([454374B9A6CE2DDA:CD174B6308324022]:0)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:901)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:848)
at 
org.apache.solr.handler.component.SpellCheckComponentTest.test(SpellCheckComponentTest.java:147)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:535)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(java.base@9-ea/Thread.java:843)




Build Log:
[...truncated 11362 lines...]
   [junit4] Suite: 

[jira] [Commented] (SOLR-9512) CloudSolrClient's cluster state cache can break direct updates to leaders

2016-10-18 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15587088#comment-15587088
 ] 

Shalin Shekhar Mangar commented on SOLR-9512:
-

bq. Case 6: do i understand it right that we would keep failing the indexing 
requests but 'only' until eventually the client manages to reconnect to zk?

Yes, that is correct.

> CloudSolrClient's cluster state cache can break direct updates to leaders
> -
>
> Key: SOLR-9512
> URL: https://issues.apache.org/jira/browse/SOLR-9512
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
> Attachments: SOLR-9512.patch
>
>
> This is the root cause of SOLR-9305 and (at least some of) SOLR-9390.  The 
> process goes something like this:
> Documents are added to the cluster via a CloudSolrClient, with 
> directUpdatesToLeadersOnly set to true.  CSC caches its view of the 
> DocCollection.  The leader then goes down, and is reassigned.  Next time 
> documents are added, CSC checks its cache again, and gets the old view of the 
> DocCollection.  It then tries to send the update directly to the old, now 
> down, leader, and we get ConnectionRefused.
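The stale-cache sequence in the description can be sketched generically. This is an illustrative model, not Solr's actual CloudSolrClient code (all class and method names here are invented), and it also shows the usual remedy: on a connection error, invalidate the cached entry, re-read cluster state, and retry.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the stale-leader-cache failure mode and its usual fix.
public class StaleLeaderCache {
    // Simulated "ZooKeeper" view of the current leader per shard.
    static final Map<String, String> clusterState = new HashMap<>();
    // Client-side cache that can go stale when a leader is reassigned.
    static final Map<String, String> cache = new HashMap<>();
    // The one node that is currently down (the old leader).
    static String downNode = null;

    static void sendUpdate(String node) {
        if (node.equals(downNode)) {
            throw new RuntimeException("ConnectionRefused: " + node);
        }
    }

    // Send to the cached leader; on failure, refresh from cluster state and retry once.
    static String sendToLeader(String shard) {
        String leader = cache.computeIfAbsent(shard, clusterState::get);
        try {
            sendUpdate(leader);
        } catch (RuntimeException e) {
            cache.remove(shard);                          // drop the stale entry
            leader = cache.computeIfAbsent(shard, clusterState::get);
            sendUpdate(leader);                           // retry against the new leader
        }
        return leader;
    }
}
```

The fix direction discussed in the issue amounts to performing the catch-block step (invalidate and re-fetch) instead of surfacing ConnectionRefused to the caller.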



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8370) Display Similarity Factory in use in Schema-Browser

2016-10-18 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15587012#comment-15587012
 ] 

Alexandre Rafalovitch commented on SOLR-8370:
-

Ok, I think my value as a sanity-checker is finished here :-) If those things 
are distinct in your mind, I have no problem with them being distinct in API or 
visually. And perhaps, then, I was incorrect about Global and it is Implicit 
instead. The specific reference says:

{noformat}
Each collection has one "global" Similarity, and by default Solr uses an 
implicit SchemaSimilarityFactory which allows individual field types to be 
configured with a "per-type" specific Similarity and implicitly uses 
BM25Similarity for any field type which does not have an explicit Similarity.
{noformat}

I have no opinion on the naming as such.

> Display Similarity Factory in use in Schema-Browser
> ---
>
> Key: SOLR-8370
> URL: https://issues.apache.org/jira/browse/SOLR-8370
> Project: Solr
>  Issue Type: Improvement
>  Components: UI
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Trivial
>  Labels: newdev
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-8370.patch, SOLR-8370.patch, SOLR-8370.patch, 
> SOLR-8370.patch, SOLR-8370.patch, SOLR-8370.patch, SOLR-8370.patch, 
> screenshot-1.png, screenshot-2.png, screenshot-3.png, screenshot-4.png, 
> screenshot-5.png
>
>
> Perhaps the Admin UI Schema browser should also display which 
> {{<similarity>}} is in use in the schema, like it does per-field.






[jira] [Commented] (SOLR-9659) Add zookeeper DataWatch API

2016-10-18 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586997#comment-15586997
 ] 

Scott Blum commented on SOLR-9659:
--

I hear you.  On the other hand, this patch adds a significant amount of new 
code to Solr that is very difficult to reason about and mentally verify 
correctness of. :(


> Add zookeeper DataWatch API
> ---
>
> Key: SOLR-9659
> URL: https://issues.apache.org/jira/browse/SOLR-9659
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-9659.patch
>
>
> We have several components which need to set up watches on ZooKeeper nodes 
> for various aspects of cluster management.  At the moment, all of these 
> components do this themselves, leading to large amounts of duplicated code, 
> and complicated logic for dealing with reconnections, etc, scattered across 
> the codebase.  We should replace this with a simple API controlled by 
> SolrZkClient, which should make the code more robust, and testing 
> considerably easier.






[jira] [Commented] (SOLR-9659) Add zookeeper DataWatch API

2016-10-18 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586971#comment-15586971
 ] 

Alan Woodward commented on SOLR-9659:
-

bq. this is really the perfect use case to start experimenting with an 
incremental move

I've started playing with this now, but what concerns me immediately is that 
there's no way in Curator to pass in an existing ZK client.  This means that 
we'd need to maintain two client connections for every SolrZkClient instance, 
which I can see being very complex to deal with.  What happens if we get a 
socket error on one of the connections, but not the other, for example?  What 
if we start adding more security?

Don't get me wrong, I think Curator is great, and it would be cool if we could 
start to use it.  And I definitely take on board the point that it has a lot 
more eyeballs than Solr's internals.  But I think an incremental cutover will 
be very hard, and this API is such an improvement over what we have currently 
that it's worth going ahead with for now.
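A minimal sketch of what such a centralized watch API could look like, assuming an in-memory stand-in for ZooKeeper; none of these names come from the actual patch. The key idea is that callers register a callback once and the registry replays every watch after a reconnect, instead of each component handling reconnection itself.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical data-watch registry: one place owns watch re-registration.
public class DataWatchRegistry {
    final Map<String, List<Consumer<byte[]>>> watches = new HashMap<>();
    final Map<String, byte[]> store = new HashMap<>();   // stands in for ZK data

    // Register a watch and fire it immediately with the current data.
    void watchData(String path, Consumer<byte[]> callback) {
        watches.computeIfAbsent(path, p -> new ArrayList<>()).add(callback);
        callback.accept(store.get(path));
    }

    // Simulated ZK write: notify every watcher on that path.
    void setData(String path, byte[] data) {
        store.put(path, data);
        for (Consumer<byte[]> c : watches.getOrDefault(path, List.of()))
            c.accept(data);
    }

    // Session re-established: re-read and re-notify every registered watch,
    // so individual components never deal with reconnection logic.
    void onReconnect() {
        watches.forEach((path, cbs) ->
            cbs.forEach(c -> c.accept(store.get(path))));
    }
}
```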

> Add zookeeper DataWatch API
> ---
>
> Key: SOLR-9659
> URL: https://issues.apache.org/jira/browse/SOLR-9659
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-9659.patch
>
>
> We have several components which need to set up watches on ZooKeeper nodes 
> for various aspects of cluster management.  At the moment, all of these 
> components do this themselves, leading to large amounts of duplicated code, 
> and complicated logic for dealing with reconnections, etc, scattered across 
> the codebase.  We should replace this with a simple API controlled by 
> SolrZkClient, which should make the code more robust, and testing 
> considerably easier.






[jira] [Commented] (SOLR-9642) Refactor the core level snapshot cleanup mechanism to rely on Lucene

2016-10-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586964#comment-15586964
 ] 

ASF GitHub Bot commented on SOLR-9642:
--

Github user hgadre closed the pull request at:

https://github.com/apache/lucene-solr/pull/97


> Refactor the core level snapshot cleanup mechanism to rely on Lucene
> 
>
> Key: SOLR-9642
> URL: https://issues.apache.org/jira/browse/SOLR-9642
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.2
>Reporter: Hrishikesh Gadre
>Assignee: Yonik Seeley
> Fix For: 6.3
>
>
> SOLR-9269 introduced a mechanism to create/delete snapshots for a Solr core 
> (using Lucene IndexDeletionPolicy). The current snapshot cleanup mechanism is 
> based on reference counting the index files shared between multiple segments. 
> Since this mechanism completely skips the Lucene APIs, it is not portable 
> (e.g. it doesn't work on version 4.10.3).
> I propose an alternative implementation which relies exclusively on Lucene 
> IndexWriter (+ IndexDeletionPolicy) for cleanup.
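The deletion-policy approach can be sketched abstractly. This is not the patch's code: commit points are modeled as bare generation numbers, and the policy simply spares the newest commit plus any generation a snapshot still references, deleting the rest.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative deletion-policy-style cleanup driven by snapshot metadata.
public class SnapshotPolicy {
    // Commit generations that a named snapshot still references.
    final Set<Long> snapshottedGens = new HashSet<>();

    // On each commit, return the generations that are safe to delete:
    // everything except the newest commit and any snapshotted commit.
    List<Long> onCommit(List<Long> commitGens) {
        long newest = Collections.max(commitGens);
        List<Long> toDelete = new ArrayList<>();
        for (long gen : commitGens)
            if (gen != newest && !snapshottedGens.contains(gen))
                toDelete.add(gen);
        return toDelete;
    }
}
```

Delegating this decision to the index writer's deletion policy is what makes the cleanup portable across index versions, since no file-level reference counting happens outside the library.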






[GitHub] lucene-solr pull request #97: [SOLR-9642] Refactor the snapshot cleanup mech...

2016-10-18 Thread hgadre
Github user hgadre closed the pull request at:

https://github.com/apache/lucene-solr/pull/97


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




Re: Building a Solr cluster with Maven

2016-10-18 Thread Greg Pendlebury
Thank you for the replies. Yesterday I finished our build script using the
ZIP via Nexus, but I'd still like to pursue some long-term improvements to
that process. In response to some of the feedback:

@David:
"Another option, depending on one's needs, is to pursue Docker..."

>> We had a member of the ops team doing a build somewhat similar to this
maybe 18-24 months ago. He was struggling with some of the issues around
inter-shard communication because nodes write their own addresses into
clusterstate, but the inner docker applications didn't know their external
addresses. He was working through all those problems, but the solutions
were undermining the reasons he chose docker in the first place (letting
the external details bleed into the container). He ultimately backed away
from it all mainly because the rest of the ops team didn't like the overall
approach and the (perceived?) additional complexity it added to our
deployments.

"Does the scenario you wish to use the assets for relate to testing or some
other use-case?"

>> We run the dashboard on all production hosts... which is probably
redundant but does come in handy. Beyond the dashboard there are a
couple of files in the same src area (e.g. web.xml) that we need to tie
together Jetty and the solr-core classes. The main reason for automating it
is the scaled side of things. Our current cluster (5.1.0... not using this
build) is 60 shards and 2 replicas (120 JVMs) across 12 hosts. We configure
it all so that we can place the distribution of nodes evenly to control SSD
utilisation and CPU loads etc, as well as making sure maintenance
procedures are accounted for (such that X number of hosts can be down for
maintenance and at least one replica is always fully online). The build I
am overhauling right now is for a new cluster coming online later this
year. 96 shards, 2 replicas (184 JVMs) across 16 servers (maybe... we will
test and massage that topology before launch). We aren't particularly
looking forward to manually building/configuring 300 odd JVMs (or 28 server
deployments) every time we bug fix a plugin or do a minor version bump on
Solr, so these scripts are important.

The ops team also wants to make the distribution more tightly controlled to
solve issues they see in production where replication distribution can
sometimes see one host become too strong a mirror of another host (i.e. one
host has all of its replicas on one other host, rather than spread out
through the cluster). This means that when that host crashes and comes back
online in recovery it stresses the other host incredibly, rather than
distributing the replication load around the cluster. The added complexity
this new layout brings is something we can solve (have solved... although
it is untested at the moment) by scripting the build of the whole cluster.
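The placement constraint described here (a shard's replicas never colocated, and no host acting as a full mirror of another) can be sketched with a simple stride-based assignment. This is illustrative only, not the poster's actual build scripts: replica r of shard s lands on host (s + r * stride) % hosts, which keeps a shard's replicas on distinct hosts whenever the stride is not a multiple of the host count.

```java
// Illustrative stride-based replica placement for a fixed topology.
public class Placement {
    // hostOf[s][r] = host index for replica r of shard s.
    static int[][] place(int shards, int replicas, int hosts, int stride) {
        int[][] hostOf = new int[shards][replicas];
        for (int s = 0; s < shards; s++)
            for (int r = 0; r < replicas; r++)
                hostOf[s][r] = (s + r * stride) % hosts;
        return hostOf;
    }
}
```

Varying the stride also spreads each host's replica partners across several other hosts, which is the mirroring property the ops team wanted; a real script would additionally balance per-host load and rack/maintenance groups.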

The developers have always used these scripts to build our single-host
Devel environments because we routinely purge them and start again. We
would then package up the server part for the ops team to use in higher
environments which they augmented with puppet to build all the shards...
but the ops team want to start using Maven to build the whole lot now.
There are deeper parts of this that relate to some in-house tooling which
works very well with Maven and Jetty... but they are not of interest to
anyone that doesn't work here :)

@Keith:
"I haven't upstreamed the changes for the ant tasks thinking there wouldn't
be too much interest in that"

>> This is highly opinionated, but I suspect I would agree with you. I
don't think having the ZIP go into Maven Central is a good idea (if it is
even allowed). I felt bad putting it into our local Nexus repo (it is the
largest artifact in there now), but it got the job done. I avoided the
temptation to use it as a complete distro, however. I've set up my main build
to only source things from that ZIP if they are not in the other Maven
artifacts (ie. just the webapp assets), so that if they become available
somewhere else I only have to modify a small part of the build.

@Tim:
"...it might be helpful to have a lib or "plugin-ins" folder in the zip
that is by default loaded to the classpath as an extension point for users
who are re-building the package?"

>> I agree. We use our own control scripts, but a colleague suggested the
same thing to me yesterday because the ops team's first fumbling
experiments with 5.5.3 and 6.2.1 had them manually unpacking the ZIP and
deploying our plugins on the classpath. The mistakes they were making in
keeping the locations and version numbers aligned between builds are what
led us back to Maven to control all this.


Ta,
Greg

On 19 October 2016 at 03:15, Timothy Rodriguez (BLOOMBERG/ 120 PARK) <
trodrigue...@bloomberg.net> wrote:

> That'd be a helpful step. I think it'd be even better if there was a way
> to generate somewhat customized versions of solr from the artifacts that
> are published already. Publishing the whole zip would be a start,
> 

[jira] [Commented] (SOLR-8370) Display Similarity Factory in use in Schema-Browser

2016-10-18 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-8370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586716#comment-15586716
 ] 

Jan Høydahl commented on SOLR-8370:
---

bq. They should be the same as far as I can tell.
I disagree. It is an important distinction whether you use 
{{SchemaSimilarityFactory}} (a {{PerFieldSimilarityWrapper}}) with a default of 
{{DFRSimilarityFactory}}, or whether you explicitly set {{DFRSimilarityFactory}} 
as the only global similarity, with no per-field support. So the output is 
exactly as I wish :) That is why I think "Default" was a reasonable lead text 
for what {{PerFieldSimilarityWrapper}} will use as the default similarity for 
fields that don't have an explicit override.

One option I was thinking about is to make SchemaSimilarityFactory's 
{{PerFieldSimilarityWrapper}} a named class instead of an anonymous inner class:
{code}
class SchemaFieldSimilarity extends PerFieldSimilarityWrapper { ... }
{code}
Then the print from the global sim would instead be printing the name of the 
Similarity, not the factory:
{code}
"similarity":{
  "className":"org.apache.solr.search.similarities.SchemaFieldSimilarity",
  "details":"SchemaFieldSimilarity. Default: DFR I(F)B3(900.0)"},
{code}
Naming of this class could be discussed: SolrPerFieldSimilarity, 
SchemaSimilarity, SchemaFieldSimilarity, PerSchemaFieldSimilarityWrapper.
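For context, a rough sketch of the per-field similarity setup being discussed, as it would appear in schema.xml; the DFR parameters are chosen to match the "DFR I(F)B3(900.0)" default shown above, and the field type name is invented:

```xml
<!-- Global similarity: SchemaSimilarityFactory allows per-fieldType overrides -->
<similarity class="solr.SchemaSimilarityFactory"/>

<fieldType name="text_dfr" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
  </analyzer>
  <!-- Per-type override; fields without one fall back to the global default -->
  <similarity class="solr.DFRSimilarityFactory">
    <str name="basicModel">I(F)</str>
    <str name="afterEffect">B</str>
    <str name="normalization">H3</str>
    <float name="mu">900</float>
  </similarity>
</fieldType>
```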

> Display Similarity Factory in use in Schema-Browser
> ---
>
> Key: SOLR-8370
> URL: https://issues.apache.org/jira/browse/SOLR-8370
> Project: Solr
>  Issue Type: Improvement
>  Components: UI
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Trivial
>  Labels: newdev
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-8370.patch, SOLR-8370.patch, SOLR-8370.patch, 
> SOLR-8370.patch, SOLR-8370.patch, SOLR-8370.patch, SOLR-8370.patch, 
> screenshot-1.png, screenshot-2.png, screenshot-3.png, screenshot-4.png, 
> screenshot-5.png
>
>
> Perhaps the Admin UI Schema browser should also display which 
> {{<similarity>}} is in use in the schema, like it does per-field.






[jira] [Commented] (LUCENE-7504) Explain of select that uses replace() throws exception

2016-10-18 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586712#comment-15586712
 ] 

Uwe Schindler commented on LUCENE-7504:
---

I can move the issue!

> Explain of select that uses replace() throws exception
> --
>
> Key: LUCENE-7504
> URL: https://issues.apache.org/jira/browse/LUCENE-7504
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 6.2.1
>Reporter: Gus Heck
>
> {code}
> select(
>search(test, q="table:article ", fl="edge_id", sort="edge_id desc", 
> rows=10),
>edge_id,
>replace(type,null, withValue="1")
> ){code}
> produced this stack trace
> {code}
> ERROR (qtp1989972246-17) [c:hcdtest s:shard1 r:core_node1 
> x:hcdtest_shard1_replica1] o.a.s.s.HttpSolrCall null:java.io.IOException: 
> Unable to find function name for class 
> 'org.apache.solr.client.solrj.io.ops.ReplaceWithValueOperation'
>   at 
> org.apache.solr.client.solrj.io.stream.expr.StreamFactory.getFunctionName(StreamFactory.java:335)
>   at 
> org.apache.solr.client.solrj.io.ops.ReplaceWithValueOperation.toExpression(ReplaceWithValueOperation.java:108)
>   at 
> org.apache.solr.client.solrj.io.ops.ReplaceOperation.toExpression(ReplaceOperation.java:81)
>   at 
> org.apache.solr.client.solrj.io.stream.SelectStream.toExpression(SelectStream.java:148)
>   at 
> org.apache.solr.client.solrj.io.stream.SelectStream.toExplanation(SelectStream.java:164)
>   at 
> org.apache.solr.client.solrj.io.stream.PushBackStream.toExplanation(PushBackStream.java:56)
>   at 
> org.apache.solr.client.solrj.io.stream.ComplementStream.toExplanation(ComplementStream.java:132)
>   at 
> org.apache.solr.client.solrj.io.stream.PushBackStream.toExplanation(PushBackStream.java:56)
>   at 
> org.apache.solr.client.solrj.io.stream.ComplementStream.toExplanation(ComplementStream.java:132)
>   at 
> org.apache.solr.client.solrj.io.stream.RankStream.toExplanation(RankStream.java:142)
>   at 
> org.apache.solr.client.solrj.io.stream.PushBackStream.toExplanation(PushBackStream.java:56)
>   at 
> org.apache.solr.client.solrj.io.stream.MergeStream.toExplanation(MergeStream.java:136)
>   at 
> org.apache.solr.client.solrj.io.stream.HashJoinStream.toExplanation(HashJoinStream.java:174)
>   at 
> org.apache.solr.client.solrj.io.stream.SelectStream.toExplanation(SelectStream.java:159)
>   at 
> org.apache.solr.client.solrj.io.stream.RankStream.toExplanation(RankStream.java:142)
>   at 
> org.apache.solr.client.solrj.io.stream.SelectStream.toExplanation(SelectStream.java:159)
>   at 
> org.apache.solr.handler.StreamHandler.handleRequestBody(StreamHandler.java:205)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:154)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2089)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:459)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:518)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>   at 

[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 179 - Still unstable

2016-10-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/179/

5 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CollectionsAPIDistributedZkTest

Error Message:
2 threads leaked from SUITE scope at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest: 1) Thread[id=7581, 
name=searcherExecutor-2770-thread-1, state=WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)2) Thread[id=7577, 
name=searcherExecutor-2772-thread-1, state=WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE 
scope at org.apache.solr.cloud.CollectionsAPIDistributedZkTest: 
   1) Thread[id=7581, name=searcherExecutor-2770-thread-1, state=WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
   2) Thread[id=7577, name=searcherExecutor-2772-thread-1, state=WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([59D1B9ADAFC26E80]:0)
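The leaked searcherExecutor threads above are idle pool workers parked in LinkedBlockingQueue.take(); they linger until their pool is shut down. The usual teardown pattern looks roughly like this (a generic sketch, not Solr's actual test or core code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

// Generic executor teardown: orderly shutdown, then forced interrupt.
public class ExecutorTeardown {
    static boolean shutdownAndAwait(ExecutorService pool) {
        pool.shutdown();                              // stop accepting new tasks
        try {
            if (!pool.awaitTermination(10, TimeUnit.SECONDS)) {
                pool.shutdownNow();                   // interrupt idle/stuck workers
                return pool.awaitTermination(10, TimeUnit.SECONDS);
            }
            return true;
        } catch (InterruptedException e) {
            pool.shutdownNow();
            Thread.currentThread().interrupt();       // preserve interrupt status
            return false;
        }
    }
}
```

A thread-leak failure like the one above typically means some code path (here, core close during the collections API test) skipped this step for its searcher executor.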


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CollectionsAPIDistributedZkTest

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=7581, name=searcherExecutor-2770-thread-1, state=WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)2) Thread[id=7577, 
name=searcherExecutor-2772-thread-1, state=WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 

[jira] [Commented] (SOLR-9661) Explain of select that uses replace() throws exception

2016-10-18 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586661#comment-15586661
 ] 

Dennis Gove commented on SOLR-9661:
---

I haven't looked at the code but I can imagine how this would occur. I'll see 
if I can take a look in the next day or two.
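The exception comes from a reverse lookup: writing an expression back out (which is what explain does) requires mapping each operation class back to the name it was registered under. A generic sketch of why an unregistered class breaks this, with invented names rather than Solr's actual StreamFactory code:

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Generic function registry with the forward map and the reverse lookup
// that serialization needs.
public class FunctionRegistry {
    final Map<String, Class<?>> functions = new HashMap<>();

    void register(String name, Class<?> clazz) {
        functions.put(name, clazz);
    }

    // Reverse lookup used when writing an expression back out.
    String getFunctionName(Class<?> clazz) throws IOException {
        for (Map.Entry<String, Class<?>> e : functions.entrySet()) {
            if (e.getValue() == clazz) return e.getKey();
        }
        throw new IOException(
            "Unable to find function name for class '" + clazz.getName() + "'");
    }

    // Demo: a registered class resolves; an unregistered one throws,
    // mirroring the ReplaceWithValueOperation failure in the report.
    static boolean demo() {
        FunctionRegistry r = new FunctionRegistry();
        r.register("replace", String.class);
        try {
            if (!r.getFunctionName(String.class).equals("replace")) return false;
            r.getFunctionName(Integer.class);   // never registered: should throw
            return false;
        } catch (IOException expected) {
            return true;
        }
    }
}
```

The fix, then, is presumably to ensure every operation class that can appear inside an expression is registered under a function name, not just the outer classes.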

> Explain of select that uses replace() throws exception
> --
>
> Key: SOLR-9661
> URL: https://issues.apache.org/jira/browse/SOLR-9661
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Gus Heck
>
> {code}
> select(
>search(test, q="table:article ", fl="edge_id", sort="edge_id desc", 
> rows=10),
>edge_id,
>replace(type,null, withValue="1")
> )
> {code}
> as a streaming expression produced this stack trace:
> {code}
> ERROR (qtp1989972246-17) [c:test s:shard1 r:core_node1 
> x:test_shard1_replica1] o.a.s.s.HttpSolrCall null:java.io.IOException: Unable 
> to find function name for class 
> 'org.apache.solr.client.solrj.io.ops.ReplaceWithValueOperation'
>   at 
> org.apache.solr.client.solrj.io.stream.expr.StreamFactory.getFunctionName(StreamFactory.java:335)
>   at 
> org.apache.solr.client.solrj.io.ops.ReplaceWithValueOperation.toExpression(ReplaceWithValueOperation.java:108)
>   at 
> org.apache.solr.client.solrj.io.ops.ReplaceOperation.toExpression(ReplaceOperation.java:81)
>   at 
> org.apache.solr.client.solrj.io.stream.SelectStream.toExpression(SelectStream.java:148)
>   at 
> org.apache.solr.client.solrj.io.stream.SelectStream.toExplanation(SelectStream.java:164)
>   at 
> org.apache.solr.client.solrj.io.stream.PushBackStream.toExplanation(PushBackStream.java:56)
>   at 
> org.apache.solr.client.solrj.io.stream.ComplementStream.toExplanation(ComplementStream.java:132)
>   at 
> org.apache.solr.client.solrj.io.stream.PushBackStream.toExplanation(PushBackStream.java:56)
>   at 
> org.apache.solr.client.solrj.io.stream.ComplementStream.toExplanation(ComplementStream.java:132)
>   at 
> org.apache.solr.client.solrj.io.stream.RankStream.toExplanation(RankStream.java:142)
>   at 
> org.apache.solr.client.solrj.io.stream.PushBackStream.toExplanation(PushBackStream.java:56)
>   at 
> org.apache.solr.client.solrj.io.stream.MergeStream.toExplanation(MergeStream.java:136)
>   at 
> org.apache.solr.client.solrj.io.stream.HashJoinStream.toExplanation(HashJoinStream.java:174)
>   at 
> org.apache.solr.client.solrj.io.stream.SelectStream.toExplanation(SelectStream.java:159)
>   at 
> org.apache.solr.client.solrj.io.stream.RankStream.toExplanation(RankStream.java:142)
>   at 
> org.apache.solr.client.solrj.io.stream.SelectStream.toExplanation(SelectStream.java:159)
>   at 
> org.apache.solr.handler.StreamHandler.handleRequestBody(StreamHandler.java:205)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:154)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2089)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:459)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:518)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
>   at 
> 

[jira] [Created] (SOLR-9661) Explain of select that uses replace() throws exception

2016-10-18 Thread Gus Heck (JIRA)
Gus Heck created SOLR-9661:
--

 Summary: Explain of select that uses replace() throws exception
 Key: SOLR-9661
 URL: https://issues.apache.org/jira/browse/SOLR-9661
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Gus Heck


{code}
select(
   search(test, q="table:article ", fl="edge_id", sort="edge_id desc", rows=10),
   edge_id,
   replace(type,null, withValue="1")
)
{code}
as a streaming expression produced this stack trace:
{code}
ERROR (qtp1989972246-17) [c:test s:shard1 r:core_node1 x:test_shard1_replica1] 
o.a.s.s.HttpSolrCall null:java.io.IOException: Unable to find function name for 
class 'org.apache.solr.client.solrj.io.ops.ReplaceWithValueOperation'
at 
org.apache.solr.client.solrj.io.stream.expr.StreamFactory.getFunctionName(StreamFactory.java:335)
at 
org.apache.solr.client.solrj.io.ops.ReplaceWithValueOperation.toExpression(ReplaceWithValueOperation.java:108)
at 
org.apache.solr.client.solrj.io.ops.ReplaceOperation.toExpression(ReplaceOperation.java:81)
at 
org.apache.solr.client.solrj.io.stream.SelectStream.toExpression(SelectStream.java:148)
at 
org.apache.solr.client.solrj.io.stream.SelectStream.toExplanation(SelectStream.java:164)
at 
org.apache.solr.client.solrj.io.stream.PushBackStream.toExplanation(PushBackStream.java:56)
at 
org.apache.solr.client.solrj.io.stream.ComplementStream.toExplanation(ComplementStream.java:132)
at 
org.apache.solr.client.solrj.io.stream.PushBackStream.toExplanation(PushBackStream.java:56)
at 
org.apache.solr.client.solrj.io.stream.ComplementStream.toExplanation(ComplementStream.java:132)
at 
org.apache.solr.client.solrj.io.stream.RankStream.toExplanation(RankStream.java:142)
at 
org.apache.solr.client.solrj.io.stream.PushBackStream.toExplanation(PushBackStream.java:56)
at 
org.apache.solr.client.solrj.io.stream.MergeStream.toExplanation(MergeStream.java:136)
at 
org.apache.solr.client.solrj.io.stream.HashJoinStream.toExplanation(HashJoinStream.java:174)
at 
org.apache.solr.client.solrj.io.stream.SelectStream.toExplanation(SelectStream.java:159)
at 
org.apache.solr.client.solrj.io.stream.RankStream.toExplanation(RankStream.java:142)
at 
org.apache.solr.client.solrj.io.stream.SelectStream.toExplanation(SelectStream.java:159)
at 
org.apache.solr.handler.StreamHandler.handleRequestBody(StreamHandler.java:205)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:154)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2089)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:459)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:518)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
at 

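The exception above comes from a reverse lookup: serializing the expression back to text requires a function name for each operation class, and the replace operation's class was apparently never registered under one. A minimal stand-in registry (not Solr's actual StreamFactory; the class names used here are illustrative) shows the failure mode:

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in for a name-to-class function registry like the one
// StreamFactory keeps: serializing an expression needs the reverse lookup,
// so a class with no registered name cannot be turned back into text.
public class FunctionRegistrySketch {
    private final Map<String, Class<?>> byName = new HashMap<>();

    public FunctionRegistrySketch withFunctionName(String name, Class<?> clazz) {
        byName.put(name, clazz);
        return this;
    }

    public String getFunctionName(Class<?> clazz) throws IOException {
        for (Map.Entry<String, Class<?>> e : byName.entrySet()) {
            if (e.getValue() == clazz) {
                return e.getKey();
            }
        }
        // Mirrors the message in the stack trace above.
        throw new IOException("Unable to find function name for class '" + clazz.getName() + "'");
    }
}
```

Registering the operation's class under a name makes the reverse lookup succeed; omitting the registration reproduces the IOException.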
[jira] [Closed] (LUCENE-7504) Explain of select that uses replace() throws exception

2016-10-18 Thread Gus Heck (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gus Heck closed LUCENE-7504.

Resolution: Not A Bug

closing to reopen in solr

> Explain of select that uses replace() throws exception
> --
>
> Key: LUCENE-7504
> URL: https://issues.apache.org/jira/browse/LUCENE-7504
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 6.2.1
>Reporter: Gus Heck
>
> {code}
> select(
>search(test, q="table:article ", fl="edge_id", sort="edge_id desc", 
> rows=10),
>edge_id,
>replace(type,null, withValue="1")
> ){code}
> produced this stack trace
> {code}
> ERROR (qtp1989972246-17) [c:hcdtest s:shard1 r:core_node1 
> x:hcdtest_shard1_replica1] o.a.s.s.HttpSolrCall null:java.io.IOException: 
> Unable to find function name for class 
> 'org.apache.solr.client.solrj.io.ops.ReplaceWithValueOperation'
>   at 
> org.apache.solr.client.solrj.io.stream.expr.StreamFactory.getFunctionName(StreamFactory.java:335)
>   at 
> org.apache.solr.client.solrj.io.ops.ReplaceWithValueOperation.toExpression(ReplaceWithValueOperation.java:108)
>   at 
> org.apache.solr.client.solrj.io.ops.ReplaceOperation.toExpression(ReplaceOperation.java:81)
>   at 
> org.apache.solr.client.solrj.io.stream.SelectStream.toExpression(SelectStream.java:148)
>   at 
> org.apache.solr.client.solrj.io.stream.SelectStream.toExplanation(SelectStream.java:164)
>   at 
> org.apache.solr.client.solrj.io.stream.PushBackStream.toExplanation(PushBackStream.java:56)
>   at 
> org.apache.solr.client.solrj.io.stream.ComplementStream.toExplanation(ComplementStream.java:132)
>   at 
> org.apache.solr.client.solrj.io.stream.PushBackStream.toExplanation(PushBackStream.java:56)
>   at 
> org.apache.solr.client.solrj.io.stream.ComplementStream.toExplanation(ComplementStream.java:132)
>   at 
> org.apache.solr.client.solrj.io.stream.RankStream.toExplanation(RankStream.java:142)
>   at 
> org.apache.solr.client.solrj.io.stream.PushBackStream.toExplanation(PushBackStream.java:56)
>   at 
> org.apache.solr.client.solrj.io.stream.MergeStream.toExplanation(MergeStream.java:136)
>   at 
> org.apache.solr.client.solrj.io.stream.HashJoinStream.toExplanation(HashJoinStream.java:174)
>   at 
> org.apache.solr.client.solrj.io.stream.SelectStream.toExplanation(SelectStream.java:159)
>   at 
> org.apache.solr.client.solrj.io.stream.RankStream.toExplanation(RankStream.java:142)
>   at 
> org.apache.solr.client.solrj.io.stream.SelectStream.toExplanation(SelectStream.java:159)
>   at 
> org.apache.solr.handler.StreamHandler.handleRequestBody(StreamHandler.java:205)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:154)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2089)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:459)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:518)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>   at 

[jira] [Commented] (LUCENE-7504) Explain of select that uses replace() throws exception

2016-10-18 Thread Gus Heck (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586630#comment-15586630
 ] 

Gus Heck commented on LUCENE-7504:
--

Oops, I meant to pick Solr but somehow got Lucene. Sorry.

> Explain of select that uses replace() throws exception
> --
>
> Key: LUCENE-7504
> URL: https://issues.apache.org/jira/browse/LUCENE-7504
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 6.2.1
>Reporter: Gus Heck
>
> {code}
> select(
>search(test, q="table:article ", fl="edge_id", sort="edge_id desc", 
> rows=10),
>edge_id,
>replace(type,null, withValue="1")
> ){code}
> produced this stack trace
> {code}
> ERROR (qtp1989972246-17) [c:hcdtest s:shard1 r:core_node1 
> x:hcdtest_shard1_replica1] o.a.s.s.HttpSolrCall null:java.io.IOException: 
> Unable to find function name for class 
> 'org.apache.solr.client.solrj.io.ops.ReplaceWithValueOperation'
>   at 
> org.apache.solr.client.solrj.io.stream.expr.StreamFactory.getFunctionName(StreamFactory.java:335)
>   at 
> org.apache.solr.client.solrj.io.ops.ReplaceWithValueOperation.toExpression(ReplaceWithValueOperation.java:108)
>   at 
> org.apache.solr.client.solrj.io.ops.ReplaceOperation.toExpression(ReplaceOperation.java:81)
>   at 
> org.apache.solr.client.solrj.io.stream.SelectStream.toExpression(SelectStream.java:148)
>   at 
> org.apache.solr.client.solrj.io.stream.SelectStream.toExplanation(SelectStream.java:164)
>   at 
> org.apache.solr.client.solrj.io.stream.PushBackStream.toExplanation(PushBackStream.java:56)
>   at 
> org.apache.solr.client.solrj.io.stream.ComplementStream.toExplanation(ComplementStream.java:132)
>   at 
> org.apache.solr.client.solrj.io.stream.PushBackStream.toExplanation(PushBackStream.java:56)
>   at 
> org.apache.solr.client.solrj.io.stream.ComplementStream.toExplanation(ComplementStream.java:132)
>   at 
> org.apache.solr.client.solrj.io.stream.RankStream.toExplanation(RankStream.java:142)
>   at 
> org.apache.solr.client.solrj.io.stream.PushBackStream.toExplanation(PushBackStream.java:56)
>   at 
> org.apache.solr.client.solrj.io.stream.MergeStream.toExplanation(MergeStream.java:136)
>   at 
> org.apache.solr.client.solrj.io.stream.HashJoinStream.toExplanation(HashJoinStream.java:174)
>   at 
> org.apache.solr.client.solrj.io.stream.SelectStream.toExplanation(SelectStream.java:159)
>   at 
> org.apache.solr.client.solrj.io.stream.RankStream.toExplanation(RankStream.java:142)
>   at 
> org.apache.solr.client.solrj.io.stream.SelectStream.toExplanation(SelectStream.java:159)
>   at 
> org.apache.solr.handler.StreamHandler.handleRequestBody(StreamHandler.java:205)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:154)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2089)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:459)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:518)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>   at 

[jira] [Created] (LUCENE-7504) Explain of select that uses replace() throws exception

2016-10-18 Thread Gus Heck (JIRA)
Gus Heck created LUCENE-7504:


 Summary: Explain of select that uses replace() throws exception
 Key: LUCENE-7504
 URL: https://issues.apache.org/jira/browse/LUCENE-7504
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 6.2.1
Reporter: Gus Heck


{code}
select(
   search(test, q="table:article ", fl="edge_id", sort="edge_id desc", rows=10),
   edge_id,
   replace(type,null, withValue="1")
){code}

produced this stack trace
{code}
ERROR (qtp1989972246-17) [c:hcdtest s:shard1 r:core_node1 
x:hcdtest_shard1_replica1] o.a.s.s.HttpSolrCall null:java.io.IOException: 
Unable to find function name for class 
'org.apache.solr.client.solrj.io.ops.ReplaceWithValueOperation'
at 
org.apache.solr.client.solrj.io.stream.expr.StreamFactory.getFunctionName(StreamFactory.java:335)
at 
org.apache.solr.client.solrj.io.ops.ReplaceWithValueOperation.toExpression(ReplaceWithValueOperation.java:108)
at 
org.apache.solr.client.solrj.io.ops.ReplaceOperation.toExpression(ReplaceOperation.java:81)
at 
org.apache.solr.client.solrj.io.stream.SelectStream.toExpression(SelectStream.java:148)
at 
org.apache.solr.client.solrj.io.stream.SelectStream.toExplanation(SelectStream.java:164)
at 
org.apache.solr.client.solrj.io.stream.PushBackStream.toExplanation(PushBackStream.java:56)
at 
org.apache.solr.client.solrj.io.stream.ComplementStream.toExplanation(ComplementStream.java:132)
at 
org.apache.solr.client.solrj.io.stream.PushBackStream.toExplanation(PushBackStream.java:56)
at 
org.apache.solr.client.solrj.io.stream.ComplementStream.toExplanation(ComplementStream.java:132)
at 
org.apache.solr.client.solrj.io.stream.RankStream.toExplanation(RankStream.java:142)
at 
org.apache.solr.client.solrj.io.stream.PushBackStream.toExplanation(PushBackStream.java:56)
at 
org.apache.solr.client.solrj.io.stream.MergeStream.toExplanation(MergeStream.java:136)
at 
org.apache.solr.client.solrj.io.stream.HashJoinStream.toExplanation(HashJoinStream.java:174)
at 
org.apache.solr.client.solrj.io.stream.SelectStream.toExplanation(SelectStream.java:159)
at 
org.apache.solr.client.solrj.io.stream.RankStream.toExplanation(RankStream.java:142)
at 
org.apache.solr.client.solrj.io.stream.SelectStream.toExplanation(SelectStream.java:159)
at 
org.apache.solr.handler.StreamHandler.handleRequestBody(StreamHandler.java:205)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:154)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2089)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:459)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:518)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
at 

[jira] [Commented] (SOLR-9506) cache IndexFingerprint for each segment

2016-10-18 Thread Keith Laban (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586530#comment-15586530
 ] 

Keith Laban commented on SOLR-9506:
---

How expensive would it be to check numDocs (#4 in Yonik's comment earlier)? I 
think this would be the most straightforward and understandable approach.

> cache IndexFingerprint for each segment
> ---
>
> Key: SOLR-9506
> URL: https://issues.apache.org/jira/browse/SOLR-9506
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
> Attachments: SOLR-9506.patch, SOLR-9506.patch, SOLR-9506_POC.patch
>
>
> The IndexFingerprint is cached per index searcher, which makes it quite useless 
> during high-throughput indexing. If the fingerprint were cached per segment, it 
> would be vastly more efficient to compute.
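The per-segment idea can be sketched as follows: since Lucene segments are write-once, a fingerprint computed for a segment stays valid, so a new searcher only pays for segments it has not seen. This is a hypothetical illustration of the approach, not Solr's implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of per-segment fingerprint caching: segments are
// immutable once written, so their fingerprints can be computed once and
// reused by every subsequent searcher; only new segments cost anything.
public class SegmentFingerprintCache {
    private final Map<String, Long> bySegment = new HashMap<>();
    int computations = 0;  // exposed only to show the caching effect

    long fingerprint(String segmentName) {
        return bySegment.computeIfAbsent(segmentName, name -> {
            computations++;                // the expensive full-segment scan
            return (long) name.hashCode(); // stand-in for the real hash
        });
    }

    // A searcher's fingerprint is a combination over its (cached) segments.
    long combined(Iterable<String> segments) {
        long sum = 0;
        for (String s : segments) {
            sum += fingerprint(s);
        }
        return sum;
    }
}
```

Two searchers over the same segment set trigger the expensive computation only once per segment.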



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8370) Display Similarity Factory in use in Schema-Browser

2016-10-18 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586512#comment-15586512
 ] 

Alexandre Rafalovitch commented on SOLR-8370:
-

I am not sure if I am applying the correct patch, but I am still getting 
strange behavior from the Luke handler that propagates all the way to the UI. 
Specifically, I am using the last example from 
https://cwiki.apache.org/confluence/display/solr/Other+Schema+Elements

The request 
http://localhost:8983/solr/techproducts/admin/luke?_=1476820097532&show=schema&wt=json
returns:
* The global similarity as: 
{noformat}
"similarity":{
  "className":"org.apache.solr.search.similarities.SchemaSimilarityFactory$1",
  "details":"SchemaSimilarity. Global: DFR I(F)B3(900.0)"},
{noformat}

* And the type similarity for text_dfr as:
{noformat}
"similarity":{
  "className":"org.apache.lucene.search.similarities.DFRSimilarity",
  "details":"DFR I(F)B3(900.0)"}},
{noformat}

They *should* be the same as far as I can tell. And, when I said "Global", I 
meant that we don't need to say anything at all in the API, as it is clear from 
just the level of the Similarity key. I meant to change the Angular UI to say 
"Global similarity" instead of just "Similarity".


> Display Similarity Factory in use in Schema-Browser
> ---
>
> Key: SOLR-8370
> URL: https://issues.apache.org/jira/browse/SOLR-8370
> Project: Solr
>  Issue Type: Improvement
>  Components: UI
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Trivial
>  Labels: newdev
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-8370.patch, SOLR-8370.patch, SOLR-8370.patch, 
> SOLR-8370.patch, SOLR-8370.patch, SOLR-8370.patch, SOLR-8370.patch, 
> screenshot-1.png, screenshot-2.png, screenshot-3.png, screenshot-4.png, 
> screenshot-5.png
>
>
> Perhaps the Admin UI Schema browser should also display which global 
> {{<similarity/>}} is in use in the schema, like it does per-field.






[jira] [Commented] (SOLR-7850) Move user customization out of solr.in.* scripts

2016-10-18 Thread Jan Høydahl (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586490#comment-15586490
 ] 

Jan Høydahl commented on SOLR-7850:
---

This is a nice improvement. It looks OK, though I did not do actual testing.

You could also remove this part from solr.cmd, like you did in the Linux script:
{code}
IF "!JAVA_MAJOR_VERSION!"=="7" (
...
)
{code}

> Move user customization out of solr.in.* scripts
> 
>
> Key: SOLR-7850
> URL: https://issues.apache.org/jira/browse/SOLR-7850
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 5.2.1
>Reporter: Shawn Heisey
>Assignee: David Smiley
>Priority: Minor
> Fix For: 6.3
>
> Attachments: 
> SOLR_7850_move_bin_solr_in_sh_defaults_into_bin_solr.patch, 
> SOLR_7850_move_bin_solr_in_sh_defaults_into_bin_solr.patch
>
>
> I've seen a fair number of users customizing solr.in.* scripts to make 
> changes to their Solr installs.  I think the documentation suggests this, 
> though I haven't confirmed.
> One possible problem with this is that we might make changes in those scripts 
> which such a user would want in their setup, but if they replace the script 
> with the one in the new version, they will lose their customizations.
> I propose instead that we have the startup script look for and utilize a user 
> customization script, in a similar manner to Linux init scripts that look for 
> /etc/default/packagename, but are able to function without it.  I'm not 
> entirely sure where the script should live or what it should be called.  One 
> idea is server/etc/userconfig.\{sh,cmd\} ... but I haven't put a lot of 
> thought into it yet.
> If the internal behavior of our scripts is largely replaced by a small java 
> app as detailed in SOLR-7043, then the same thing should apply there -- have 
> a config file for a user to specify settings, but work perfectly if that 
> config file is absent.
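The proposed mechanism is the usual optional-include pattern; a minimal sketch follows, noting that server/etc/userconfig.sh is only the name floated above, not a decided location:

```shell
#!/bin/sh
# Sketch: source an optional user config if present, then apply shipped
# defaults only for anything the user did not set. The script must behave
# identically when the file is absent.
SOLR_SERVER_DIR="${SOLR_SERVER_DIR:-server}"
USER_CONFIG="$SOLR_SERVER_DIR/etc/userconfig.sh"

if [ -r "$USER_CONFIG" ]; then
  . "$USER_CONFIG"          # user customizations win
fi

: "${SOLR_HEAP:=512m}"      # shipped default, used only as a fallback
echo "heap=$SOLR_HEAP"
```

Because the defaults are applied after the include, upgrading the shipped script never clobbers a user's overrides.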






[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+140) - Build # 18083 - Unstable!

2016-10-18 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18083/
Java: 32bit/jdk-9-ea+140 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.lucene.search.TestBooleanRewrites.testRandom

Error Message:
expected:<5719.87890625> but was:<5719.87841796875>

Stack Trace:
java.lang.AssertionError: expected:<5719.87890625> but was:<5719.87841796875>
at 
__randomizedtesting.SeedInfo.seed([30F5E4E9CD90796C:42B9C1E67CF0CF1F]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:443)
at org.junit.Assert.assertEquals(Assert.java:512)
at 
org.apache.lucene.search.TestBooleanRewrites.assertEquals(TestBooleanRewrites.java:427)
at 
org.apache.lucene.search.TestBooleanRewrites.testRandom(TestBooleanRewrites.java:367)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:535)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(java.base@9-ea/Thread.java:843)




Build Log:
[...truncated 871 lines...]
   [junit4] Suite: org.apache.lucene.search.TestBooleanRewrites
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestBooleanRewrites 
-Dtests.method=testRandom -Dtests.seed=30F5E4E9CD90796C -Dtests.multiplier=3 
-Dtests.slow=true -Dtests.locale=rm-CH 
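The tiny score mismatch (5719.87890625 vs 5719.87841796875) is the classic symptom of float scores being accumulated in a different order by the rewritten query: float addition is not associative, so exact-equality assertions on scores are fragile. A small illustration with arbitrary values:

```java
// Float addition is not associative: summing the same values in a different
// order can change the low bits of the result, which is enough to fail an
// exact-equality assertion on a score.
public class FloatOrderDemo {
    public static void main(String[] args) {
        float a = 5000.123f, b = 700.456f, c = 19.2998f;
        float leftToRight = (a + b) + c;
        float rightToLeft = a + (b + c);
        // The two sums may differ by a few ulps; compare with a tolerance.
        float tolerance = Math.ulp(leftToRight) * 4;
        System.out.println(Math.abs(leftToRight - rightToLeft) <= tolerance);
    }
}
```

Comparing with a small ulp-based tolerance, as above, is the usual way such test assertions are made robust.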

[jira] [Commented] (SOLR-8332) factor HttpShardHandler[Factory]'s url shuffling out into a ReplicaListTransformer class

2016-10-18 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586485#comment-15586485
 ] 

Noble Paul commented on SOLR-8332:
--


{code:java}
interface ReplicaListTransformer {

  public void transform(List<Replica> replicas);

  public void transformUrls(List<String> shardUrls);

}

{code}

As I look at the functionality, what it should do is choose a few replicas 
from the available list. Sometimes it will want to make that choice based on 
some input from users. It *should not* deal with actual URLs; it is Solr's 
responsibility to map URLs to actual replicas.

So let's have a much simpler interface:

{code:java}
interface ReplicaFilter {
  public List<Replica> filter(List<Replica> allReplicas, SolrQueryRequest req);
}
{code}
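As one example of the proposed contract, a filter might simply order preferred replicas first and let Solr map the result to URLs. Replica here is a bare stand-in type, and the SolrQueryRequest parameter is dropped for brevity:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative ReplicaFilter-style implementation: it chooses and orders
// replicas but never touches URLs. Replica is a stand-in, not Solr's class.
public class PreferLocalReplicaFilter {
    static class Replica {
        final String name;
        final boolean local;
        Replica(String name, boolean local) { this.name = name; this.local = local; }
    }

    // Local replicas first, remote ones after, original order otherwise kept.
    static List<Replica> filter(List<Replica> allReplicas) {
        List<Replica> out = new ArrayList<>();
        for (Replica r : allReplicas) if (r.local) out.add(r);
        for (Replica r : allReplicas) if (!r.local) out.add(r);
        return out;
    }
}
```

The URL mapping then stays entirely inside Solr, as the comment argues it should.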


> factor HttpShardHandler[Factory]'s url shuffling out into a 
> ReplicaListTransformer class
> 
>
> Key: SOLR-8332
> URL: https://issues.apache.org/jira/browse/SOLR-8332
> Project: Solr
>  Issue Type: Wish
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8332.patch, SOLR-8332.patch, SOLR-8332.patch
>
>
> Proposed patch against trunk to follow. No change in behaviour intended. This 
> would be a step towards SOLR-6730.






[jira] [Commented] (SOLR-9506) cache IndexFingerprint for each segment

2016-10-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586419#comment-15586419
 ] 

ASF subversion and git services commented on SOLR-9506:
---

Commit ffa5c4ba2c2d6fa6bb943a70196aad0058333fa2 in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ffa5c4b ]

SOLR-9506: reverting the previous commit


> cache IndexFingerprint for each segment
> ---
>
> Key: SOLR-9506
> URL: https://issues.apache.org/jira/browse/SOLR-9506
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
> Attachments: SOLR-9506.patch, SOLR-9506.patch, SOLR-9506_POC.patch
>
>
> The IndexFingerprint is cached per index searcher, which makes it quite 
> useless during high-throughput indexing. If the fingerprint is cached per 
> segment instead, computing it becomes vastly more efficient.
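A hypothetical sketch of the per-segment idea (the class and key names are invented, not Solr's actual IndexFingerprint code): keying a cache on a stable per-segment identity means a reopened searcher only pays to recompute fingerprints for segments that actually changed.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class SegmentFingerprintCache {
    // Keyed by an immutable per-segment identity (e.g. segment name plus
    // delete generation), so entries survive searcher reopens as long as
    // the segment itself is unchanged.
    private final Map<String, Long> cache = new ConcurrentHashMap<>();
    private int computations = 0;

    /** Returns the cached fingerprint, computing it only on first request. */
    public long fingerprint(String segmentKey, Function<String, Long> compute) {
        return cache.computeIfAbsent(segmentKey, k -> {
            computations++;                 // expensive work happens once per segment
            return compute.apply(k);
        });
    }

    public int computations() { return computations; }
}
```

A searcher-level fingerprint could then be folded together from the cached per-segment values instead of being recomputed from scratch on every reopen.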






[jira] [Commented] (SOLR-9659) Add zookeeper DataWatch API

2016-10-18 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586402#comment-15586402
 ] 

Scott Blum commented on SOLR-9659:
--

+1, this is really the perfect use case to start experimenting with an 
incremental move.  I think that long term, Curator is a really good idea.  I 
took a quick look at your patch, and it makes me sad imagining the cycle of 
review, bug discovery, and fixing that would ultimately have to happen when 
there's already code that handles so much of that subtlety, including issues 
around re-entrant code, data races, threading, etc.

> Add zookeeper DataWatch API
> ---
>
> Key: SOLR-9659
> URL: https://issues.apache.org/jira/browse/SOLR-9659
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-9659.patch
>
>
> We have several components which need to set up watches on ZooKeeper nodes 
> for various aspects of cluster management.  At the moment, all of these 
> components do this themselves, leading to large amounts of duplicated code, 
> and complicated logic for dealing with reconnections, etc, scattered across 
> the codebase.  We should replace this with a simple API controlled by 
> SolrZkClient, which should make the code more robust, and testing 
> considerably easier.






[jira] [Commented] (SOLR-9659) Add zookeeper DataWatch API

2016-10-18 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586224#comment-15586224
 ] 

David Smiley commented on SOLR-9659:


I very much like the idea of leveraging Curator at least a little bit.  As 
already indicated, perhaps Solr will never fully cut over to it, but it's not 
an all-or-nothing proposition.







[jira] [Commented] (SOLR-9659) Add zookeeper DataWatch API

2016-10-18 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586220#comment-15586220
 ] 

Alan Woodward commented on SOLR-9659:
-

Aha, hadn't spotted the *Cache objects.  Yes, I can see they're trying to do 
the same thing.  And I like the ability to pass in an Executor for running the 
callbacks as well; I'll extend the patch to add that ability.

My reservation about going directly to Curator would be that I don't think we 
want to be maintaining two different frameworks at the same time.  Instead, I'd 
suggest we gradually hide all the ZK interaction behind some nicer APIs in 
SolrZkClient, and then we can swap in Curator behind that single point.







[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 481 - Failure!

2016-10-18 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/481/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 65931 lines...]
-ecj-javadoc-lint-tests:
[mkdir] Created dir: 
/var/folders/qg/h2dfw5s161s51l2bn79mrb7rgn/T/ecj1350679158
 [ecj-lint] Compiling 700 source files to 
/var/folders/qg/h2dfw5s161s51l2bn79mrb7rgn/T/ecj1350679158
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/Users/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/Users/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/test/org/apache/solr/cloud/OverseerTest.java
 (at line 267)
 [ecj-lint] ZkStateReader reader = new ZkStateReader(zkClient);
 [ecj-lint]   ^^
 [ecj-lint] Resource leak: 'reader' is never closed
 [ecj-lint] --
 [ecj-lint] 2. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/test/org/apache/solr/cloud/OverseerTest.java
 (at line 317)
 [ecj-lint] ZkStateReader reader = new ZkStateReader(zkClient);
 [ecj-lint]   ^^
 [ecj-lint] Resource leak: 'reader' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 3. ERROR in 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/test/org/apache/solr/cloud/TestTolerantUpdateProcessorRandomCloud.java
 (at line 19)
 [ecj-lint] import javax.ws.rs.HEAD;
 [ecj-lint]
 [ecj-lint] The import javax.ws.rs.HEAD is never used
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 4. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/test/org/apache/solr/core/HdfsDirectoryFactoryTest.java
 (at line 146)
 [ecj-lint] HdfsDirectoryFactory hdfsFactory = new HdfsDirectoryFactory();
 [ecj-lint]  ^^^
 [ecj-lint] Resource leak: 'hdfsFactory' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 5. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/test/org/apache/solr/handler/admin/SecurityConfHandlerTest.java
 (at line 53)
 [ecj-lint] BasicAuthPlugin basicAuth = new BasicAuthPlugin();
 [ecj-lint] ^
 [ecj-lint] Resource leak: 'basicAuth' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 6. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/test/org/apache/solr/handler/component/DistributedDebugComponentTest.java
 (at line 163)
 [ecj-lint] SolrClient client = random().nextBoolean() ? collection1 : 
collection2;
 [ecj-lint]^^
 [ecj-lint] Resource leak: 'client' is never closed
 [ecj-lint] --
 [ecj-lint] 7. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/test/org/apache/solr/handler/component/DistributedDebugComponentTest.java
 (at line 221)
 [ecj-lint] throw new AssertionError(q.toString() + ": " + e.getMessage(), 
e);
 [ecj-lint] 
^^
 [ecj-lint] Resource leak: 'client' is not closed at this location
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 8. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/test/org/apache/solr/highlight/HighlighterTest.java
 (at line 185)
 [ecj-lint] Analyzer a1 = new WhitespaceAnalyzer();
 [ecj-lint]  ^^
 [ecj-lint] Resource leak: 'a1' is never closed
 [ecj-lint] --
 [ecj-lint] 9. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/test/org/apache/solr/highlight/HighlighterTest.java
 (at line 188)
 [ecj-lint] OffsetWindowTokenFilter tots = new 
OffsetWindowTokenFilter(tokenStream);
 [ecj-lint] 
 [ecj-lint] Resource leak: 'tots' is never closed
 [ecj-lint] --
 [ecj-lint] 10. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/test/org/apache/solr/highlight/HighlighterTest.java
 (at line 192)
 [ecj-lint] Analyzer a2 = new WhitespaceAnalyzer();
 [ecj-lint]  ^^
 [ecj-lint] Resource leak: 'a2' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 11. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/test/org/apache/solr/search/TestDocSet.java
 (at line 241)
 [ecj-lint] return loadfactor!=0 ? new HashDocSet(a,0,n,1/loadfactor) : new 
HashDocSet(a,0,n);
 [ecj-lint]^^
 [ecj-lint] Resource leak: '' is never closed
 [ecj-lint] --
 [ecj-lint] 12. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/test/org/apache/solr/search/TestDocSet.java
 (at line 531)
 [ecj-lint] DocSet a = new BitDocSet(bs);
 [ecj-lint] 

[jira] [Commented] (LUCENE-7497) Cannot use boolean SHOULD queries with block join?

2016-10-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586176#comment-15586176
 ] 

ASF subversion and git services commented on LUCENE-7497:
-

Commit b78f2219f45ca64c6a4b7261a87fae89477ec26f in lucene-solr's branch 
refs/heads/master from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b78f221 ]

LUCENE-7497: add test case


> Cannot use boolean SHOULD queries with block join?
> --
>
> Key: LUCENE-7497
> URL: https://issues.apache.org/jira/browse/LUCENE-7497
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
> Fix For: master (7.0), 6.3
>
> Attachments: LUCENE-7497.patch
>
>
> I'm in the process of upgrading http://jirasearch.mikemccandless.com (based 
> on 4.10.x in production today!) to Lucene 6.x, but hit this tricky bug.
> When I run the new test case, I hit this:
> {noformat}
> 1) testBQShouldJoinedChild(org.apache.lucene.search.join.TestBlockJoin)
> java.lang.UnsupportedOperationException
>   at 
> __randomizedtesting.SeedInfo.seed([4D5C76211B3E41E1:48F4B8C556F02AB0]:0)
>   at org.apache.lucene.search.FakeScorer.getChildren(FakeScorer.java:60)
>   at 
> org.apache.lucene.search.join.ToParentBlockJoinCollector$1.setScorer(ToParentBlockJoinCollector.java:190)
>   at 
> org.apache.lucene.search.FilterLeafCollector.setScorer(FilterLeafCollector.java:38)
>   at 
> org.apache.lucene.search.AssertingLeafCollector.setScorer(AssertingLeafCollector.java:43)
>   at 
> org.apache.lucene.search.FilterLeafCollector.setScorer(FilterLeafCollector.java:38)
>   at 
> org.apache.lucene.search.AssertingLeafCollector.setScorer(AssertingLeafCollector.java:43)
>   at org.apache.lucene.search.BooleanScorer.score(BooleanScorer.java:319)
>   at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:39)
>   at 
> org.apache.lucene.search.AssertingBulkScorer.score(AssertingBulkScorer.java:69)
>   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:669)
>   at 
> org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:91)
>   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
>   at 
> org.apache.lucene.search.join.TestBlockJoin.testBQShouldJoinedChild(TestBlockJoin.java:233)
> {noformat}
> Not sure how to fix it ... it happens because jirasearch runs SHOULD queries 
> against the child doc text fields (one child doc per jira comment) and parent 
> doc text fields (one parent doc per jira issue).






[jira] [Resolved] (LUCENE-7497) Cannot use boolean SHOULD queries with block join?

2016-10-18 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-7497.

   Resolution: Invalid
Fix Version/s: 6.3
   master (7.0)







[jira] [Commented] (LUCENE-7497) Cannot use boolean SHOULD queries with block join?

2016-10-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586170#comment-15586170
 ] 

ASF subversion and git services commented on LUCENE-7497:
-

Commit abbbdc866dd16c34714d48ee7bc4e754423e6039 in lucene-solr's branch 
refs/heads/branch_6x from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=abbbdc8 ]

LUCENE-7497: add test case








[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+140) - Build # 1980 - Unstable!

2016-10-18 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1980/
Java: 64bit/jdk-9-ea+140 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.handler.component.SpellCheckComponentTest.testNumericQuery

Error Message:
List size mismatch @ spellcheck/suggestions

Stack Trace:
java.lang.RuntimeException: List size mismatch @ spellcheck/suggestions
at 
__randomizedtesting.SeedInfo.seed([C3A3C50AAA903C45:C88F92CA355980EA]:0)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:901)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:848)
at 
org.apache.solr.handler.component.SpellCheckComponentTest.testNumericQuery(SpellCheckComponentTest.java:154)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:535)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(java.base@9-ea/Thread.java:843)




Build Log:
[...truncated 11739 lines...]
   [junit4] Suite: 

[jira] [Commented] (SOLR-9659) Add zookeeper DataWatch API

2016-10-18 Thread Keith Laban (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586137#comment-15586137
 ] 

Keith Laban commented on SOLR-9659:
---

I've used the *Cache recipes Scott is talking about pretty extensively for 
various projects. They make doing what you describe pretty trivial: no 
resetting watches, no dealing with timing, no dealing with client connections.

Basically:
1) Create a client
2) Create a PathChildrenCache or NodeCache for a path
3) Add a listener for cache changes
4) Start the cache

Everything else is maintained by Curator, which has become a pretty 
battle-tested piece of software.
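For illustration, the four steps above might look like this with Curator's PathChildrenCache. The connect string and ZooKeeper path are placeholders, and this is only a sketch of the recipe's API, not the proposed Solr integration:

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.cache.PathChildrenCache;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class CuratorCacheSketch {
    public static void main(String[] args) throws Exception {
        // 1) Create a client; the retry policy governs reconnect behaviour
        CuratorFramework client = CuratorFrameworkFactory.newClient(
            "localhost:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        // 2) Create a PathChildrenCache for a path (true = cache node data too)
        PathChildrenCache cache = new PathChildrenCache(client, "/some/path", true);

        // 3) Add a listener for cache changes; Curator re-establishes the
        //    underlying ZK watches across disconnects and session expiry
        cache.getListenable().addListener((c, event) -> {
            switch (event.getType()) {
                case CHILD_ADDED:
                case CHILD_UPDATED:
                case CHILD_REMOVED:
                    System.out.println(event.getType() + " " + event.getData().getPath());
                    break;
                default:
                    break;
            }
        });

        // 4) Start the cache
        cache.start();
    }
}
```

Running this requires a live ZooKeeper ensemble, which is exactly the plumbing the recipe hides from the listener code.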







[jira] [Commented] (SOLR-9341) GC Logs on windows should go to Solr_Logs_Dir, rather than hardcoded to /server/logs/

2016-10-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586077#comment-15586077
 ] 

ASF GitHub Bot commented on SOLR-9341:
--

Github user afscrome commented on the issue:

https://github.com/apache/lucene-solr/pull/53
  
Fixed in 33db4de4d7d5e325f8bfd886d3957735b33310a8


> GC Logs on windows should go to Solr_Logs_Dir, rather than hardcoded to 
> /server/logs/
> -
>
> Key: SOLR-9341
> URL: https://issues.apache.org/jira/browse/SOLR-9341
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.1
> Environment: Windows
>Reporter: Alex Crome
>
> The Windows batch script is hard-coded to store the GC logs in server/logs, 
> whereas the bash start script stores them in $SOLR_LOGS_DIR






[jira] [Commented] (SOLR-9341) GC Logs on windows should go to Solr_Logs_Dir, rather than hardcoded to /server/logs/

2016-10-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586078#comment-15586078
 ] 

ASF GitHub Bot commented on SOLR-9341:
--

Github user afscrome closed the pull request at:

https://github.com/apache/lucene-solr/pull/53








[GitHub] lucene-solr pull request #53: [SOLR-9341] GC logs go to SOLR_LOGS_DIR on Win...

2016-10-18 Thread afscrome
Github user afscrome closed the pull request at:

https://github.com/apache/lucene-solr/pull/53


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[GitHub] lucene-solr issue #53: [SOLR-9341] GC logs go to SOLR_LOGS_DIR on Windows

2016-10-18 Thread afscrome
Github user afscrome commented on the issue:

https://github.com/apache/lucene-solr/pull/53
  
Fixed in 33db4de4d7d5e325f8bfd886d3957735b33310a8





[jira] [Commented] (SOLR-9658) Caches should have an optional way to clean if idle for 'x' mins

2016-10-18 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586057#comment-15586057
 ] 

Noble Paul commented on SOLR-9658:
--

[~dragonsinth] I can't think of why reopening a searcher wouldn't clear caches. 
If we can isolate that into a test case, that would be great.

> Caches should have an optional way to clean if idle for 'x' mins
> 
>
> Key: SOLR-9658
> URL: https://issues.apache.org/jira/browse/SOLR-9658
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>
> If a cache is idle for long, it consumes precious memory. It should be 
> configurable to clear the cache if it was not accessed for 'x' seconds. The 
> cache configuration can have an extra config {{maxIdleTime}}. If we wish it 
> to be cleaned after 10 minutes of inactivity, set {{maxIdleTime=600}}. 
> [~dragonsinth] would it be a solution for the memory leak you mentioned?
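As a sketch of the maxIdleTime idea (hypothetical, not Solr's SolrCache API), a cache can record its last access time and clear itself when the gap since then exceeds the configured idle limit. The clock is injected so the behaviour is testable without waiting:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.LongSupplier;

// Illustrative idle-expiring cache: if no get/put happens for more than
// maxIdleTime seconds, the whole cache is cleared on the next access.
public class IdleExpiringCache<K, V> {
    private final Map<K, V> map = new LinkedHashMap<>();
    private final long maxIdleMillis;
    private final LongSupplier clock;     // injectable clock, e.g. System::currentTimeMillis
    private long lastAccessMillis;

    public IdleExpiringCache(long maxIdleSeconds, LongSupplier clock) {
        this.maxIdleMillis = maxIdleSeconds * 1000;
        this.clock = clock;
        this.lastAccessMillis = clock.getAsLong();
    }

    private void expireIfIdle() {
        long now = clock.getAsLong();
        if (now - lastAccessMillis > maxIdleMillis) {
            map.clear();                  // idle too long: release the memory
        }
        lastAccessMillis = now;
    }

    public synchronized void put(K key, V value) { expireIfIdle(); map.put(key, value); }
    public synchronized V get(K key)             { expireIfIdle(); return map.get(key); }
    public synchronized int size()               { return map.size(); }
}
```

Note that this variant only clears on the next access; a real implementation would more likely use a background sweep so idle memory is actually released rather than waiting for a request to arrive.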






[jira] [Commented] (SOLR-9659) Add zookeeper DataWatch API

2016-10-18 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586055#comment-15586055
 ] 

Scott Blum commented on SOLR-9659:
--

Yeah, did you see Curator's NodeCache, PathChildrenCache, TreeCache?  They do 
basically exactly that, but they have a lot of mileage on them.







[jira] [Commented] (SOLR-9659) Add zookeeper DataWatch API

2016-10-18 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586047#comment-15586047
 ] 

Scott Blum commented on SOLR-9659:
--

Specifically, Curator handles some of these concepts:
1) Maintain a single set of watches with ZK, and dispatch into app code.
2) Handle disconnect / reconnect, notify watchers of connection events
3) Automatically re-establish watches on the new session on reconnect.









[jira] [Commented] (SOLR-9659) Add zookeeper DataWatch API

2016-10-18 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586042#comment-15586042
 ] 

Scott Blum commented on SOLR-9659:
--

This kind of use case is very much what Curator is supposed to solve.  I agree 
with the idea that a wholesale cutover to Curator is probably not a great 
idea, especially for some of our very complicated custom recipes.  But for 
something like this, Curator is actually a really good fit.  I think it's worth 
experimenting for this use case.

> Add zookeeper DataWatch API
> ---
>
> Key: SOLR-9659
> URL: https://issues.apache.org/jira/browse/SOLR-9659
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-9659.patch
>
>
> We have several components which need to set up watches on ZooKeeper nodes 
> for various aspects of cluster management.  At the moment, all of these 
> components do this themselves, leading to large amounts of duplicated code, 
> and complicated logic for dealing with reconnections, etc, scattered across 
> the codebase.  We should replace this with a simple API controlled by 
> SolrZkClient, which should make the code more robust, and testing 
> considerably easier.






[jira] [Commented] (SOLR-9659) Add zookeeper DataWatch API

2016-10-18 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586044#comment-15586044
 ] 

Alan Woodward commented on SOLR-9659:
-

I started out by looking at Curator, but this API ended up being at a higher 
level.  Here, clients don't need to care at all about Watcher objects, or ZK 
exceptions, or anything like that.  Instead, you just say "I'm interested in 
path x/y/z - when the data there changes, call me with the new contents"; or 
"I'm interested in the children of path z/q - when the child list changes, call 
me with the new list".
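
A rough sketch of that contract, with hypothetical names and backed by an in-memory map instead of ZooKeeper, purely to show the shape of the API being described:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Hypothetical shape of the proposed API: register interest in a path and get
// called back with the new contents; no Watcher objects or ZK exceptions leak
// out to the caller. An in-memory map stands in for ZooKeeper here.
public class DataWatchSketch {
  private final Map<String, byte[]> store = new ConcurrentHashMap<>();
  private final Map<String, List<Consumer<byte[]>>> watches = new ConcurrentHashMap<>();

  // "I'm interested in path x/y/z - when the data there changes, call me"
  public void watchData(String path, Consumer<byte[]> callback) {
    watches.computeIfAbsent(path, p -> new CopyOnWriteArrayList<>()).add(callback);
    byte[] current = store.get(path);
    if (current != null) {
      callback.accept(current); // fire immediately with the current contents
    }
  }

  // Stand-in for a ZK update arriving; a real implementation would also
  // re-register the ZooKeeper watch at this point.
  public void setData(String path, byte[] data) {
    store.put(path, data);
    for (Consumer<byte[]> c : watches.getOrDefault(path, List.of())) {
      c.accept(data);
    }
  }
}
```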

> Add zookeeper DataWatch API
> ---
>
> Key: SOLR-9659
> URL: https://issues.apache.org/jira/browse/SOLR-9659
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-9659.patch
>
>
> We have several components which need to set up watches on ZooKeeper nodes 
> for various aspects of cluster management.  At the moment, all of these 
> components do this themselves, leading to large amounts of duplicated code, 
> and complicated logic for dealing with reconnections, etc, scattered across 
> the codebase.  We should replace this with a simple API controlled by 
> SolrZkClient, which should make the code more robust, and testing 
> considerably easier.






[jira] [Commented] (SOLR-9658) Caches should have an optional way to clean if idle for 'x' mins

2016-10-18 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586028#comment-15586028
 ] 

Scott Blum commented on SOLR-9658:
--

That would be awesome, I believe that would work.

Related question: do you have insight into why auto soft commit wouldn't be 
clearing caches already?  Assuming I don't want auto-warmed 
queries, I might naively think that an auto soft commit should have the effect 
of evicting caches since it would invalidate the results.  Do you know why that 
doesn't seem to happen?

> Caches should have an optional way to clean if idle for 'x' mins
> 
>
> Key: SOLR-9658
> URL: https://issues.apache.org/jira/browse/SOLR-9658
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>
> If a cache is idle for a long time, it consumes precious memory. It should be 
> configurable to clear the cache if it was not accessed for 'x' seconds. The 
> cache configuration can have an extra config option, {{maxIdleTime}}. If we 
> wish it to be cleaned after 10 minutes of inactivity, set {{maxIdleTime=600}}. 
> [~dragonsinth] would this be a solution for the memory leak you mentioned?
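
A minimal sketch of the proposed behaviour (the class and method names below are hypothetical, not Solr's actual cache classes): track the last access time per entry and let a periodic task drop anything idle longer than maxIdleTime, with {{maxIdleTime=600}} corresponding to 10 minutes:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hedged sketch of a maxIdleTime-style eviction policy: entries untouched
// for longer than the configured idle window are removed on the next sweep.
public class IdleEvictingCache<K, V> {
  private static class Entry<V> { V value; long lastAccessNanos; }

  private final Map<K, Entry<V>> map = new ConcurrentHashMap<>();
  private final long maxIdleNanos;

  public IdleEvictingCache(long maxIdleSeconds) {
    this.maxIdleNanos = maxIdleSeconds * 1_000_000_000L;
  }

  public void put(K key, V value) {
    Entry<V> e = new Entry<>();
    e.value = value;
    e.lastAccessNanos = System.nanoTime();
    map.put(key, e);
  }

  public V get(K key) {
    Entry<V> e = map.get(key);
    if (e == null) return null;
    e.lastAccessNanos = System.nanoTime(); // touching an entry resets its idle clock
    return e.value;
  }

  // Called periodically (e.g. from a scheduled task) to drop idle entries.
  public void evictIdle() {
    evictIdle(System.nanoTime());
  }

  // Overload with an explicit clock, which makes the sweep testable.
  public void evictIdle(long nowNanos) {
    map.entrySet().removeIf(en -> nowNanos - en.getValue().lastAccessNanos > maxIdleNanos);
  }

  public int size() { return map.size(); }
}
```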






[jira] [Resolved] (LUCENE-7503) Undeprecate o.o.l.util.LongValues

2016-10-18 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-7503.
--
   Resolution: Fixed
Fix Version/s: master (7.0)

> Undeprecate o.o.l.util.LongValues
> -
>
> Key: LUCENE-7503
> URL: https://issues.apache.org/jira/browse/LUCENE-7503
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: master (7.0)
>
> Attachments: LUCENE-7503.patch
>
>
> This is a follow-up to 
> http://search-lucene.com/m/l6pAi1iMlPb2wx51P=plan+for+getGlobalOrds+gt+LongValues.






[jira] [Commented] (LUCENE-7503) Undeprecate o.o.l.util.LongValues

2016-10-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15585968#comment-15585968
 ] 

ASF subversion and git services commented on LUCENE-7503:
-

Commit 3be6701f17d9a507e07e4a3f01bcfd702bdfc806 in lucene-solr's branch 
refs/heads/master from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3be6701 ]

LUCENE-7503: Undeprecate o.o.l.util.LongValues.


> Undeprecate o.o.l.util.LongValues
> -
>
> Key: LUCENE-7503
> URL: https://issues.apache.org/jira/browse/LUCENE-7503
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7503.patch
>
>
> This is a follow-up to 
> http://search-lucene.com/m/l6pAi1iMlPb2wx51P=plan+for+getGlobalOrds+gt+LongValues.






Re: [JENKINS] Lucene-Solr-6.x-Windows (64bit/jdk1.8.0_102) - Build # 526 - Failure!

2016-10-18 Thread Alan Woodward
Oops, fixed…

Alan Woodward
www.flax.co.uk


> On 18 Oct 2016, at 15:24, Policeman Jenkins Server  
> wrote:
> 
> Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/526/
> Java: 64bit/jdk1.8.0_102 -XX:+UseCompressedOops -XX:+UseG1GC
> 
> All tests passed
> 
> Build Log:
> [...truncated 65996 lines...]
> -ecj-javadoc-lint-tests:
>[mkdir] Created dir: C:\Users\jenkins\AppData\Local\Temp\ecj324144343
> [ecj-lint] Compiling 700 source files to 
> C:\Users\jenkins\AppData\Local\Temp\ecj324144343
> [ecj-lint] invalid Class-Path header in manifest of jar file: 
> C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\lib\org.restlet-2.3.0.jar
> [ecj-lint] invalid Class-Path header in manifest of jar file: 
> C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\lib\org.restlet.ext.servlet-2.3.0.jar
> [ecj-lint] --
> [ecj-lint] 1. WARNING in 
> C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test\org\apache\solr\cloud\OverseerTest.java
>  (at line 267)
> [ecj-lint]ZkStateReader reader = new ZkStateReader(zkClient);
> [ecj-lint]  ^^
> [ecj-lint] Resource leak: 'reader' is never closed
> [ecj-lint] --
> [ecj-lint] 2. WARNING in 
> C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test\org\apache\solr\cloud\OverseerTest.java
>  (at line 317)
> [ecj-lint]ZkStateReader reader = new ZkStateReader(zkClient);
> [ecj-lint]  ^^
> [ecj-lint] Resource leak: 'reader' is never closed
> [ecj-lint] --
> [ecj-lint] --
> [ecj-lint] 3. ERROR in 
> C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test\org\apache\solr\cloud\TestTolerantUpdateProcessorRandomCloud.java
>  (at line 19)
> [ecj-lint]import javax.ws.rs.HEAD;
> [ecj-lint]   
> [ecj-lint] The import javax.ws.rs.HEAD is never used
> [ecj-lint] --
> [ecj-lint] --
> [ecj-lint] 4. WARNING in 
> C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test\org\apache\solr\core\HdfsDirectoryFactoryTest.java
>  (at line 146)
> [ecj-lint]HdfsDirectoryFactory hdfsFactory = new HdfsDirectoryFactory();
> [ecj-lint] ^^^
> [ecj-lint] Resource leak: 'hdfsFactory' is never closed
> [ecj-lint] --
> [ecj-lint] --
> [ecj-lint] 5. WARNING in 
> C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test\org\apache\solr\handler\admin\SecurityConfHandlerTest.java
>  (at line 53)
> [ecj-lint]BasicAuthPlugin basicAuth = new BasicAuthPlugin();
> [ecj-lint]^
> [ecj-lint] Resource leak: 'basicAuth' is never closed
> [ecj-lint] --
> [ecj-lint] --
> [ecj-lint] 6. WARNING in 
> C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test\org\apache\solr\handler\component\DistributedDebugComponentTest.java
>  (at line 163)
> [ecj-lint]SolrClient client = random().nextBoolean() ? collection1 : 
> collection2;
> [ecj-lint]   ^^
> [ecj-lint] Resource leak: 'client' is never closed
> [ecj-lint] --
> [ecj-lint] 7. WARNING in 
> C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test\org\apache\solr\handler\component\DistributedDebugComponentTest.java
>  (at line 221)
> [ecj-lint]throw new AssertionError(q.toString() + ": " + e.getMessage(), 
> e);
> [ecj-lint]
> ^^
> [ecj-lint] Resource leak: 'client' is not closed at this location
> [ecj-lint] --
> [ecj-lint] --
> [ecj-lint] 8. WARNING in 
> C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test\org\apache\solr\highlight\HighlighterTest.java
>  (at line 185)
> [ecj-lint]Analyzer a1 = new WhitespaceAnalyzer();
> [ecj-lint] ^^
> [ecj-lint] Resource leak: 'a1' is never closed
> [ecj-lint] --
> [ecj-lint] 9. WARNING in 
> C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test\org\apache\solr\highlight\HighlighterTest.java
>  (at line 188)
> [ecj-lint]OffsetWindowTokenFilter tots = new 
> OffsetWindowTokenFilter(tokenStream);
> [ecj-lint]
> [ecj-lint] Resource leak: 'tots' is never closed
> [ecj-lint] --
> [ecj-lint] 10. WARNING in 
> C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test\org\apache\solr\highlight\HighlighterTest.java
>  (at line 192)
> [ecj-lint]Analyzer a2 = new WhitespaceAnalyzer();
> [ecj-lint] ^^
> [ecj-lint] Resource leak: 'a2' is never closed
> [ecj-lint] --
> [ecj-lint] --
> [ecj-lint] 11. WARNING in 
> C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test\org\apache\solr\search\TestDocSet.java
>  (at line 241)
> [ecj-lint]return loadfactor!=0 ? new HashDocSet(a,0,n,1/loadfactor) : new 
> HashDocSet(a,0,n);
> [ecj-lint]   ^^
> [ecj-lint] Resource leak: 

[jira] [Commented] (SOLR-9634) Deprecate collection methods on MiniSolrCloudCluster

2016-10-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15585945#comment-15585945
 ] 

ASF subversion and git services commented on SOLR-9634:
---

Commit 9ee84db6144ca84d909739fafc02f10b810806b3 in lucene-solr's branch 
refs/heads/branch_6x from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9ee84db ]

SOLR-9634: Fix precommit


> Deprecate collection methods on MiniSolrCloudCluster
> 
>
> Key: SOLR-9634
> URL: https://issues.apache.org/jira/browse/SOLR-9634
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Fix For: 6.3
>
> Attachments: SOLR-9634.patch
>
>
> MiniSolrCloudCluster has a bunch of createCollection() and deleteCollection() 
> special methods, which aren't really necessary given that we expose a 
> solrClient.  We should deprecate these, and point users to the 
> CollectionAdminRequest API instead.






[jira] [Commented] (SOLR-5750) Backup/Restore API for SolrCloud

2016-10-18 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15585936#comment-15585936
 ] 

Hrishikesh Gadre commented on SOLR-5750:


[~TimOwen] [~dsmiley] I think this is not yet implemented (due to some unit 
test failure?).

https://github.com/apache/lucene-solr/commit/70bcd562f98ede21dfc93a1ba002c61fac888b29#diff-e864a6be5b98b5340273c1db4f4677a6R107

I am not sure why this problem exists just for the restore operation (and not 
for create).

> Backup/Restore API for SolrCloud
> 
>
> Key: SOLR-5750
> URL: https://issues.apache.org/jira/browse/SOLR-5750
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Varun Thacker
> Fix For: 6.1
>
> Attachments: SOLR-5750.patch, SOLR-5750.patch, SOLR-5750.patch, 
> SOLR-5750.patch, SOLR-5750.patch, SOLR-5750.patch, SOLR-5750.patch
>
>
> We should have an easy way to do backups and restores in SolrCloud. The 
> ReplicationHandler supports a backup command which can create snapshots of 
> the index but that is too little.
> The command should be able to backup:
> # Snapshots of all indexes or indexes from the leader or the shards
> # Config set
> # Cluster state
> # Cluster properties
> # Aliases
> # Overseer work queue?
> A restore should be able to completely restore the cloud i.e. no manual steps 
> required other than bringing nodes back up or setting up a new cloud cluster.
> SOLR-5340 will be a part of this issue.






[jira] [Commented] (SOLR-9659) Add zookeeper DataWatch API

2016-10-18 Thread Keith Laban (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15585924#comment-15585924
 ] 

Keith Laban commented on SOLR-9659:
---

I'm not implying a full cutover. But if we were to build a generic API for 
talking to ZK and getting events, we might be able to borrow some ideas from 
Curator. 

> Add zookeeper DataWatch API
> ---
>
> Key: SOLR-9659
> URL: https://issues.apache.org/jira/browse/SOLR-9659
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-9659.patch
>
>
> We have several components which need to set up watches on ZooKeeper nodes 
> for various aspects of cluster management.  At the moment, all of these 
> components do this themselves, leading to large amounts of duplicated code, 
> and complicated logic for dealing with reconnections, etc, scattered across 
> the codebase.  We should replace this with a simple API controlled by 
> SolrZkClient, which should make the code more robust, and testing 
> considerably easier.






[jira] [Commented] (SOLR-9506) cache IndexFingerprint for each segment

2016-10-18 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15585918#comment-15585918
 ] 

Yonik Seeley commented on SOLR-9506:


Please do.

> cache IndexFingerprint for each segment
> ---
>
> Key: SOLR-9506
> URL: https://issues.apache.org/jira/browse/SOLR-9506
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
> Attachments: SOLR-9506.patch, SOLR-9506.patch, SOLR-9506_POC.patch
>
>
> The IndexFingerprint is cached per index searcher. It is quite useless during 
> high-throughput indexing. If the fingerprint is cached per segment, computing 
> the fingerprint will be vastly more efficient.
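
The idea can be sketched as follows (types and names are stand-ins, not Solr's actual IndexFingerprint API): key the cached value by a per-segment identity that survives searcher reopens, so only new segments pay the computation cost:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.ToLongFunction;

// Sketch of per-segment fingerprint caching. Lucene exposes a per-segment
// identity that is stable across searcher reopens (the segment reader's core
// cache key); keying on it means an unchanged segment is never recomputed.
public class SegmentFingerprintCache {
  private final Map<Object, Long> perSegment = new ConcurrentHashMap<>();

  // Compute-if-absent: the expensive fingerprint computation runs at most
  // once per segment, no matter how many searchers are opened over it.
  public long fingerprintFor(Object segmentCacheKey, ToLongFunction<Object> compute) {
    return perSegment.computeIfAbsent(segmentCacheKey, compute::applyAsLong);
  }

  // Segments disappear on merge; their entries should be dropped with them
  // (a real implementation would hook a reader-close listener for this).
  public void evict(Object segmentCacheKey) {
    perSegment.remove(segmentCacheKey);
  }
}
```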






[jira] [Commented] (SOLR-6203) cast exception while searching with sort function and result grouping

2016-10-18 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15585894#comment-15585894
 ] 

Christine Poerschke commented on SOLR-6203:
---

Hello [~nated] and [~Judith],

Thanks for reporting this issue in detail and attaching test case and code 
change patches.

I started looking at this last week but only now got around to writing comments 
here, apologies for the delay.

* The original SOLR-6203-unittest.patch still applies cleanly but running
{code}
cd solr/core
ant test -Dtestcase=DistributedQueryComponentCustomSortTest
{code}
gives
{code}
... unexpected docvalues type SORTED_SET for field 'text' (expected=SORTED). 
Re-index with correct docvalues type. ...
{code}
error. I have not looked into the details for that error but simply changing
{code}
- ... "group.field", "text" ...
+ ... "group.field", "id" ...
{code}
in the test patch again produces the {{java.lang.Double cannot be cast to 
org.apache.lucene.util.BytesRef}} exception.

* The original SOLR-6203.patch could no longer be applied cleanly, that is of 
course nothing to worry about though since the patch is over a year old by now.
** I have separated out some parts of the patch into the micro commits above 
and into the linked SOLR-9627 patch.
** Trying to graft everything from your patch onto the current master branch 
seemed to work at first but then tests were failing and so I backtracked, to 
what your README mentions as the first step, i.e. storing SortSpecs rather than 
Sorts in GroupingSpecification. There was also the
bq. TODO eliminate GroupingSpec's (Group)Offset and (Group)Limit fields and get 
those values from its SortSpecs.
comment in your patch, and I pulled that into scope for the SOLR-9660 sub-step 
because I think it will make the subsequent code changes here easier.
*** The tests for the SOLR-9660 sub-step are still failing, extra pairs of eyes 
and reviews of the patch are very welcome. The SOLR-9649 discovery is also 
unexpected and perhaps figuring out the latter will help with fixing the former.

> cast exception while searching with sort function and result grouping
> -
>
> Key: SOLR-6203
> URL: https://issues.apache.org/jira/browse/SOLR-6203
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.7, 4.8
>Reporter: Nate Dire
>Assignee: Christine Poerschke
> Attachments: README, SOLR-6203-unittest.patch, 
> SOLR-6203-unittest.patch, SOLR-6203.patch
>
>
> After upgrading from 4.5.1 to 4.7+, a schema including a {{"*"}} dynamic 
> field as text gets a cast exception when using a sort function and result 
> grouping.  
> Repro (with example config):
> # Add {{"*"}} dynamic field as a {{TextField}}, eg:
> {noformat}
> 
> {noformat}
> #  Create  sharded collection
> {noformat}
> curl 
> 'http://localhost:8983/solr/admin/collections?action=CREATE=test=2=2'
> {noformat}
> # Add example docs (query must have some results)
> # Submit query which sorts on a function result and uses result grouping:
> {noformat}
> {
>   "responseHeader": {
> "status": 500,
> "QTime": 50,
> "params": {
>   "sort": "sqrt(popularity) desc",
>   "indent": "true",
>   "q": "*:*",
>   "_": "1403709010008",
>   "group.field": "manu",
>   "group": "true",
>   "wt": "json"
> }
>   },
>   "error": {
> "msg": "java.lang.Double cannot be cast to 
> org.apache.lucene.util.BytesRef",
> "code": 500
>   }
> }
> {noformat}
> Source exception from log:
> {noformat}
> ERROR - 2014-06-25 08:10:10.055; org.apache.solr.common.SolrException; 
> java.lang.ClassCastException: java.lang.Double cannot be cast to 
> org.apache.lucene.util.BytesRef
> at 
> org.apache.solr.schema.FieldType.marshalStringSortValue(FieldType.java:981)
> at org.apache.solr.schema.TextField.marshalSortValue(TextField.java:176)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.serializeSearchGroup(SearchGroupsResultTransformer.java:125)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.transform(SearchGroupsResultTransformer.java:65)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.transform(SearchGroupsResultTransformer.java:43)
> at 
> org.apache.solr.search.grouping.CommandHandler.processResult(CommandHandler.java:193)
> at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:340)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
>   ...
> {noformat}
> It looks like {{serializeSearchGroup}} is 

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3614 - Unstable!

2016-10-18 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3614/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Expected 2 of 3 replicas to be active but only found 1; 
[core_node2:{"core":"c8n_1x3_lf_shard1_replica3","base_url":"http://127.0.0.1:62196","node_name":"127.0.0.1:62196_","state":"active","leader":"true"}];
 clusterState: 
DocCollection(c8n_1x3_lf//collections/c8n_1x3_lf/state.json/17)={   
"replicationFactor":"3",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node1":{   "state":"down",   
"base_url":"http://127.0.0.1:62217;,   
"core":"c8n_1x3_lf_shard1_replica1",   "node_name":"127.0.0.1:62217_"}, 
"core_node2":{   "core":"c8n_1x3_lf_shard1_replica3",   
"base_url":"http://127.0.0.1:62196;,   "node_name":"127.0.0.1:62196_",  
 "state":"active",   "leader":"true"}, "core_node3":{   
"core":"c8n_1x3_lf_shard1_replica2",   
"base_url":"http://127.0.0.1:62204;,   "node_name":"127.0.0.1:62204_",  
 "state":"down",   "router":{"name":"compositeId"},   
"maxShardsPerNode":"1",   "autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected 2 of 3 replicas to be active but only found 
1; 
[core_node2:{"core":"c8n_1x3_lf_shard1_replica3","base_url":"http://127.0.0.1:62196","node_name":"127.0.0.1:62196_","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf//collections/c8n_1x3_lf/state.json/17)={
  "replicationFactor":"3",
  "shards":{"shard1":{
      "range":"8000-7fff",
      "state":"active",
      "replicas":{
        "core_node1":{
          "state":"down",
          "base_url":"http://127.0.0.1:62217",
          "core":"c8n_1x3_lf_shard1_replica1",
          "node_name":"127.0.0.1:62217_"},
        "core_node2":{
          "core":"c8n_1x3_lf_shard1_replica3",
          "base_url":"http://127.0.0.1:62196",
          "node_name":"127.0.0.1:62196_",
          "state":"active",
          "leader":"true"},
        "core_node3":{
          "core":"c8n_1x3_lf_shard1_replica2",
          "base_url":"http://127.0.0.1:62204",
          "node_name":"127.0.0.1:62204_",
          "state":"down"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
at 
__randomizedtesting.SeedInfo.seed([6084025175B47FC8:E8D03D8BDB481230]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:170)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:57)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 

[jira] [Commented] (SOLR-9659) Add zookeeper DataWatch API

2016-10-18 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15585889#comment-15585889
 ] 

Erick Erickson commented on SOLR-9659:
--

Curator has been mentioned before, but IIRC the response was that it'd be a 
major undertaking to replace all the current ZK code with Curator code. There's 
also been an enormous amount of work put into hardening the ZK code; I don't know 
how much of that would need to be re-learned.

Not saying it's a bad idea, just that it would need some pretty careful 
evaluation before diving in.

> Add zookeeper DataWatch API
> ---
>
> Key: SOLR-9659
> URL: https://issues.apache.org/jira/browse/SOLR-9659
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-9659.patch
>
>
> We have several components which need to set up watches on ZooKeeper nodes 
> for various aspects of cluster management.  At the moment, all of these 
> components do this themselves, leading to large amounts of duplicated code, 
> and complicated logic for dealing with reconnections, etc, scattered across 
> the codebase.  We should replace this with a simple API controlled by 
> SolrZkClient, which should make the code more robust, and testing 
> considerably easier.






[jira] [Updated] (SOLR-6203) cast exception while searching with sort function and result grouping

2016-10-18 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-6203:
--
Attachment: SOLR-6203-unittest.patch

Attaching a variant of the original unittest patch using "id" instead of "text" 
for the "group.field" and with the diff taken from the top-level directory.

> cast exception while searching with sort function and result grouping
> -
>
> Key: SOLR-6203
> URL: https://issues.apache.org/jira/browse/SOLR-6203
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.7, 4.8
>Reporter: Nate Dire
>Assignee: Christine Poerschke
> Attachments: README, SOLR-6203-unittest.patch, 
> SOLR-6203-unittest.patch, SOLR-6203.patch
>
>
> After upgrading from 4.5.1 to 4.7+, a schema including a {{"*"}} dynamic 
> field as text gets a cast exception when using a sort function and result 
> grouping.  
> Repro (with example config):
> # Add {{"*"}} dynamic field as a {{TextField}}, eg:
> {noformat}
> 
> {noformat}
> #  Create  sharded collection
> {noformat}
> curl 
> 'http://localhost:8983/solr/admin/collections?action=CREATE=test=2=2'
> {noformat}
> # Add example docs (query must have some results)
> # Submit query which sorts on a function result and uses result grouping:
> {noformat}
> {
>   "responseHeader": {
> "status": 500,
> "QTime": 50,
> "params": {
>   "sort": "sqrt(popularity) desc",
>   "indent": "true",
>   "q": "*:*",
>   "_": "1403709010008",
>   "group.field": "manu",
>   "group": "true",
>   "wt": "json"
> }
>   },
>   "error": {
> "msg": "java.lang.Double cannot be cast to 
> org.apache.lucene.util.BytesRef",
> "code": 500
>   }
> }
> {noformat}
> Source exception from log:
> {noformat}
> ERROR - 2014-06-25 08:10:10.055; org.apache.solr.common.SolrException; 
> java.lang.ClassCastException: java.lang.Double cannot be cast to 
> org.apache.lucene.util.BytesRef
> at 
> org.apache.solr.schema.FieldType.marshalStringSortValue(FieldType.java:981)
> at org.apache.solr.schema.TextField.marshalSortValue(TextField.java:176)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.serializeSearchGroup(SearchGroupsResultTransformer.java:125)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.transform(SearchGroupsResultTransformer.java:65)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.transform(SearchGroupsResultTransformer.java:43)
> at 
> org.apache.solr.search.grouping.CommandHandler.processResult(CommandHandler.java:193)
> at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:340)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
>   ...
> {noformat}
> It looks like {{serializeSearchGroup}} is matching the sort expression as the 
> {{"*"}} dynamic field, which is a TextField in the repro.






[jira] [Commented] (LUCENE-7503) Undeprecate o.o.l.util.LongValues

2016-10-18 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15585880#comment-15585880
 ] 

Michael McCandless commented on LUCENE-7503:


+1, thanks [~jpountz]

> Undeprecate o.o.l.util.LongValues
> -
>
> Key: LUCENE-7503
> URL: https://issues.apache.org/jira/browse/LUCENE-7503
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7503.patch
>
>
> This is a follow-up to 
> http://search-lucene.com/m/l6pAi1iMlPb2wx51P=plan+for+getGlobalOrds+gt+LongValues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Building a Solr cluster with Maven

2016-10-18 Thread Timothy Rodriguez (BLOOMBERG/ 120 PARK)
That'd be a helpful step.  I think it'd be even better if there were a way to 
generate somewhat customized versions of Solr from the artifacts that are 
already published.  Publishing the whole zip would be a start; downstream 
builds could add logic to resolve it, explode it, tweak it, and re-publish.  To 
maintain the strict separation from the war, it might be helpful to have a lib 
or "plug-ins" folder in the zip that is loaded onto the classpath by default, as 
an extension point for users who are re-building the package?

-Tim

From: dev@lucene.apache.org At: 10/18/16 09:52:42
To: dev@lucene.apache.org
Subject: Re: Building a Solr cluster with Maven

My team has modified the ant scripts to publish all the jars/poms and the zip 
to our local Artifactory when we run our build. We have another project which 
pulls down all of these dependencies, including the zip, to build our actual 
Solr deployment, and a Maven assembly which unpacks the zip file and extracts 
all of the webapp for our real distribution. 

I haven't upstreamed the changes for the ant tasks, thinking there wouldn't be 
too much interest in them, but I could put together a patch if there is.

The changes do the following:

- Packages the zip along with the parent pom if a flag is set
- Allows changing the group the poms are published under. For example, instead of 
org.apache you can publish as com.xxx to avoid shadowing conflicts in your 
local repository.

On Tue, Oct 18, 2016 at 8:42 AM David Smiley  wrote:

Thanks for bringing this up, Greg.  I too have felt the pain of this in the 
move away from a WAR file in a project or two.  In one of the projects that 
comes to mind, we built scripts that re-constituted a Solr distribution from 
artifacts in Maven. For anything that wasn't in Maven (e.g. the admin UI pages, 
Jetty configs), we checked it into source control.  In hindsight, the 
simplicity of what you list as (1), checking the distro zip into a Maven repo 
local to the organization, sounds better... but I may be forgetting requirements 
that led us not to do this.  I look forward to that zip shrinking once the docs 
are gone.  Another option, depending on one's needs, is to pursue Docker, which 
I've lately become a huge fan of.  I think Docker is particularly great for 
integration tests.  Does the scenario you wish to use the assets for relate to 
testing or some other use case?

~ David


On Mon, Oct 17, 2016 at 7:58 PM Greg Pendlebury  
wrote:

Are there any developers with a current working Maven build for a downstream 
Solr installation? I.e. not a build for Solr itself, but a build that brings in 
the core Solr server plus local plugins, third-party plugins, etc.?

I am in the process of updating one of our old builds (it builds both the 
application and various shard instances) and have hit a stumbling block in 
sourcing the dashboard static assets (everything under /webapp/web in Solr's 
source).

Prior to the move away from being a webapp I could get them by exploding the 
war from Maven Central.

In our very first foray into 5.x we had a local custom build to patch 
SOLR-2649. We avoided solving this problem then by pushing the webapp into our 
local Nexus as part of that build... but that wasn't a very good long term 
choice.

So now I'm trying to work out the best long term approach to take here. Ideas 
so far:

  1) Manually download the required zip and add it into our Nexus repository as 
a 3rd-party artifact. Maven can source and extract anything it needs from there. 
This is where I'm currently leaning, for simplicity, but the manual step 
required is annoying. It does have the advantage of causing a build failure 
straight away when a version upgrade occurs, prompting the developer to look 
into why.

  2) Move a copy of the static assets for the dashboard into our project and 
deploy them ourselves. This has the advantage of aligning our approach with the 
resources we already maintain in the project (like core.properties, schema.xml, 
solrconfig.xml, logging, etc.). But I am worried that it is really fragile and 
developers will miss it during a version upgrade, resulting in the dashboard 
creeping out of date and (worse) introducing subtle bugs because of a version 
mismatch between the UI and the underlying server code.

  3) I'd like to think a long-term approach would be for the core Solr build to 
ship a JAR (or any other assembly) to Maven Central, like 'solr-dashboard'... 
but I'm not sure how that aligns with the move away from Solr being considered 
a webapp. It seems a shame that all of the Java code ends up in Maven Central, 
but the web layer dead-ends in the ant build.

I might be missing something really obvious, and there may already be a way to 
do this. Is there some other distribution of the dashboard statics? Other than 
the downloadable zip, that is.
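For option 1 above, once the zip has been pushed to Nexus as a 3rd-party artifact, a downstream build can pull and explode it with the maven-dependency-plugin. A minimal sketch (the groupId/artifactId/version coordinates are placeholders, not published artifacts):

{code}
<!-- Sketch: unpack a Solr distribution zip previously deployed to a
     local repository. Coordinates below are hypothetical placeholders. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-dependency-plugin</artifactId>
  <executions>
    <execution>
      <id>unpack-solr-dist</id>
      <phase>generate-resources</phase>
      <goals><goal>unpack</goal></goals>
      <configuration>
        <artifactItems>
          <artifactItem>
            <groupId>com.example.thirdparty</groupId>
            <artifactId>solr-dist</artifactId>
            <version>6.2.1</version>
            <type>zip</type>
            <outputDirectory>${project.build.directory}/solr</outputDirectory>
          </artifactItem>
        </artifactItems>
      </configuration>
    </execution>
  </executions>
</plugin>
{code}

The unpack goal resolves the zip from the repository and extracts it under the configured output directory, so the dashboard statics travel with the declared version and a version bump fails fast if the artifact hasn't been re-deployed.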

Ta,
Greg

-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: 

[jira] [Commented] (SOLR-9483) Add SolrJ support for the modify collection API

2016-10-18 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15585871#comment-15585871
 ] 

Shalin Shekhar Mangar commented on SOLR-9483:
-

Since this issue was created, the SolrJ implementation has changed to favor 
methods like createCollection instead of a class for each API. So, in keeping 
with that convention, a single modifyCollection method that can change all 
properties should be sufficient.

> Add SolrJ support for the modify collection API
> ---
>
> Key: SOLR-9483
> URL: https://issues.apache.org/jira/browse/SOLR-9483
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java, SolrCloud, SolrJ
>Reporter: Shalin Shekhar Mangar
>  Labels: difficulty-easy, newdev
> Fix For: 6.3, master (7.0)
>
>
> SolrJ currently does not have a method corresponding to the modify collection 
> API. There should be a Modify class inside CollectionAdminRequest and a 
> simple method to change all parameters supported by the modify API.
> Link to modify API documentation: 
> https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-modifycoll



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9660) in GroupingSpecification factor [group](sort|offset|limit) into [group](sortSpec)

2016-10-18 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-9660:
--
Attachment: SOLR-9660.patch

Attaching a work-in-progress patch. It inexplicably fails one of the 
TestDistributedGrouping tests (details below); visually, the difference appears 
only in the 'start' element of the response, not in the response elements 
themselves. SOLR-9649 was also unexpected and could be related.

{code}
> Throwable #1: junit.framework.AssertionFailedError: 
> .grouped[a_i1].doclist.start:5!=0
>at 
> __randomizedtesting.SeedInfo.seed([E195797B46E2FF35:69C146A1E81E92CD]:0)
>at junit.framework.Assert.fail(Assert.java:50)
>at 
> org.apache.solr.BaseDistributedSearchTestCase.compareSolrResponses(BaseDistributedSearchTestCase.java:913)
>at 
> org.apache.solr.BaseDistributedSearchTestCase.compareResponses(BaseDistributedSearchTestCase.java:932)
>at 
> org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:607)
>at 
> org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:587)
>at 
> org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:566)
>at 
> org.apache.solr.TestDistributedGrouping.test(TestDistributedGrouping.java:170)
>at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsRepeatStatement.callStatement(BaseDistributedSearchTestCase.java:1011)
>at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
>at java.lang.Thread.run(Thread.java:745)
{code}

> in GroupingSpecification factor [group](sort|offset|limit) into 
> [group](sortSpec)
> -
>
> Key: SOLR-9660
> URL: https://issues.apache.org/jira/browse/SOLR-9660
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-9660.patch
>
>
> This is split out and adapted from and towards the SOLR-6203 changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9660) in GroupingSpecification factor [group](sort|offset|limit) into [group](sortSpec)

2016-10-18 Thread Christine Poerschke (JIRA)
Christine Poerschke created SOLR-9660:
-

 Summary: in GroupingSpecification factor 
[group](sort|offset|limit) into [group](sortSpec)
 Key: SOLR-9660
 URL: https://issues.apache.org/jira/browse/SOLR-9660
 Project: Solr
  Issue Type: Task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Christine Poerschke
Assignee: Christine Poerschke
Priority: Minor


This is split out and adapted from and towards the SOLR-6203 changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5750) Backup/Restore API for SolrCloud

2016-10-18 Thread Tim Owen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15585826#comment-15585826
 ] 

Tim Owen commented on SOLR-5750:


[~dsmiley] you mentioned in the mailing list back in March that you'd fixed the 
situation where restored collections are created using the old stateFormat=1 
but it still seems to be doing that ... did that fix not make it into this 
ticket before merging? We've been trying out the backup/restore and noticed 
it's putting the collection's state into the global clusterstate.json instead 
of where it should be.


> Backup/Restore API for SolrCloud
> 
>
> Key: SOLR-5750
> URL: https://issues.apache.org/jira/browse/SOLR-5750
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Varun Thacker
> Fix For: 6.1
>
> Attachments: SOLR-5750.patch, SOLR-5750.patch, SOLR-5750.patch, 
> SOLR-5750.patch, SOLR-5750.patch, SOLR-5750.patch, SOLR-5750.patch
>
>
> We should have an easy way to do backups and restores in SolrCloud. The 
> ReplicationHandler supports a backup command which can create snapshots of 
> the index but that is too little.
> The command should be able to backup:
> # Snapshots of all indexes or indexes from the leader or the shards
> # Config set
> # Cluster state
> # Cluster properties
> # Aliases
> # Overseer work queue?
> A restore should be able to completely restore the cloud i.e. no manual steps 
> required other than bringing nodes back up or setting up a new cloud cluster.
> SOLR-5340 will be a part of this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org





[jira] [Commented] (SOLR-9506) cache IndexFingerprint for each segment

2016-10-18 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15585801#comment-15585801
 ] 

Noble Paul commented on SOLR-9506:
--

If the above case fails, let's revert the commit and revisit the fingerprint 
computation.

> cache IndexFingerprint for each segment
> ---
>
> Key: SOLR-9506
> URL: https://issues.apache.org/jira/browse/SOLR-9506
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
> Attachments: SOLR-9506.patch, SOLR-9506.patch, SOLR-9506_POC.patch
>
>
> The IndexFingerprint is cached per index searcher. it is quite useless during 
> high throughput indexing. If the fingerprint is cached per segment it will 
> make it vastly more efficient to compute the fingerprint



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7503) Undeprecate o.o.l.util.LongValues

2016-10-18 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-7503:
-
Attachment: LUCENE-7503.patch

Here is a patch.

> Undeprecate o.o.l.util.LongValues
> -
>
> Key: LUCENE-7503
> URL: https://issues.apache.org/jira/browse/LUCENE-7503
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7503.patch
>
>
> This is a follow-up to 
> http://search-lucene.com/m/l6pAi1iMlPb2wx51P=plan+for+getGlobalOrds+gt+LongValues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7503) Undeprecate o.o.l.util.LongValues

2016-10-18 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-7503:


 Summary: Undeprecate o.o.l.util.LongValues
 Key: LUCENE-7503
 URL: https://issues.apache.org/jira/browse/LUCENE-7503
 Project: Lucene - Core
  Issue Type: Task
Reporter: Adrien Grand
Priority: Minor


This is a follow-up to 
http://search-lucene.com/m/l6pAi1iMlPb2wx51P=plan+for+getGlobalOrds+gt+LongValues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9506) cache IndexFingerprint for each segment

2016-10-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15585781#comment-15585781
 ] 

ASF subversion and git services commented on SOLR-9506:
---

Commit 9aa764a54f50eca5a8ef805bdb29e4ad90fcce5e in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9aa764a ]

* SOLR-9506: cache IndexFingerprint for each segment


> cache IndexFingerprint for each segment
> ---
>
> Key: SOLR-9506
> URL: https://issues.apache.org/jira/browse/SOLR-9506
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
> Attachments: SOLR-9506.patch, SOLR-9506.patch, SOLR-9506_POC.patch
>
>
> The IndexFingerprint is cached per index searcher. it is quite useless during 
> high throughput indexing. If the fingerprint is cached per segment it will 
> make it vastly more efficient to compute the fingerprint



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9506) cache IndexFingerprint for each segment

2016-10-18 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15585759#comment-15585759
 ] 

Yonik Seeley commented on SOLR-9506:



The above manual test only exhibited this bad behavior after the commit today.

> cache IndexFingerprint for each segment
> ---
>
> Key: SOLR-9506
> URL: https://issues.apache.org/jira/browse/SOLR-9506
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
> Attachments: SOLR-9506.patch, SOLR-9506.patch, SOLR-9506_POC.patch
>
>
> The IndexFingerprint is cached per index searcher. it is quite useless during 
> high throughput indexing. If the fingerprint is cached per segment it will 
> make it vastly more efficient to compute the fingerprint



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9506) cache IndexFingerprint for each segment

2016-10-18 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15585756#comment-15585756
 ] 

Yonik Seeley commented on SOLR-9506:



Not sure I understand... are you suggesting a workaround in PeerSync 
(recoverWithReplicationOnly) to work around the correctness problem caused by 
this commit?


> cache IndexFingerprint for each segment
> ---
>
> Key: SOLR-9506
> URL: https://issues.apache.org/jira/browse/SOLR-9506
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
> Attachments: SOLR-9506.patch, SOLR-9506.patch, SOLR-9506_POC.patch
>
>
> The IndexFingerprint is cached per index searcher. it is quite useless during 
> high throughput indexing. If the fingerprint is cached per segment it will 
> make it vastly more efficient to compute the fingerprint



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9506) cache IndexFingerprint for each segment

2016-10-18 Thread Pushkar Raste (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15585736#comment-15585736
 ] 

Pushkar Raste commented on SOLR-9506:
-

There is a lot of confusion going on here. Would the above test fail if we 
didn't cache the per-segment index fingerprint?
If yes, then we should revert the commit; if not, we should open a new issue to 
fix the index fingerprint computation altogether. 


> cache IndexFingerprint for each segment
> ---
>
> Key: SOLR-9506
> URL: https://issues.apache.org/jira/browse/SOLR-9506
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
> Attachments: SOLR-9506.patch, SOLR-9506.patch, SOLR-9506_POC.patch
>
>
> The IndexFingerprint is cached per index searcher. it is quite useless during 
> high throughput indexing. If the fingerprint is cached per segment it will 
> make it vastly more efficient to compute the fingerprint



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9506) cache IndexFingerprint for each segment

2016-10-18 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15585709#comment-15585709
 ] 

Yonik Seeley commented on SOLR-9506:


Pretty simple to try out:
{code}
bin/solr start -e techproducts

http://localhost:8983/solr/techproducts/query?q=*:*
  "response":{"numFound":32,"start":0,"docs":[

http://localhost:8983/solr/techproducts/get?getFingerprint=9223372036854775807
{
  "fingerprint":{
"maxVersionSpecified":9223372036854775807,
"maxVersionEncountered":1548538118066405376,
"maxInHash":1548538118066405376,
"versionsHash":8803836617561505377,
"numVersions":32,
"numDocs":32,
"maxDoc":32}}

curl http://localhost:8983/solr/techproducts/update?commit=true -H 
"Content-Type: text/xml" -d 'apple'

# this shows that the delete is visible
http://localhost:8983/solr/techproducts/query?q=*:*
  "response":{"numFound":31,"start":0,"docs":[

#fingerprint returns the same thing
http://localhost:8983/solr/techproducts/get?getFingerprint=9223372036854775807
{
  "fingerprint":{
"maxVersionSpecified":9223372036854775807,
"maxVersionEncountered":1548538118066405376,
"maxInHash":1548538118066405376,
"versionsHash":8803836617561505377,
"numVersions":32,
"numDocs":32,
"maxDoc":32}}

bin/solr stop -all
bin/solr start -e techproducts

#after a restart, fingerprint returns something different
http://localhost:8983/solr/techproducts/get?getFingerprint=9223372036854775807
{
  "fingerprint":{
"maxVersionSpecified":9223372036854775807,
"maxVersionEncountered":1548538118066405376,
"maxInHash":1548538118066405376,
"versionsHash":-131508374066080,
"numVersions":31,
"numDocs":31,
"maxDoc":32}}

{code}

> cache IndexFingerprint for each segment
> ---
>
> Key: SOLR-9506
> URL: https://issues.apache.org/jira/browse/SOLR-9506
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
> Attachments: SOLR-9506.patch, SOLR-9506.patch, SOLR-9506_POC.patch
>
>
> The IndexFingerprint is cached per index searcher. it is quite useless during 
> high throughput indexing. If the fingerprint is cached per segment it will 
> make it vastly more efficient to compute the fingerprint



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9506) cache IndexFingerprint for each segment

2016-10-18 Thread Pushkar Raste (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15585707#comment-15585707
 ] 

Pushkar Raste commented on SOLR-9506:
-

I think what Yonik is implying is that, if for some reason a replica does not 
apply a delete properly, the index fingerprint would still check out, and that 
would be a problem.

Considering the issues with {{PeerSync}}, should we add that 
{{recoverWithReplicationOnly}} option? For most setups I doubt people have 
hundreds of thousands of records in the updateLog, which means almost no one is 
relying on {{PeerSync}} anyway

> cache IndexFingerprint for each segment
> ---
>
> Key: SOLR-9506
> URL: https://issues.apache.org/jira/browse/SOLR-9506
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
> Attachments: SOLR-9506.patch, SOLR-9506.patch, SOLR-9506_POC.patch
>
>
> The IndexFingerprint is cached per index searcher. it is quite useless during 
> high throughput indexing. If the fingerprint is cached per segment it will 
> make it vastly more efficient to compute the fingerprint



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9506) cache IndexFingerprint for each segment

2016-10-18 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15585700#comment-15585700
 ] 

Yonik Seeley commented on SOLR-9506:



Yep.

> cache IndexFingerprint for each segment
> ---
>
> Key: SOLR-9506
> URL: https://issues.apache.org/jira/browse/SOLR-9506
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
> Attachments: SOLR-9506.patch, SOLR-9506.patch, SOLR-9506_POC.patch
>
>
> The IndexFingerprint is cached per index searcher. it is quite useless during 
> high throughput indexing. If the fingerprint is cached per segment it will 
> make it vastly more efficient to compute the fingerprint



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9627) add QParser.getSortSpec, deprecate misleadingly named QParser.getSort

2016-10-18 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved SOLR-9627.
---
   Resolution: Fixed
Fix Version/s: master (7.0)
   6.x

> add QParser.getSortSpec, deprecate misleadingly named QParser.getSort
> -
>
> Key: SOLR-9627
> URL: https://issues.apache.org/jira/browse/SOLR-9627
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: 6.x, master (7.0)
>
> Attachments: SOLR-9627.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9506) cache IndexFingerprint for each segment

2016-10-18 Thread Keith Laban (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15585694#comment-15585694
 ] 

Keith Laban commented on SOLR-9506:
---

Are you implying that if you add a document, commit it, compute the index 
fingerprint, and cache the segment fingerprints, then delete that document, 
commit that change, and compute the fingerprint again using the cached segment 
fingerprints, you will end up with the same index fingerprint?

> cache IndexFingerprint for each segment
> ---
>
> Key: SOLR-9506
> URL: https://issues.apache.org/jira/browse/SOLR-9506
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
> Attachments: SOLR-9506.patch, SOLR-9506.patch, SOLR-9506_POC.patch
>
>
> The IndexFingerprint is cached per index searcher. it is quite useless during 
> high throughput indexing. If the fingerprint is cached per segment it will 
> make it vastly more efficient to compute the fingerprint



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Issue Comment Deleted] (SOLR-9506) cache IndexFingerprint for each segment

2016-10-18 Thread Keith Laban (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Laban updated SOLR-9506:
--
Comment: was deleted

(was: Are you implying that if you add a document. commit it, compute the index 
fingerprint and cache the segments. Then delete that document and commit that 
change, and compute the fingerprint again with the cached segment fingerprint, 
you will end up with the same index fingerprint?)

> cache IndexFingerprint for each segment
> ---
>
> Key: SOLR-9506
> URL: https://issues.apache.org/jira/browse/SOLR-9506
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
> Attachments: SOLR-9506.patch, SOLR-9506.patch, SOLR-9506_POC.patch
>
>
> The IndexFingerprint is cached per index searcher. it is quite useless during 
> high throughput indexing. If the fingerprint is cached per segment it will 
> make it vastly more efficient to compute the fingerprint



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org






[jira] [Commented] (SOLR-9506) cache IndexFingerprint for each segment

2016-10-18 Thread Pushkar Raste (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15585688#comment-15585688
 ] 

Pushkar Raste commented on SOLR-9506:
-

i.e. we really need to fix the IndexFingerprint computation, whether or not we 
cache it. I will open a separate issue to fix it in that case.







[jira] [Comment Edited] (SOLR-9506) cache IndexFingerprint for each segment

2016-10-18 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15585683#comment-15585683
 ] 

Yonik Seeley edited comment on SOLR-9506 at 10/18/16 3:02 PM:
--

bq. "Right... the core cache key does not change, even if there are deletes for 
the segment."

So the cache key ignores deleted documents, while the value being cached does 
not.  It's a fundamental mismatch.


was (Author: ysee...@gmail.com):
"Right... the core cache key does not change, even if there are deletes for the 
segment."

So the cache key ignores deleted documents, while the value being cached does 
not.  It's a fundamental mis-match.







[jira] [Commented] (SOLR-9506) cache IndexFingerprint for each segment

2016-10-18 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15585683#comment-15585683
 ] 

Yonik Seeley commented on SOLR-9506:


"Right... the core cache key does not change, even if there are deletes for the 
segment."

So the cache key ignores deleted documents, while the value being cached does 
not.  It's a fundamental mismatch.







[jira] [Commented] (SOLR-9506) cache IndexFingerprint for each segment

2016-10-18 Thread Pushkar Raste (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15585677#comment-15585677
 ] 

Pushkar Raste commented on SOLR-9506:
-

I don't see why caching the IndexFingerprint per segment and using it later 
would be different from computing the IndexFingerprint on the entire index by 
going through one segment at a time.

I tried to come up with scenarios where the caching solution would fail and 
the original solution would not, but could not think of any.








[jira] [Updated] (SOLR-8332) factor HttpShardHandler[Factory]'s url shuffling out into a ReplicaListTransformer class

2016-10-18 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-8332:
--
Attachment: SOLR-8332.patch

Attaching updated patch (package line moved to after license header and one 
unused import removed).

[~noble.paul] - if you would have a few moments at some point to review the 
patch that would be much appreciated. Thank you.

> factor HttpShardHandler[Factory]'s url shuffling out into a 
> ReplicaListTransformer class
> 
>
> Key: SOLR-8332
> URL: https://issues.apache.org/jira/browse/SOLR-8332
> Project: Solr
>  Issue Type: Wish
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8332.patch, SOLR-8332.patch, SOLR-8332.patch
>
>
> Proposed patch against trunk to follow. No change in behaviour intended. This 
> would be as a step towards SOLR-6730.






[jira] [Commented] (SOLR-9506) cache IndexFingerprint for each segment

2016-10-18 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15585635#comment-15585635
 ] 

Yonik Seeley commented on SOLR-9506:


Hmmm, why was this committed?
See my comments regarding deleted documents, which were never addressed.  What 
was committed will now result in incorrect fingerprints being returned.







[jira] [Updated] (SOLR-7850) Move user customization out of solr.in.* scripts

2016-10-18 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-7850:
---
 Assignee: David Smiley
Fix Version/s: 6.3

> Move user customization out of solr.in.* scripts
> 
>
> Key: SOLR-7850
> URL: https://issues.apache.org/jira/browse/SOLR-7850
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 5.2.1
>Reporter: Shawn Heisey
>Assignee: David Smiley
>Priority: Minor
> Fix For: 6.3
>
> Attachments: 
> SOLR_7850_move_bin_solr_in_sh_defaults_into_bin_solr.patch, 
> SOLR_7850_move_bin_solr_in_sh_defaults_into_bin_solr.patch
>
>
> I've seen a fair number of users customizing solr.in.* scripts to make 
> changes to their Solr installs.  I think the documentation suggests this, 
> though I haven't confirmed.
> One possible problem with this is that we might make changes in those scripts 
> which such a user would want in their setup, but if they replace the script 
> with the one in the new version, they will lose their customizations.
> I propose instead that we have the startup script look for and utilize a user 
> customization script, in a similar manner to linux init scripts that look for 
> /etc/default/packagename, but are able to function without it.  I'm not 
> entirely sure where the script should live or what it should be called.  One 
> idea is server/etc/userconfig.\{sh,cmd\} ... but I haven't put a lot of 
> thought into it yet.
> If the internal behavior of our scripts is largely replaced by a small java 
> app as detailed in SOLR-7043, then the same thing should apply there -- have 
> a config file for a user to specify settings, but work perfectly if that 
> config file is absent.






[jira] [Updated] (SOLR-7850) Move user customization out of solr.in.* scripts

2016-10-18 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-7850:
---
Attachment: SOLR_7850_move_bin_solr_in_sh_defaults_into_bin_solr.patch

Here's an updated patch that addresses the Windows side.  I also tweaked the 
declaration order to be a little more consistent between the Bash and Windows 
scripts.  I did some testing on Windows but should do more.

[~janhoy] might you take a look please?

> Move user customization out of solr.in.* scripts
> 
>
> Key: SOLR-7850
> URL: https://issues.apache.org/jira/browse/SOLR-7850
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 5.2.1
>Reporter: Shawn Heisey
>Priority: Minor
> Attachments: 
> SOLR_7850_move_bin_solr_in_sh_defaults_into_bin_solr.patch, 
> SOLR_7850_move_bin_solr_in_sh_defaults_into_bin_solr.patch
>
>






[jira] [Commented] (SOLR-9506) cache IndexFingerprint for each segment

2016-10-18 Thread Pushkar Raste (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15585612#comment-15585612
 ] 

Pushkar Raste commented on SOLR-9506:
-

I did not upload the patch with parallelStream. In SolrIndexSearcher, where we 
compute and cache the per-segment IndexFingerprint, try switching from 
{{stream()}} to {{parallelStream()}} and you will see that {{PeerSyncTest}} fails.
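The kind of failure such a switch can expose is sketched below (an illustration, not Solr's actual code): folding per-segment values through shared mutable state races under parallel execution, whereas an associative reduce stays correct in both modes.

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Illustration of why stream() -> parallelStream() can change results:
// a shared mutable accumulator loses updates under parallel execution,
// while an associative, order-independent reduction does not.
public class ParallelFoldSketch {
    static long racySum; // shared mutable accumulator: unsafe in parallel

    public static void main(String[] args) {
        List<Long> perSegment = IntStream.range(0, 100_000)
                .mapToObj(i -> (long) i)
                .collect(Collectors.toList());

        // Associative reduction: safe with parallelStream().
        long safe = perSegment.parallelStream().reduce(0L, Long::sum);
        System.out.println(safe); // 4999950000

        // Shared mutable state: updates can be lost under parallelStream().
        racySum = 0;
        perSegment.parallelStream().forEach(v -> racySum += v); // data race
        System.out.println(racySum); // frequently != 4999950000
    }
}
```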







Re: [JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+140) - Build # 18064 - Unstable!

2016-10-18 Thread Wang Weijun

> On Oct 18, 2016, at 8:34 PM, Uwe Schindler  wrote:
> 
> ...

> Nevertheless, the "original" issue with the symlinked home directory should 
> be solved separately. I made a proposal to Max (Weijun Wang) how to fix this 
> while reading the policy file. We fixed the problem locally by fixing the 
> Jenkins User account running the tests to not have a symlinked user.home dir 
> anymore.

I still hesitate to grant an extra permission for every FilePermission in a 
policy file, because that might not be what the user always wants.

How about adding a modifier to the line, something like

   permission java.io.FilePermission "${user.home}${/}.ivy2${/}cache${/}-", 
"read", canonicalized;

which means that when the permission is created, its name should be canonicalized. 

With this modifier, if the canonicalized name is different, it will not permit 
access using the symlink.

The format is backward compatible with jdk8 because the modifier will be 
ignored.
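What canonicalization at creation time buys can be sketched with plain JDK calls (a sketch, not the proposed policy-file implementation; assumes the platform permits creating symbolic links): resolving the symlink makes a path reached via the link compare equal to the real path.

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of the canonicalization step the proposed modifier implies: resolve
// symlinks when the permission is created, so the stored name refers to the
// real directory rather than the literal, possibly symlinked, path.
// Note: creating symbolic links may fail on Windows without privileges.
public class CanonicalizeSketch {
    public static void main(String[] args) throws IOException {
        Path real = Files.createTempDirectory("ivy-cache");
        Path link = real.resolveSibling(real.getFileName() + "-link");
        Files.createSymbolicLink(link, real);

        // The literal names differ...
        System.out.println(link.toString().equals(real.toString())); // false

        // ...but canonicalization resolves the link to the real directory,
        // which is what a "canonicalized" permission name would store.
        String canonLink = new File(link.toFile(), "cache").getCanonicalPath();
        String canonReal = new File(real.toFile(), "cache").getCanonicalPath();
        System.out.println(canonLink.equals(canonReal)); // true

        Files.delete(link); // clean up the symlink
    }
}
```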

Thanks
Max





[JENKINS] Lucene-Solr-6.x-Windows (64bit/jdk1.8.0_102) - Build # 526 - Failure!

2016-10-18 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/526/
Java: 64bit/jdk1.8.0_102 -XX:+UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 65996 lines...]
-ecj-javadoc-lint-tests:
[mkdir] Created dir: C:\Users\jenkins\AppData\Local\Temp\ecj324144343
 [ecj-lint] Compiling 700 source files to 
C:\Users\jenkins\AppData\Local\Temp\ecj324144343
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\lib\org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\lib\org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test\org\apache\solr\cloud\OverseerTest.java
 (at line 267)
 [ecj-lint] ZkStateReader reader = new ZkStateReader(zkClient);
 [ecj-lint]   ^^
 [ecj-lint] Resource leak: 'reader' is never closed
 [ecj-lint] --
 [ecj-lint] 2. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test\org\apache\solr\cloud\OverseerTest.java
 (at line 317)
 [ecj-lint] ZkStateReader reader = new ZkStateReader(zkClient);
 [ecj-lint]   ^^
 [ecj-lint] Resource leak: 'reader' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 3. ERROR in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test\org\apache\solr\cloud\TestTolerantUpdateProcessorRandomCloud.java
 (at line 19)
 [ecj-lint] import javax.ws.rs.HEAD;
 [ecj-lint]
 [ecj-lint] The import javax.ws.rs.HEAD is never used
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 4. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test\org\apache\solr\core\HdfsDirectoryFactoryTest.java
 (at line 146)
 [ecj-lint] HdfsDirectoryFactory hdfsFactory = new HdfsDirectoryFactory();
 [ecj-lint]  ^^^
 [ecj-lint] Resource leak: 'hdfsFactory' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 5. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test\org\apache\solr\handler\admin\SecurityConfHandlerTest.java
 (at line 53)
 [ecj-lint] BasicAuthPlugin basicAuth = new BasicAuthPlugin();
 [ecj-lint] ^
 [ecj-lint] Resource leak: 'basicAuth' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 6. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test\org\apache\solr\handler\component\DistributedDebugComponentTest.java
 (at line 163)
 [ecj-lint] SolrClient client = random().nextBoolean() ? collection1 : 
collection2;
 [ecj-lint]^^
 [ecj-lint] Resource leak: 'client' is never closed
 [ecj-lint] --
 [ecj-lint] 7. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test\org\apache\solr\handler\component\DistributedDebugComponentTest.java
 (at line 221)
 [ecj-lint] throw new AssertionError(q.toString() + ": " + e.getMessage(), 
e);
 [ecj-lint] 
^^
 [ecj-lint] Resource leak: 'client' is not closed at this location
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 8. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test\org\apache\solr\highlight\HighlighterTest.java
 (at line 185)
 [ecj-lint] Analyzer a1 = new WhitespaceAnalyzer();
 [ecj-lint]  ^^
 [ecj-lint] Resource leak: 'a1' is never closed
 [ecj-lint] --
 [ecj-lint] 9. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test\org\apache\solr\highlight\HighlighterTest.java
 (at line 188)
 [ecj-lint] OffsetWindowTokenFilter tots = new 
OffsetWindowTokenFilter(tokenStream);
 [ecj-lint] 
 [ecj-lint] Resource leak: 'tots' is never closed
 [ecj-lint] --
 [ecj-lint] 10. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test\org\apache\solr\highlight\HighlighterTest.java
 (at line 192)
 [ecj-lint] Analyzer a2 = new WhitespaceAnalyzer();
 [ecj-lint]  ^^
 [ecj-lint] Resource leak: 'a2' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 11. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test\org\apache\solr\search\TestDocSet.java
 (at line 241)
 [ecj-lint] return loadfactor!=0 ? new HashDocSet(a,0,n,1/loadfactor) : new 
HashDocSet(a,0,n);
 [ecj-lint]^^
 [ecj-lint] Resource leak: '' is never closed
 [ecj-lint] --
 [ecj-lint] 12. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test\org\apache\solr\search\TestDocSet.java
 (at line 531)
 [ecj-lint] DocSet a = new BitDocSet(bs);
 [ecj-lint] 

[jira] [Commented] (SOLR-9506) cache IndexFingerprint for each segment

2016-10-18 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15585570#comment-15585570
 ] 

Noble Paul commented on SOLR-9506:
--

Which test? I did not find it.







[jira] [Commented] (SOLR-9506) cache IndexFingerprint for each segment

2016-10-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15585553#comment-15585553
 ] 

ASF subversion and git services commented on SOLR-9506:
---

Commit bb907a2983b4a7eba8cb4d527a859f1b312bdc79 in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bb907a2 ]

* SOLR-9506: cache IndexFingerprint for each segment








[jira] [Commented] (LUCENE-7489) Improve sparsity support of Lucene70DocValuesFormat

2016-10-18 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15585546#comment-15585546
 ] 

Adrien Grand commented on LUCENE-7489:
--

The only difference I could find is that, when GCD compression is enabled, we 
now wrap twice instead of only once as before. I changed that, which yielded a 
~2% improvement on wikimedium1m. This is far from the ~8% that the nightly 
benchmarks report, but differences in the dataset could explain it. I'll keep 
watching this benchmark over the next few days.

> Improve sparsity support of Lucene70DocValuesFormat
> ---
>
> Key: LUCENE-7489
> URL: https://issues.apache.org/jira/browse/LUCENE-7489
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: master (7.0)
>
> Attachments: LUCENE-7489.patch, LUCENE-7489.patch
>
>
> Like Lucene70NormsFormat, it should be able to only encode actual values.






[jira] [Commented] (LUCENE-7489) Improve sparsity support of Lucene70DocValuesFormat

2016-10-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15585541#comment-15585541
 ] 

ASF subversion and git services commented on LUCENE-7489:
-

Commit a17e92006f087a0601d9329bf9b9c946ca72478b in lucene-solr's branch 
refs/heads/master from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a17e920 ]

LUCENE-7489: Wrap only once in case GCD compression is used.







