[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-jigsaw-ea+110) - Build # 16273 - Still Failing!

2016-03-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16273/
Java: 64bit/jdk-9-jigsaw-ea+110 -XX:+UseCompressedOops -XX:+UseSerialGC 
-XX:-CompactStrings

1 tests failed.
FAILED:  org.apache.solr.handler.TestReqParamsAPI.test

Error Message:
Could not get expected value  'CY val' for path 'response/params/y/c' full 
output: {   "responseHeader":{ "status":0, "QTime":0},   "response":{   
  "znodeVersion":0, "params":{"x":{ "a":"A val", "b":"B 
val", "":{"v":0}

Stack Trace:
java.lang.AssertionError: Could not get expected value  'CY val' for path 
'response/params/y/c' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "response":{
"znodeVersion":0,
"params":{"x":{
"a":"A val",
"b":"B val",
"":{"v":0}
at 
__randomizedtesting.SeedInfo.seed([DDEF2CA214CF28B1:55BB1378BA334549]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:458)
at 
org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:165)
at 
org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:67)
at sun.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:531)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:996)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:971)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-7114) analyzers-common tests fail with JDK9 EA 110 build

2016-03-19 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203112#comment-15203112
 ] 

Robert Muir commented on LUCENE-7114:
-

The only issue is, now the onus is on me to fix this? I think this build of 
java 9 is broken, don't disable compact strings, let jenkins fail!

> analyzers-common tests fail with JDK9 EA 110 build
> --
>
> Key: LUCENE-7114
> URL: https://issues.apache.org/jira/browse/LUCENE-7114
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>
> Looks like this:
> {noformat}
>[junit4] Suite: org.apache.lucene.analysis.fr.TestFrenchLightStemFilter
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestFrenchLightStemFilter -Dtests.method=testVocabulary 
> -Dtests.seed=4044297F9BFA5E32 -Dtests.locale=az-Cyrl-AZ -Dtests.timezone=ACT 
> -Dtests.asserts=true -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 0.44s J0 | TestFrenchLightStemFilter.testVocabulary <<<
>[junit4]> Throwable #1: org.junit.ComparisonFailure: term 0 
> expected: but was:
> {noformat}
> So far I see these failing with French and Portuguese. It may be a HotSpot 
> issue, as these tests stem more than 10,000 words.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-6806) Reduce the size of the main Solr binary download

2016-03-19 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203106#comment-15203106
 ] 

Shawn Heisey edited comment on SOLR-6806 at 3/20/16 4:59 AM:
-

Here are the obvious things to move to their own artifacts, with their .zip 
sizes (in KB, for precision).  I took a stab at a name for the zip version of 
each archive.

contrib: 68423KB (solr-contrib-x.x.x.zip)
dist: 17960KB (solr-jars-x.x.x.zip)
docs: 11893KB (solr-docs-x.x.x.zip)
example: 4265KB (solr-examples-x.x.x.zip)

The kuromoji and hadoop jars from WEB-INF/lib could be placed in another 
artifact.  Not sure what to call it, perhaps solr-extras.

The idea with each of these supporting artifacts is that they would be 
extracted to the same location as the main artifact, so they would contain a 
similar directory structure.  Not sure whether we would omit the solr-x.x.x 
top-level directory that is in the main artifact.  Most people who have .tgz 
experience would expect it to be there, but zip users might be confused.


was (Author: elyograg):
Here are the obvious things to move to their own artifacts, with their .zip 
sizes (in KB, for precision).  I took a stab at a name for the zip version of 
each archive.

contrib: 68423KB (solr-contrib-x.x.x.zip)
dist: 17960KB (solr-jars-x.x.x.zip)
docs: 11893KB (solr-docs-x.x.x.zip)
example: 4265KB (solr-examples-x.x.x.zip)

I would always use the .tgz archives in production, but since the machine where 
I'm doing all this experimentation is Windows, this info is all about the 
zipfiles.

> Reduce the size of the main Solr binary download
> 
>
> Key: SOLR-6806
> URL: https://issues.apache.org/jira/browse/SOLR-6806
> Project: Solr
>  Issue Type: Task
>  Components: Build
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>
> There has been a lot of recent discussion about how large the Solr download 
> is, and how to reduce its size.  The last release (4.10.2) weighs in at 143MB 
> for the tar and 149MB for the zip.
> Most users do not need the full download.  They may never need contrib 
> features, or they may only need one or two, with DIH being the most likely 
> choice.  They could likely get by with a download that's less than 40 MB.
> Our primary competition has a 29MB zip download for the release that's 
> current right now, and not too long ago, that was about 20MB.  I didn't look 
> very deep, but any additional features that might be available for download 
> were not immediately apparent on their website.  I'm sure they exist, but I 
> would guess that most users never need those features, so most users never 
> even see them.
> Solr, by contrast, has everything included ... a "kitchen sink" approach. 
> Once you get past the long download time and fire up the example, you're 
> presented with configs that include features you're likely to never use.
> Although this offers maximum flexibility, I think it also serves to cause 
> confusion in a new user.
> A much better option would be to create a core download that includes only a 
> minimum set of features, probably just the war, the example servlet 
> container, and an example config that only uses the functionality present in 
> the war.  We can create additional downloads that offer additional 
> functionality and configs ... DIH would be a very small addon that would 
> likely be downloaded frequently.
> SOLR-5103 describes a plugin infrastructure which would make it very easy to 
> offer a small core download and then let the user download additional 
> functionality using scripts or the UI.






[jira] [Created] (SOLR-8856) Do not cache merges in the hdfs block cache.

2016-03-19 Thread Mark Miller (JIRA)
Mark Miller created SOLR-8856:
-

 Summary: Do not cache merges in the hdfs block cache.
 Key: SOLR-8856
 URL: https://issues.apache.org/jira/browse/SOLR-8856
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller


Generally the block cache will not be large enough to contain the whole index, 
and merges can thrash the cache for queries. Even if merge reads still look in 
the cache, they should not populate it.
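The policy described here (consult the cache on every read, but skip populating it during merges) can be sketched as follows. This is an illustrative stand-in, not the actual Solr HDFS block-cache code; all class and method names are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a "read but don't populate on merge" cache policy.
// blockCache is a plain map standing in for the real HDFS block cache.
public class MergeAwareCacheSketch {
    static final Map<Long, byte[]> blockCache = new HashMap<>();

    // Always check the cache, but only populate it when the read is NOT
    // part of a merge, so merges cannot evict query-hot blocks.
    static byte[] readBlock(long blockId, boolean isMergeContext) {
        byte[] cached = blockCache.get(blockId);
        if (cached != null) {
            return cached;                         // cache hit: fine either way
        }
        byte[] fromStorage = readFromStorage(blockId);
        if (!isMergeContext) {
            blockCache.put(blockId, fromStorage);  // populate only for queries
        }
        return fromStorage;
    }

    static byte[] readFromStorage(long blockId) {
        return new byte[] {(byte) blockId};        // stand-in for an HDFS read
    }

    public static void main(String[] args) {
        readBlock(1, true);   // merge read: must not populate the cache
        readBlock(2, false);  // query read: populates the cache
        System.out.println(blockCache.containsKey(1L)); // false
        System.out.println(blockCache.containsKey(2L)); // true
    }
}
```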






[jira] [Commented] (SOLR-6806) Reduce the size of the main Solr binary download

2016-03-19 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203106#comment-15203106
 ] 

Shawn Heisey commented on SOLR-6806:


Here are the obvious things to move to their own artifacts, with their .zip 
sizes (in KB, for precision).  I took a stab at a name for the zip version of 
each archive.

contrib: 68423KB (solr-contrib-x.x.x.zip)
dist: 17960KB (solr-jars-x.x.x.zip)
docs: 11893KB (solr-docs-x.x.x.zip)
example: 4265KB (solr-examples-x.x.x.zip)

I would always use the .tgz archives in production, but since the machine where 
I'm doing all this experimentation is Windows, this info is all about the 
zipfiles.

> Reduce the size of the main Solr binary download
> 
>
> Key: SOLR-6806
> URL: https://issues.apache.org/jira/browse/SOLR-6806
> Project: Solr
>  Issue Type: Task
>  Components: Build
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>
> There has been a lot of recent discussion about how large the Solr download 
> is, and how to reduce its size.  The last release (4.10.2) weighs in at 143MB 
> for the tar and 149MB for the zip.
> Most users do not need the full download.  They may never need contrib 
> features, or they may only need one or two, with DIH being the most likely 
> choice.  They could likely get by with a download that's less than 40 MB.
> Our primary competition has a 29MB zip download for the release that's 
> current right now, and not too long ago, that was about 20MB.  I didn't look 
> very deep, but any additional features that might be available for download 
> were not immediately apparent on their website.  I'm sure they exist, but I 
> would guess that most users never need those features, so most users never 
> even see them.
> Solr, by contrast, has everything included ... a "kitchen sink" approach. 
> Once you get past the long download time and fire up the example, you're 
> presented with configs that include features you're likely to never use.
> Although this offers maximum flexibility, I think it also serves to cause 
> confusion in a new user.
> A much better option would be to create a core download that includes only a 
> minimum set of features, probably just the war, the example servlet 
> container, and an example config that only uses the functionality present in 
> the war.  We can create additional downloads that offer additional 
> functionality and configs ... DIH would be a very small addon that would 
> likely be downloaded frequently.
> SOLR-5103 describes a plugin infrastructure which would make it very easy to 
> offer a small core download and then let the user download additional 
> functionality using scripts or the UI.






[jira] [Commented] (SOLR-8865) real-time get does not retrieve values from docValues

2016-03-19 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203104#comment-15203104
 ] 

Ishan Chattopadhyaya commented on SOLR-8865:


I think this should be made a blocker for 6.0.

> real-time get does not retrieve values from docValues
> -
>
> Key: SOLR-8865
> URL: https://issues.apache.org/jira/browse/SOLR-8865
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
> Fix For: 6.0
>
> Attachments: SOLR-8865.patch, SOLR-8865.patch, SOLR-8865.patch, 
> SOLR-8865.patch
>
>
> Uncovered during ad-hoc testing... the _version_ field, which has 
> stored=false docValues=true, is not retrieved with real-time get.
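For context, the field at issue is defined roughly like this in schema.xml. This is a sketch: only stored="false" and docValues="true" come from the report above; the type name and indexed setting are assumptions.

```xml
<!-- _version_ as described: docValues carries the value, nothing is stored -->
<field name="_version_" type="long" indexed="true" stored="false" docValues="true"/>
```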






SolrCloud App Unit Testing

2016-03-19 Thread Madhire, Naveen
Hi,

I am writing a Solr application. Can anyone please let me know how to unit test 
it?

I see that Solr provides the MiniSolrCloudCluster class, but I am confused 
about how to use it for unit testing.

How should I create an embedded server for unit testing?



Thanks,
Naveen
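A minimal sketch of the kind of test MiniSolrCloudCluster enables. The class and method signatures below are from the Solr 6.x test framework and vary between versions, the configset path is a placeholder, and the solr-test-framework and solrj dependencies are assumed; treat this as a starting point rather than working code:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.embedded.JettyConfig;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;
import org.apache.solr.cloud.MiniSolrCloudCluster;
import org.apache.solr.common.SolrInputDocument;
import org.junit.Assert;
import org.junit.Test;

public class SolrCloudAppTest {

    @Test
    public void indexAndQuery() throws Exception {
        // Start an embedded one-node SolrCloud cluster (runs its own ZooKeeper).
        Path baseDir = Files.createTempDirectory("minicluster");
        MiniSolrCloudCluster cluster =
                new MiniSolrCloudCluster(1, baseDir, JettyConfig.builder().build());
        try {
            // Upload a configset from disk; this path is a placeholder.
            cluster.uploadConfigSet(Paths.get("src/test/resources/solr/conf"), "conf1");
            CollectionAdminRequest.createCollection("test", "conf1", 1, 1)
                    .process(cluster.getSolrClient());

            CloudSolrClient client = cluster.getSolrClient();
            client.setDefaultCollection("test");

            // Index one document and verify it is searchable.
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "1");
            client.add(doc);
            client.commit();

            Assert.assertEquals(1,
                    client.query(new SolrQuery("*:*")).getResults().getNumFound());
        } finally {
            cluster.shutdown();
        }
    }
}
```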




[jira] [Commented] (SOLR-8082) can't query against negative float or double values when indexed="false" docValues="true" multiValued="false"

2016-03-19 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203103#comment-15203103
 ] 

Ishan Chattopadhyaya commented on SOLR-8082:


+1, LGTM.

> can't query against negative float or double values when indexed="false" 
> docValues="true" multiValued="false"
> -
>
> Key: SOLR-8082
> URL: https://issues.apache.org/jira/browse/SOLR-8082
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Priority: Blocker
> Fix For: 6.0
>
> Attachments: SOLR-8082.patch, SOLR-8082.patch, SOLR-8082.patch, 
> SOLR-8082.patch, SOLR-8082.patch, SOLR-8082.patch, SOLR-8082.patch, 
> SOLR-8082.patch
>
>
> Haven't dug into this yet, but something is evidently wrong in how the 
> DocValues-based queries get built for single-valued float or double fields 
> when negative numbers are involved.
> Steps to reproduce...
> {noformat}
> $ bin/solr -e schemaless -noprompt
> ...
> $ curl -X POST -H 'Content-type:application/json' --data-binary '{ 
> "add-field":{ "name":"f_dv_multi", "type":"tfloat", "stored":"true", 
> "indexed":"false", "docValues":"true", "multiValued":"true" }, "add-field":{ 
> "name":"f_dv_single", "type":"tfloat", "stored":"true", "indexed":"false", 
> "docValues":"true", "multiValued":"false" } }' 
> http://localhost:8983/solr/gettingstarted/schema
> {
>   "responseHeader":{
> "status":0,
> "QTime":84}}
> $ curl -X POST -H 'Content-type:application/json' --data-binary 
> '[{"id":"test", "f_dv_multi":-4.3, "f_dv_single":-4.3}]' 
> 'http://localhost:8983/solr/gettingstarted/update/json/docs?commit=true'
> {"responseHeader":{"status":0,"QTime":57}}
> $ curl 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_multi:"-4.3"'
> {
>   "responseHeader":{
> "status":0,
> "QTime":5,
> "params":{
>   "q":"f_dv_multi:\"-4.3\""}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"test",
> "f_dv_multi":[-4.3],
> "f_dv_single":-4.3,
> "_version_":1512962117004689408}]
>   }}
> $ curl 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_single:"-4.3"'
> {
>   "responseHeader":{
> "status":0,
> "QTime":5,
> "params":{
>   "q":"f_dv_single:\"-4.3\""}},
>   "response":{"numFound":0,"start":0,"docs":[]
>   }}
> {noformat}
> Explicit range queries (which is how numeric "field" queries are implemented 
> under the covers) are equally problematic...
> {noformat}
> $ curl 
> 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_multi:%5B-4.3+TO+-4.3%5D'
> {
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"f_dv_multi:[-4.3 TO -4.3]"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"test",
> "f_dv_multi":[-4.3],
> "f_dv_single":-4.3,
> "_version_":1512962117004689408}]
>   }}
> $ curl 
> 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_single:%5B-4.3+TO+-4.3%5D'
> {
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"f_dv_single:[-4.3 TO -4.3]"}},
>   "response":{"numFound":0,"start":0,"docs":[]
>   }}
> {noformat}






[jira] [Commented] (SOLR-8626) [ANGULAR] 404 error when clicking nodes in cloud graph view

2016-03-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203100#comment-15203100
 ] 

ASF GitHub Bot commented on SOLR-8626:
--

GitHub user treygrainger opened a pull request:

https://github.com/apache/lucene-solr/pull/23

SOLR-8626: Fix urls for nodes in cloud graph view

This fixes SOLR-8626 (identical patch submitted on JIRA) by removing the 
invalid (404) links on collections and cores in the graph view. The issue 
existed, and has been fixed, in both the flat graph view and the radial view. 
Additionally, clicking a node's link in the radial view used to switch back to 
the flat graph view when navigating to the other node; the patch improves the 
link so that it preserves the user's current view type in the URL when 
navigating between nodes.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/treygrainger/lucene-solr SOLR-8626

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/23.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #23


commit dc4a490eb4edf49972849eb42b604fe490a88d75
Author: Trey Grainger 
Date:   2016-03-20T04:21:16Z

SOLR-8626: Fix urls for nodes in cloud graph view




> [ANGULAR] 404 error when clicking nodes in cloud graph view
> ---
>
> Key: SOLR-8626
> URL: https://issues.apache.org/jira/browse/SOLR-8626
> Project: Solr
>  Issue Type: Bug
>  Components: UI
>Reporter: Jan Høydahl
>Assignee: Upayavira
> Attachments: SOLR-8626.patch
>
>
> h3. Reproduce:
> # {{bin/solr start -c}}
> # {{bin/solr create -c mycoll}}
> # Go to http://localhost:8983/solr/#/~cloud
> # Click a collection name in the graph -> 404 error. URL: 
> {{/solr/mycoll/#/~cloud}}
> # Click a shard name in the graph -> 404 error. URL: {{/solr/shard1/#/~cloud}}
> Only verified in Trunk, but probably exists in 5.4 as well






[GitHub] lucene-solr pull request: SOLR-8626: Fix urls for nodes in cloud g...

2016-03-19 Thread treygrainger
GitHub user treygrainger opened a pull request:

https://github.com/apache/lucene-solr/pull/23

SOLR-8626: Fix urls for nodes in cloud graph view

This fixes SOLR-8626 (identical patch submitted on JIRA) by removing the 
invalid (404) links on collections and cores in the graph view. The issue 
existed, and has been fixed, in both the flat graph view and the radial view. 
Additionally, clicking a node's link in the radial view used to switch back to 
the flat graph view when navigating to the other node; the patch improves the 
link so that it preserves the user's current view type in the URL when 
navigating between nodes.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/treygrainger/lucene-solr SOLR-8626

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/23.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #23


commit dc4a490eb4edf49972849eb42b604fe490a88d75
Author: Trey Grainger 
Date:   2016-03-20T04:21:16Z

SOLR-8626: Fix urls for nodes in cloud graph view




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[jira] [Commented] (SOLR-6806) Reduce the size of the main Solr binary download

2016-03-19 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203097#comment-15203097
 ] 

Shawn Heisey commented on SOLR-6806:


Ouch.  That represents a fairly significant drop in the size of the contrib 
folder, and a small drop in the overall size of the artifacts that a release 
manager must upload.

I actually would have suggested the one in analysis-extras as the one to keep.  
I use the ICU classes from Lucene, so it's logical for that to be the one I'd 
expect to be there.  In the end, I don't really care which one is kept, as long 
as there's general consensus.  I haven't got any clue about which of those 
contrib modules gets used more often.

We could drop a symlink into one of those locations in the .tgz archive.


> Reduce the size of the main Solr binary download
> 
>
> Key: SOLR-6806
> URL: https://issues.apache.org/jira/browse/SOLR-6806
> Project: Solr
>  Issue Type: Task
>  Components: Build
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>
> There has been a lot of recent discussion about how large the Solr download 
> is, and how to reduce its size.  The last release (4.10.2) weighs in at 143MB 
> for the tar and 149MB for the zip.
> Most users do not need the full download.  They may never need contrib 
> features, or they may only need one or two, with DIH being the most likely 
> choice.  They could likely get by with a download that's less than 40 MB.
> Our primary competition has a 29MB zip download for the release that's 
> current right now, and not too long ago, that was about 20MB.  I didn't look 
> very deep, but any additional features that might be available for download 
> were not immediately apparent on their website.  I'm sure they exist, but I 
> would guess that most users never need those features, so most users never 
> even see them.
> Solr, by contrast, has everything included ... a "kitchen sink" approach. 
> Once you get past the long download time and fire up the example, you're 
> presented with configs that include features you're likely to never use.
> Although this offers maximum flexibility, I think it also serves to cause 
> confusion in a new user.
> A much better option would be to create a core download that includes only a 
> minimum set of features, probably just the war, the example servlet 
> container, and an example config that only uses the functionality present in 
> the war.  We can create additional downloads that offer additional 
> functionality and configs ... DIH would be a very small addon that would 
> likely be downloaded frequently.
> SOLR-5103 describes a plugin infrastructure which would make it very easy to 
> offer a small core download and then let the user download additional 
> functionality using scripts or the UI.






[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 19 - Still Failing!

2016-03-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/19/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.test

Error Message:
There are still nodes recoverying - waited for 45 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 45 
seconds
at 
__randomizedtesting.SeedInfo.seed([B4BFCF00AE46A66B:3CEBF0DA00BACB93]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:173)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:856)
at 
org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.test(DistribDocExpirationUpdateProcessorTest.java:73)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:996)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:971)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Updated] (SOLR-8867) frange / ValueSourceRangeFilter / FunctionValues.getRangeScorer should not match documents w/o a value

2016-03-19 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-8867:
---
Attachment: SOLR-8867.patch

Here's an updated patch that modifies a random range test to include docs w/o a 
value in the field and also queries across negative values.

This also changes getRangeScorer() to use LeafReaderContext to be consistent 
with everything else.

All tests pass, and I plan on committing shortly.
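The core idea of the fix (a doc matches an frange only if the field actually has a value) can be sketched independently of the Lucene APIs. exists() below mirrors FunctionValues.exists(doc); everything else is a hypothetical stand-in, not the actual getRangeScorer() code:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a range "scorer" that consults exists() before comparing values,
// mirroring the FunctionValues.getRangeScorer change described above.
public class RangeScorerSketch {
    // docId -> value; docs absent from the map have no value for the field.
    static final Map<Integer, Double> values = new HashMap<>();

    static boolean exists(int doc) {
        return values.containsKey(doc);
    }

    static double doubleVal(int doc) {
        // Like FunctionValues, return a default (0) for missing docs --
        // exactly why matching must be guarded by exists().
        return values.getOrDefault(doc, 0.0);
    }

    static boolean matchesRange(int doc, double min, double max) {
        if (!exists(doc)) {
            return false;               // the fix: no value, no match
        }
        double v = doubleVal(doc);
        return v >= min && v <= max;
    }

    public static void main(String[] args) {
        values.put(1, -4.3);
        // Doc 2 has no value; the old behavior would treat it as 0 and
        // wrongly match a range like [-1, 1].
        System.out.println(matchesRange(1, -5, 0));  // true
        System.out.println(matchesRange(2, -1, 1));  // false (was true before)
    }
}
```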

> frange / ValueSourceRangeFilter / FunctionValues.getRangeScorer should not 
> match documents w/o a value
> --
>
> Key: SOLR-8867
> URL: https://issues.apache.org/jira/browse/SOLR-8867
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
> Fix For: 6.0
>
> Attachments: SOLR-8867.patch, SOLR-8867.patch
>
>
> {!frange} currently can match documents w/o a value (because of a default 
> value of 0).
> This only existed historically because we didn't have info about what fields 
> had a value for numerics, and didn't have exists() on FunctionValues.
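
The fix described above can be illustrated with a minimal sketch: a range check that consults an exists() flag before comparing values, so that documents without a value never match. The DocValues interface and sample data below are illustrative stand-ins, not the actual Lucene/Solr FunctionValues API:

```java
/** Sketch of a range check that consults exists() before comparing values. */
public class RangeMatchSketch {
    /** Minimal stand-in for FunctionValues: a value per doc, plus an exists flag. */
    interface DocValues {
        double doubleVal(int doc);
        boolean exists(int doc);   // false when the doc has no value in the field
    }

    /**
     * True only when the doc both has a value and that value falls in
     * [lower, upper]. Without the exists() guard, a valueless doc defaults
     * to 0.0 and could wrongly match any range that includes zero.
     */
    static boolean matchesRange(DocValues values, int doc, double lower, double upper) {
        if (!values.exists(doc)) {
            return false;          // the fix: never match docs without a value
        }
        double v = values.doubleVal(doc);
        return v >= lower && v <= upper;
    }

    /** Example values: doc 1 has no value in the field. */
    static DocValues example() {
        double[] vals = {-5.0, 0.0, 3.0};
        boolean[] has = {true, false, true};
        return new DocValues() {
            public double doubleVal(int doc) { return vals[doc]; }
            public boolean exists(int doc) { return has[doc]; }
        };
    }
}
```

Note the negative bound in the first case; the updated patch specifically adds random-range coverage across negative values and valueless docs.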



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-jigsaw-ea+110) - Build # 16272 - Still Failing!

2016-03-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16272/
Java: 32bit/jdk-9-jigsaw-ea+110 -client -XX:+UseParallelGC -XX:-CompactStrings

1 tests failed.
FAILED:  org.apache.solr.cloud.UnloadDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=9344, 
name=testExecutor-4537-thread-12, state=RUNNABLE, 
group=TGRP-UnloadDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=9344, name=testExecutor-4537-thread-12, 
state=RUNNABLE, group=TGRP-UnloadDistributedZkTest]
at 
__randomizedtesting.SeedInfo.seed([6A53BE5D2E9D5E0D:E2078187806133F5]:0)
Caused by: java.lang.RuntimeException: 
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:44343/dd
at __randomizedtesting.SeedInfo.seed([6A53BE5D2E9D5E0D]:0)
at 
org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCores$0(BasicDistributedZkTest.java:583)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@9-ea/ThreadPoolExecutor.java:1158)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@9-ea/ThreadPoolExecutor.java:632)
at java.lang.Thread.run(java.base@9-ea/Thread.java:804)
Caused by: org.apache.solr.client.solrj.SolrServerException: Timeout occured 
while waiting response from server at: http://127.0.0.1:44343/dd
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:588)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCores$0(BasicDistributedZkTest.java:581)
... 4 more
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(java.base@9-ea/Native Method)
at 
java.net.SocketInputStream.socketRead(java.base@9-ea/SocketInputStream.java:116)
at 
java.net.SocketInputStream.read(java.base@9-ea/SocketInputStream.java:170)
at 
java.net.SocketInputStream.read(java.base@9-ea/SocketInputStream.java:141)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
at 
org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
at 
org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
at 
org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:685)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:487)
at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:482)
... 8 more




Build Log:
[...truncated 11489 lines...]
   [junit4] Suite: org.apache.solr.cloud.UnloadDistributedZkTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.UnloadDistributedZkTest_6A53BE5D2E9D5E0D-001/init-core-data-001
   [junit4]   2> 1044222 INFO  
(SUITE-UnloadDistributedZkTest-seed#[6A53BE5D2E9D5E0D]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /dd/
   [junit4]   2> 1044224 INFO  
(TEST-UnloadDistributedZkTest.test-seed#[6A53BE5D2E9D5E0D]) [] 

[jira] [Commented] (SOLR-8862) /live_nodes is populated too early to be very useful for clients -- CloudSolrClient (and MiniSolrCloudCluster.createCollection) need some other ephemeral zk node to know

2016-03-19 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203069#comment-15203069
 ] 

David Smiley commented on SOLR-8862:


I hope this can get improved/resolved.  I didn't chase it down as far but I too 
had frustrations developing a test using MiniSolrCloudCluster that simply 
wanted the collection to be searchable (in SOLR-5750).

> /live_nodes is populated too early to be very useful for clients -- 
> CloudSolrClient (and MiniSolrCloudCluster.createCollection) need some other 
> ephemeral zk node to know which servers are "ready"
> --
>
> Key: SOLR-8862
> URL: https://issues.apache.org/jira/browse/SOLR-8862
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>
> {{/live_nodes}} is populated surprisingly early (and multiple times) in the 
> life cycle of a solr node startup, and as a result probably shouldn't be used 
> by {{CloudSolrClient}} (or other "smart" clients) for deciding what servers 
> are fair game for requests.
> we should either fix {{/live_nodes}} to be created later in the lifecycle, or 
> add some new ZK node for this purpose.
> {panel:title=original bug report}
> I haven't been able to make sense of this yet, but what i'm seeing in a new 
> SolrCloudTestCase subclass i'm writing is that the code below, which 
> (reasonably) attempts to create a collection immediately after configuring 
> the MiniSolrCloudCluster gets a "SolrServerException: No live SolrServers 
> available to handle this request" -- in spite of the fact that (as far as i 
> can tell at first glance) MiniSolrCloudCluster's constructor is supposed to 
> block until all the servers are live..
> {code}
> configureCluster(numServers)
>   .addConfig(configName, configDir.toPath())
>   .configure();
> Map collectionProperties = ...;
> assertNotNull(cluster.createCollection(COLLECTION_NAME, numShards, 
> repFactor,
>configName, null, null, 
> collectionProperties));
> {code}
> {panel}
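
The workaround pattern tests end up needing here, polling until the cluster is actually able to serve a request rather than trusting the early readiness signal, can be sketched generically. Nothing below is a Solr API; the class and method names are made up for illustration:

```java
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

/** Generic "wait until ready" helper of the kind tests fall back on when
 *  /live_nodes appears before a node can actually serve requests. */
public class WaitUntil {
    /**
     * Polls {@code condition} every {@code pollMillis} until it returns true
     * or {@code timeoutMillis} elapses. Returns true if the condition held
     * within the deadline.
     */
    public static boolean waitUntil(BooleanSupplier condition,
                                    long timeoutMillis, long pollMillis) {
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMillis);
        while (!condition.getAsBoolean()) {
            if (System.nanoTime() >= deadline) return false;
            try {
                Thread.sleep(pollMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false; // treat interruption as "not ready"
            }
        }
        return true;
    }
}
```

A test would call something like `waitUntil(() -> collectionIsQueryable(client), 30_000, 500)` before issuing its first real request; the condition shown is hypothetical.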






[jira] [Updated] (SOLR-8878) Allow the DaemonStream run rate be controlled by the internal stream

2016-03-19 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8878:
-
Attachment: SOLR-8878.patch

> Allow the DaemonStream run rate be controlled by the internal stream
> 
>
> Key: SOLR-8878
> URL: https://issues.apache.org/jira/browse/SOLR-8878
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
> Attachments: SOLR-8878.patch
>
>
> Currently the DaemonStream sleeps for one second and then checks the 
> runInterval param to determine if it needs to rerun the internal stream.
> This setup will work fine if the runInterval is longer than one second and if 
> it never changes. But with the TopicStream, you want a variable run rate. For 
> example if the TopicStream's latest run has returned documents, the next run 
> should be immediate. But if the TopicStream's latest run returned zero 
> documents then you'd want to sleep for a period of time before starting the 
> next run.
> This ticket allows the internal stream to control the DaemonStream run rate 
> by adding a *sleepMillis* key-pair to the EOF Tuple. After each run the 
> DaemonStream will check the EOF Tuple from the internal stream and if the 
> sleepMillis key-pair is present it will adjust its run rate accordingly.
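
The proposed mechanism can be sketched as follows. The EOF tuple is modeled as a plain Map here; apart from the sleepMillis key named in the ticket, the class and method names are illustrative, not the actual TupleStream/Tuple API:

```java
import java.util.Map;

/** Sketch of the adaptive sleep described above: after each run, the daemon
 *  checks the inner stream's EOF tuple for a sleepMillis entry. */
public class DaemonSleepSketch {
    static final long DEFAULT_SLEEP_MILLIS = 1000; // the current fixed one-second sleep

    /** Decide how long to sleep before the next run of the internal stream. */
    static long nextSleepMillis(Map<String, Object> eofTuple) {
        Object v = eofTuple.get("sleepMillis");
        if (v instanceof Number) {
            return ((Number) v).longValue(); // inner stream controls the run rate
        }
        return DEFAULT_SLEEP_MILLIS;         // fall back to the fixed interval
    }
}
```

A TopicStream that returned documents would emit `sleepMillis=0` in its EOF tuple to trigger an immediate rerun, and a larger value after an empty run.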






[jira] [Comment Edited] (SOLR-8862) /live_nodes is populated too early to be very useful for clients -- CloudSolrClient (and MiniSolrCloudCluster.createCollection) need some other ephemeral zk node to

2016-03-19 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15200194#comment-15200194
 ] 

Hoss Man edited comment on SOLR-8862 at 3/17/16 10:01 PM:
--

Ok, so here's what i've found so far...

* Just adding a single line of logging to my test after {{configureCluster}} 
and before {{cluster.createCollection}} was enough to make the seed start 
passing fairly reliably.
** so clearly a finicky timing problem
* {{MiniSolrCloudCluster}}'s constructor has logic that waits for 
{{/live_nodes}} to have {{numServers}} children before returning
** this was added in SOLR-7146 precisely because of problems like the one i'm 
seeing
** if there aren't the expected number of {{/live_nodes}} the first time it 
checks, then it sleeps in 1 second increments until there are.
* {{/live_nodes}} gets populated by {{ZkController.createEphemeralLiveNode}}
** -*THIS METHOD IS SUSPICIOUSLY CALLED IN TWO DIFF PLACES:*-
**# EDIT: this is actually part of an {{OnReconnect}} handler that I 
misconstrued as something that would be called on the initial connect. -fairly 
early in the {{ZkController}} constructor-...{code}
// we have to register as live first to pick up docs in the buffer
createEphemeralLiveNode();
{code}
**# again as the very last thing in {{ZkController.init}}...{code}
// Do this last to signal we're up.
createEphemeralLiveNode();
{code}...this line+comment was added recently in SOLR-8696 when it replaced 
another previously existing call to {{createEphemeralLiveNode}} that was 
earlier in the init method (see 
https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;a=commitdiff;h=8ac4fdd;hp=7d32456efa4ade0130c3ed0ae677aa47b29355a9
 )
* Even if {{/live_nodes}} were only populated as the very last line in 
{{ZkController.init}}, that's far from the last thing that happens when a solr 
node starts up. Things that happen after {{ZkController}} is initialized but 
before {{CoreContainer.createAndLoad}} returns and the {{SolrDispatchFilter}} 
starts accepting requests:
** {{ZkContainer.initZooKeeper}}...
*** whatever the hell this is supposed to do...{code}
if (zkRun != null && zkServer.getServers().size() > 1 && confDir == null && 
boostrapConf == false) {
  // we are part of an ensemble and we are not uploading the config - pause to 
give the config time
  // to get up
  Thread.sleep(1);
}
{code}
*** any node that has a confDir uploads it to zk: 
{{configManager.uploadConfigDir(configPath, confName);}} (even if it's not 
bootstrapping???)
*** any node that *IS* doing bootstrap does that: 
{{ZkController.bootstrapConf(zkController.getZkClient(), cc, solrHome);}}
** {{CoreContainer.load()}}...
*** Authentication plugins are initialized
*** core & collection & configset & container handlers are initialized
*** *{{CoreDescriptor}}s FOR EACH CORE DIR ON DISK ARE LOADED*
 which of course means opening transaction logs, opening indexwriters, opening 
searchers, newSearcher event listeners, etc...
*** {{ZkController.checkOverseerDesignate()}} is called (no idea what that does)


Which all leads me to the following conclusions...

# when using {{MiniSolrCloudCluster}}, if you are lucky, there will be at least 
one node not yet in {{/live_nodes}} when it does its first check, and then it 
will sleep 1 second giving those nodes time to _actually_ start up & load their 
cores, and hopefully at least one of them will be completely finished by the 
time you actually try to use a {{CloudSolrClient}} pointed at that ZK 
{{/live_nodes}} data.
# unless there is some other "i'm alive" data in ZK that 
{{MiniSolrCloudCluster}} should be consulting, it seems like it's doing the 
best it can to ensure that all the nodes are live before returning to the caller
# *This does not seem like a problem that only affects tests.*  This seems 
like a real-world problem we should address -- {{CloudSolrClient}} should be 
able to consult some info in ZK to know when a node is _really_ alive and ready 
for requests.
#* if there is a reason why the {{/live_nodes}} entry needs to be created as 
early as it is (ie: {{// we have to register as live first to pick up docs in 
the buffer}}) then it should only be created that one time and some other 
ephemeral node should be used
#* whatever ephemeral node is used should be created by a very explicit, very 
special method call made as the very last thing in {{SolrDispatchFilter}}
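
The ordering proposed in those last two bullets can be sketched abstractly: keep the early {{/live_nodes}} registration for its buffering purpose, but publish a separate "ready" marker only after every startup step has completed. The node paths, class, and method names below are illustrative, not actual Solr or ZooKeeper APIs:

```java
import java.util.ArrayList;
import java.util.List;

/** Order-of-operations sketch: a distinct readiness marker is published only
 *  after all startup work is done, unlike /live_nodes which appears
 *  mid-initialization. */
public class ReadinessSketch {
    final List<String> zkNodes = new ArrayList<>(); // stand-in for ZK ephemeral nodes

    void startNode() {
        zkNodes.add("/live_nodes/node1");  // registered early (to pick up buffered docs)
        loadCores();                       // the slow part: tlogs, IndexWriters, searchers
        zkNodes.add("/nodes_ready/node1"); // explicit signal, last thing before serving
    }

    void loadCores() { /* placeholder for core loading */ }

    /** A smart client would consult the ready marker, not /live_nodes. */
    boolean isReady() { return zkNodes.contains("/nodes_ready/node1"); }
}
```

Both nodes would be ephemeral in ZK, so either a crash mid-startup or a later disconnect removes the ready marker automatically.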




[jira] [Created] (SOLR-8878) Allow the DaemonStream run rate to be controlled by the internal stream

2016-03-19 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-8878:


 Summary: Allow the DaemonStream run rate to be controlled by the 
internal stream
 Key: SOLR-8878
 URL: https://issues.apache.org/jira/browse/SOLR-8878
 Project: Solr
  Issue Type: Improvement
Reporter: Joel Bernstein


Currently the DaemonStream sleeps for one second and then checks the 
runInterval param to determine if it needs to rerun the internal stream.

This setup will work fine if the runInterval is longer than one second and if 
it never changes. But with the TopicStream, you want a variable run rate. For 
example if the TopicStream's latest run has returned documents, the next run 
should be immediate. But if the TopicStream's latest run returned zero 
documents then you'd want to sleep for a period of time before starting the 
next run.

This ticket allows the internal stream to control the DaemonStream run rate by 
adding a *sleepMillis* key-pair to the EOF Tuple. After each run the 
DaemonStream will check the EOF Tuple from the internal stream and if the 
sleepMillis key-pair is present it will adjust its run rate accordingly.








[jira] [Updated] (SOLR-8878) Allow the DaemonStream run rate be controlled by the internal stream

2016-03-19 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8878:
-
Summary: Allow the DaemonStream run rate be controlled by the internal 
stream  (was: Allow the DaemonStream run rate to be controlled by the internal 
stream)

> Allow the DaemonStream run rate be controlled by the internal stream
> 
>
> Key: SOLR-8878
> URL: https://issues.apache.org/jira/browse/SOLR-8878
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>
> Currently the DaemonStream sleeps for one second and then checks the 
> runInterval param to determine if it needs to rerun the internal stream.
> This setup will work fine if the runInterval is longer than one second and if 
> it never changes. But with the TopicStream, you want a variable run rate. For 
> example if the TopicStream's latest run has returned documents, the next run 
> should be immediate. But if the TopicStream's latest run returned zero 
> documents then you'd want to sleep for a period of time before starting the 
> next run.
> This ticket allows the internal stream to control the DaemonStream run rate 
> by adding a *sleepMillis* key-pair to the EOF Tuple. After each run the 
> DaemonStream will check the EOF Tuple from the internal stream and if the 
> sleepMillis key-pair is present it will adjust its run rate accordingly.






[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_72) - Build # 182 - Failure!

2016-03-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/182/
Java: 64bit/jdk1.8.0_72 -XX:-UseCompressedOops -XX:+UseG1GC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SaslZkACLProviderTest

Error Message:
6 threads leaked from SUITE scope at 
org.apache.solr.cloud.SaslZkACLProviderTest: 1) Thread[id=9728, 
name=changePwdReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)2) Thread[id=9732, 
name=pool-3-thread-1, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
 at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)3) Thread[id=9730, 
name=kdcReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)4) Thread[id=9727, 
name=apacheds, state=WAITING, group=TGRP-SaslZkACLProviderTest] at 
java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:502) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)5) Thread[id=9731, 
name=groupCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
 at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)6) Thread[id=9729, 
name=ou=system.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 

[jira] [Commented] (SOLR-6806) Reduce the size of the main Solr binary download

2016-03-19 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203043#comment-15203043
 ] 

Alexandre Rafalovitch commented on SOLR-6806:
-

Another data point. We have two copies of the icu4j-54.1.jar library at 11Mb 
each (in Solr 5.5). They are at:
{quote}
./contrib/analysis-extras/lib/icu4j-54.1.jar
./contrib/extraction/lib/icu4j-54.1.jar
{quote}

We probably only need one of them; I am guessing the one in /extraction.

> Reduce the size of the main Solr binary download
> 
>
> Key: SOLR-6806
> URL: https://issues.apache.org/jira/browse/SOLR-6806
> Project: Solr
>  Issue Type: Task
>  Components: Build
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>
> There has been a lot of recent discussion about how large the Solr download 
> is, and how to reduce its size.  The last release (4.10.2) weighs in at 143MB 
> for the tar and 149MB for the zip.
> Most users do not need the full download.  They may never need contrib 
> features, or they may only need one or two, with DIH being the most likely 
> choice.  They could likely get by with a download that's less than 40 MB.
> Our primary competition has a 29MB zip download for the release that's 
> current right now, and not too long ago, that was about 20MB.  I didn't look 
> very deep, but any additional features that might be available for download 
> were not immediately apparent on their website.  I'm sure they exist, but I 
> would guess that most users never need those features, so most users never 
> even see them.
> Solr, by contrast, has everything included ... a "kitchen sink" approach. 
> Once you get past the long download time and fire up the example, you're 
> presented with configs that include features you're likely to never use.
> Although this offers maximum flexibility, I think it also serves to cause 
> confusion in a new user.
> A much better option would be to create a core download that includes only a 
> minimum set of features, probably just the war, the example servlet 
> container, and an example config that only uses the functionality present in 
> the war.  We can create additional downloads that offer additional 
> functionality and configs ... DIH would be a very small addon that would 
> likely be downloaded frequently.
> SOLR-5103 describes a plugin infrastructure which would make it very easy to 
> offer a small core download and then let the user download additional 
> functionality using scripts or the UI.






[JENKINS] Lucene-Solr-6.x-Windows (64bit/jdk1.8.0_72) - Build # 59 - Failure!

2016-03-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/59/
Java: 64bit/jdk1.8.0_72 -XX:-UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=11440, 
name=SocketProxy-Request-50097:49680, state=RUNNABLE, 
group=TGRP-HttpPartitionTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=11440, name=SocketProxy-Request-50097:49680, 
state=RUNNABLE, group=TGRP-HttpPartitionTest]
Caused by: java.lang.RuntimeException: java.net.SocketException: Socket is 
closed
at __randomizedtesting.SeedInfo.seed([2AE91459B16B2AD2]:0)
at 
org.apache.solr.cloud.SocketProxy$Bridge$Pump.run(SocketProxy.java:347)
Caused by: java.net.SocketException: Socket is closed
at java.net.Socket.setSoTimeout(Socket.java:1137)
at 
org.apache.solr.cloud.SocketProxy$Bridge$Pump.run(SocketProxy.java:344)


FAILED:  org.apache.solr.handler.TestReplicationHandler.doTestStressReplication

Error Message:
timed out waiting for collection1 startAt time to exceed: Sun Mar 20 01:23:57 
CET 2016

Stack Trace:
java.lang.AssertionError: timed out waiting for collection1 startAt time to 
exceed: Sun Mar 20 01:23:57 CET 2016
at 
__randomizedtesting.SeedInfo.seed([2AE91459B16B2AD2:F142149FB4434361]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.handler.TestReplicationHandler.watchCoreStartAt(TestReplicationHandler.java:1422)
at 
org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:774)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-jigsaw-ea+110) - Build # 16271 - Failure!

2016-03-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16271/
Java: 64bit/jdk-9-jigsaw-ea+110 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 
-XX:-CompactStrings

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.hadoop.MorphlineMapperTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.hadoop.MorphlineMapperTest: 
1) Thread[id=21, name=Thread-2, state=TIMED_WAITING, 
group=TGRP-MorphlineMapperTest] at 
jdk.internal.misc.Unsafe.park(java.base@9-ea/Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(java.base@9-ea/LockSupport.java:230)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(java.base@9-ea/AbstractQueuedSynchronizer.java:1063)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(java.base@9-ea/AbstractQueuedSynchronizer.java:1356)
 at 
java.util.concurrent.CountDownLatch.await(java.base@9-ea/CountDownLatch.java:278)
 at org.apache.solr.hadoop.HeartBeater.run(HeartBeater.java:109)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.hadoop.MorphlineMapperTest: 
   1) Thread[id=21, name=Thread-2, state=TIMED_WAITING, 
group=TGRP-MorphlineMapperTest]
at jdk.internal.misc.Unsafe.park(java.base@9-ea/Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(java.base@9-ea/LockSupport.java:230)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(java.base@9-ea/AbstractQueuedSynchronizer.java:1063)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(java.base@9-ea/AbstractQueuedSynchronizer.java:1356)
at 
java.util.concurrent.CountDownLatch.await(java.base@9-ea/CountDownLatch.java:278)
at org.apache.solr.hadoop.HeartBeater.run(HeartBeater.java:109)
at __randomizedtesting.SeedInfo.seed([40242F81C1442895]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.hadoop.MorphlineMapperTest

Error Message:
There are still zombie threads that couldn't be terminated:1) Thread[id=21, 
name=Thread-2, state=TIMED_WAITING, group=TGRP-MorphlineMapperTest] at 
jdk.internal.misc.Unsafe.park(java.base@9-ea/Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(java.base@9-ea/LockSupport.java:230)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(java.base@9-ea/AbstractQueuedSynchronizer.java:1063)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(java.base@9-ea/AbstractQueuedSynchronizer.java:1356)
 at 
java.util.concurrent.CountDownLatch.await(java.base@9-ea/CountDownLatch.java:278)
 at org.apache.solr.hadoop.HeartBeater.run(HeartBeater.java:109)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=21, name=Thread-2, state=TIMED_WAITING, 
group=TGRP-MorphlineMapperTest]
at jdk.internal.misc.Unsafe.park(java.base@9-ea/Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(java.base@9-ea/LockSupport.java:230)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(java.base@9-ea/AbstractQueuedSynchronizer.java:1063)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(java.base@9-ea/AbstractQueuedSynchronizer.java:1356)
at 
java.util.concurrent.CountDownLatch.await(java.base@9-ea/CountDownLatch.java:278)
at org.apache.solr.hadoop.HeartBeater.run(HeartBeater.java:109)
at __randomizedtesting.SeedInfo.seed([40242F81C1442895]:0)
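The leaked thread is the HeartBeater worker parked in CountDownLatch.await(); if the suite finishes without counting the latch down, the thread outlives the test and trips the leak detector. A minimal standalone sketch of that failure mode (class and thread names here are illustrative, not Solr code):

```java
import java.util.concurrent.CountDownLatch;

public class LeakSketch {
    public static void main(String[] args) throws Exception {
        CountDownLatch latch = new CountDownLatch(1);
        // A worker that parks until the latch is released, like HeartBeater.run()
        Thread heartbeat = new Thread(() -> {
            try {
                latch.await();   // TIMED_WAITING/WAITING until countDown()
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "Thread-2");
        heartbeat.start();
        // ...test body would run and finish here; if it forgets countDown(),
        // the thread is still parked and the leak detector reports it...
        latch.countDown();       // releasing the latch lets the thread exit
        heartbeat.join(1000);
        System.out.println(heartbeat.isAlive());
    }
}
```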


FAILED:  org.apache.solr.hadoop.MorphlineMapperTest.testMapper

Error Message:
No command builder registered for name: separateAttachments near: { # 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/contrib/solr-map-reduce/test/J0/temp/solr.hadoop.MorphlineMapperTest_40242F81C1442895-001/tempDir-001/test-morphlines/solrCellDocumentTypes.conf:
 28 "separateAttachments" : {} }

Stack Trace:
org.kitesdk.morphline.api.MorphlineCompilationException: No command builder 
registered for name: separateAttachments near: {
# 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/contrib/solr-map-reduce/test/J0/temp/solr.hadoop.MorphlineMapperTest_40242F81C1442895-001/tempDir-001/test-morphlines/solrCellDocumentTypes.conf:
 28
"separateAttachments" : {}
}
at 
__randomizedtesting.SeedInfo.seed([40242F81C1442895:8A505F8DAA7B61E4]:0)
at 
org.kitesdk.morphline.base.AbstractCommand.buildCommand(AbstractCommand.java:281)
at 
org.kitesdk.morphline.base.AbstractCommand.buildCommandChain(AbstractCommand.java:249)
at org.kitesdk.morphline.stdlib.Pipe.&lt;init&gt;(Pipe.java:46)
at org.kitesdk.morphline.stdlib.PipeBuilder.build(PipeBuilder.java:40)

[jira] [Commented] (SOLR-8765) Enforce required parameters in SolrJ Collection APIs

2016-03-19 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15198341#comment-15198341
 ] 

Anshum Gupta commented on SOLR-8765:


[~romseygeek] when do you plan on reverting the 
SolrIdentifierValidator.validate usage and behavior to not throw an exception 
directly? If you don't have the time, I can take care of it.

> Enforce required parameters in SolrJ Collection APIs
> 
>
> Key: SOLR-8765
> URL: https://issues.apache.org/jira/browse/SOLR-8765
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Fix For: 6.1
>
> Attachments: SOLR-8765-splitshard.patch, SOLR-8765-splitshard.patch, 
> SOLR-8765.patch, SOLR-8765.patch
>
>
> Several Collection API commands have required parameters.  We should make 
> these constructor parameters, to enforce setting these in the API.
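A hypothetical sketch of the idea: required parameters become constructor arguments so the compiler enforces them, while optional ones remain fluent setters. The class and field names below are illustrative only, not the actual SolrJ API:

```java
// Sketch: a request whose required parameter cannot be omitted.
public class CreateCollectionRequest {
    private final String collectionName;   // required: enforced at construction
    private Integer numShards;             // optional: fluent setter

    public CreateCollectionRequest(String collectionName) {
        if (collectionName == null) {
            throw new IllegalArgumentException("collection name is required");
        }
        this.collectionName = collectionName;
    }

    public CreateCollectionRequest setNumShards(int numShards) {
        this.numShards = numShards;
        return this;
    }

    public static void main(String[] args) {
        // Forgetting the collection name is now a compile-time error,
        // not a runtime failure on the server.
        CreateCollectionRequest req = new CreateCollectionRequest("test").setNumShards(2);
        System.out.println(req.collectionName);
    }
}
```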



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8742) HdfsDirectoryTest fails reliably after changes in LUCENE-6932

2016-03-19 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15197568#comment-15197568
 ] 

Mark Miller commented on SOLR-8742:
---

Never mind - it needed more than just the seed, I guess.

> HdfsDirectoryTest fails reliably after changes in LUCENE-6932
> -
>
> Key: SOLR-8742
> URL: https://issues.apache.org/jira/browse/SOLR-8742
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>
> the following seed fails reliably for me on master...
> {noformat}
>[junit4]   2> 1370568 INFO  
> (TEST-HdfsDirectoryTest.testEOF-seed#[A0D22782D87E1CE2]) [] 
> o.a.s.SolrTestCaseJ4 ###Ending testEOF
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=HdfsDirectoryTest 
> -Dtests.method=testEOF -Dtests.seed=A0D22782D87E1CE2 -Dtests.slow=true 
> -Dtests.locale=es-PR -Dtests.timezone=Indian/Mauritius -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] ERROR   0.13s J0 | HdfsDirectoryTest.testEOF <<<
>[junit4]> Throwable #1: java.lang.NullPointerException
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([A0D22782D87E1CE2:31B9658A9A5ABA9E]:0)
>[junit4]>  at 
> org.apache.lucene.store.RAMInputStream.readByte(RAMInputStream.java:69)
>[junit4]>  at 
> org.apache.solr.store.hdfs.HdfsDirectoryTest.testEof(HdfsDirectoryTest.java:159)
>[junit4]>  at 
> org.apache.solr.store.hdfs.HdfsDirectoryTest.testEOF(HdfsDirectoryTest.java:151)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
> {noformat}
> git bisect says this is the first commit where it started failing..
> {noformat}
> ddc65d977f920013c5fca16c8ac75ae2c6895f9d is the first bad commit
> commit ddc65d977f920013c5fca16c8ac75ae2c6895f9d
> Author: Michael McCandless 
> Date:   Thu Jan 21 17:50:28 2016 +
> LUCENE-6932: RAMInputStream now throws EOFException if you seek beyond 
> the end of the file
> 
> git-svn-id: https://svn.apache.org/repos/asf/lucene/dev/trunk@1726039 
> 13f79535-47bb-0310-9956-ffa450edef68
> {noformat}
> ...which seems remarkably relevant and likely to indicate a problem that 
> needs to be fixed in the HdfsDirectory code (or perhaps just the test)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8798) org.apache.solr.rest.RestManager can't find cyrillic synonyms.

2016-03-19 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15201982#comment-15201982
 ] 

Steve Rowe commented on SOLR-8798:
--

I think this problem was fixed in Solr 4.10 by SOLR-6163.

Does anybody have this issue in Solr 4.10+?

> org.apache.solr.rest.RestManager can't find cyrillic synonyms.
> --
>
> Key: SOLR-8798
> URL: https://issues.apache.org/jira/browse/SOLR-8798
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.9.1
>Reporter: Vitalii
>
> RestManager doesn't work well with cyrillic symbols.
> I'm able to create new synonyms via the REST interface. But I get an error 
> when I try to retrieve the created synonyms via this request:
> http://localhost:8983/solr/collection1/schema/analysis/synonyms/18/ліжко
> I get this message in console log:
> {code}
> # solr/console.log
> 4591823 [qtp1281335597-14] INFO  org.apache.solr.rest.RestManager  – Resource 
> not found for /schema/analysis/synonyms/18/%D0%BB%D1%96%D0%B6%D0%BA%D0%BE, 
> looking for parent: /schema/analysis/synonyms/18
> {code}
> But in synonyms file I have row with this word:
> {code}
> # /solr/collection1/conf/_schema_analysis_synonyms_18.json
>   "initArgs":{"ignoreCase":false},
>   "initializedOn":"2016-03-07T11:57:00.116Z",
>   "updatedSinceInit":"2016-03-07T12:19:11.174Z",
>   "managedMap":{
> "ліжко":["кровать"],
> "стілець":["стул"]}}
> {code}
> Multiple people have tested this issue and confirm that they faced this 
> problem too.
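The log line shows the managed resource path arriving percent-encoded ("/schema/analysis/synonyms/18/%D0%BB%D1%96%D0%B6%D0%BA%D0%BE"); a plausible culprit is a lookup against the raw encoded path instead of its UTF-8-decoded form. A small standalone illustration of the decoding step (this is not RestManager code, just the transformation involved):

```java
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;

public class DecodeSynonymPath {
    public static void main(String[] args) throws Exception {
        // Path segment as it appears in the request log
        String encoded = "%D0%BB%D1%96%D0%B6%D0%BA%D0%BE";
        // The managed-map key is the decoded Cyrillic word, so the lookup
        // must decode before comparing against stored synonym keys.
        String decoded = URLDecoder.decode(encoded, StandardCharsets.UTF_8.name());
        System.out.println(decoded);
    }
}
```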



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8082) can't query against negative float or double values when indexed="false" docValues="true" multiValued="false"

2016-03-19 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-8082:
---
Attachment: SOLR-8082.patch

Here's an updated patch that uses the same method as {!frange} for a range 
query. The excellent tests in the previous patches helped uncover a limitation 
in ValueSourceRangeFilter that prevented including infinite endpoints.

I had to make one change to the tests: I removed -0 as a special value in 
testFloatAndDoubleRangeQueryRandom.  Since -0 == +0, any range including one 
should include the other.  I don't think we should support or guarantee 
behavior of "different zeros" in any case - way too many potential bugs, and it 
limits us from an implementation standpoint (different bit patterns for 
equivalent values).
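The "different zeros" point can be checked directly: -0.0f and +0.0f compare equal under ==, yet have different bit patterns, which is exactly what makes guaranteeing distinct-zero behavior awkward for bit-pattern-based encodings. A quick standalone check:

```java
public class ZeroBits {
    public static void main(String[] args) {
        // The two zeros are equal as float values...
        System.out.println(-0.0f == 0.0f);
        // ...but their IEEE 754 bit patterns differ (0x00000000 vs 0x80000000),
        // so a bit-level encoding cannot treat them as the same value.
        System.out.println(Float.floatToIntBits(0.0f));
        System.out.println(Float.floatToIntBits(-0.0f));
    }
}
```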


> can't query against negative float or double values when indexed="false" 
> docValues="true" multiValued="false"
> -
>
> Key: SOLR-8082
> URL: https://issues.apache.org/jira/browse/SOLR-8082
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
> Attachments: SOLR-8082.patch, SOLR-8082.patch, SOLR-8082.patch, 
> SOLR-8082.patch, SOLR-8082.patch, SOLR-8082.patch, SOLR-8082.patch
>
>
> Haven't dug into this yet, but something is evidently wrong in how the 
> DocValues based queries get built for single valued float or double fields 
> when negative numbers are involved.
> Steps to reproduce...
> {noformat}
> $ bin/solr -e schemaless -noprompt
> ...
> $ curl -X POST -H 'Content-type:application/json' --data-binary '{ 
> "add-field":{ "name":"f_dv_multi", "type":"tfloat", "stored":"true", 
> "indexed":"false", "docValues":"true", "multiValued":"true" }, "add-field":{ 
> "name":"f_dv_single", "type":"tfloat", "stored":"true", "indexed":"false", 
> "docValues":"true", "multiValued":"false" } }' 
> http://localhost:8983/solr/gettingstarted/schema
> {
>   "responseHeader":{
> "status":0,
> "QTime":84}}
> $ curl -X POST -H 'Content-type:application/json' --data-binary 
> '[{"id":"test", "f_dv_multi":-4.3, "f_dv_single":-4.3}]' 
> 'http://localhost:8983/solr/gettingstarted/update/json/docs?commit=true'
> {"responseHeader":{"status":0,"QTime":57}}
> $ curl 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_multi:"-4.3"'
> {
>   "responseHeader":{
> "status":0,
> "QTime":5,
> "params":{
>   "q":"f_dv_multi:\"-4.3\""}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"test",
> "f_dv_multi":[-4.3],
> "f_dv_single":-4.3,
> "_version_":1512962117004689408}]
>   }}
> $ curl 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_single:"-4.3"'
> {
>   "responseHeader":{
> "status":0,
> "QTime":5,
> "params":{
>   "q":"f_dv_single:\"-4.3\""}},
>   "response":{"numFound":0,"start":0,"docs":[]
>   }}
> {noformat}
> Explicit range queries (which is how numeric "field" queries are implemented 
> under the cover) are equally problematic...
> {noformat}
> $ curl 
> 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_multi:%5B-4.3+TO+-4.3%5D'
> {
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"f_dv_multi:[-4.3 TO -4.3]"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"test",
> "f_dv_multi":[-4.3],
> "f_dv_single":-4.3,
> "_version_":1512962117004689408}]
>   }}
> $ curl 
> 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_single:%5B-4.3+TO+-4.3%5D'
> {
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"f_dv_single:[-4.3 TO -4.3]"}},
>   "response":{"numFound":0,"start":0,"docs":[]
>   }}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8742) HdfsDirectoryTest fails reliably after changes in LUCENE-6932

2016-03-19 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15197616#comment-15197616
 ] 

Mark Miller commented on SOLR-8742:
---

Also, this case is using a raw RAMInputStream - nothing HDFS-specific in 
this failure.

> HdfsDirectoryTest fails reliably after changes in LUCENE-6932
> -
>
> Key: SOLR-8742
> URL: https://issues.apache.org/jira/browse/SOLR-8742
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>
> the following seed fails reliably for me on master...
> {noformat}
>[junit4]   2> 1370568 INFO  
> (TEST-HdfsDirectoryTest.testEOF-seed#[A0D22782D87E1CE2]) [] 
> o.a.s.SolrTestCaseJ4 ###Ending testEOF
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=HdfsDirectoryTest 
> -Dtests.method=testEOF -Dtests.seed=A0D22782D87E1CE2 -Dtests.slow=true 
> -Dtests.locale=es-PR -Dtests.timezone=Indian/Mauritius -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] ERROR   0.13s J0 | HdfsDirectoryTest.testEOF <<<
>[junit4]> Throwable #1: java.lang.NullPointerException
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([A0D22782D87E1CE2:31B9658A9A5ABA9E]:0)
>[junit4]>  at 
> org.apache.lucene.store.RAMInputStream.readByte(RAMInputStream.java:69)
>[junit4]>  at 
> org.apache.solr.store.hdfs.HdfsDirectoryTest.testEof(HdfsDirectoryTest.java:159)
>[junit4]>  at 
> org.apache.solr.store.hdfs.HdfsDirectoryTest.testEOF(HdfsDirectoryTest.java:151)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
> {noformat}
> git bisect says this is the first commit where it started failing..
> {noformat}
> ddc65d977f920013c5fca16c8ac75ae2c6895f9d is the first bad commit
> commit ddc65d977f920013c5fca16c8ac75ae2c6895f9d
> Author: Michael McCandless 
> Date:   Thu Jan 21 17:50:28 2016 +
> LUCENE-6932: RAMInputStream now throws EOFException if you seek beyond 
> the end of the file
> 
> git-svn-id: https://svn.apache.org/repos/asf/lucene/dev/trunk@1726039 
> 13f79535-47bb-0310-9956-ffa450edef68
> {noformat}
> ...which seems remarkably relevant and likely to indicate a problem that 
> needs to be fixed in the HdfsDirectory code (or perhaps just the test)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7114) analyzers-common tests fail with JDK9 EA 110 build

2016-03-19 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203014#comment-15203014
 ] 

Uwe Schindler commented on LUCENE-7114:
---

FYI: I have now enabled the JDK 9 build 110 Jigsaw build on Policeman Jenkins, 
but with compact strings disabled. For me this combination always passed.

This build has the other annoying bugs fixed, so we can try a while. As Lucene 
and Solr should work with the module system now, this looks like the best 
option.

> analyzers-common tests fail with JDK9 EA 110 build
> --
>
> Key: LUCENE-7114
> URL: https://issues.apache.org/jira/browse/LUCENE-7114
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>
> Looks like this:
> {noformat}
>[junit4] Suite: org.apache.lucene.analysis.fr.TestFrenchLightStemFilter
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestFrenchLightStemFilter -Dtests.method=testVocabulary 
> -Dtests.seed=4044297F9BFA5E32 -Dtests.locale=az-Cyrl-AZ -Dtests.timezone=ACT 
> -Dtests.asserts=true -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 0.44s J0 | TestFrenchLightStemFilter.testVocabulary <<<
>[junit4]> Throwable #1: org.junit.ComparisonFailure: term 0 
> expected: but was:
> {noformat}
> So far I see these failing with French and Portuguese. It may be a HotSpot 
> issue, as these tests stem more than 10,000 words.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_72) - Build # 5721 - Failure!

2016-03-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/5721/
Java: 32bit/jdk1.8.0_72 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.lucene.replicator.IndexAndTaxonomyReplicationClientTest.testConsistencyOnExceptions

Error Message:
Captured an uncaught exception in thread: Thread[id=34, 
name=ReplicationThread-indexAndTaxo, state=RUNNABLE, 
group=TGRP-IndexAndTaxonomyReplicationClientTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=34, name=ReplicationThread-indexAndTaxo, 
state=RUNNABLE, group=TGRP-IndexAndTaxonomyReplicationClientTest]
at 
__randomizedtesting.SeedInfo.seed([34760BA8602379BF:BBF8EC08724F8A40]:0)
Caused by: java.lang.AssertionError: handler failed too many times: -1
at __randomizedtesting.SeedInfo.seed([34760BA8602379BF]:0)
at 
org.apache.lucene.replicator.IndexAndTaxonomyReplicationClientTest$4.handleUpdateException(IndexAndTaxonomyReplicationClientTest.java:434)
at 
org.apache.lucene.replicator.ReplicationClient$ReplicationThread.run(ReplicationClient.java:77)




Build Log:
[...truncated 8159 lines...]
   [junit4] Suite: 
org.apache.lucene.replicator.IndexAndTaxonomyReplicationClientTest
   [junit4]   2> Mac 20, 2016 12:50:04 AM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
   [junit4]   2> WARNING: Uncaught exception in thread: 
Thread[ReplicationThread-indexAndTaxo,5,TGRP-IndexAndTaxonomyReplicationClientTest]
   [junit4]   2> java.lang.AssertionError: handler failed too many times: -1
   [junit4]   2>at 
__randomizedtesting.SeedInfo.seed([34760BA8602379BF]:0)
   [junit4]   2>at 
org.apache.lucene.replicator.IndexAndTaxonomyReplicationClientTest$4.handleUpdateException(IndexAndTaxonomyReplicationClientTest.java:434)
   [junit4]   2>at 
org.apache.lucene.replicator.ReplicationClient$ReplicationThread.run(ReplicationClient.java:77)
   [junit4]   2> 
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=IndexAndTaxonomyReplicationClientTest 
-Dtests.method=testConsistencyOnExceptions -Dtests.seed=34760BA8602379BF 
-Dtests.slow=true -Dtests.locale=ms -Dtests.timezone=Africa/Douala 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8
   [junit4] ERROR   2.62s J0 | 
IndexAndTaxonomyReplicationClientTest.testConsistencyOnExceptions <<<
   [junit4]> Throwable #1: 
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=34, name=ReplicationThread-indexAndTaxo, 
state=RUNNABLE, group=TGRP-IndexAndTaxonomyReplicationClientTest]
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([34760BA8602379BF:BBF8EC08724F8A40]:0)
   [junit4]> Caused by: java.lang.AssertionError: handler failed too many 
times: -1
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([34760BA8602379BF]:0)
   [junit4]>at 
org.apache.lucene.replicator.IndexAndTaxonomyReplicationClientTest$4.handleUpdateException(IndexAndTaxonomyReplicationClientTest.java:434)
   [junit4]>at 
org.apache.lucene.replicator.ReplicationClient$ReplicationThread.run(ReplicationClient.java:77)
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene60): 
{$full_path$=PostingsFormat(name=Direct), $facets=Lucene50(blocksize=128), 
$payloads$=BlockTreeOrds(blocksize=128)}, 
docValues:{$facets=DocValuesFormat(name=Lucene54)}, maxPointsInLeafNode=1184, 
maxMBSortInHeap=5.881832353512153, sim=ClassicSimilarity, locale=ms, 
timezone=Africa/Douala
   [junit4]   2> NOTE: Windows 10 10.0 x86/Oracle Corporation 1.8.0_72 
(32-bit)/cpus=3,threads=1,free=6708744,total=23576576
   [junit4]   2> NOTE: All tests run in this JVM: [HttpReplicatorTest, 
IndexAndTaxonomyReplicationClientTest]
   [junit4] Completed [4/9 (1!)] on J0 in 4.03s, 5 tests, 1 error <<< FAILURES!

[...truncated 27 lines...]
BUILD FAILED
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\build.xml:740: The 
following error occurred while executing this line:
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\build.xml:684: The 
following error occurred while executing this line:
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\build.xml:59: The 
following error occurred while executing this line:
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build.xml:476: The 
following error occurred while executing this line:
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\common-build.xml:2187:
 The following error occurred while executing this line:
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\module-build.xml:58:
 The following error occurred while executing this line:
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\common-build.xml:1457:
 The following error occurred while executing this line:
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\common-build.xml:1014:
 There were test failures: 

[jira] [Commented] (SOLR-6806) Reduce the size of the main Solr binary download

2016-03-19 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203002#comment-15203002
 ] 

Alexandre Rafalovitch commented on SOLR-6806:
-

Just as a data point: Javadocs are an extra 5 MB in the download and ~70 MB when 
unpacked, and most of the classes they describe are not very useful for a 
non-developer. I'd cut that and instead make the Javadoc easily accessible 
through the main website.

> Reduce the size of the main Solr binary download
> 
>
> Key: SOLR-6806
> URL: https://issues.apache.org/jira/browse/SOLR-6806
> Project: Solr
>  Issue Type: Task
>  Components: Build
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>
> There has been a lot of recent discussion about how large the Solr download 
> is, and how to reduce its size.  The last release (4.10.2) weighs in at 143MB 
> for the tar and 149MB for the zip.
> Most users do not need the full download.  They may never need contrib 
> features, or they may only need one or two, with DIH being the most likely 
> choice.  They could likely get by with a download that's less than 40 MB.
> Our primary competition has a 29MB zip download for the release that's 
> current right now, and not too long ago, that was about 20MB.  I didn't look 
> very deep, but any additional features that might be available for download 
> were not immediately apparent on their website.  I'm sure they exist, but I 
> would guess that most users never need those features, so most users never 
> even see them.
> Solr, by contrast, has everything included ... a "kitchen sink" approach. 
> Once you get past the long download time and fire up the example, you're 
> presented with configs that include features you're likely to never use.
> Although this offers maximum flexibility, I think it also serves to cause 
> confusion in a new user.
> A much better option would be to create a core download that includes only a 
> minimum set of features, probably just the war, the example servlet 
> container, and an example config that only uses the functionality present in 
> the war.  We can create additional downloads that offer additional 
> functionality and configs ... DIH would be a very small addon that would 
> likely be downloaded frequently.
> SOLR-5103 describes a plugin infrastructure which would make it very easy to 
> offer a small core download and then let the user download additional 
> functionality using scripts or the UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 18 - Still Failing!

2016-03-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/18/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.lucene.index.TestExitableDirectoryReader.testExitableFilterIndexReader

Error Message:
The request took too long to iterate over terms. Timeout: timeoutAt: 
452904345456667 (System.nanoTime(): 452904498005608), 
TermsEnum=org.apache.lucene.codecs.blocktree.SegmentTermsEnum@26ed4b47

Stack Trace:
org.apache.lucene.index.ExitableDirectoryReader$ExitingReaderException: The 
request took too long to iterate over terms. Timeout: timeoutAt: 
452904345456667 (System.nanoTime(): 452904498005608), 
TermsEnum=org.apache.lucene.codecs.blocktree.SegmentTermsEnum@26ed4b47
at 
__randomizedtesting.SeedInfo.seed([9C8EA8C130A21711:24EB05805B789EE8]:0)
at 
org.apache.lucene.index.ExitableDirectoryReader$ExitableTermsEnum.checkAndThrow(ExitableDirectoryReader.java:173)
at 
org.apache.lucene.index.ExitableDirectoryReader$ExitableTermsEnum.&lt;init&gt;(ExitableDirectoryReader.java:163)
at 
org.apache.lucene.index.ExitableDirectoryReader$ExitableTerms.iterator(ExitableDirectoryReader.java:147)
at 
org.apache.lucene.index.FilterLeafReader$FilterTerms.iterator(FilterLeafReader.java:113)
at 
org.apache.lucene.index.TestExitableDirectoryReader$TestReader$TestTerms.iterator(TestExitableDirectoryReader.java:58)
at org.apache.lucene.index.Terms.intersect(Terms.java:72)
at 
org.apache.lucene.util.automaton.CompiledAutomaton.getTermsEnum(CompiledAutomaton.java:336)
at 
org.apache.lucene.search.AutomatonQuery.getTermsEnum(AutomatonQuery.java:107)
at 
org.apache.lucene.search.MultiTermQuery.getTermsEnum(MultiTermQuery.java:304)
at 
org.apache.lucene.search.MultiTermQueryConstantScoreWrapper$1.rewrite(MultiTermQueryConstantScoreWrapper.java:148)
at 
org.apache.lucene.search.MultiTermQueryConstantScoreWrapper$1.bulkScorer(MultiTermQueryConstantScoreWrapper.java:201)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:592)
at 
org.apache.lucene.search.IndexSearcher.searchAfter(IndexSearcher.java:450)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:461)
at 
org.apache.lucene.index.TestExitableDirectoryReader.testExitableFilterIndexReader(TestExitableDirectoryReader.java:128)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 

[jira] [Updated] (SOLR-8812) ExtendedDismaxQParser (edismax) ignores Boolean OR when q.op=AND

2016-03-19 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-8812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-8812:
--
Attachment: SOLR-8812.patch

Attaching a patch with a failing test case. It is quite shocking that we 
apparently did not already have test coverage of basic OR with q.op=AND.

> ExtendedDismaxQParser (edismax) ignores Boolean OR when q.op=AND
> 
>
> Key: SOLR-8812
> URL: https://issues.apache.org/jira/browse/SOLR-8812
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers
>Affects Versions: 5.5
>Reporter: Ryan Steinberg
> Attachments: SOLR-8812.patch
>
>
> The edismax parser ignores Boolean OR in queries when q.op=AND. This behavior 
> is new to Solr 5.5.0 and an unexpected major change.
> Example:
>   "q": "id:12345 OR zz",
>   "defType": "edismax",
>   "q.op": "AND",
> where "12345" is a known document ID and "zz" is a string NOT present 
> in my data
> Version 5.5.0 produces zero results:
> "rawquerystring": "id:12345 OR zz",
> "querystring": "id:12345 OR zz",
> "parsedquery": "(+((id:12345 
> DisjunctionMaxQuery((text:zz)))~2))/no_coord",
> "parsedquery_toString": "+((id:12345 (text:zz))~2)",
> "explain": {},
> "QParser": "ExtendedDismaxQParser"
> Version 5.4.0 produces one result as expected
>   "rawquerystring": "id:12345 OR zz",
> "querystring": "id:12345 OR zz",
> "parsedquery": "(+(id:12345 
> DisjunctionMaxQuery((text:zz))))/no_coord",
> "parsedquery_toString": "+(id:12345 (text:zz))"
> "explain": {},
> "QParser": "ExtendedDismaxQParser"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 18 - Still Failing!

2016-03-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/18/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.test

Error Message:
There are still nodes recoverying - waited for 45 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 45 
seconds
at 
__randomizedtesting.SeedInfo.seed([98AA4F704F6C7B13:10FE70AAE19016EB]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:173)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:856)
at 
org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.test(DistribDocExpirationUpdateProcessorTest.java:73)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:996)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:971)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_72) - Build # 16262 - Failure!

2016-03-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16262/
Java: 64bit/jdk1.8.0_72 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.MultiThreadedOCPTest.test

Error Message:
Task 3 did not complete, final state: FAILED expected same: was 
not:

Stack Trace:
java.lang.AssertionError: Task 3 did not complete, final state: FAILED expected 
same: was not:
at 
__randomizedtesting.SeedInfo.seed([82AA06C81FD21DE9:AFE3912B12E7011]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotSame(Assert.java:641)
at org.junit.Assert.assertSame(Assert.java:580)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.testParallelCollectionAPICalls(MultiThreadedOCPTest.java:97)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.test(MultiThreadedOCPTest.java:57)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:996)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:971)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_72) - Build # 5713 - Failure!

2016-03-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/5713/
Java: 64bit/jdk1.8.0_72 -XX:-UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.test

Error Message:
There are still nodes recoverying - waited for 45 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 45 
seconds
at 
__randomizedtesting.SeedInfo.seed([2E17A403D6536B7E:A6439BD978AF0686]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:173)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:856)
at 
org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.test(DistribDocExpirationUpdateProcessorTest.java:73)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:996)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:971)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3151 - Failure!

2016-03-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3151/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.test

Error Message:
There are still nodes recoverying - waited for 45 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 45 
seconds
at 
__randomizedtesting.SeedInfo.seed([DB8E302CE8F5819B:53DA0FF64609EC63]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:173)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:856)
at 
org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.test(DistribDocExpirationUpdateProcessorTest.java:73)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:996)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:971)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Updated] (LUCENE-7117) PointRangeQuery.hashCode is inconsistent

2016-03-19 Thread Nicholas Knize (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Knize updated LUCENE-7117:
---
Description: Like LUCENE-7085 {{PointRangeQuery.hashCode}} can produce 
different values for the same query.  (was: Like LUCENE-7085 
{PointRangeQuery.hashCode} can produce different values for the same query.)

> PointRangeQuery.hashCode is inconsistent
> 
>
> Key: LUCENE-7117
> URL: https://issues.apache.org/jira/browse/LUCENE-7117
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Nicholas Knize
>Priority: Blocker
> Fix For: master, 6.0
>
> Attachments: LUCENE-7117.patch
>
>
> Like LUCENE-7085 {{PointRangeQuery.hashCode}} can produce different values 
> for the same query.
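The report doesn't say what causes the inconsistency beyond "different values for the same query", but a common cause for a query holding {{byte[]}} bounds is hashing the arrays by identity instead of by content. A hypothetical sketch (not the actual PointRangeQuery code) showing the pitfall and the content-based fix:

```java
import java.util.Arrays;

public class ContentHashSketch {
    final byte[] lower;
    final byte[] upper;

    ContentHashSketch(byte[] lower, byte[] upper) {
        this.lower = lower;
        this.upper = upper;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof ContentHashSketch)) return false;
        ContentHashSketch r = (ContentHashSketch) o;
        return Arrays.equals(lower, r.lower) && Arrays.equals(upper, r.upper);
    }

    // Consistent: derived from array contents, so equal queries hash equally.
    @Override
    public int hashCode() {
        return 31 * Arrays.hashCode(lower) + Arrays.hashCode(upper);
    }

    // Broken variant: byte[].hashCode() is identity-based, so two equal
    // queries almost always report different values.
    int identityHash() {
        return 31 * lower.hashCode() + upper.hashCode();
    }

    public static void main(String[] args) {
        ContentHashSketch a = new ContentHashSketch(new byte[]{1, 2}, new byte[]{3, 4});
        ContentHashSketch b = new ContentHashSketch(new byte[]{1, 2}, new byte[]{3, 4});
        System.out.println(a.equals(b));                  // true
        System.out.println(a.hashCode() == b.hashCode()); // true (content-based)
    }
}
```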






[jira] [Updated] (SOLR-8082) can't query against negative float or double values when indexed="false" docValues="true" multiValued="false"

2016-03-19 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-8082:
-
Attachment: SOLR-8082.patch

Patch with a few minor cleanups:

* In {{TrieField.getRangeQueryForFloatDoubleDocValues()}}:
** Made constants for invariants (bits for infinities and zeros), in case the 
compiler isn't smart enough to do that.
** Added parens in a couple of expressions to improve legibility.
* In {{DocValuesTest}}:
** In {{testFloatAndDoubleRangeQueryRandom()}}:
*** Converted several {{fieldName\[i].equals("floatdv") ? ... : ...}} ternary 
operators to use (float,double) tuples (like the other values in this test), 
using lambdas.
** In {{testFloatAndDoubleRangeQuery()}}:
*** {{negativeInfinity\[1]}} fixed: Float->Double
* bq. +1 to fixing the issue for 6.0 with the current patch (except for the 
stale comment {{If min is negative (or -0d) and max is positive (or +0d), then 
issue two range queries}}, which was left over from an older patch).
** The patch fixes this too: {{s/two range queries/FunctionRangeQuery/}}

I'll commit tomorrow if no objections and nobody else gets to it first.
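Background on why negative float bounds are tricky here: IEEE-754 bit patterns, compared as signed ints, order negative values in reverse. The transform below mirrors Lucene's {{NumericUtils.floatToSortableInt}}, which restores numeric ordering so range endpoints compare correctly; treat it as an illustrative sketch of the bit trick, not the patch itself:

```java
public class SortableFloatBits {
    // Same bit transform as Lucene's NumericUtils.floatToSortableInt:
    // for negatives, flip the exponent/mantissa bits; positives unchanged.
    static int floatToSortableInt(float value) {
        int bits = Float.floatToIntBits(value);
        return bits ^ ((bits >> 31) & 0x7fffffff);
    }

    public static void main(String[] args) {
        // Raw bits misorder negatives: -4.3 compares ABOVE -1.0.
        System.out.println(Float.floatToIntBits(-4.3f) > Float.floatToIntBits(-1.0f)); // true
        // The sortable form restores -4.3 < -1.0 < 0.0 < 1.0.
        System.out.println(floatToSortableInt(-4.3f) < floatToSortableInt(-1.0f)); // true
        System.out.println(floatToSortableInt(-1.0f) < floatToSortableInt(0.0f)); // true
        System.out.println(floatToSortableInt(0.0f) < floatToSortableInt(1.0f));  // true
    }
}
```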

> can't query against negative float or double values when indexed="false" 
> docValues="true" multiValued="false"
> -
>
> Key: SOLR-8082
> URL: https://issues.apache.org/jira/browse/SOLR-8082
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Priority: Blocker
> Fix For: 6.0
>
> Attachments: SOLR-8082.patch, SOLR-8082.patch, SOLR-8082.patch, 
> SOLR-8082.patch, SOLR-8082.patch, SOLR-8082.patch, SOLR-8082.patch, 
> SOLR-8082.patch
>
>
> Haven't dug into this yet, but something is evidently wrong in how the 
> DocValues based queries get built for single valued float or double fields 
> when negative numbers are involved.
> Steps to reproduce...
> {noformat}
> $ bin/solr -e schemaless -noprompt
> ...
> $ curl -X POST -H 'Content-type:application/json' --data-binary '{ 
> "add-field":{ "name":"f_dv_multi", "type":"tfloat", "stored":"true", 
> "indexed":"false", "docValues":"true", "multiValued":"true" }, "add-field":{ 
> "name":"f_dv_single", "type":"tfloat", "stored":"true", "indexed":"false", 
> "docValues":"true", "multiValued":"false" } }' 
> http://localhost:8983/solr/gettingstarted/schema
> {
>   "responseHeader":{
> "status":0,
> "QTime":84}}
> $ curl -X POST -H 'Content-type:application/json' --data-binary 
> '[{"id":"test", "f_dv_multi":-4.3, "f_dv_single":-4.3}]' 
> 'http://localhost:8983/solr/gettingstarted/update/json/docs?commit=true'
> {"responseHeader":{"status":0,"QTime":57}}
> $ curl 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_multi:"-4.3;'
> {
>   "responseHeader":{
> "status":0,
> "QTime":5,
> "params":{
>   "q":"f_dv_multi:\"-4.3\""}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"test",
> "f_dv_multi":[-4.3],
> "f_dv_single":-4.3,
> "_version_":1512962117004689408}]
>   }}
> $ curl 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_single:"-4.3;'
> {
>   "responseHeader":{
> "status":0,
> "QTime":5,
> "params":{
>   "q":"f_dv_single:\"-4.3\""}},
>   "response":{"numFound":0,"start":0,"docs":[]
>   }}
> {noformat}
> Explicit range queries (which is how numeric "field" queries are implemented 
> under the cover) are equally problematic...
> {noformat}
> $ curl 
> 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_multi:%5B-4.3+TO+-4.3%5D'
> {
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"f_dv_multi:[-4.3 TO -4.3]"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"test",
> "f_dv_multi":[-4.3],
> "f_dv_single":-4.3,
> "_version_":1512962117004689408}]
>   }}
> $ curl 
> 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_single:%5B-4.3+TO+-4.3%5D'
> {
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"f_dv_single:[-4.3 TO -4.3]"}},
>   "response":{"numFound":0,"start":0,"docs":[]
>   }}
> {noformat}






Re: Welcome Kevin Risden as Lucene/Solr committer

2016-03-19 Thread Michael McCandless
Welcome Kevin!

Mike McCandless

http://blog.mikemccandless.com


On Wed, Mar 16, 2016 at 1:03 PM, David Smiley  wrote:
> Welcome Kevin!
>
> (corrected misspelling of your last name in the subject)
>
> On Wed, Mar 16, 2016 at 1:02 PM Joel Bernstein  wrote:
>>
>> I'm pleased to announce that Kevin Risden has accepted the PMC's
>> invitation to become a committer.
>>
>> Kevin, it's tradition that you introduce yourself with a brief bio.
>>
>> I believe your account has been setup and karma has been granted so that
>> you can add yourself to the committers section of the Who We Are page on the
>> website:
>> .
>>
>> Congratulations and welcome!
>>
>>
>> Joel Bernstein
>>
> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com




[jira] [Updated] (SOLR-8082) can't query against negative float or double values when indexed="false" docValues="true" multiValued="false"

2016-03-19 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-8082:
---
Priority: Blocker  (was: Major)

> can't query against negative float or double values when indexed="false" 
> docValues="true" multiValued="false"
> -
>
> Key: SOLR-8082
> URL: https://issues.apache.org/jira/browse/SOLR-8082
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Priority: Blocker
> Fix For: 6.0
>
> Attachments: SOLR-8082.patch, SOLR-8082.patch, SOLR-8082.patch, 
> SOLR-8082.patch, SOLR-8082.patch, SOLR-8082.patch, SOLR-8082.patch
>
>
> Haven't dug into this yet, but something is evidently wrong in how the 
> DocValues based queries get built for single valued float or double fields 
> when negative numbers are involved.
> Steps to reproduce...
> {noformat}
> $ bin/solr -e schemaless -noprompt
> ...
> $ curl -X POST -H 'Content-type:application/json' --data-binary '{ 
> "add-field":{ "name":"f_dv_multi", "type":"tfloat", "stored":"true", 
> "indexed":"false", "docValues":"true", "multiValued":"true" }, "add-field":{ 
> "name":"f_dv_single", "type":"tfloat", "stored":"true", "indexed":"false", 
> "docValues":"true", "multiValued":"false" } }' 
> http://localhost:8983/solr/gettingstarted/schema
> {
>   "responseHeader":{
> "status":0,
> "QTime":84}}
> $ curl -X POST -H 'Content-type:application/json' --data-binary 
> '[{"id":"test", "f_dv_multi":-4.3, "f_dv_single":-4.3}]' 
> 'http://localhost:8983/solr/gettingstarted/update/json/docs?commit=true'
> {"responseHeader":{"status":0,"QTime":57}}
> $ curl 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_multi:"-4.3;'
> {
>   "responseHeader":{
> "status":0,
> "QTime":5,
> "params":{
>   "q":"f_dv_multi:\"-4.3\""}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"test",
> "f_dv_multi":[-4.3],
> "f_dv_single":-4.3,
> "_version_":1512962117004689408}]
>   }}
> $ curl 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_single:"-4.3;'
> {
>   "responseHeader":{
> "status":0,
> "QTime":5,
> "params":{
>   "q":"f_dv_single:\"-4.3\""}},
>   "response":{"numFound":0,"start":0,"docs":[]
>   }}
> {noformat}
> Explicit range queries (which is how numeric "field" queries are implemented 
> under the cover) are equally problematic...
> {noformat}
> $ curl 
> 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_multi:%5B-4.3+TO+-4.3%5D'
> {
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"f_dv_multi:[-4.3 TO -4.3]"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"test",
> "f_dv_multi":[-4.3],
> "f_dv_single":-4.3,
> "_version_":1512962117004689408}]
>   }}
> $ curl 
> 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_single:%5B-4.3+TO+-4.3%5D'
> {
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"f_dv_single:[-4.3 TO -4.3]"}},
>   "response":{"numFound":0,"start":0,"docs":[]
>   }}
> {noformat}






[jira] [Updated] (SOLR-8626) [ANGULAR] 404 error when clicking nodes in cloud graph view

2016-03-19 Thread Trey Grainger (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trey Grainger updated SOLR-8626:

Attachment: SOLR-8626.patch

Attached a patch which fixes this issue. The issue existed in both the flat 
graph view and the radial view. Additionally, clicking the link for a node 
while in the radial view would switch back to the flat graph view when 
navigating to the other node; fixed that as well, so the user's current view 
type is now preserved in the URL when navigating between nodes.

> [ANGULAR] 404 error when clicking nodes in cloud graph view
> ---
>
> Key: SOLR-8626
> URL: https://issues.apache.org/jira/browse/SOLR-8626
> Project: Solr
>  Issue Type: Bug
>  Components: UI
>Reporter: Jan Høydahl
>Assignee: Upayavira
> Attachments: SOLR-8626.patch
>
>
> h3. Reproduce:
> # {{bin/solr start -c}}
> # {{bin/solr create -c mycoll}}
> # Goto http://localhost:8983/solr/#/~cloud
> # Click a collection name in the graph -> 404 error. URL: 
> {{/solr/mycoll/#/~cloud}}
> # Click a shard name in the graph -> 404 error. URL: {{/solr/shard1/#/~cloud}}
> Only verified in Trunk, but probably exists in 5.4 as well






[jira] [Commented] (SOLR-7339) Upgrade Jetty from 9.2 to 9.3

2016-03-19 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15197500#comment-15197500
 ] 

Shalin Shekhar Mangar commented on SOLR-7339:
-

Okay I see. In that case, can you set this as resolved so it is not a blocker 
anymore?

> Upgrade Jetty from 9.2 to 9.3
> -
>
> Key: SOLR-7339
> URL: https://issues.apache.org/jira/browse/SOLR-7339
> Project: Solr
>  Issue Type: Improvement
>Reporter: Gregg Donovan
>Assignee: Mark Miller
>Priority: Blocker
> Fix For: master
>
> Attachments: SOLR-7339-revert.patch, SOLR-7339.patch, 
> SOLR-7339.patch, SOLR-7339.patch, 
> SolrExampleStreamingBinaryTest.testUpdateField-jetty92.pcapng, 
> SolrExampleStreamingBinaryTest.testUpdateField-jetty93.pcapng
>
>
> Jetty 9.3 offers support for HTTP/2. Interest in HTTP/2 or its predecessor 
> SPDY was shown in [SOLR-6699|https://issues.apache.org/jira/browse/SOLR-6699] 
> and [on the mailing list|http://markmail.org/message/jyhcmwexn65gbdsx].
> Among the HTTP/2 benefits over HTTP/1.1 relevant to Solr are:
> * multiplexing requests over a single TCP connection ("streams")
> * canceling a single request without closing the TCP connection
> * removing [head-of-line 
> blocking|https://http2.github.io/faq/#why-is-http2-multiplexed]
> * header compression
> Caveats:
> * Jetty 9.3 is at M2, not released.
> * Full Solr support for HTTP/2 would require more work than just upgrading 
> Jetty. The server configuration would need to change and a new HTTP client 
> ([Jetty's own 
> client|https://github.com/eclipse/jetty.project/tree/master/jetty-http2], 
> [Square's OkHttp|http://square.github.io/okhttp/], 
> [etc.|https://github.com/http2/http2-spec/wiki/Implementations]) would need 
> to be selected and wired up. Perhaps this is worthy of a branch?






[jira] [Commented] (SOLR-8765) Enforce required parameters in SolrJ Collection APIs

2016-03-19 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15200349#comment-15200349
 ] 

David Smiley commented on SOLR-8765:


I think this issue introduced a possible problem, very likely unintended as 
it's easy to overlook.  This is a new convenience method we are to use (or a 
like-kind constructor):
{code:java}
  public static Create createCollection(String collection, String config, int 
numShards, int numReplicas) {
{code}
Notice that numShards is a primitive int, while Create.numShards is a boxed 
Integer.  The setNumShards method is deprecated, so I'll overlook that, since 
I'm not supposed to call it.  So how am I supposed to use this for the 
implicit router, where my intent is to manage the shards myself without 
setting numShards?  Perhaps we should have a separate convenience method and 
constructor expressly for the implicit router?
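One way the concern above could be resolved is to keep numShards a nullable Integer internally and offer a separate factory for the implicit router, where shards are named rather than counted. A hypothetical API sketch (not actual SolrJ code; all names here are illustrative):

```java
import java.util.Arrays;
import java.util.List;

public class CreateSketch {
    final String collection;
    final String config;
    final Integer numShards;   // null => unset, as the implicit router needs
    final List<String> shards; // explicit shard names, or null

    private CreateSketch(String collection, String config,
                         Integer numShards, List<String> shards) {
        this.collection = collection;
        this.config = config;
        this.numShards = numShards;
        this.shards = shards;
    }

    // compositeId router: a shard count is required, so primitive int is fine.
    static CreateSketch createCollection(String collection, String config, int numShards) {
        return new CreateSketch(collection, config, numShards, null);
    }

    // implicit router: the caller names the shards; numShards stays unset.
    static CreateSketch createCollectionWithImplicitRouter(String collection, String config, String... shards) {
        return new CreateSketch(collection, config, null, Arrays.asList(shards));
    }

    public static void main(String[] args) {
        CreateSketch implicit = createCollectionWithImplicitRouter("c1", "conf", "shardA", "shardB");
        System.out.println(implicit.numShards == null); // true
        System.out.println(implicit.shards.size());     // 2
    }
}
```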

> Enforce required parameters in SolrJ Collection APIs
> 
>
> Key: SOLR-8765
> URL: https://issues.apache.org/jira/browse/SOLR-8765
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Fix For: 6.1
>
> Attachments: SOLR-8765-splitshard.patch, SOLR-8765-splitshard.patch, 
> SOLR-8765.patch, SOLR-8765.patch
>
>
> Several Collection API commands have required parameters.  We should make 
> these constructor parameters, to enforce setting these in the API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: Welcome Kevin Risden as Lucene/Solr committer

2016-03-19 Thread Martin Gainty
¡Bienvenidos Kevin!
Martín 
Date: Wed, 16 Mar 2016 14:36:32 -0700
Subject: Re: Welcome Kevin Risden as Lucene/Solr committer
From: tomasflo...@gmail.com
To: dev@lucene.apache.org

Welcome Kevin!

On Wed, Mar 16, 2016 at 1:23 PM, Kevin Risden  wrote:
Thanks for the warm welcome. It's an honor to be invited to work on
this project and with so many great people.

Bio:
I graduated from Rose-Hulman Institute of Technology in 2012. My
undergrad revolved around software development, software testing, and
robotics. In early 2013, I joined Avalon Consulting, LLC, moved down
to Austin, TX, and first started using Solr. The focus at the time was
to use Solr as an analytics engine to power charts/graphs. From 2013
on, I worked a lot on Hadoop and Solr integrations with a continued
focus on analytics. Providing training and education are two areas
that I am really passionate about. In addition to my regular work, I
have been improving the SolrJ JDBC driver to enable more analytics use
cases.

Kevin Risden

On Wed, Mar 16, 2016 at 12:55 PM, Anshum Gupta  wrote:
> Congratulations and Welcome Kevin!
>
> On Wed, Mar 16, 2016 at 10:03 AM, David Smiley 
> wrote:
>>
>> Welcome Kevin!
>>
>> (corrected misspelling of your last name in the subject)
>>
>> On Wed, Mar 16, 2016 at 1:02 PM Joel Bernstein  wrote:
>>>
>>> I'm pleased to announce that Kevin Risden has accepted the PMC's
>>> invitation to become a committer.
>>>
>>> Kevin, it's tradition that you introduce yourself with a brief bio.
>>>
>>> I believe your account has been set up and karma has been granted so that
>>> you can add yourself to the committers section of the Who We Are page on the
>>> website:
>>> .
>>>
>>> Congratulations and welcome!
>>>
>>> Joel Bernstein
>>>
>> --
>> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
>> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
>> http://www.solrenterprisesearchserver.com
>
> --
> Anshum Gupta

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Updated] (LUCENE-7016) Solr/Lucene 5.4.1: FastVectorHighlighter still fails with StringIndexOutOfBoundsException

2016-03-19 Thread Markus Jelsma (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Markus Jelsma updated LUCENE-7016:
--
Attachment: SOLR-4137.patch

Here's a modified patch. It was never incorporated in Lucene. It applies to 5.5.

> Solr/Lucene 5.4.1: FastVectorHighlighter still fails with 
> StringIndexOutOfBoundsException
> -
>
> Key: LUCENE-7016
> URL: https://issues.apache.org/jira/browse/LUCENE-7016
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Affects Versions: 5.4.1
> Environment: OS X 10.10.5
>Reporter: Bjørn Hjelle
>Priority: Minor
>  Labels: fastvectorhighlighter
> Attachments: SOLR-4137.patch
>
>
> I have reported issues with highlighting of EdgeNGram fields in SOLR-7926. 
> As a workaround I now try to use an NGramField and the FastVectorHighlighter, 
> but I often hit the FastVectorHighlighter 
> StringIndexOutOfBoundsException-issue.
> Note that I use luceneMatchVersion="4.3". Without this, the whole term is 
> highlighted, not just the search-term, as I reported in SOLR-7926.
> Any help with this would be highly appreciated! (Or tips on how otherwise to 
> achieve proper highlighting of EdgeNGram and NGram-fields.)
> The issue can easily be reproduced by following these steps: 
> Download and start Solr 5.4.1, create a core:
> -
> $ wget http://apache.uib.no/lucene/solr/5.4.1/solr-5.4.1.tgz
> $ tar xvf solr-5.4.1.tgz
> $ cd solr-5.4.1
> $ bin/solr start -f 
> $ bin/solr create_core -c test -d server/solr/configsets/basic_configs
> (in a second terminal window)
> Add dynamic field and fieldtype to server/solr/test/conf/schema.xml:
> -
>  stored="true" termVectors="true" termPositions="true" termOffsets="true"/>
>   
>   
>   
>   
>maxGramSize="20" luceneMatchVersion="4.3"/>
>   
>   
>   
>   
>   
>   
> 
> Replace existing /select requestHandler in 
> server/solr/test/conf/solrconfig.xml with:
> -
> 
>
>  explicit
>  10
>  name_ngram
>  100%
>  edismax
>  
>   name_ngram
>  
>  *
>  true
>  name_ngram 
>  true
>
>   
>   
> Stop and restart Solr
> ---  
>   
> Create and index this document: 
> --  
> $ more doc.xml 
> 
>   
> 1
> Jan-Ole Pedersen
>   
> 
> $ bin/post -c test doc.xml 
> Execute search: 
> $ curl "http://localhost:8983/solr/test/select?q=jan+ol&wt=json&indent=true"
> {
>   "responseHeader":{
> "status":500,
> "QTime":3,
> "params":{
>   "q":"jan ol",
>   "indent":"true",
>   "wt":"json"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"1",
> "name_ngram":"Jan-Ole Pedersen",
> "_version_":1525256012582354944}]
>   },
>   "error":{
> "msg":"String index out of range: -6",
> "trace":"java.lang.StringIndexOutOfBoundsException: String index out of 
> range: -6\n\tat java.lang.String.substring(String.java:1954)\n\tat 
> org.apache.lucene.search.vectorhighlight.BaseFragmentsBuilder.makeFragment(BaseFragmentsBuilder.java:180)\n\tat
>  
> org.apache.lucene.search.vectorhighlight.BaseFragmentsBuilder.createFragments(BaseFragmentsBuilder.java:145)\n\tat
>  
> org.apache.lucene.search.vectorhighlight.FastVectorHighlighter.getBestFragments(FastVectorHighlighter.java:187)\n\tat
>  
> org.apache.solr.highlight.DefaultSolrHighlighter.doHighlightingByFastVectorHighlighter(DefaultSolrHighlighter.java:479)\n\tat
>  
> org.apache.solr.highlight.DefaultSolrHighlighter.doHighlighting(DefaultSolrHighlighter.java:426)\n\tat
>  
> org.apache.solr.handler.component.HighlightComponent.process(HighlightComponent.java:143)\n\tat
>  
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:273)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)\n\tat
>  org.apache.solr.core.SolrCore.execute(SolrCore.java:2073)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:658)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:457)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:223)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)\n\tat
>  
> 

[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+109) - Build # 16250 - Failure!

2016-03-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16250/
Java: 64bit/jdk-9-ea+109 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.handler.TestReqParamsAPI.test

Error Message:
Could not get expected value  'P val' for path 'response/params/y/p' full 
output: {   "responseHeader":{ "status":0, "QTime":0},   "response":{   
  "znodeVersion":2, "params":{   "x":{ "a":"A val", 
"b":"B val", "":{"v":0}},   "y":{ "c":"CY val modified",
 "b":"BY val", "i":20, "d":[   "val 1",   
"val 2"], "e":"EY val", "":{"v":1}

Stack Trace:
java.lang.AssertionError: Could not get expected value  'P val' for path 
'response/params/y/p' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "response":{
"znodeVersion":2,
"params":{
  "x":{
"a":"A val",
"b":"B val",
"":{"v":0}},
  "y":{
"c":"CY val modified",
"b":"BY val",
"i":20,
"d":[
  "val 1",
  "val 2"],
"e":"EY val",
"":{"v":1}
at 
__randomizedtesting.SeedInfo.seed([69933EC24BE2386A:E1C70118E51E5592]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:458)
at 
org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:221)
at 
org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:67)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:520)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:996)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:971)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)

[JENKINS] Lucene-Solr-NightlyTests-master - Build # 962 - Still Failing

2016-03-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/962/

1 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestReplicateAfterCoreReload

Error Message:
expected:<[{indexVersion=1458142313332,generation=2,filelist=[_0.cfe, _0.cfs, 
_0.si, _1.cfe, _1.cfs, _1.si, _2.cfe, _2.cfs, _2.si, _3.cfe, _3.cfs, _3.si, 
_4.cfe, _4.cfs, _4.si, _5.cfe, _5.cfs, _5.si, segments_2]}]> but 
was:<[{indexVersion=1458142313332,generation=2,filelist=[_0.cfe, _0.cfs, _0.si, 
_1.cfe, _1.cfs, _1.si, _2.cfe, _2.cfs, _2.si, _3.cfe, _3.cfs, _3.si, _4.cfe, 
_4.cfs, _4.si, _5.cfe, _5.cfs, _5.si, segments_2]}, 
{indexVersion=1458142313332,generation=3,filelist=[_3.cfe, _3.cfs, _3.si, 
_6.cfe, _6.cfs, _6.si, segments_3]}]>

Stack Trace:
java.lang.AssertionError: 
expected:<[{indexVersion=1458142313332,generation=2,filelist=[_0.cfe, _0.cfs, 
_0.si, _1.cfe, _1.cfs, _1.si, _2.cfe, _2.cfs, _2.si, _3.cfe, _3.cfs, _3.si, 
_4.cfe, _4.cfs, _4.si, _5.cfe, _5.cfs, _5.si, segments_2]}]> but 
was:<[{indexVersion=1458142313332,generation=2,filelist=[_0.cfe, _0.cfs, _0.si, 
_1.cfe, _1.cfs, _1.si, _2.cfe, _2.cfs, _2.si, _3.cfe, _3.cfs, _3.si, _4.cfe, 
_4.cfs, _4.si, _5.cfe, _5.cfs, _5.si, segments_2]}, 
{indexVersion=1458142313332,generation=3,filelist=[_3.cfe, _3.cfs, _3.si, 
_6.cfe, _6.cfs, _6.si, segments_3]}]>
at 
__randomizedtesting.SeedInfo.seed([F3C55E34BC995956:D6124504CCD15755]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.handler.TestReplicationHandler.doTestReplicateAfterCoreReload(TestReplicationHandler.java:1143)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Created] (SOLR-8863) zkcli: provide more granularity in config manipulation

2016-03-19 Thread Shawn Heisey (JIRA)
Shawn Heisey created SOLR-8863:
--

 Summary: zkcli: provide more granularity in config manipulation
 Key: SOLR-8863
 URL: https://issues.apache.org/jira/browse/SOLR-8863
 Project: Solr
  Issue Type: Improvement
  Components: scripts and tools, SolrCloud
Affects Versions: 5.5
Reporter: Shawn Heisey
Priority: Minor


I was thinking about what somebody has to do if they want to replace a single 
file in a specific SolrCloud configuration.  This and other operations could be 
easier with some tweaks to the zkcli program.

I'd like to have some options to do things like the following, and other 
combinations not specifically stated here:

 * Upload the file named solrconfig.xml to the 'foo' config.
 * Upload the file named solrconfig.xml to the config used by the 'bar' 
collection.
 * Download the file named stopwords.txt from the config used by the 'bar' 
collection.
 * Rename schema.xml to managed-schema in the 'foo' config.
 * Delete archaic_stopwords.txt from the config used by the 'bar' collection.

When a config is changed, it would be a good idea for the program to print out 
a list of all collections affected by the change.  I can imagine a 
"-interactive" option that asks "are you sure" after printing the affected 
collection list, and a "-dry-run" option to print out that information without 
actually doing anything.  An alternative to the interactive option -- have the 
program prompt by default and implement a "-force" option to do it without 
prompting.

I wonder whether it would be a good idea to include an option to reload all 
affected collections after a change is made.  The script uses WEB-INF/lib on 
the classpath, so SolrJ should be available.
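The operations above could look something like the following (purely hypothetical syntax sketched for illustration; none of these zkcli options exist today and any final flag names would be decided in Jira):

```
# Hypothetical zkcli invocations sketching the proposal:
zkcli -zkhost zk1:2181 -cmd putfile  -confname foo    -file solrconfig.xml
zkcli -zkhost zk1:2181 -cmd putfile  -collection bar  -file solrconfig.xml
zkcli -zkhost zk1:2181 -cmd getfile  -collection bar  -file stopwords.txt
zkcli -zkhost zk1:2181 -cmd rename   -confname foo    -from schema.xml -to managed-schema
zkcli -zkhost zk1:2181 -cmd rmfile   -collection bar  -file archaic_stopwords.txt -dry-run
```

The -collection form would resolve the linked config first, then print the full list of affected collections before (or instead of, with -dry-run) making the change.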




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7118) Remove multidimensional arrays from PointRangeQuery

2016-03-19 Thread Nicholas Knize (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15201966#comment-15201966
 ] 

Nicholas Knize commented on LUCENE-7118:


Nice! +1 for 6.0

> Remove multidimensional arrays from PointRangeQuery
> ---
>
> Key: LUCENE-7118
> URL: https://issues.apache.org/jira/browse/LUCENE-7118
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-7118.patch
>
>
> This use of byte[][] has caused two bugs: LUCENE-7085 and LUCENE-7117.
> It is not necessary, and causes code duplication in most Point classes 
> because they have to have a {{pack()}} that encodes to byte[] for the indexer 
> but a {{encode()}} or similar that makes multi-D byte[][] for just this query.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8765) Enforce required parameters in SolrJ Collection APIs

2016-03-19 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15199163#comment-15199163
 ] 

Alan Woodward commented on SOLR-8765:
-

I'll try and get to it tomorrow (UK time)

> Enforce required parameters in SolrJ Collection APIs
> 
>
> Key: SOLR-8765
> URL: https://issues.apache.org/jira/browse/SOLR-8765
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Fix For: 6.1
>
> Attachments: SOLR-8765-splitshard.patch, SOLR-8765-splitshard.patch, 
> SOLR-8765.patch, SOLR-8765.patch
>
>
> Several Collection API commands have required parameters.  We should make 
> these constructor parameters, to enforce setting these in the API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8819) Implement DatabaseMetaDataImpl getTables() and fix getSchemas()

2016-03-19 Thread Trey Cahill (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trey Cahill updated SOLR-8819:
--
Attachment: SOLR-8819.patch

> Implement DatabaseMetaDataImpl getTables() and fix getSchemas()
> ---
>
> Key: SOLR-8819
> URL: https://issues.apache.org/jira/browse/SOLR-8819
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: master, 6.0
>Reporter: Kevin Risden
> Attachments: SOLR-8819.patch, SOLR-8819.patch, SOLR-8819.patch, 
> SOLR-8819.patch, SOLR-8819.patch, SOLR-8819.patch, SOLR-8819.patch
>
>
> DbVisualizer NPE when clicking on DB References tab. After connecting, NPE if 
> double click on "DB" under connection name then click on References tab.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Splitting Solr artifacts so the main download is smaller

2016-03-19 Thread Shawn Heisey
I'd like to see some motion on this, which probably means I need to do
it myself.  I'd like to know who I can talk to about the build/packaging
system so I can find what needs to change, and especially so I don't
break it.

There's already a jira issue -- SOLR-6806, with some related bits in
SOLR-5103.

The Solr download for 5.5.0 is 130 or 138 megabytes, depending on what
OS you're going to install it on.  For the rest of this email, let's
focus on the .zip version (138MB), since my client is Windows and I'd
like to compare apples to apples.

We have a .zip download size of 138MB, which thankfully is down in size
since we completely dropped the war file.  That *other* search engine
based on Lucene has a .zip download size of 28MB.

I started fiddling with the download archive on my Windows machine,
pulling out obvious pieces at the root of the extracted archive, and
managed to get the .zip size down to 40MB.

If I dig further and remove the lucene-analyzers-kuromoji jar (over 4MB)
and the hadoop jars (10MB), which the majority of Solr's users will
*never* need, Solr 5.5's .zip file drops to 25MB.

I'm not suggesting that we just remove these pieces.  We would need to
have a main artifact and several supporting artifacts.  The total size
would be virtually the same, so the concerns in LUCENE-5589 and
LUCENE-6247 will not get worse.  They also won't get better.

There's plenty of opportunity for bikeshedding here, but that should be
done in Jira.  For this email, I'd like to know if anyone has strong
opposition to this, and if not, who would be willing to provide guidance
for how to do it right.

Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8842) security should use an API to expose the permission name instead of using HTTP params

2016-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15201170#comment-15201170
 ] 

ASF subversion and git services commented on SOLR-8842:
---

Commit faa0586b31d5644360646010ceaf530cbe227498 in lucene-solr's branch 
refs/heads/apiv2 from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=faa0586 ]

SOLR-8842: security rules made more foolproof by asking the requesthandler 
about the well-known permission name. The APIs are also modified to use 
'index' as the unique identifier instead of name. Name is an optional 
attribute now and is only to be used when specifying well-known permissions.


> security should use an API to expose the permission name instead of using 
> HTTP params
> -
>
> Key: SOLR-8842
> URL: https://issues.apache.org/jira/browse/SOLR-8842
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: security
> Fix For: master, 6.1
>
> Attachments: SOLR-8842.patch, SOLR-8842.patch
>
>
> Currently the well-known permissions use HTTP attributes, such as method, 
> uri, params etc., to identify the corresponding permission name, such as 
> 'read', 'update' etc. Expose this value through an API so that it can be 
> more accurate and handle various versions of the API.
> RequestHandlers will be able to implement an interface to provide the name
> {code}
> interface PermissionNameProvider {
>  Name getPermissionName(SolrQueryRequest req)
> }
> {code} 
> This means many significant changes to the API:
> 1) {{name}} no longer means a set of HTTP attributes. The name is decided by 
> the requesthandler, which means it's possible to use the same name across 
> different permissions.
> Examples:
> {code}
> {
> "permissions": [
> {//this permission applies to all collections
>   "name": "read",
>   "role": "dev"
> },
> {
>  
>  // this applies to only collection x. But both means you are hitting a 
> read type API
>   "name": "read",
>   "collection": "x",
>   "role": "x_dev"
> }
>   ]
> }
> {code} 
> 2) So far we have been using the name as something unique. We used the name 
> to do an {{update-permission}} or {{delete-permission}}, or even to insert a 
> permission before another permission. Going forward that is not possible: 
> every permission will get an implicit index. Example:
> {code}
> {
>   "permissions": [
> {
>   "name": "read",
>   "role": "dev",
>//this attribute is automatically assigned by the system
>   "index" : 1
> },
> {
>   "name": "read",
>   "collection": "x",
>   "role": "x_dev",
>   "index" : 2
> }
>   ]
> }
> {code}
> 3) Example update commands:
> {code}
> {
>   "set-permission" : {
> "index": 2,
> "name": "read",
> "collection" : "x",
> "role" :["xdev","admin"]
>   },
>   //this deletes the permission at index 2
>   "delete-permission" : 2,
>   //this will insert the command before the first item
>   "set-permission": {
> "name":"config-edit",
> "role":"admin",
> "before":1
>   }
> }
> {code}
> 4) You could construct a permission purely with HTTP attributes, and you 
> don't need any name for that. As expected, it will be appended at the end of 
> the list of permissions:
> {code}
> {
>   "set-permission": {
>  "collection": null,
>  "path":"/admin/collections",
>  "params":{"action":[LIST, CREATE]},
>  "role": "admin"}
> }
> {code}
> Users with existing configurations will not observe any change in behavior, 
> but the commands issued to manipulate the permissions will be different.
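A handler implementing the proposed interface might look like the following (a sketch only; SolrQueryRequest and Name are minimal stand-ins here, not the real Solr types, and SelectHandler is a hypothetical class):

```java
// Sketch of a request handler implementing the proposed
// PermissionNameProvider. SolrQueryRequest and Name are minimal
// stand-ins, not the real Solr types.
public class PermissionSketch {
    enum Name { READ, UPDATE, CONFIG_EDIT }

    static class SolrQueryRequest {
        final String path;
        SolrQueryRequest(String path) { this.path = path; }
    }

    interface PermissionNameProvider {
        Name getPermissionName(SolrQueryRequest req);
    }

    // The handler maps its own endpoints to well-known permission names,
    // instead of the framework guessing from raw HTTP attributes.
    static class SelectHandler implements PermissionNameProvider {
        public Name getPermissionName(SolrQueryRequest req) {
            return req.path.startsWith("/update") ? Name.UPDATE : Name.READ;
        }
    }

    public static void main(String[] args) {
        PermissionNameProvider h = new SelectHandler();
        System.out.println(h.getPermissionName(new SolrQueryRequest("/select")));
        System.out.println(h.getPermissionName(new SolrQueryRequest("/update/json")));
    }
}
```

Because the handler itself reports the name, two different handlers (or API versions) can map to the same 'read' permission without duplicating HTTP matching rules in the security config.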



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8082) can't query against negative float or double values when indexed="false" docValues="true" multiValued="false"

2016-03-19 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15199673#comment-15199673
 ] 

Yonik Seeley commented on SOLR-8082:


bq. Do you mean that instead of using the DocValuesRangeQuery.newLongRange(), 
we write something on our own which converts the longs back to floats/doubles 
and then compares those floats/doubles?

Yeah, that's one way.
But maybe we should go ahead and fix ValueSourceRangeFilter to not match 
documents w/o a value in the field.  It's arguably a bug (and only existed 
historically because we didn't have info about what fields had a value for 
numerics, and didn't have exists()) and 6.0 is the perfect time to make the 
change.

bq. Do you think the NumericUtils.doubleToSortableLong() is a good choice for 
converting float/double to longs, instead of Double.doubleToLongBits() which is 
currently used?

Both have their advantages... while sortable longs might be convenient when 
operating in the "long" space, it would slow things down when converting back 
to a double.  
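The ordering pitfall behind this can be shown directly. The sortableDoubleBits logic below mirrors Lucene's NumericUtils encoding, reimplemented here as a self-contained sketch:

```java
// Demonstrates why raw Double.doubleToLongBits ordering breaks for
// negative values, and how a sortable encoding fixes it.
public class SortableDoubleDemo {
    // For negative doubles, flip every bit except the sign bit so the
    // resulting longs sort in the same order as the original doubles
    // (mirrors Lucene's NumericUtils.sortableDoubleBits).
    static long sortableDoubleBits(long bits) {
        return bits ^ (bits >> 63) & 0x7fffffffffffffffL;
    }

    public static void main(String[] args) {
        double a = -4.3, b = -2.0;  // a < b
        long rawA = Double.doubleToLongBits(a);
        long rawB = Double.doubleToLongBits(b);
        // Among negatives, a larger magnitude yields a larger raw bit
        // pattern, so the raw signed-long ordering is reversed.
        System.out.println("raw bits preserve order: " + (rawA < rawB));
        long sortA = sortableDoubleBits(rawA);
        long sortB = sortableDoubleBits(rawB);
        System.out.println("sortable bits preserve order: " + (sortA < sortB));
    }
}
```

This is the trade-off mentioned above: sortable longs make range comparisons trivial in the long space, while raw bits are cheaper when the value must be decoded back to a double.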


> can't query against negative float or double values when indexed="false" 
> docValues="true" multiValued="false"
> -
>
> Key: SOLR-8082
> URL: https://issues.apache.org/jira/browse/SOLR-8082
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
> Attachments: SOLR-8082.patch, SOLR-8082.patch, SOLR-8082.patch, 
> SOLR-8082.patch, SOLR-8082.patch, SOLR-8082.patch
>
>
> Haven't dug into this yet, but something is evidently wrong in how the 
> DocValues based queries get build for single valued float or double fields 
> when negative numbers are involved.
> Steps to reproduce...
> {noformat}
> $ bin/solr -e schemaless -noprompt
> ...
> $ curl -X POST -H 'Content-type:application/json' --data-binary '{ 
> "add-field":{ "name":"f_dv_multi", "type":"tfloat", "stored":"true", 
> "indexed":"false", "docValues":"true", "multiValued":"true" }, "add-field":{ 
> "name":"f_dv_single", "type":"tfloat", "stored":"true", "indexed":"false", 
> "docValues":"true", "multiValued":"false" } }' 
> http://localhost:8983/solr/gettingstarted/schema
> {
>   "responseHeader":{
> "status":0,
> "QTime":84}}
> $ curl -X POST -H 'Content-type:application/json' --data-binary 
> '[{"id":"test", "f_dv_multi":-4.3, "f_dv_single":-4.3}]' 
> 'http://localhost:8983/solr/gettingstarted/update/json/docs?commit=true'
> {"responseHeader":{"status":0,"QTime":57}}
> $ curl 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_multi:"-4.3"'
> {
>   "responseHeader":{
> "status":0,
> "QTime":5,
> "params":{
>   "q":"f_dv_multi:\"-4.3\""}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"test",
> "f_dv_multi":[-4.3],
> "f_dv_single":-4.3,
> "_version_":1512962117004689408}]
>   }}
> $ curl 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_single:"-4.3"'
> {
>   "responseHeader":{
> "status":0,
> "QTime":5,
> "params":{
>   "q":"f_dv_single:\"-4.3\""}},
>   "response":{"numFound":0,"start":0,"docs":[]
>   }}
> {noformat}
> Explicit range queries (which is how numeric "field" queries are implemented 
> under the cover) are equally problematic...
> {noformat}
> $ curl 
> 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_multi:%5B-4.3+TO+-4.3%5D'
> {
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"f_dv_multi:[-4.3 TO -4.3]"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"test",
> "f_dv_multi":[-4.3],
> "f_dv_single":-4.3,
> "_version_":1512962117004689408}]
>   }}
> $ curl 
> 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_single:%5B-4.3+TO+-4.3%5D'
> {
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"f_dv_single:[-4.3 TO -4.3]"}},
>   "response":{"numFound":0,"start":0,"docs":[]
>   }}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7110) Add Shape Support to BKD (extend to an R*/X-Tree data structure)

2016-03-19 Thread Nicholas Knize (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Knize updated LUCENE-7110:
---
Description: 
I've been tinkering with this off and on for a while and it's showing some 
promise, so I'm going to open an issue to (eventually) add this feature in 
either a 6.x or (more likely) a 7.x release.

R*/X-Tree is a data structure designed to support Shapes (2D, 3D, nD) where, 
as with internal nodes, the key for each leaf node is the Minimum Bounding 
Range (MBR - sometimes "incorrectly" referred to as Minimum Bounding Rectangle) 
of the shape. Inserting a shape then boils down to finding the best way to 
optimize the tree structure. This optimization is driven by a set of criteria 
for choosing the appropriate internal key (e.g., minimizing overlap between 
siblings, maximizing "squareness", minimizing area, maximizing space usage). 
Querying is then (a bit oversimplified) a two-phase process:
* recurse into each branch whose MBR overlaps the MBR of the query shape
* compute the relation with the leaf node(s) - in higher dimensions (3+) this 
becomes an increasingly difficult computational geometry problem
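The two-phase process above can be sketched in a few lines. This is a minimal illustration with hypothetical names (Mbr, Node), not the proposed Lucene API; phase 2 is reduced to collecting candidate leaves for the exact geometry check:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the two-phase R-tree style query described above;
// names are illustrative, not the proposed Lucene API.
public class MbrQuerySketch {
    /** Minimum Bounding Range in 2D. */
    static class Mbr {
        final double minX, minY, maxX, maxY;
        Mbr(double minX, double minY, double maxX, double maxY) {
            this.minX = minX; this.minY = minY; this.maxX = maxX; this.maxY = maxY;
        }
        /** Phase 1 test: does this MBR overlap the query MBR? */
        boolean overlaps(Mbr o) {
            return minX <= o.maxX && o.minX <= maxX && minY <= o.maxY && o.minY <= maxY;
        }
    }

    static class Node {
        final Mbr key;                          // MBR of everything below this node
        final List<Node> children = new ArrayList<>();
        final boolean leaf;
        Node(Mbr key, boolean leaf) { this.key = key; this.leaf = leaf; }
    }

    /** Recurse into branches whose MBR overlaps the query; collect candidate
     *  leaves for the (expensive) exact phase-2 geometry relation check. */
    static void query(Node node, Mbr queryMbr, List<Node> candidates) {
        if (!node.key.overlaps(queryMbr)) return;
        if (node.leaf) { candidates.add(node); return; }
        for (Node child : node.children) query(child, queryMbr, candidates);
    }
}
```

The point of the sketch: phase 1 is cheap interval arithmetic per dimension, while phase 2 (omitted) is where the hard computational geometry lives in 3+ dimensions.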

The current BKD implementation is a special, simplified case of an R*/X-Tree 
where, for Point data, it is guaranteed there will be no overlap between 
sibling nodes (because the point data itself is used as the keys). By 
exploiting this property the tree algorithms (split, merge, etc.) are 
relatively cheap (hence their performance boost over postings-based numerics). 
By modifying the key data and extending the tree-generation algorithms, the 
BKD logic can be extended to support Shape data, using the MBR as the key and 
adapting split and merge to the criteria needed for optimizing a shape-based 
data structure.

The initial implementation (given limitations of the GeoAPI) will support 2D 
shapes only. Once the GeoAPI can performantly handle 3D shapes, adding the 
third dimension to the tree-generation code is relatively trivial.

Like everything else, this feature will be created in sandbox and, once mature, 
will graduate to lucene-spatial.

  was:
I've been tinkering with this off and on for a while and its showing some 
promise so I'm going to open an issue to (eventually) add this feature to 
either a 6.x or (more likely) a 7.x release.

R*/X-Tree is a data structure designed to support Shapes (2D, 3D, nD) where, 
like the internal node, the key for each leaf node is the Minimum Bounding 
Range (MBR - sometimes "incorrectly" referred to as Minimum Bounding Rectangle) 
of the shape. Inserting a shape then boils down to the best way of optimizing 
the tree structure. This optimization is driven by a set of criteria for 
choosing the appropriate internal key (e.g., minimizing overlap between 
siblings, maximizing "squareness", minimizing area, maximizing space usage). 
Query is then (a bit oversimplified) a two-phase process:
* recurse each branch that overlaps with the MBR of the query shape
* compute the relation with the leaf node(s) - in higher dimensions (3+) this 
becomes an increasingly difficult computational geometry problem
The current BKD implementation is a special simplified case of an R*/X tree 
where, for Point data, it is always guaranteed there will be no overlap between 
sibling nodes (because you're using the point data as the keys). By exploiting 
this property the tree algorithms (split, merge, etc) are relatively cheap 
(hence their performance boost over postings based numerics). By modifying the 
key data, and extending the tree generation algorithms BKD logic can be 
extended to support Shape data using the MBR as the Key and modifying split and 
merge based on the criteria needed for optimizing a shape-based data structure.

The initial implementation (based on limitations of the GeoAPI) will support 2D 
shapes only. Once the GeoAPI can performantly handle 3D shapes the change is 
relatively trivial to add the third dimension to the tree generation code.

Like everything else, this feature will be created in sandbox and, once mature, 
will graduate to lucene-spatial.


> Add Shape Support to BKD (extend to an R*/X-Tree data structure)
> 
>
> Key: LUCENE-7110
> URL: https://issues.apache.org/jira/browse/LUCENE-7110
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Nicholas Knize
>
> I've been tinkering with this off and on for a while and its showing some 
> promise so I'm going to open an issue to (eventually) add this feature to 
> either a 6.x or (more likely) a 7.x release.
> R*/X-Tree is a data structure designed to support Shapes (2D, 3D, nD) where, 
> like the internal node, the key for each leaf node is the Minimum Bounding 
> Range (MBR - sometimes "incorrectly" referred to as Minimum Bounding 
> Rectangle) of 

[jira] [Commented] (SOLR-4509) Disable HttpClient stale check for performance.

2016-03-19 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15200999#comment-15200999
 ] 

Mark Miller commented on SOLR-4509:
---

Actually this snuck in even before SSL; it was added in 4.0. It was quietly 
added, but internal features did not count on it or use it. SSL only used it 
for test purposes later on. Security has bear-hugged it, though - it's how you 
configure security now, and it's part of a user plugin API. That's a bummer, 
given the old deprecated HttpClient classes involved and the extra pain of 
moving to preconfiguration. We should try to minimize exposing internal client 
APIs as part of our APIs to users. It really locks us in.
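For background on what the stale check costs: HttpClient's old stale check read-probed the pooled socket before every reuse, while later HttpClient versions (4.4+) replaced it with a validate-after-inactivity policy that only re-checks connections that have sat idle past a threshold. A toy stdlib model of that policy decision (illustrative only, not HttpClient code):

```java
// Toy model of a pooled-connection reuse policy; illustrative only,
// not Apache HttpClient code.
public class ReusePolicySketch {
    /**
     * Old stale-check behavior is equivalent to a threshold of 0:
     * validate (probe the socket) before EVERY reuse, paying the cost
     * on every request. The replacement validates only connections that
     * have been idle at least validateAfterInactivityMillis.
     */
    static boolean needsValidation(long idleMillis, long validateAfterInactivityMillis) {
        return idleMillis >= validateAfterInactivityMillis;
    }
}
```

With a sensible threshold (e.g. a few seconds), a hot connection pool skips the per-request probe entirely, which is where the throughput gain in this issue comes from.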

> Disable HttpClient stale check for performance.
> ---
>
> Key: SOLR-4509
> URL: https://issues.apache.org/jira/browse/SOLR-4509
> Project: Solr
>  Issue Type: Improvement
>  Components: search
> Environment: 5 node SmartOS cluster (all nodes living in same global 
> zone - i.e. same physical machine)
>Reporter: Ryan Zezeski
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 5.0, master
>
> Attachments: IsStaleTime.java, SOLR-4509-4_4_0.patch, 
> SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, 
> SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, 
> SOLR-4509.patch, baremetal-stale-nostale-med-latency.dat, 
> baremetal-stale-nostale-med-latency.svg, 
> baremetal-stale-nostale-throughput.dat, baremetal-stale-nostale-throughput.svg
>
>
> By disabling the Apache HTTP Client stale check I've witnessed a 2-4x 
> increase in throughput and a latency reduction of over 100ms.  This patch was 
> made in the context of a project I'm leading, called Yokozuna, which relies on 
> distributed search.
> Here's the patch on Yokozuna: https://github.com/rzezeski/yokozuna/pull/26
> Here's a write-up I did on my findings: 
> http://www.zinascii.com/2013/solr-distributed-search-and-the-stale-check.html
> I'm happy to answer any questions or make changes to the patch to make it 
> acceptable.
> ReviewBoard: https://reviews.apache.org/r/28393/






[jira] [Updated] (SOLR-8858) SolrIndexSearcher#doc() Completely Ignores Field Filters Unless Lazy Field Loading is Enabled

2016-03-19 Thread Caleb Rackliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caleb Rackliffe updated SOLR-8858:
--
 Flags: Patch
External issue URL: https://github.com/apache/lucene-solr/pull/21

I've posted a PR that fixes this in what I'm hoping is a reasonable way. I 
imagine the impact will mostly fall on custom {{StoredFieldsReader}} 
implementations.

> SolrIndexSearcher#doc() Completely Ignores Field Filters Unless Lazy Field 
> Loading is Enabled
> -
>
> Key: SOLR-8858
> URL: https://issues.apache.org/jira/browse/SOLR-8858
> Project: Solr
>  Issue Type: Bug
>Reporter: Caleb Rackliffe
>  Labels: easyfix
> Fix For: 5.5.1
>
>
> If {{enableLazyFieldLoading=false}}, a perfectly valid fields filter will be 
> ignored, and we'll create a {{DocumentStoredFieldVisitor}} without it.
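The decision the visitor makes can be shown without the Lucene classes. Below is a plain-Java stand-in for the field-selection logic of a stored-field visitor (illustrative only, not Lucene's DocumentStoredFieldVisitor), where a null filter means "load all stored fields":

```java
import java.util.Set;

// Sketch of the field-filtering decision a stored-field visitor makes;
// a plain-Java stand-in for Lucene's DocumentStoredFieldVisitor, illustrative only.
public class FieldFilterSketch {
    private final Set<String> fieldsToLoad;   // null => load every stored field

    FieldFilterSketch(Set<String> fieldsToLoad) {
        this.fieldsToLoad = fieldsToLoad;
    }

    /** The bug: with lazy loading disabled, the filter was dropped when the
     *  visitor was constructed, so this effectively always returned true.
     *  The fix keeps the filter in play regardless of the lazy-loading flag. */
    boolean needsField(String fieldName) {
        return fieldsToLoad == null || fieldsToLoad.contains(fieldName);
    }
}
```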






[jira] [Updated] (SOLR-8082) can't query against negative float or double values when indexed="false" docValues="true" multiValued="false"

2016-03-19 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-8082:
---
Attachment: SOLR-8082.patch

Updating the patch. This contains a randomized test, which I am currently 
beasting.
This depends on a patch for a bug I found during testing this, LUCENE-7111.

If the beasting goes fine, I think this fix behaves correctly. But I'm still 
not sure it is the best fix to have, since there is possibly another 
alternative (which I'll look into after this): writing the longs in sortable 
order itself.
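For context on the "sortable order" alternative: raw IEEE-754 bit patterns order negative floats backwards when compared as signed integers, which is exactly the kind of encoding mismatch that makes negative docValues unsearchable. A self-contained version of the sortable-bits transform (the same idea as Lucene's NumericUtils.floatToSortableInt) that demonstrates the problem and the fix:

```java
// Self-contained version of the sortable-bits idea behind Lucene's
// NumericUtils.floatToSortableInt: negative floats get their magnitude
// bits flipped so that signed-int order matches numeric float order.
public class SortableFloatSketch {
    static int floatToSortableInt(float value) {
        int bits = Float.floatToIntBits(value);
        // For negatives (sign bit set), flip the lower 31 bits;
        // positives are left unchanged.
        return bits ^ ((bits >> 31) & 0x7fffffff);
    }
}
```

Any query or encoding path that compares the raw bits directly will mis-order negative values; comparing the sortable form restores correct numeric order.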

> can't query against negative float or double values when indexed="false" 
> docValues="true" multiValued="false"
> -
>
> Key: SOLR-8082
> URL: https://issues.apache.org/jira/browse/SOLR-8082
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
> Attachments: SOLR-8082.patch, SOLR-8082.patch, SOLR-8082.patch, 
> SOLR-8082.patch, SOLR-8082.patch, SOLR-8082.patch
>
>
> Haven't dug into this yet, but something is evidently wrong in how the 
> DocValues based queries get build for single valued float or double fields 
> when negative numbers are involved.
> Steps to reproduce...
> {noformat}
> $ bin/solr -e schemaless -noprompt
> ...
> $ curl -X POST -H 'Content-type:application/json' --data-binary '{ 
> "add-field":{ "name":"f_dv_multi", "type":"tfloat", "stored":"true", 
> "indexed":"false", "docValues":"true", "multiValued":"true" }, "add-field":{ 
> "name":"f_dv_single", "type":"tfloat", "stored":"true", "indexed":"false", 
> "docValues":"true", "multiValued":"false" } }' 
> http://localhost:8983/solr/gettingstarted/schema
> {
>   "responseHeader":{
> "status":0,
> "QTime":84}}
> $ curl -X POST -H 'Content-type:application/json' --data-binary 
> '[{"id":"test", "f_dv_multi":-4.3, "f_dv_single":-4.3}]' 
> 'http://localhost:8983/solr/gettingstarted/update/json/docs?commit=true'
> {"responseHeader":{"status":0,"QTime":57}}
> $ curl 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_multi:"-4.3;'
> {
>   "responseHeader":{
> "status":0,
> "QTime":5,
> "params":{
>   "q":"f_dv_multi:\"-4.3\""}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"test",
> "f_dv_multi":[-4.3],
> "f_dv_single":-4.3,
> "_version_":1512962117004689408}]
>   }}
> $ curl 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_single:"-4.3;'
> {
>   "responseHeader":{
> "status":0,
> "QTime":5,
> "params":{
>   "q":"f_dv_single:\"-4.3\""}},
>   "response":{"numFound":0,"start":0,"docs":[]
>   }}
> {noformat}
> Explicit range queries (which is how numeric "field" queries are implemented 
> under the covers) are equally problematic...
> {noformat}
> $ curl 
> 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_multi:%5B-4.3+TO+-4.3%5D'
> {
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"f_dv_multi:[-4.3 TO -4.3]"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"test",
> "f_dv_multi":[-4.3],
> "f_dv_single":-4.3,
> "_version_":1512962117004689408}]
>   }}
> $ curl 
> 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_single:%5B-4.3+TO+-4.3%5D'
> {
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"f_dv_single:[-4.3 TO -4.3]"}},
>   "response":{"numFound":0,"start":0,"docs":[]
>   }}
> {noformat}






[jira] [Commented] (SOLR-8814) Support GeoJSON response format

2016-03-19 Thread Ryan McKinley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15197720#comment-15197720
 ] 

Ryan McKinley commented on SOLR-8814:
-

bq. In the test, it appears System.setProperty("enable.update.log", "false"); 
// schema12 doesn't support version is not needed since you don't use schema12

fixed -- thanks

bq. I suggest initializing the HashMap of the built-in transformers with the 
no-arg constructor (TransformerFactory.java), and same thing for the response 
writers (SolrCore.java). It's not worth it trying to optimize & maintain 
anything else. I realize you didn't introduce these but I suggest ending it now.

Let's open another issue if you care about this... I don't know enough to say, 
and I don't want that discussion to get lost in this issue

bq. Personally I'd find it far easier to interpret the test if I was looking at 
the JSON string or toString'ed Map or whatever it is, versus the laborious 
extraction of each part of the data structure. If you disagree, leave it.

I think the tests have a good mix of this -- some test with strings and others 
check the direct element (where parsing is important)

bq. GeoTransformerFactory.java doesn't compile for me; it references 
GeoJSONResponseWriter.FIELD which doesn't exist. The patch file itself seemed 
strange; seemed like a list of commits and not one patch. Maybe this is related.

Sorry, my git patch was weird.  It was the 'patch' flavor, not the 'diff' flavor


> Support GeoJSON response format
> ---
>
> Key: SOLR-8814
> URL: https://issues.apache.org/jira/browse/SOLR-8814
> Project: Solr
>  Issue Type: New Feature
>  Components: Response Writers
>Reporter: Ryan McKinley
>Priority: Minor
> Fix For: master, 6.1
>
> Attachments: SOLR-8814-add-GeoJSONResponseWriter.patch, 
> SOLR-8814-add-GeoJSONResponseWriter.patch, 
> SOLR-8814-add-GeoJSONResponseWriter.patch
>
>
> With minor changes, we can modify the existing JSON writer to produce a 
> GeoJSON `FeatureCollection` for every SolrDocumentList.  We can then pick a 
> field to use as the geometry, and use that for the Feature#geometry
> {code}
> "response":{"type":"FeatureCollection","numFound":1,"start":0,"features":[
>   {"type":"Feature",
> "geometry":{"type":"Point","coordinates":[1,2]},
> "properties":{
>   ... the normal solr doc fields here ...}}]
>   }}
> {code}
> This will allow adding solr results directly to various mapping clients like 
> [Leaflet|http://leafletjs.com/]
> 
> This patch will work with documents that have a spatial field that either:
> 1. extends AbstractSpatialFieldType
> 2. has a stored value with GeoJSON
> 3. has a stored value that can be parsed by spatial4j (WKT, etc.)
> The spatial field is identified with the parameter `geojson.field`
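Assembling the response shape quoted above is mostly string plumbing. A minimal sketch of wrapping one document as a GeoJSON Feature with a Point geometry (illustrative only, not the API from the attached patch):

```java
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative-only sketch of wrapping one document as a GeoJSON Feature,
// mirroring the response shape shown above; not the code from the patch.
public class GeoJsonSketch {
    static String feature(double lon, double lat, Map<String, String> props) {
        // "the normal solr doc fields here" become the Feature's properties
        String properties = props.entrySet().stream()
            .map(e -> "\"" + e.getKey() + "\":\"" + e.getValue() + "\"")
            .collect(Collectors.joining(","));
        return "{\"type\":\"Feature\","
            + "\"geometry\":{\"type\":\"Point\",\"coordinates\":[" + lon + "," + lat + "]},"
            + "\"properties\":{" + properties + "}}";
    }
}
```

A real writer would of course delegate JSON escaping and geometry serialization to the existing response-writer machinery rather than concatenating strings.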






[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+109) - Build # 151 - Failure!

2016-03-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/151/
Java: 64bit/jdk-9-ea+109 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.UnloadDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=6201, 
name=testExecutor-3395-thread-12, state=RUNNABLE, 
group=TGRP-UnloadDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=6201, name=testExecutor-3395-thread-12, 
state=RUNNABLE, group=TGRP-UnloadDistributedZkTest]
Caused by: java.lang.RuntimeException: 
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:38149/pvgiu/i
at __randomizedtesting.SeedInfo.seed([83186880356751AC]:0)
at 
org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCores$0(BasicDistributedZkTest.java:583)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1158)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
at java.lang.Thread.run(Thread.java:804)
Caused by: org.apache.solr.client.solrj.SolrServerException: Timeout occured 
while waiting response from server at: http://127.0.0.1:38149/pvgiu/i
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:588)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCores$0(BasicDistributedZkTest.java:581)
... 4 more
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
at 
org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
at 
org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
at 
org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:685)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:487)
at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:482)
... 8 more




Build Log:
[...truncated 11345 lines...]
   [junit4] Suite: org.apache.solr.cloud.UnloadDistributedZkTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.UnloadDistributedZkTest_83186880356751AC-001/init-core-data-001
   [junit4]   2> 716724 INFO  
(SUITE-UnloadDistributedZkTest-seed#[83186880356751AC]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: 
/pvgiu/i
   [junit4]   2> 716725 INFO  
(TEST-UnloadDistributedZkTest.test-seed#[83186880356751AC]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 716725 INFO  (Thread-2038) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 716725 INFO  (Thread-2038) [] 

[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_72) - Build # 16233 - Failure!

2016-03-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16233/
Java: 32bit/jdk1.8.0_72 -client -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.handler.TestReplicationHandler.doTestStressReplication

Error Message:
timed out waiting for collection1 startAt time to exceed: Thu Mar 17 07:42:23 
GMT 2016

Stack Trace:
java.lang.AssertionError: timed out waiting for collection1 startAt time to 
exceed: Thu Mar 17 07:42:23 GMT 2016
at 
__randomizedtesting.SeedInfo.seed([E7047B72BBDEF85:D5DB47712E958636]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.handler.TestReplicationHandler.watchCoreStartAt(TestReplicationHandler.java:1422)
at 
org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:774)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.handler.TestReqParamsAPI.test

Error Message:
Could not get expected value  'first' for 

[jira] [Updated] (SOLR-8858) SolrIndexSearcher#doc() Completely Ignores Field Filters Unless Lazy Field Loading is Enabled

2016-03-19 Thread Caleb Rackliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caleb Rackliffe updated SOLR-8858:
--
Affects Version/s: 4.10
   5.5

> SolrIndexSearcher#doc() Completely Ignores Field Filters Unless Lazy Field 
> Loading is Enabled
> -
>
> Key: SOLR-8858
> URL: https://issues.apache.org/jira/browse/SOLR-8858
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.6, 4.10, 5.5
>Reporter: Caleb Rackliffe
>  Labels: easyfix
> Fix For: 5.5.1
>
>
> If {{enableLazyFieldLoading=false}}, a perfectly valid fields filter will be 
> ignored, and we'll create a {{DocumentStoredFieldVisitor}} without it.






[jira] [Comment Edited] (SOLR-8862) /live_nodes is populated too early to be very useful for clients -- CloudSolrClient (and MiniSolrCloudCluster.createCollection) need some other ephemeral zk node to

2016-03-19 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15200233#comment-15200233
 ] 

Noble Paul edited comment on SOLR-8862 at 3/17/16 7:46 PM:
---

bq.ZkController.checkOverseerDesignate() is called (no idea what that does)

I probably should add a comment there. If an overseer designate is down and 
comes back up, it should be pushed ahead of non-designates. So it sends a 
message to the overseer to put it at the front of the overseer election queue


was (Author: noble.paul):
bq.ZkController.checkOverseerDesignate() is called (no idea what that does)

I probaly should add a comment there. If an overseer designate is down and 
comes back up, it should be pushed ahead of non designates . So it sends a 
message to overseer to put it in the front of the overseer election queue

> /live_nodes is populated too early to be very useful for clients -- 
> CloudSolrClient (and MiniSolrCloudCluster.createCollection) need some other 
> ephemeral zk node to know which servers are "ready"
> --
>
> Key: SOLR-8862
> URL: https://issues.apache.org/jira/browse/SOLR-8862
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>
> {{/live_nodes}} is populated surprisingly early (and multiple times) in the 
> life cycle of a Solr node's startup, and as a result probably shouldn't be 
> used by {{CloudSolrClient}} (or other "smart" clients) for deciding what 
> servers are fair game for requests.
> we should either fix {{/live_nodes}} to be created later in the lifecycle, or 
> add some new ZK node for this purpose.
> {panel:title=original bug report}
> I haven't been able to make sense of this yet, but what I'm seeing in a new 
> SolrCloudTestCase subclass I'm writing is that the code below, which 
> (reasonably) attempts to create a collection immediately after configuring 
> the MiniSolrCloudCluster, gets a "SolrServerException: No live SolrServers 
> available to handle this request" -- in spite of the fact that (as far as I 
> can tell at first glance) MiniSolrCloudCluster's constructor is supposed to 
> block until all the servers are live.
> {code}
> configureCluster(numServers)
>   .addConfig(configName, configDir.toPath())
>   .configure();
> Map collectionProperties = ...;
> assertNotNull(cluster.createCollection(COLLECTION_NAME, numShards, 
> repFactor,
>configName, null, null, 
> collectionProperties));
> {code}
> {panel}






[jira] [Commented] (SOLR-8742) HdfsDirectoryTest fails reliably after changes in LUCENE-6932

2016-03-19 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15197562#comment-15197562
 ] 

Mark Miller commented on SOLR-8742:
---

Does this still fail for you? It does not seem to reproduce for me.

> HdfsDirectoryTest fails reliably after changes in LUCENE-6932
> -
>
> Key: SOLR-8742
> URL: https://issues.apache.org/jira/browse/SOLR-8742
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>
> the following seed fails reliably for me on master...
> {noformat}
>[junit4]   2> 1370568 INFO  
> (TEST-HdfsDirectoryTest.testEOF-seed#[A0D22782D87E1CE2]) [] 
> o.a.s.SolrTestCaseJ4 ###Ending testEOF
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=HdfsDirectoryTest 
> -Dtests.method=testEOF -Dtests.seed=A0D22782D87E1CE2 -Dtests.slow=true 
> -Dtests.locale=es-PR -Dtests.timezone=Indian/Mauritius -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] ERROR   0.13s J0 | HdfsDirectoryTest.testEOF <<<
>[junit4]> Throwable #1: java.lang.NullPointerException
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([A0D22782D87E1CE2:31B9658A9A5ABA9E]:0)
>[junit4]>  at 
> org.apache.lucene.store.RAMInputStream.readByte(RAMInputStream.java:69)
>[junit4]>  at 
> org.apache.solr.store.hdfs.HdfsDirectoryTest.testEof(HdfsDirectoryTest.java:159)
>[junit4]>  at 
> org.apache.solr.store.hdfs.HdfsDirectoryTest.testEOF(HdfsDirectoryTest.java:151)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
> {noformat}
> git bisect says this is the first commit where it started failing..
> {noformat}
> ddc65d977f920013c5fca16c8ac75ae2c6895f9d is the first bad commit
> commit ddc65d977f920013c5fca16c8ac75ae2c6895f9d
> Author: Michael McCandless 
> Date:   Thu Jan 21 17:50:28 2016 +
> LUCENE-6932: RAMInputStream now throws EOFException if you seek beyond 
> the end of the file
> 
> git-svn-id: https://svn.apache.org/repos/asf/lucene/dev/trunk@1726039 
> 13f79535-47bb-0310-9956-ffa450edef68
> {noformat}
> ...which seems remarkably relevant and likely indicates a problem that 
> needs to be fixed in the HdfsDirectory code (or perhaps just the test)






[jira] [Commented] (SOLR-8812) ExtendedDismaxQParser (edismax) ignores Boolean OR when q.op=AND

2016-03-19 Thread Ryan Josal (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15201525#comment-15201525
 ] 

Ryan Josal commented on SOLR-8812:
--

On the topic of SOLR-2649: I just upgraded to 5.5 yesterday, and SOLR-2649 
broke one of our test cases, in which "hair ties -barbie" should return hair 
ties but not barbie hair ties; now it matches nothing.  I assume this is 
intended, but if not, maybe this ticket also addresses it?

> ExtendedDismaxQParser (edismax) ignores Boolean OR when q.op=AND
> 
>
> Key: SOLR-8812
> URL: https://issues.apache.org/jira/browse/SOLR-8812
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers
>Affects Versions: 5.5
>Reporter: Ryan Steinberg
>Priority: Blocker
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8812.patch
>
>
> The edismax parser ignores Boolean OR in queries when q.op=AND. This 
> behavior is new in Solr 5.5.0 and is an unexpected major change.
> Example:
>   "q": "id:12345 OR zz",
>   "defType": "edismax",
>   "q.op": "AND",
> where "12345" is a known document ID and "zz" is a string NOT present 
> in my data
> Version 5.5.0 produces zero results:
> "rawquerystring": "id:12345 OR zz",
> "querystring": "id:12345 OR zz",
> "parsedquery": "(+((id:12345 
> DisjunctionMaxQuery((text:zz)))~2))/no_coord",
> "parsedquery_toString": "+((id:12345 (text:zz))~2)",
> "explain": {},
> "QParser": "ExtendedDismaxQParser"
> Version 5.4.0 produces one result as expected
>   "rawquerystring": "id:12345 OR zz",
> "querystring": "id:12345 OR zz",
> "parsedquery": "(+(id:12345 
> DisjunctionMaxQuery((text:zz))))/no_coord",
> "parsedquery_toString": "+(id:12345 (text:zz))",
> "explain": {},
> "QParser": "ExtendedDismaxQParser"






[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+109) - Build # 16242 - Still Failing!

2016-03-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16242/
Java: 64bit/jdk-9-ea+109 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandler

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [NRTCachingDirectory]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [NRTCachingDirectory]
at __randomizedtesting.SeedInfo.seed([F7B3108D2AB1FA9C]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:238)
at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:520)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:804)




Build Log:
[...truncated 11557 lines...]
   [junit4] Suite: org.apache.solr.handler.TestReplicationHandler
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_F7B3108D2AB1FA9C-001/init-core-data-001
   [junit4]   2> 950255 INFO  
(TEST-TestReplicationHandler.doTestStopPoll-seed#[F7B3108D2AB1FA9C]) [] 
o.a.s.SolrTestCaseJ4 ###Starting doTestStopPoll
   [junit4]   2> 950255 INFO  
(TEST-TestReplicationHandler.doTestStopPoll-seed#[F7B3108D2AB1FA9C]) [] 
o.a.s.SolrTestCaseJ4 Writing core.properties file to 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_F7B3108D2AB1FA9C-001/solr-instance-001/collection1
   [junit4]   2> 950257 INFO  
(TEST-TestReplicationHandler.doTestStopPoll-seed#[F7B3108D2AB1FA9C]) [] 
o.e.j.s.Server jetty-9.3.8.v20160314
   [junit4]   2> 950258 INFO  
(TEST-TestReplicationHandler.doTestStopPoll-seed#[F7B3108D2AB1FA9C]) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@14de41d1{/solr,null,AVAILABLE}
   [junit4]   2> 950260 INFO  
(TEST-TestReplicationHandler.doTestStopPoll-seed#[F7B3108D2AB1FA9C]) [] 
o.e.j.s.ServerConnector Started 
ServerConnector@62d7faa8{HTTP/1.1,[http/1.1]}{127.0.0.1:41997}
   [junit4]   2> 950260 INFO  
(TEST-TestReplicationHandler.doTestStopPoll-seed#[F7B3108D2AB1FA9C]) [] 
o.e.j.s.Server Started @952336ms
   [junit4]   2> 950260 INFO  
(TEST-TestReplicationHandler.doTestStopPoll-seed#[F7B3108D2AB1FA9C]) [] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: 
{solr.data.dir=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_F7B3108D2AB1FA9C-001/solr-instance-001/collection1/data,
 hostContext=/solr, hostPort=41997}
   [junit4]   2> 950260 INFO  

Re: Welcome Kevin Risden as Lucene/Solr committer

2016-03-19 Thread Alan Woodward
Welcome Kevin!

Alan Woodward
www.flax.co.uk


On 17 Mar 2016, at 09:27, Jan Høydahl wrote:

> Welcome Kevin!
> 
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
> 
>> 16. mar. 2016 kl. 18.03 skrev David Smiley :
>> 
>> Welcome Kevin!
>> 
>> (corrected misspelling of your last name in the subject)
>> 
>> On Wed, Mar 16, 2016 at 1:02 PM Joel Bernstein  wrote:
>> I'm pleased to announce that Kevin Risden has accepted the PMC's invitation 
>> to become a committer.
>> 
>> Kevin, it's tradition that you introduce yourself with a brief bio.
>> 
>> I believe your account has been setup and karma has been granted so that you 
>> can add yourself to the committers section of the Who We Are page on the 
>> website:
>> .
>> 
>> Congratulations and welcome!
>> 
>> 
>> Joel Bernstein
>> 
>> -- 
>> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
>> LinkedIn: http://linkedin.com/in/davidwsmiley | Book: 
>> http://www.solrenterprisesearchserver.com
> 



[jira] [Commented] (SOLR-8867) frange / ValueSourceRangeFilter / FunctionValues.getRangeScorer should not match documents w/o a value

2016-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15200367#comment-15200367
 ] 

ASF subversion and git services commented on SOLR-8867:
---

Commit c195395d34fb28711b99e4552602dcea729a718b in lucene-solr's branch 
refs/heads/branch_6x from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c195395 ]

SOLR-8867: fix frange/FunctionValues.getRangeScorer to not match missing 
values, getRangeScorer refactored to take LeafReaderContext


> frange / ValueSourceRangeFilter / FunctionValues.getRangeScorer should not 
> match documents w/o a value
> --
>
> Key: SOLR-8867
> URL: https://issues.apache.org/jira/browse/SOLR-8867
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
> Fix For: 6.0
>
> Attachments: SOLR-8867.patch, SOLR-8867.patch
>
>
> {!frange} currently can match documents w/o a value (because of a default 
> value of 0).
> This only existed historically because we didn't have info about what fields 
> had a value for numerics, and didn't have exists() on FunctionValues.
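A minimal sketch of the fixed matching rule (plain Java, not the actual FunctionValues/ValueSourceRangeFilter API; the arrays stand in for a segment's per-document values and its exists() information):

```java
import java.util.ArrayList;
import java.util.List;

public class RangeMatchSketch {
    // Illustrative stand-in for frange matching over one segment:
    // values[doc] is the doc's value, hasValue[doc] says whether it has one.
    static int[] matching(float[] values, boolean[] hasValue, float min, float max) {
        List<Integer> hits = new ArrayList<>();
        for (int doc = 0; doc < values.length; doc++) {
            // The fix: consult exists() first, so a document without a value
            // can no longer match via the default value of 0.
            if (!hasValue[doc]) continue;
            float v = values[doc];
            if (v >= min && v <= max) hits.add(doc);
        }
        return hits.stream().mapToInt(Integer::intValue).toArray();
    }

    public static void main(String[] args) {
        float[] values   = {0.0f, 5.0f, 0.0f};      // doc 2 has no real value
        boolean[] exists = {true, true, false};
        int[] hits = matching(values, exists, -1.0f, 1.0f);
        System.out.println(java.util.Arrays.toString(hits)); // [0] -- doc 2 excluded
    }
}
```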






[jira] [Created] (SOLR-8873) Enforce dataDir/instanceDir/ulogDir to be paths that contain only a controlled subset of characters

2016-03-19 Thread JIRA
Tomás Fernández Löbbe created SOLR-8873:
---

 Summary: Enforce dataDir/instanceDir/ulogDir to be paths that 
contain only a controlled subset of characters
 Key: SOLR-8873
 URL: https://issues.apache.org/jira/browse/SOLR-8873
 Project: Solr
  Issue Type: Improvement
Reporter: Tomás Fernández Löbbe


We currently support any valid path for dataDir/instanceDir/ulogDir. I think we 
should prevent special characters and restrict to a subset that is commonly 
used and tested.
My initial proposal is to allow the Java pattern: {code:java}"^[a-zA-Z0-9\\.\\ 
\\-_/\"':]+$"{code} but I'm open to suggestions. I'm not sure if there can 
be issues with HDFS paths (this pattern does pass the tests we currently have), 
or some other use case I'm not considering.
I also think our tests should use all those characters randomly. 
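As a rough illustration, the proposed pattern compiles and behaves as follows in plain Java (the class and method names here are hypothetical, not part of Solr):

```java
import java.util.regex.Pattern;

public class DataDirValidation {
    // Hypothetical validator for the character subset proposed above.
    private static final Pattern ALLOWED =
        Pattern.compile("^[a-zA-Z0-9\\.\\ \\-_/\"':]+$");

    static boolean isAllowedPath(String path) {
        return ALLOWED.matcher(path).matches();
    }

    public static void main(String[] args) {
        // Common paths pass, including an HDFS-style URI:
        System.out.println(isAllowedPath("/var/solr/data"));         // true
        System.out.println(isAllowedPath("hdfs://nn:8020/solr"));    // true
        // Shell metacharacters are rejected:
        System.out.println(isAllowedPath("/var/solr/$(rm -rf x)"));  // false
    }
}
```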







[jira] [Commented] (SOLR-8838) Returning non-stored docValues is incorrect for floats and doubles

2016-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15198521#comment-15198521
 ] 

ASF subversion and git services commented on SOLR-8838:
---

Commit 44f9569d32a6b84126a91e39ddc598c374adeaab in lucene-solr's branch 
refs/heads/branch_5_5 from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=44f9569 ]

SOLR-8838: Returning non-stored docValues is incorrect for negative floats and 
doubles.


> Returning non-stored docValues is incorrect for floats and doubles
> --
>
> Key: SOLR-8838
> URL: https://issues.apache.org/jira/browse/SOLR-8838
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.5
>Reporter: Ishan Chattopadhyaya
>Assignee: Steve Rowe
>Priority: Blocker
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8838.patch, SOLR-8838.patch, SOLR-8838.patch
>
>
> In SOLR-8220, we introduced returning non-stored docValues as if they were 
> regular stored fields. The handling of doubles and floats, as introduced 
> there, was incorrect for negative values.
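A plausible minimal reconstruction of the failure mode, under the assumption that the docValues hold Lucene's sortable bit representation (the transform below mirrors NumericUtils.sortableFloatBits): decoding the stored bits with Float.intBitsToFloat alone happens to round-trip positive values, but mangles negative ones, because the sortable transform flips the non-sign bits of negatives and must be undone first.

```java
public class SortableFloatBits {
    // Same formula as Lucene's NumericUtils.sortableFloatBits: for negative
    // values, flip all bits except the sign bit so raw ints sort like floats.
    // The transform is its own inverse; positive values pass through unchanged.
    static int sortableFloatBits(int bits) {
        return bits ^ (bits >> 31) & 0x7fffffff;
    }

    public static void main(String[] args) {
        float f = -4.3f;
        int stored = sortableFloatBits(Float.floatToIntBits(f)); // what docValues hold

        // Buggy decode: treat the stored bits as plain IEEE bits.
        float buggy = Float.intBitsToFloat(stored);
        // Correct decode: undo the sortable transform first.
        float fixed = Float.intBitsToFloat(sortableFloatBits(stored));

        System.out.println(buggy == f); // false -- negative value comes back wrong
        System.out.println(fixed == f); // true
    }
}
```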






[jira] [Updated] (SOLR-8877) SolrCLI.java and corresponding test does not work with whitespace in path

2016-03-19 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated SOLR-8877:

Description: 
The SolrCLI and the corresponding test use CommandLine.parse() of commons-exec, 
but in most cases the parameters are not correctly escaped.

CommandLine.parse() should be placed on the forbidden-apis list. This is *not* a 
valid way to build a command line and execute it. The correct way is to create 
an instance of the CommandLine class and then add the arguments one by one:

{code:java}
  org.apache.commons.exec.CommandLine startCmd = new 
org.apache.commons.exec.CommandLine(callScript);
  startCmd.addArguments(new String[] {
  "start",
  cloudModeArg,
  "-p",
  Integer.toString(port),
  "-s",
  solrHome,
  hostArg,
  zkHostArg,
  memArg,
  extraArgs,
  addlOptsArg
  });
{code}

I tried to fix it using this approach, but the test then fails with other 
bugs on Windows. For now I disabled it when it detects whitespace in Solr's path. 
I think the reason might be that some of the above args are empty or are 
themselves multiple arguments, so they get wrongly escaped.

I have no idea how to fix it, but the current way fails completely on Windows, 
where most users have whitespace in their home directory or in the 
"C:\Program Files" folder.

  was:
The SolrCLI and the corresponding test use CommandLine.parse() of commons-exec, 
but in most cases the parameters are not correctly escaped.

CommandLine.parse() should be placed on the forbidden-apis list. This is *not* a 
valid way to build a command line and execute it. The correct way is to create 
an instance of the CommandLine class and then add the arguments one by one:

{code:java}
  org.apache.commons.exec.CommandLine startCmd = new 
org.apache.commons.exec.CommandLine(callScript);
  startCmd.addArguments(new String[] {
  "start",
  callScript,
  "-p",
  Integer.toString(port),
  "-s",
  solrHome,
  hostArg,
  zkHostArg,
  memArg,
  extraArgs,
  addlOptsArg
  });
{code}

I tried to fix it using this approach, but the test then fails with other 
bugs on Windows. For now I disabled it when it detects whitespace in Solr's path. 
I think the reason might be that some of the above args are empty or are 
themselves multiple arguments, so they get wrongly escaped.

I have no idea how to fix it, but the current way fails completely on Windows, 
where most users have whitespace in their home directory or in the 
"C:\Program Files" folder.


> SolrCLI.java and corresponding test does not work with whitespace in path
> -
>
> Key: SOLR-8877
> URL: https://issues.apache.org/jira/browse/SOLR-8877
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 5.5, 6.0
>Reporter: Uwe Schindler
> Attachments: SOLR-8877.patch
>
>
> The SolrCLI and the corresponding test use CommandLine.parse() of 
> commons-exec, but in most cases the parameters are not correctly escaped.
> CommandLine.parse() should be placed on the forbidden-apis list. This is *not* a 
> valid way to build a command line and execute it. The correct way is to 
> create an instance of the CommandLine class and then add the arguments one 
> by one:
> {code:java}
>   org.apache.commons.exec.CommandLine startCmd = new 
> org.apache.commons.exec.CommandLine(callScript);
>   startCmd.addArguments(new String[] {
>   "start",
>   cloudModeArg,
>   "-p",
>   Integer.toString(port),
>   "-s",
>   solrHome,
>   hostArg,
>   zkHostArg,
>   memArg,
>   extraArgs,
>   addlOptsArg
>   });
> {code}
> I tried to fix it using this approach, but the test then fails with other 
> bugs on Windows. For now I disabled it when it detects whitespace in Solr's 
> path. I think the reason might be that some of the above args are empty or 
> are themselves multiple arguments, so they get wrongly escaped.
> I have no idea how to fix it, but the current way fails completely on 
> Windows, where most users have whitespace in their home directory or in the 
> "C:\Program Files" folder.
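The whitespace failure mode can be illustrated without commons-exec (a hypothetical plain-Java sketch; CommandLine.parse() does quote-aware tokenization, but an unquoted path containing a space breaks the same way):

```java
import java.util.Arrays;

public class WhitespaceSplitDemo {
    // Naive whitespace tokenization, standing in for parsing a flat command string.
    static String[] naiveSplit(String cmd) {
        return cmd.split("\\s+");
    }

    public static void main(String[] args) {
        String solrHome = "C:\\Program Files\\solr";

        // Parsing one flat string breaks the path at the space:
        String[] naive = naiveSplit("solr.cmd start -s " + solrHome);
        System.out.println(naive.length); // 5 -- the path became two tokens
        System.out.println(naive[3]);     // C:\Program

        // Passing arguments as discrete strings keeps each one intact,
        // which is what adding them one by one to a CommandLine achieves:
        String[] explicit = {"solr.cmd", "start", "-s", solrHome};
        System.out.println(explicit[3].equals(solrHome)); // true
        System.out.println(Arrays.toString(explicit));
    }
}
```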






[jira] [Commented] (SOLR-4221) Custom sharding

2016-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15198700#comment-15198700
 ] 

ASF subversion and git services commented on SOLR-4221:
---

Commit ae846bfb492fd91e30daac017c6587083e278236 in lucene-solr's branch 
refs/heads/master from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ae846bf ]

SOLR-8860: Remove back-compat handling of router format made in SOLR-4221 in 
4.5.0


> Custom sharding
> ---
>
> Key: SOLR-4221
> URL: https://issues.apache.org/jira/browse/SOLR-4221
> Project: Solr
>  Issue Type: New Feature
>Reporter: Yonik Seeley
>Assignee: Noble Paul
> Fix For: 4.5, master
>
> Attachments: SOLR-4221.patch, SOLR-4221.patch, SOLR-4221.patch, 
> SOLR-4221.patch, SOLR-4221.patch, SOLR-4221.patch, SOLR-4221.patch, 
> SOLR-4221.patch
>
>
> Features to let users control everything about sharding/routing.






[jira] [Commented] (SOLR-8082) can't query against negative float or double values when indexed="false" docValues="true" multiValued="false"

2016-03-19 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15201502#comment-15201502
 ] 

Ishan Chattopadhyaya commented on SOLR-8082:


Thanks Yonik, the patch looks good. The test is passing for me.
Do you think we should rely on FunctionRangeQuery for the entire number line, 
or should we just use this for the negative range? To me, both looked similar 
in terms of performance.

> can't query against negative float or double values when indexed="false" 
> docValues="true" multiValued="false"
> -
>
> Key: SOLR-8082
> URL: https://issues.apache.org/jira/browse/SOLR-8082
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Priority: Blocker
> Fix For: 6.0
>
> Attachments: SOLR-8082.patch, SOLR-8082.patch, SOLR-8082.patch, 
> SOLR-8082.patch, SOLR-8082.patch, SOLR-8082.patch, SOLR-8082.patch
>
>
> Haven't dug into this yet, but something is evidently wrong in how the 
> DocValues based queries get build for single valued float or double fields 
> when negative numbers are involved.
> Steps to reproduce...
> {noformat}
> $ bin/solr -e schemaless -noprompt
> ...
> $ curl -X POST -H 'Content-type:application/json' --data-binary '{ 
> "add-field":{ "name":"f_dv_multi", "type":"tfloat", "stored":"true", 
> "indexed":"false", "docValues":"true", "multiValued":"true" }, "add-field":{ 
> "name":"f_dv_single", "type":"tfloat", "stored":"true", "indexed":"false", 
> "docValues":"true", "multiValued":"false" } }' 
> http://localhost:8983/solr/gettingstarted/schema
> {
>   "responseHeader":{
> "status":0,
> "QTime":84}}
> $ curl -X POST -H 'Content-type:application/json' --data-binary 
> '[{"id":"test", "f_dv_multi":-4.3, "f_dv_single":-4.3}]' 
> 'http://localhost:8983/solr/gettingstarted/update/json/docs?commit=true'
> {"responseHeader":{"status":0,"QTime":57}}
> $ curl 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_multi:"-4.3"'
> {
>   "responseHeader":{
> "status":0,
> "QTime":5,
> "params":{
>   "q":"f_dv_multi:\"-4.3\""}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"test",
> "f_dv_multi":[-4.3],
> "f_dv_single":-4.3,
> "_version_":1512962117004689408}]
>   }}
> $ curl 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_single:"-4.3"'
> {
>   "responseHeader":{
> "status":0,
> "QTime":5,
> "params":{
>   "q":"f_dv_single:\"-4.3\""}},
>   "response":{"numFound":0,"start":0,"docs":[]
>   }}
> {noformat}
> Explicit range queries (which is how numeric "field" queries are implemented 
> under the cover) are equally problematic...
> {noformat}
> $ curl 
> 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_multi:%5B-4.3+TO+-4.3%5D'
> {
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"f_dv_multi:[-4.3 TO -4.3]"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"test",
> "f_dv_multi":[-4.3],
> "f_dv_single":-4.3,
> "_version_":1512962117004689408}]
>   }}
> $ curl 
> 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_single:%5B-4.3+TO+-4.3%5D'
> {
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"f_dv_single:[-4.3 TO -4.3]"}},
>   "response":{"numFound":0,"start":0,"docs":[]
>   }}
> {noformat}






[jira] [Commented] (LUCENE-7122) BytesRefArray can be more efficient for fixed width values

2016-03-19 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15202859#comment-15202859
 ] 

Dawid Weiss commented on LUCENE-7122:
-

I'd say do create a separate class... the scenario of all values having exactly 
the same length is kind of exceptional -- it'll make for very clean logic in 
a separate class (with no conditional jumps, either) and it'll leave 
{{BytesRefArray}} much cleaner.

> BytesRefArray can be more efficient for fixed width values
> --
>
> Key: LUCENE-7122
> URL: https://issues.apache.org/jira/browse/LUCENE-7122
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master, 6.1
>
> Attachments: LUCENE-7122.patch
>
>
> Today {{BytesRefArray}} uses one int ({{int[]}}, overallocated) per
> value to hold the length, but for dimensional points these values are
> always the same length. 
> This can save another 4 bytes of heap per indexed dimensional point,
> which is a big improvement (more points can fit in heap at once) for
> 1D and 2D lat/lon points.






[jira] [Updated] (LUCENE-7122) BytesRefArray can be more efficient for fixed width values

2016-03-19 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-7122:
---
Attachment: LUCENE-7122.patch

Patch, changing {{BytesRefArray}} to lazily create the {{int[] offsets}} only 
when it sees that they are different across values.  We could alternatively 
make a separate class ({{FixedWidthBytesRefArray}}?) but I think we have too 
many of these already...
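The lazy approach described in the patch comment might be sketched like this (a simplified, hypothetical stand-in; the real BytesRefArray stores bytes in pooled buffers and overallocates its arrays): track a single fixed width and materialize a per-value array only when a differently sized value arrives.

```java
import java.util.ArrayList;
import java.util.Arrays;

public class LazyLengthArray {
    private final ArrayList<byte[]> values = new ArrayList<>(); // simplified backing store
    private int fixedLength = -1; // -1 = no values yet
    private int[] lengths;        // stays null while every value has the same length

    void append(byte[] value) {
        if (lengths == null) {
            if (fixedLength == -1) {
                fixedLength = value.length;       // first value defines the width
            } else if (value.length != fixedLength) {
                // First differing width: backfill an explicit lengths array.
                lengths = new int[values.size()];
                Arrays.fill(lengths, fixedLength);
            }
        }
        values.add(value);
        if (lengths != null) {
            // Simplified: grow by one each time; real code would overallocate.
            lengths = Arrays.copyOf(lengths, values.size());
            lengths[values.size() - 1] = value.length;
        }
    }

    int length(int i) {
        return lengths == null ? fixedLength : lengths[i]; // no per-value int in the fixed case
    }

    boolean usesPerValueLengths() {
        return lengths != null;
    }

    public static void main(String[] args) {
        LazyLengthArray a = new LazyLengthArray();
        a.append(new byte[8]);
        a.append(new byte[8]);
        System.out.println(a.usesPerValueLengths()); // false: no int[] allocated yet
        a.append(new byte[3]);                       // width diverges
        System.out.println(a.usesPerValueLengths()); // true
        System.out.println(a.length(2));             // 3
    }
}
```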


> BytesRefArray can be more efficient for fixed width values
> --
>
> Key: LUCENE-7122
> URL: https://issues.apache.org/jira/browse/LUCENE-7122
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master, 6.1
>
> Attachments: LUCENE-7122.patch
>
>
> Today {{BytesRefArray}} uses one int ({{int[]}}, overallocated) per
> value to hold the length, but for dimensional points these values are
> always the same length. 
> This can save another 4 bytes of heap per indexed dimensional point,
> which is a big improvement (more points can fit in heap at once) for
> 1D and 2D lat/lon points.






[jira] [Created] (LUCENE-7122) BytesRefArray can be more efficient for fixed width values

2016-03-19 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-7122:
--

 Summary: BytesRefArray can be more efficient for fixed width values
 Key: LUCENE-7122
 URL: https://issues.apache.org/jira/browse/LUCENE-7122
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: master, 6.1


Today {{BytesRefArray}} uses one int ({{int[]}}, overallocated) per
value to hold the length, but for dimensional points these values are
always the same length. 

This can save another 4 bytes of heap per indexed dimensional point,
which is a big improvement (more points can fit in heap at once) for
1D and 2D lat/lon points.







[JENKINS] Lucene-Solr-Tests-6.x - Build # 82 - Failure

2016-03-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/82/

1 tests failed.
FAILED:  
org.apache.solr.cloud.OverseerTest.testExternalClusterStateChangeBehavior

Error Message:
Illegal state, was: down expected:active clusterState:live 
nodes:[]collections:{c1=DocCollection(c1)={   "shards":{"shard1":{   
"parent":null,   "range":null,   "state":"active",   
"replicas":{"core_node1":{   "base_url":"http://127.0.0.1/solr",
   "node_name":"node1",   "core":"core1",   "roles":"", 
  "state":"down",   "router":{"name":"implicit"}}, 
test=LazyCollectionRef(test)}

Stack Trace:
java.lang.AssertionError: Illegal state, was: down expected:active 
clusterState:live nodes:[]collections:{c1=DocCollection(c1)={
  "shards":{"shard1":{
  "parent":null,
  "range":null,
  "state":"active",
  "replicas":{"core_node1":{
  "base_url":"http://127.0.0.1/solr",
  "node_name":"node1",
  "core":"core1",
  "roles":"",
  "state":"down",
  "router":{"name":"implicit"}}, test=LazyCollectionRef(test)}
at 
__randomizedtesting.SeedInfo.seed([4CDEBE095B0BC135:24C0BDE5B99B9B7B]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.verifyReplicaStatus(AbstractDistribZkTestBase.java:234)
at 
org.apache.solr.cloud.OverseerTest.testExternalClusterStateChangeBehavior(OverseerTest.java:1271)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+109) - Build # 16260 - Failure!

2016-03-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16260/
Java: 64bit/jdk-9-ea+109 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

32 tests failed.
FAILED:  
org.apache.lucene.codecs.blockterms.TestFixedGapPostingsFormat.testDocsAndFreqs

Error Message:
Unable to unmap the mapped buffer: 
MMapIndexInput(path="/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/codecs/test/J1/temp/lucene.codecs.blockterms.TestFixedGapPostingsFormat_A0746EF7DD72CD8F-001/testPostingsFormat.testExact-003/_0_LuceneFixedGap_0.doc")

Stack Trace:
java.io.IOException: Unable to unmap the mapped buffer: 
MMapIndexInput(path="/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/codecs/test/J1/temp/lucene.codecs.blockterms.TestFixedGapPostingsFormat_A0746EF7DD72CD8F-001/testPostingsFormat.testExact-003/_0_LuceneFixedGap_0.doc")
at 
__randomizedtesting.SeedInfo.seed([A0746EF7DD72CD8F:B7BB5470DB18ADA9]:0)
at 
org.apache.lucene.store.MMapDirectory.lambda$unmapHackImpl$1(MMapDirectory.java:384)
at 
org.apache.lucene.store.ByteBufferIndexInput.freeBuffer(ByteBufferIndexInput.java:376)
at 
org.apache.lucene.store.ByteBufferIndexInput.close(ByteBufferIndexInput.java:355)
at 
org.apache.lucene.util.LuceneTestCase.slowFileExists(LuceneTestCase.java:2695)
at 
org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:737)
at 
org.apache.lucene.codecs.lucene50.Lucene50PostingsReader.(Lucene50PostingsReader.java:81)
at 
org.apache.lucene.codecs.blockterms.LuceneFixedGap.fieldsProducer(LuceneFixedGap.java:96)
at 
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.(PerFieldPostingsFormat.java:261)
at 
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:341)
at org.apache.lucene.index.RandomPostingsTester.buildIndex(RandomPostingsTester.java:680)
at org.apache.lucene.index.RandomPostingsTester.testFull(RandomPostingsTester.java:1253)
at org.apache.lucene.index.BasePostingsFormatTestCase.testDocsAndFreqs(BasePostingsFormatTestCase.java:106)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:520)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Created] (SOLR-8869) Optionally disable printing field cache entries in JmxMonitoredMap

2016-03-19 Thread Gregory Chanan (JIRA)
Gregory Chanan created SOLR-8869:


 Summary: Optionally disable printing field cache entries in 
JmxMonitoredMap
 Key: SOLR-8869
 URL: https://issues.apache.org/jira/browse/SOLR-8869
 Project: Solr
  Issue Type: Improvement
Affects Versions: 6.1, trunk
Reporter: Gregory Chanan
Assignee: Gregory Chanan


Even with SOLR-6747, we are seeing some pretty heavy load / memory allocation due to 
the JmxMonitoredMap.  A majority of this seems to be printing the field cache 
entries.  We should allow admins to disable printing the field cache entries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3955) Return only matched multiValued field

2016-03-19 Thread Eric Schoen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15198479#comment-15198479
 ] 

Eric Schoen commented on SOLR-3955:
---

This would also be helpful for applications in which [hierarchical 
faceting|https://wiki.apache.org/solr/HierarchicalFaceting] needs to be 
combined with a hierarchy search function. (For example, when using a 
Javascript component such as jstree to incrementally drill down in to a large 
collection of facet values, while offering the ability to search for facet 
values as well.)

> Return only matched multiValued field
> -
>
> Key: SOLR-3955
> URL: https://issues.apache.org/jira/browse/SOLR-3955
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Affects Versions: 4.0
>Reporter: Dotan Cohen
>  Labels: features
>
> Assume a multivalued, stored and indexed field named "comment". When 
> performing a search, it would be very helpful if there were a way to return 
> only the values of "comment" which contain the match. For example:
> When searching for "gold" instead of getting this result:
> 
> 
> Theres a lady whos sure
> all that glitters is gold
> and shes buying a stairway to heaven
> 
> 
> I would prefer to get this result:
> 
> 
> all that glitters is gold
> 
> 
> (pseudo-XML from memory, may not be accurate but it illustrates the point)
> Thanks.
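Until something like this exists server-side, the effect can be approximated on the client. A minimal stdlib-only sketch (hypothetical helper; plain substring matching, whereas a faithful version would need the same analysis/stemming the query uses):

```java
// Client-side workaround sketch: keep only the values of a multiValued
// field that contain the search term (case-insensitive substring match).
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class MatchedValuesFilter {
    static List<String> matchedValues(List<String> values, String term) {
        String t = term.toLowerCase();
        return values.stream()
                     .filter(v -> v.toLowerCase().contains(t))
                     .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> comment = Arrays.asList(
            "Theres a lady whos sure",
            "all that glitters is gold",
            "and shes buying a stairway to heaven");
        System.out.println(matchedValues(comment, "gold"));
        // [all that glitters is gold]
    }
}
```

In practice the highlighting component is also often used to approximate this, since it returns snippets only from the matching values.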



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8868) SolrCloud: if zookeeper loses and then regains a quorum Solr and SolrJ Client still need to be restarted

2016-03-19 Thread Frank J Kelly (JIRA)
Frank J Kelly created SOLR-8868:
---

 Summary: SolrCloud: if zookeeper loses and then regains a quorum 
Solr and SolrJ Client still need to be restarted
 Key: SOLR-8868
 URL: https://issues.apache.org/jira/browse/SOLR-8868
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud, SolrJ
Affects Versions: 5.3.1
Reporter: Frank J Kelly


Tried mailing list on 3/15 and 3/16 to no avail. Hopefully I gave enough 
details.


Just wondering if my observation of SolrCloud behavior after ZooKeeper loses a 
quorum is normal or to be expected.

Version of Solr: 5.3.1
Version of ZooKeeper: 3.4.7
Using SolrCloud with external ZooKeeper
Deployed on AWS

Our Solr cluster has 3 nodes (m3.large)

Our Zookeeper ensemble consists of three nodes (t2.small) with the same config 
using DNS names e.g.
{noformat}
$ more ../conf/zoo.cfg
tickTime=2000
dataDir=/var/zookeeper
dataLogDir=/var/log/zookeeper
clientPort=2181
initLimit=10
syncLimit=5
standaloneEnabled=false
server.1=zookeeper1.qa.eu-west-1.mysearch.com:2888:3888
server.2=zookeeper2.qa.eu-west-1.mysearch.com:2888:3888
server.3=zookeeper3.qa.eu-west-1.mysearch.com:2888:3888
{noformat}

If we terminate one of the ZooKeeper nodes we get a ZK election and (I think) a 
quorum is maintained.
Operation continues OK; we detect the terminated instance and relaunch a new 
ZK node, which comes up fine.

If we terminate two of the ZK nodes we lose a quorum and then we observe the 
following

1.1) Admin UI shows an error that it is unable to contact ZooKeeper “Could not 
connect to ZooKeeper"

1.2) SolrJ returns the following
{noformat}
org.apache.solr.common.SolrException: Could not load collection from ZK:qa_eu-west-1_public_index
at org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:850)
at org.apache.solr.common.cloud.ZkStateReader$7.get(ZkStateReader.java:515)
at org.apache.solr.client.solrj.impl.CloudSolrClient.getDocCollection(CloudSolrClient.java:1205)
at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:837)
at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:805)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:107)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:72)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:86)
at com.here.scbe.search.solr.SolrFacadeImpl.addToSearchIndex(SolrFacadeImpl.java:112)
Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /collections/qa_eu-west-1_public_index/state.json
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
at org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:345)
at org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:342)
at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:61)
at org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:342)
at org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:841)
... 24 more
{noformat}

This makes sense based on our understanding.
When our AutoScale groups launch two new ZooKeeper nodes, initialize them, fix 
the DNS etc. we regain a quorum but at this point

2.1) Admin UI shows the shards as “GONE” (all greyed out)

2.2) SolrJ returns the same error even though the ZooKeeper DNS names are now 
bound to new IP addresses

So at this point I restart the Solr nodes. At this point then

3.1) Admin UI shows the collections as OK (all shards are green) – yeah the 
nodes are back!

3.2) SolrJ Client still shows the same error – namely
{noformat}
org.apache.solr.common.SolrException: Could not load collection from ZK:qa_eu-west-1_here_account
at org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:850)
at org.apache.solr.common.cloud.ZkStateReader$7.get(ZkStateReader.java:515)
at org.apache.solr.client.solrj.impl.CloudSolrClient.getDocCollection(CloudSolrClient.java:1205)
at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:837)
at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:805)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
at org.apache.solr.client.solrj.SolrClient.deleteById(SolrClient.java:825)
at org.apache.solr.client.solrj.SolrClient.deleteById(SolrClient.java:788)
at org.apache.solr.client.solrj.SolrClient.deleteById(SolrClient.java:803)
at com.here.scbe.search.solr.SolrFacadeImpl.deleteById(SolrFacadeImpl.java:257)
.
.
Caused by: 
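A client-side mitigation consistent with the report (rebuild the SolrJ client instead of reusing the stale one) can be sketched generically. This is a hypothetical, stdlib-only holder pattern; in real code the factory would wrap `new CloudSolrClient(zkHost)` and the old client would be closed before being replaced:

```java
// Hypothetical sketch: keep the client behind a holder that can swap in a
// fresh instance once ZooKeeper regains quorum, so new requests re-resolve
// the ZK hosts rather than reusing stale connection state.
import java.util.function.Function;

public class ReconnectingHolder<C> {
    private final Function<String, C> factory; // builds a client from a zkHost string
    private final String zkHost;
    private volatile C client;

    public ReconnectingHolder(String zkHost, Function<String, C> factory) {
        this.zkHost = zkHost;
        this.factory = factory;
        this.client = factory.apply(zkHost);
    }

    public C get() { return client; }

    // Call after detecting quorum recovery; replaces the stale client.
    public synchronized void rebuild() { client = factory.apply(zkHost); }
}
```

Whether this avoids the restart depends on where SolrJ caches state; the report suggests the stale `ZkStateReader` is the culprit, which a fresh client would not share.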

[jira] [Commented] (LUCENE-7109) LatLonPoint newPolygonQuery should use two-phase iterator

2016-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15197307#comment-15197307
 ] 

ASF subversion and git services commented on LUCENE-7109:
-

Commit 6ea458a0edaa4b2e30a2c31dcb703350ee3936c1 in lucene-solr's branch 
refs/heads/master from [~rcmuir]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6ea458a ]

LUCENE-7109: LatLonPoint.newPolygonQuery should use two-phase iterator


> LatLonPoint newPolygonQuery should use two-phase iterator
> -
>
> Key: LUCENE-7109
> URL: https://issues.apache.org/jira/browse/LUCENE-7109
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-7109.patch
>
>
> Currently, the calculation this thing does is very expensive, and gets slower 
> the more complex the polygon is. Doing everything in one phase is really bad 
> for performance.
> Later, there are a lot of optimizations we can do. But I think we should try 
> to beef up testing first. This is just to improve from 
> galapagos-tortoise-slow to turtle-slow.






[jira] [Commented] (LUCENE-7106) Make it easy to have cumulated point stats across segments

2016-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15197457#comment-15197457
 ] 

ASF subversion and git services commented on LUCENE-7106:
-

Commit 4e0d8355f087baa04522d0fe21453cfe5f237128 in lucene-solr's branch 
refs/heads/branch_6_0 from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4e0d835 ]

LUCENE-7106: Add helpers to compute aggregated stats on points.


> Make it easy to have cumulated point stats across segments
> --
>
> Key: LUCENE-7106
> URL: https://issues.apache.org/jira/browse/LUCENE-7106
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7106.patch
>
>
> For other parts of the index, it is easy thanks to the 
> Multi\{Terms,DocValues,...\} classes. However, we don't have such a thing for 
> points but it would still be nice to have a convenient way to compute eg. the 
> max value of a field on a whole index.






Re: Welcome Kevin Risden as Lucene/Solr committer

2016-03-19 Thread Jan Høydahl
Welcome Kevin!

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> 16. mar. 2016 kl. 18.03 skrev David Smiley :
> 
> Welcome Kevin!
> 
> (corrected misspelling of your last name in the subject)
> 
> On Wed, Mar 16, 2016 at 1:02 PM Joel Bernstein  > wrote:
> I'm pleased to announce that Kevin Risden has accepted the PMC's invitation 
> to become a committer.
> 
> Kevin, it's tradition that you introduce yourself with a brief bio.
> 
> I believe your account has been setup and karma has been granted so that you 
> can add yourself to the committers section of the Who We Are page on the 
> website:
>  >.
> 
> Congratulations and welcome!
> 
> 
> Joel Bernstein
> 
> -- 
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley 
>  | Book: 
> http://www.solrenterprisesearchserver.com 
> 


[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+109) - Build # 16240 - Failure!

2016-03-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16240/
Java: 32bit/jdk-9-ea+109 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.UnloadDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=2080, name=testExecutor-1009-thread-6, state=RUNNABLE, group=TGRP-UnloadDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=2080, name=testExecutor-1009-thread-6, state=RUNNABLE, group=TGRP-UnloadDistributedZkTest]
Caused by: java.lang.RuntimeException: org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting response from server at: http://127.0.0.1:32952
at __randomizedtesting.SeedInfo.seed([8438CDD47CDC214F]:0)
at org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCores$0(BasicDistributedZkTest.java:583)
at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1158)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
at java.lang.Thread.run(Thread.java:804)
Caused by: org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting response from server at: http://127.0.0.1:32952
at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:588)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCores$0(BasicDistributedZkTest.java:581)
... 4 more
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
at org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
at org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
at org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
at org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
at org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
at org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:685)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:487)
at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:482)
... 8 more




Build Log:
[...truncated 10892 lines...]
   [junit4] Suite: org.apache.solr.cloud.UnloadDistributedZkTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.UnloadDistributedZkTest_8438CDD47CDC214F-001/init-core-data-001
   [junit4]   2> 217543 INFO  
(SUITE-UnloadDistributedZkTest-seed#[8438CDD47CDC214F]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 217544 INFO  
(TEST-UnloadDistributedZkTest.test-seed#[8438CDD47CDC214F]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 217545 INFO  (Thread-708) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 217545 INFO  (Thread-708) [] o.a.s.c.ZkTestServer Starting 
server
   

[jira] [Updated] (LUCENE-7121) BKDWriter should not store ords when documents are single valued

2016-03-19 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-7121:
---
Attachment: LUCENE-7121.patch

Patch.

> BKDWriter should not store ords when documents are single valued
> 
>
> Key: LUCENE-7121
> URL: https://issues.apache.org/jira/browse/LUCENE-7121
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master, 6.1
>
> Attachments: LUCENE-7121.patch
>
>
> Since we now have stats for points fields, it's easy to know up front whether 
> the field you are about to build a BKD tree for is single valued or not.
> If it is single valued, we can optimize space by not storing the ordinal to 
> identify a point, since its docID also uniquely identifies it.
> This saves 4 bytes per point, which for the 1D case is non-trivial (12 bytes 
> down to 8 bytes per doc), and even for the 2D case is good reduction (16 
> bytes down to 12 bytes per doc).
> This is an optimization ... I won't push it into 6.0.






[jira] [Created] (LUCENE-7121) BKDWriter should not store ords when documents are single valued

2016-03-19 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-7121:
--

 Summary: BKDWriter should not store ords when documents are single 
valued
 Key: LUCENE-7121
 URL: https://issues.apache.org/jira/browse/LUCENE-7121
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: master, 6.1


Since we now have stats for points fields, it's easy to know up front whether 
the field you are about to build a BKD tree for is single valued or not.

If it is single valued, we can optimize space by not storing the ordinal to 
identify a point, since its docID also uniquely identifies it.

This saves 4 bytes per point, which for the 1D case is non-trivial (12 bytes 
down to 8 bytes per doc), and even for the 2D case is good reduction (16 bytes 
down to 12 bytes per doc).

This is an optimization ... I won't push it into 6.0.
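The savings quoted above follow from simple per-point accounting. A small sketch (assuming the issue's numbers: 4 bytes per dimension value, a 4-byte docID, and a 4-byte ord; these are not measured sizes):

```java
// Per-point storage arithmetic behind the issue's 12->8 (1D) and
// 16->12 (2D) byte figures when the ordinal can be dropped.
public class BkdPointBytes {
    static int bytesPerPoint(int numDims, boolean storeOrd) {
        int bytes = numDims * 4 + 4;          // packed value + docID
        return storeOrd ? bytes + 4 : bytes;  // optional ordinal
    }

    public static void main(String[] args) {
        System.out.println(bytesPerPoint(1, true));   // 12
        System.out.println(bytesPerPoint(1, false));  // 8
        System.out.println(bytesPerPoint(2, true));   // 16
        System.out.println(bytesPerPoint(2, false));  // 12
    }
}
```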







[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2016-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15201310#comment-15201310
 ] 

ASF subversion and git services commented on SOLR-8029:
---

Commit 0412be5d6a469c38b5aa824cda6aea2014a2732a in lucene-solr's branch 
refs/heads/apiv2 from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0412be5 ]

SOLR-8029: More specs


> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: master
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: master
>
> Attachments: SOLR-8029.patch, SOLR-8029.patch, SOLR-8029.patch, 
> SOLR-8029.patch
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to modernize them is like 
> applying a band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 4 types of requests in the new API 
> * {{/v2//*}} : Hit a collection directly or manage 
> collections/shards/replicas 
> * {{/v2//*}} : Hit a core directly or manage cores 
> * {{/v2/cluster/*}} : Operations on cluster not pertaining to any collection 
> or core. e.g: security, overseer ops etc
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]






[jira] [Updated] (LUCENE-7119) enable bypassing docValues check in DocTermOrds

2016-03-19 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated LUCENE-7119:
-
Attachment: LUCENE-7119.patch

Simple patch... adds a protected boolean that subclasses will be able to 
change.  This is really just for Solr/UnInvertedField - I doubt anyone else is 
going to be subclassing DocTermOrds.

> enable bypassing docValues check in DocTermOrds
> ---
>
> Key: LUCENE-7119
> URL: https://issues.apache.org/jira/browse/LUCENE-7119
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Yonik Seeley
> Attachments: LUCENE-7119.patch
>
>
> Currently, DocTermOrds refuses to build if doc values have been enabled for a 
> field.  While good for catching bugs, this disabled what can be legitimate 
> use cases (such as just trying out an alternate method w/o having to 
> re-configure and re-index, or even using consistently in conjunction with 
> UninvertingReader).  We should restore the ability to use this class in other 
> scenarios via adding a flag to bypass the check.






[jira] [Updated] (LUCENE-7120) Improve BKDWriter's checksum verification

2016-03-19 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-7120:
---
Attachment: LUCENE-7120.patch

Relatively simple patch, adding a new {{getSharedReader}} method to 
{{PointWriter}}, so that the sharing is explicit as we recurse.

We still eagerly close the shared readers.

> Improve BKDWriter's checksum verification
> -
>
> Key: LUCENE-7120
> URL: https://issues.apache.org/jira/browse/LUCENE-7120
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master, 6.1
>
> Attachments: LUCENE-7120.patch
>
>
> The checksum verification only works when {{BKDWriter}} fully reads one of 
> its temp files, but today it opens a reader, seeks to one slice, reads that, 
> and closes.
> But it turns out, on the full recursion from any given node in the tree, a 
> given file is read once, fully, so if we just share the readers, then we can 
> get checksum verification for these files as well.
> This is a non-trivial change ... I don't plan on pushing it for 6.0.






[jira] [Commented] (SOLR-8877) SolrCLI.java and corresponding test does not work with whitespace in path

2016-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15202802#comment-15202802
 ] 

ASF subversion and git services commented on SOLR-8877:
---

Commit a254c24ee22067a714d2f85cf56ca9c79fd64d8f in lucene-solr's branch 
refs/heads/branch_6_0 from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a254c24 ]

SOLR-8877: Disable test on environments with whitespace


> SolrCLI.java and corresponding test does not work with whitespace in path
> -
>
> Key: SOLR-8877
> URL: https://issues.apache.org/jira/browse/SOLR-8877
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 5.5, 6.0
>Reporter: Uwe Schindler
> Attachments: SOLR-8877.patch
>
>
> The SolrCLI and the corresponding test use CommandLine.parse() of 
> commons-exec, but in most cases the parameters are not correctly escaped.
> CommandLine.parse() should be placed on the forbidden-apis list. This is *not* a 
> valid way to build a command line and execute it. The correct way is to 
> create an instance of the CommandLine class and then add the arguments one by 
> one:
> {code:java}
>   org.apache.commons.exec.CommandLine startCmd = new 
> org.apache.commons.exec.CommandLine(callScript);
>   startCmd.addArguments(new String[] {
>   "start",
>   callScript,
>   "-p",
>   Integer.toString(port),
>   "-s",
>   solrHome,
>   hostArg,
>   zkHostArg,
>   memArg,
>   extraArgs,
>   addlOptsArg
>   });
> {code}
> I tried to fix it using that approach, but the test then fails with other 
> bugs on Windows. I disabled it for now if it detects whitespace in Solr's 
> path. I think the reason might be that some of the above args are empty or 
> are multiple args in themselves, so they get wrongly escaped.
> I have no idea how to fix it, but the current way fails completely on 
> Windows, where most users have whitespace in their home directory or in the 
> "C:\Program Files" folder.
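The failure mode can be demonstrated without commons-exec: flat-string parsing tends to tokenize on whitespace, which shreds paths like `C:\Program Files`, while an explicit argument array keeps each argument intact. A stdlib-only sketch (`naiveParse` is a stand-in for the problematic parsing, not the actual CommandLine.parse implementation):

```java
// Why parsing a flat command string breaks with whitespace in paths,
// versus building the argument list explicitly.
import java.util.Arrays;
import java.util.List;

public class ArgEscapingDemo {
    // Naive whitespace split: a path containing spaces becomes two tokens.
    static List<String> naiveParse(String cmdLine) {
        return Arrays.asList(cmdLine.trim().split("\\s+"));
    }

    public static void main(String[] args) {
        String solrHome = "C:\\Program Files\\solr";
        List<String> bad = naiveParse("solr start -s " + solrHome);
        System.out.println(bad.size()); // 5 tokens: the path was split apart
        // Passing arguments one by one keeps the path as a single argument:
        List<String> good = Arrays.asList("solr", "start", "-s", solrHome);
        System.out.println(good.size()); // 4
    }
}
```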






[jira] [Commented] (SOLR-8877) SolrCLI.java and corresponding test does not work with whitespace in path

2016-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15202799#comment-15202799
 ] 

ASF subversion and git services commented on SOLR-8877:
---

Commit e3b7d82825715a2162928c66d1c8e5e0133f7227 in lucene-solr's branch 
refs/heads/master from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e3b7d82 ]

SOLR-8877: Disable test on environments with whitespace


> SolrCLI.java and corresponding test does not work with whitespace in path
> -
>
> Key: SOLR-8877
> URL: https://issues.apache.org/jira/browse/SOLR-8877
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 5.5, 6.0
>Reporter: Uwe Schindler
> Attachments: SOLR-8877.patch
>
>
> The SolrCLI and the corresponding test use CommandLine.parse() of 
> commons-exec, but in most cases the parameters are not correctly escaped.
> CommandLine.parse() should be placed on the forbidden-apis list. This is *not* a 
> valid way to build a command line and execute it. The correct way is to 
> create an instance of the CommandLine class and then add the arguments one by 
> one:
> {code:java}
>   org.apache.commons.exec.CommandLine startCmd = new 
> org.apache.commons.exec.CommandLine(callScript);
>   startCmd.addArguments(new String[] {
>   "start",
>   callScript,
>   "-p",
>   Integer.toString(port),
>   "-s",
>   solrHome,
>   hostArg,
>   zkHostArg,
>   memArg,
>   extraArgs,
>   addlOptsArg
>   });
> {code}
> I tried to fix it using that approach, but the test then fails with other 
> bugs on Windows. I disabled it for now if it detects whitespace in Solr's 
> path. I think the reason might be that some of the above args are empty or 
> are multiple args in themselves, so they get wrongly escaped.
> I have no idea how to fix it, but the current way fails completely on 
> Windows, where most users have whitespace in their home directory or in the 
> "C:\Program Files" folder.






[jira] [Commented] (SOLR-8877) SolrCLI.java and corresponding test does not work with whitespace in path

2016-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15202800#comment-15202800
 ] 

ASF subversion and git services commented on SOLR-8877:
---

Commit 4d20feeeae504b1d4acba2214d7e25df40239c64 in lucene-solr's branch 
refs/heads/branch_6x from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4d20fee ]

SOLR-8877: Disable test on environments with whitespace


> SolrCLI.java and corresponding test does not work with whitespace in path
> -
>
> Key: SOLR-8877
> URL: https://issues.apache.org/jira/browse/SOLR-8877
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 5.5, 6.0
>Reporter: Uwe Schindler
> Attachments: SOLR-8877.patch
>
>
> The SolrCLI and the corresponding test use CommandLine.parse() of 
> commons-exec, but in most cases the parameters are not correctly escaped.
> CommandLine.parse() should be placed on the forbidden-apis list. This is *not* a 
> valid way to build a command line and execute it. The correct way is to 
> create an instance of the CommandLine class and then add the arguments one by 
> one:
> {code:java}
>   org.apache.commons.exec.CommandLine startCmd = new 
> org.apache.commons.exec.CommandLine(callScript);
>   startCmd.addArguments(new String[] {
>   "start",
>   callScript,
>   "-p",
>   Integer.toString(port),
>   "-s",
>   solrHome,
>   hostArg,
>   zkHostArg,
>   memArg,
>   extraArgs,
>   addlOptsArg
>   });
> {code}
> I tried to fix it using that approach, but the test then fails with other 
> bugs on Windows. I disabled it for now if it detects whitespace in Solr's 
> path. I think the reason might be that some of the above args are empty or 
> are multiple args in themselves, so they get wrongly escaped.
> I have no idea how to fix it, but the current way fails completely on 
> Windows, where most users have whitespace in their home directory or in the 
> "C:\Program Files" folder.






[jira] [Created] (LUCENE-7120) Improve BKDWriter's checksum verification

2016-03-19 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-7120:
--

 Summary: Improve BKDWriter's checksum verification
 Key: LUCENE-7120
 URL: https://issues.apache.org/jira/browse/LUCENE-7120
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: master, 6.1


The checksum verification only works when {{BKDWriter}} fully reads one of its 
temp files, but today it opens a reader, seeks to one slice, reads that, and 
closes.

But it turns out that, over the full recursion from any given node in the 
tree, a given file is read exactly once, in full; so if we just share the 
readers, we can get checksum verification for these files as well.

This is a non-trivial change ... I don't plan on pushing it for 6.0.
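The constraint behind this issue is general to streaming checksums: a running CRC can only be compared against the stored value once every byte has passed through the stream, which is why a reader that seeks to one slice and closes cannot verify anything. A minimal JDK illustration of that property (unrelated to BKDWriter's actual reader classes):

```java
import java.io.ByteArrayInputStream;
import java.util.zip.CRC32;
import java.util.zip.CheckedInputStream;

public class ChecksumDemo {
    public static void main(String[] args) throws Exception {
        byte[] data = "temp-file contents".getBytes("UTF-8");

        CRC32 expected = new CRC32();
        expected.update(data); // checksum of the complete "file"

        // Reading only a slice: the running CRC cannot match the stored one.
        CheckedInputStream partial =
            new CheckedInputStream(new ByteArrayInputStream(data), new CRC32());
        partial.read(new byte[5]); // consume 5 bytes, then stop
        System.out.println(partial.getChecksum().getValue() == expected.getValue()); // false

        // Reading to the end: now verification is possible.
        CheckedInputStream full =
            new CheckedInputStream(new ByteArrayInputStream(data), new CRC32());
        while (full.read() != -1) { /* drain every byte */ }
        System.out.println(full.getChecksum().getValue() == expected.getValue()); // true
    }
}
```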






[jira] [Updated] (SOLR-8877) SolrCLI.java and corresponding test does not work with whitespace in path

2016-03-19 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated SOLR-8877:

Attachment: SOLR-8877.patch

My attempt to fix it. This causes other issues in the test...

> SolrCLI.java and corresponding test does not work with whitespace in path
> -
>
> Key: SOLR-8877
> URL: https://issues.apache.org/jira/browse/SOLR-8877
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 5.5, 6.0
>Reporter: Uwe Schindler
> Attachments: SOLR-8877.patch
>
>
> The SolrCLI and the corresponding test use CommandLine.parse() of 
> commons-exec, but in most cases the parameters are not correctly escaped.
> CommandLine.parse() should be placed on forbidden-apis list. This is *not* a 
> valid way to build a command line and execute it. The correct way is to 
> create an instance of the CommandLine class and then add the arguments 
> one-by-one:
> {code:java}
>   org.apache.commons.exec.CommandLine startCmd = new 
> org.apache.commons.exec.CommandLine(callScript);
>   startCmd.addArguments(new String[] {
>   "start",
>   callScript,
>   "-p",
>   Integer.toString(port),
>   "-s",
>   solrHome,
>   hostArg,
>   zkHostArg,
>   memArg,
>   extraArgs,
>   addlOptsArg
>   });
> {code}
> I tried to fix it using this approach, but the test then fails with other 
> bugs on Windows. I disabled it for now if it detects whitespace in Solr's 
> path. I think the reason might be that some of the above args are empty or 
> contain multiple arguments themselves, so they get wrongly escaped.
> I have no idea how to fix it, but the current way fails completely on 
> Windows, where most users have whitespace in their home directory or in the 
> "C:\Program Files" folder.






[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 17 - Failure!

2016-03-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/17/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.TestCloudDeleteByQuery

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.cloud.TestCloudDeleteByQuery: 1) Thread[id=4789, 
name=OverseerHdfsCoreFailoverThread-95577570123120654-127.0.0.1:64674_solr-n_01,
 state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:139)
 at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.TestCloudDeleteByQuery: 
   1) Thread[id=4789, 
name=OverseerHdfsCoreFailoverThread-95577570123120654-127.0.0.1:64674_solr-n_01,
 state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:139)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([B9CE80FE19ED2772]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.TestCloudDeleteByQuery

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=4789, 
name=OverseerHdfsCoreFailoverThread-95577570123120654-127.0.0.1:64674_solr-n_01,
 state=RUNNABLE, group=Overseer Hdfs SolrCore Failover Thread.] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:139)
 at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=4789, 
name=OverseerHdfsCoreFailoverThread-95577570123120654-127.0.0.1:64674_solr-n_01,
 state=RUNNABLE, group=Overseer Hdfs SolrCore Failover Thread.]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:139)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([B9CE80FE19ED2772]:0)


FAILED:  org.apache.solr.logging.TestLogWatcher.testLog4jWatcher

Error Message:
expected:<11> but was:<1>

Stack Trace:
java.lang.AssertionError: expected:<11> but was:<1>
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.logging.TestLogWatcher.testLog4jWatcher(TestLogWatcher.java:52)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.execute(SlaveMain.java:243)
at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.main(SlaveMain.java:354)
at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMainSafe.main(SlaveMainSafe.java:10)




Build Log:
[...truncated 10830 lines...]
   [junit4] Suite: org.apache.solr.cloud.TestCloudDeleteByQuery
   [junit4]   2> Creating dataDir: 

[jira] [Created] (SOLR-8877) SolrCLI.java and corresponding test does not work with whitespace in path

2016-03-19 Thread Uwe Schindler (JIRA)
Uwe Schindler created SOLR-8877:
---

 Summary: SolrCLI.java and corresponding test does not work with 
whitespace in path
 Key: SOLR-8877
 URL: https://issues.apache.org/jira/browse/SOLR-8877
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.5, 6.0
Reporter: Uwe Schindler


The SolrCLI and the corresponding test use CommandLine.parse() of commons-exec, 
but in most cases the parameters are not correctly escaped.

CommandLine.parse() should be placed on forbidden-apis list. This is *not* a 
valid way to build a command line and execute it. The correct way is to create 
an instance of the CommandLine class and then add the arguments one-by-one:

{code:java}
  org.apache.commons.exec.CommandLine startCmd = new 
org.apache.commons.exec.CommandLine(callScript);
  startCmd.addArguments(new String[] {
  "start",
  callScript,
  "-p",
  Integer.toString(port),
  "-s",
  solrHome,
  hostArg,
  zkHostArg,
  memArg,
  extraArgs,
  addlOptsArg
  });
{code}

I tried to fix it using this approach, but the test then fails with other 
bugs on Windows. I disabled it for now if it detects whitespace in Solr's path. 
I think the reason might be that some of the above args are empty or contain 
multiple arguments themselves, so they get wrongly escaped.

I have no idea how to fix it, but the current way fails completely on Windows, 
where most users have whitespace in their home directory or in the 
"C:\Program Files" folder.






[jira] [Updated] (SOLR-8866) UpdateLog should throw an exception when serializing unknown types

2016-03-19 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-8866:
---
Summary: UpdateLog should throw an exception when serializing unknown types 
 (was: JavaBinCodec should throw an exception when serializing unknown types)

> UpdateLog should throw an exception when serializing unknown types
> --
>
> Key: SOLR-8866
> URL: https://issues.apache.org/jira/browse/SOLR-8866
> Project: Solr
>  Issue Type: Improvement
>Reporter: David Smiley
>Assignee: David Smiley
> Attachments: SOLR_8866_UpdateLog_show_throw_for_unknown_types.patch
>
>
> When JavaBinCodec encounters a class it doesn't have explicit knowledge of 
> how to serialize, nor does it implement the {{ObjectResolver}} interface, it 
> currently serializes the object as the classname, colon, then toString() of 
> the object.
> This may appear innocent but _not_ throwing an exception hides bugs.  One 
> example is the UpdateLog, which uses JavaBinCodec to save a document.  
> The result is that this bad value winds up there, gets deserialized as a 
> String in PeerSync (which uses /get), and then pretends to be a suitable 
> value in the final document on the leader.  But of course it isn't.
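A hypothetical sketch of the silent fallback described above (this is not JavaBinCodec's actual code; the method and the Money type are invented for illustration): an object of an unknown type quietly degrades to a classname-prefixed string, so nothing fails at write time and the bogus value only resurfaces much later, far from the bug.

```java
public class FallbackDemo {
    // Illustrative stand-in for the behavior described in the issue:
    // an unknown type is written as "classname:toString()" instead of
    // triggering an exception.
    static Object writeUnknown(Object o) {
        return o.getClass().getName() + ":" + o.toString();
    }

    // Some app-specific type the (hypothetical) codec has no handler for.
    static final class Money {
        @Override public String toString() { return "42.00 USD"; }
    }

    public static void main(String[] args) {
        Object stored = writeUnknown(new Money());
        // The document now silently holds a String, not a Money:
        System.out.println(stored instanceof String); // true
        System.out.println(stored); // e.g. "FallbackDemo$Money:42.00 USD"
    }
}
```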






Re: [JENKINS] Lucene-Solr-Tests-6.x - Build # 65 - Failure

2016-03-19 Thread Chris Hostetter

Weirdness...

https://issues.apache.org/jira/browse/SOLR-8864


: Date: Wed, 16 Mar 2016 21:39:31 + (UTC)
: From: Apache Jenkins Server 
: Reply-To: dev@lucene.apache.org
: To: dev@lucene.apache.org
: Subject: [JENKINS] Lucene-Solr-Tests-6.x - Build # 65 - Failure
: 
: Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/65/
: 
: 1 tests failed.
: FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestCloudDeleteByQuery
: 
: Error Message:
: expected:<2> but was:<1>
: 
: Stack Trace:
: java.lang.AssertionError: expected:<2> but was:<1>
:   at __randomizedtesting.SeedInfo.seed([F6D0A21946A344B8]:0)
:   at org.junit.Assert.fail(Assert.java:93)
:   at org.junit.Assert.failNotEquals(Assert.java:647)
:   at org.junit.Assert.assertEquals(Assert.java:128)
:   at org.junit.Assert.assertEquals(Assert.java:472)
:   at org.junit.Assert.assertEquals(Assert.java:456)
:   at 
org.apache.solr.cloud.TestCloudDeleteByQuery.createMiniSolrCloudCluster(TestCloudDeleteByQuery.java:173)
:   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
:   at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
:   at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
:   at java.lang.reflect.Method.invoke(Method.java:497)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:811)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
:   at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
:   at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
:   at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
:   at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
:   at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
:   at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
:   at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
:   at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
:   at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
:   at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
:   at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
:   at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
:   at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
:   at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
:   at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
:   at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
:   at java.lang.Thread.run(Thread.java:745)
: 
: 
: 
: 
: Build Log:
: [...truncated 11191 lines...]
:[junit4] Suite: org.apache.solr.cloud.TestCloudDeleteByQuery
:[junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/build/solr-core/test/J2/temp/solr.cloud.TestCloudDeleteByQuery_F6D0A21946A344B8-001/init-core-data-001
:[junit4]   2> 514609 INFO  
(SUITE-TestCloudDeleteByQuery-seed#[F6D0A21946A344B8]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false)
:[junit4]   2> 514612 INFO  
(SUITE-TestCloudDeleteByQuery-seed#[F6D0A21946A344B8]-worker) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
:[junit4]   2> 514612 INFO  (Thread-2597) [] o.a.s.c.ZkTestServer 
client port:0.0.0.0/0.0.0.0:0
:[junit4]   2> 514612 INFO  (Thread-2597) [] o.a.s.c.ZkTestServer 
Starting server
:[junit4]   2> 514712 INFO  
(SUITE-TestCloudDeleteByQuery-seed#[F6D0A21946A344B8]-worker) [] 
o.a.s.c.ZkTestServer start zk server on port:33558
:[junit4]   2> 514713 INFO  
(SUITE-TestCloudDeleteByQuery-seed#[F6D0A21946A344B8]-worker) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
:[junit4]   2> 514713 INFO  
(SUITE-TestCloudDeleteByQuery-seed#[F6D0A21946A344B8]-worker) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
:[junit4]   2> 514716 INFO  (zkCallback-667-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
