[JENKINS] Lucene-Solr-Tests-master - Build # 1100 - Still Failing

2016-04-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1100/

2 tests failed.
FAILED:  org.apache.solr.update.processor.TestNamedUpdateProcessors.test

Error Message:
Could not find collection:.system

Stack Trace:
java.lang.AssertionError: Could not find collection:.system
at __randomizedtesting.SeedInfo.seed([787B0C1336A6608D:F02F33C9985A0D75]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:150)
at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:135)
at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:130)
at org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:852)
at org.apache.solr.update.processor.TestNamedUpdateProcessors.test(TestNamedUpdateProcessors.java:78)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[JENKINS] Lucene-Solr-SmokeRelease-5.5 - Build # 2 - Failure

2016-04-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.5/2/

No tests ran.

Build Log:
[...truncated 39769 lines...]
prepare-release-no-sign:
[mkdir] Created dir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.5/lucene/build/smokeTestRelease/dist
 [copy] Copying 461 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.5/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.5/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.7 JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.7
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL "file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.5/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (18.6 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-5.5.0-src.tgz...
   [smoker] 28.7 MB in 0.03 sec (828.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.5.0.tgz...
   [smoker] 63.4 MB in 0.19 sec (329.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.5.0.zip...
   [smoker] 73.9 MB in 0.09 sec (865.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-5.5.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 6190 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] test demo with 1.8...
   [smoker]   got 6190 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.5.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 6190 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] test demo with 1.8...
   [smoker]   got 6190 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.5.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 7 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.7...
   [smoker]   got 220 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] generate javadocs w/ Java 7...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 220 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker]   Backcompat testing not required for release 6.0.0 because it's not less than 5.5.0
   [smoker]   Backcompat testing not required for release 5.5.0 because it's not less than 5.5.0
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (24.2 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-5.5.0-src.tgz...
   [smoker] 37.5 MB in 0.79 sec (47.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-5.5.0.tgz...
   [smoker] 130.4 MB in 1.98 sec (65.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-5.5.0.zip...
   [smoker] 138.3 MB in 2.51 sec (55.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-5.5.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-5.5.0.tgz...
   [smoker]   **WARNING**: skipping check of /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.5/lucene/build/smokeTestRelease/tmp/unpack/solr-5.5.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar: it has javax.* classes
   [smoker]   **WARNING**: skipping check of /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.5/lucene/build/smokeTestRelease/tmp/unpack/solr-5.5.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar: it has javax.* classes
   [smoker] copying unpacked distribution for Java 7 ...
   [smoker] test solr example w/ Java 7...
   [smoker]   start Solr instance 

[jira] [Commented] (SOLR-8789) CollectionAPISolrJTests is not run when running ant test

2016-04-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15255025#comment-15255025
 ] 

ASF subversion and git services commented on SOLR-8789:
---

Commit 1fb79c94b182e0fbe4c5f90033cd6d34033c773a in lucene-solr's branch 
refs/heads/branch_5x from anshum
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1fb79c9 ]

SOLR-8789: Remove the *Tests regular expression from the build xml, and instead 
rename CollectionsAPISolrJTests to CollectionsAPISolrJTest


> CollectionAPISolrJTests is not run when running ant test
> 
>
> Key: SOLR-8789
> URL: https://issues.apache.org/jira/browse/SOLR-8789
> Project: Solr
>  Issue Type: Bug
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Fix For: master, 6.0, 5.5.1
>
> Attachments: SOLR-8789.patch
>
>
> The pattern that is used to run the tests on Jenkins (ant test) is (from 
> lucene/common-build.xml) :
> {code}
> 
> 
> {code}
> CollectionAPISolrJTests ends in an extra 's' and so is not executed. We need 
> to either fix the pattern or the test name to make sure that this test is run.
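The {code} block above arrived empty in this archive (its contents were stripped). As a sketch of why the trailing 's' matters, assuming the usual lucene/common-build.xml globs `**/Test*.java` and `**/*Test.java` (an assumption, since the originals are elided here), a minimal Java illustration of Ant-style glob matching follows; the helper is a simplification, not Ant's real matcher:

```java
// Hypothetical sketch: why "CollectionsAPISolrJTests" escapes the test globs.
// The two include patterns are assumptions (the {code} block in the original
// message was stripped); the glob-to-regex helper is a simplification of what
// Ant's DirectoryScanner does, not the real implementation.
public class TestGlobSketch {
    // Translate a limited Ant-style glob into a java.util.regex pattern:
    // "**/" means "any directory prefix", "*" means "any run of non-slash chars".
    static boolean matches(String glob, String path) {
        String regex = glob
            .replace(".", "\\.")          // escape literal dots first
            .replace("**/", "\u0001")     // placeholder so the next step skips it
            .replace("*", "[^/]*")
            .replace("\u0001", "(.*/)?");
        return path.matches(regex);
    }

    public static void main(String[] args) {
        String[] globs = {"**/Test*.java", "**/*Test.java"}; // assumed patterns
        String renamed  = "solrj/CollectionsAPISolrJTest.java";
        String original = "solrj/CollectionsAPISolrJTests.java"; // extra 's'
        for (String g : globs) {
            System.out.println(g + "  renamed: " + matches(g, renamed)
                + "  original: " + matches(g, original));
        }
    }
}
```

With those assumed patterns, the renamed class matches `**/*Test.java` while the original name with the extra 's' matches neither glob, which is the behavior the issue describes.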



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8804) Race condition in ClusterStatus.getClusterStatus

2016-04-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15255026#comment-15255026
 ] 

ASF subversion and git services commented on SOLR-8804:
---

Commit bf8a2c7caa2350e4764a0791cbee0c6764995e76 in lucene-solr's branch 
refs/heads/branch_5x from [~varunthacker]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bf8a2c7 ]

SOLR-8804: Fix a race condition in the ClusterStatus API call


> Race condition in ClusterStatus.getClusterStatus
> 
>
> Key: SOLR-8804
> URL: https://issues.apache.org/jira/browse/SOLR-8804
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3.1
>Reporter: Alexey Serba
>Assignee: Varun Thacker
>Priority: Trivial
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8804.patch, SOLR-8804.patch
>
>
> Reading cluster state information using {{/collections?action=CLUSTERSTATUS}} 
> can fail if there's a concurrent {{/collections?action=DELETE}} operation.
> The code in {{ClusterStatus.getClusterStatus}} 
> # gets collection names
> # for every collection reads its cluster state info using 
> {{ClusterState.getCollection}}
> The problem is that if there's a {{DELETE}} operation in between then 
> {{ClusterState.getCollection}} can fail thus causing the whole operation to 
> fail. It seems that it would be better to call 
> {{ClusterState.getCollectionOrNull}} and skip/ignore that collection if the 
> result is null.
> {noformat}
> 19:49:32.479 [qtp1531448569-881] ERROR org.apache.solr.core.SolrCore - org.apache.solr.common.SolrException: Could not find collection : collection
> at org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:165)
> at org.apache.solr.handler.admin.ClusterStatus.getClusterStatus(ClusterStatus.java:110)
> at org.apache.solr.handler.admin.CollectionsHandler$CollectionOperation$19.call(CollectionsHandler.java:614)
> at org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:166)
> {noformat}
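A minimal sketch of the fix the report proposes: tolerate a collection vanishing between listing names and fetching its state. The simplified map-based types below are illustrative stand-ins, not the real Solr classes; only the accessor names (getCollection / getCollectionOrNull) come from the report:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.NoSuchElementException;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Illustrative stand-in for ClusterState: a DELETE may remove a collection
// at any point while CLUSTERSTATUS iterates the name list it took earlier.
class MiniClusterState {
    final ConcurrentMap<String, Map<String, Object>> collections = new ConcurrentHashMap<>();

    // Throwing accessor, like ClusterState.getCollection in the report.
    Map<String, Object> getCollection(String name) {
        Map<String, Object> c = collections.get(name);
        if (c == null) throw new NoSuchElementException("Could not find collection : " + name);
        return c;
    }

    // Null-returning accessor, like the proposed ClusterState.getCollectionOrNull.
    Map<String, Object> getCollectionOrNull(String name) {
        return collections.get(name);
    }
}

public class ClusterStatusSketch {
    // Race-tolerant status builder: skip names deleted since they were listed.
    static Map<String, Object> getClusterStatus(MiniClusterState state, List<String> names) {
        Map<String, Object> out = new LinkedHashMap<>();
        for (String name : names) {
            Map<String, Object> coll = state.getCollectionOrNull(name);
            if (coll == null) continue; // deleted concurrently: ignore, don't fail
            out.put(name, coll);
        }
        return out;
    }

    public static void main(String[] args) {
        MiniClusterState state = new MiniClusterState();
        Map<String, Object> a = new LinkedHashMap<>();
        a.put("shards", 1);
        Map<String, Object> b = new LinkedHashMap<>();
        b.put("shards", 2);
        state.collections.put("a", a);
        state.collections.put("b", b);
        List<String> listed = Arrays.asList("a", "b"); // names taken before the DELETE
        state.collections.remove("b");                 // concurrent DELETE lands here
        // The throwing accessor would fail on "b"; the null-tolerant path succeeds.
        System.out.println(getClusterStatus(state, listed).keySet()); // prints [a]
    }
}
```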






[jira] [Commented] (SOLR-8838) Returning non-stored docValues is incorrect for floats and doubles

2016-04-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15255030#comment-15255030
 ] 

ASF subversion and git services commented on SOLR-8838:
---

Commit 33462956c6e2e1cc1d23afeb947c9688f00ba490 in lucene-solr's branch 
refs/heads/branch_5x from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3346295 ]

SOLR-8838: java8 date handling -> java7


> Returning non-stored docValues is incorrect for floats and doubles
> --
>
> Key: SOLR-8838
> URL: https://issues.apache.org/jira/browse/SOLR-8838
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.5
>Reporter: Ishan Chattopadhyaya
>Assignee: Steve Rowe
>Priority: Blocker
> Fix For: 6.0, 5.5.1, 5.6
>
> Attachments: SOLR-8838.patch, SOLR-8838.patch, SOLR-8838.patch
>
>
> In SOLR-8220, we introduced returning non-stored docValues as if they were 
> regular stored fields. The handling of doubles and floats, as introduced 
> there, was incorrect for negative values.
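As background on why negatives are the broken case: Lucene stores doubles in docValues through a sortable bit transform (NumericUtils flips all non-sign bits of negative values so signed long order matches numeric order). Decoding with Double.longBitsToDouble alone, without undoing that transform, round-trips positives but corrupts negatives. A small sketch: the bit transform below matches Lucene's NumericUtils.sortableDoubleBits, while "decodeNaive" is a hypothetical name for the shortcut that this kind of bug takes, not code from the patch:

```java
// Sketch of the decode pitfall behind this class of bug. The transform matches
// Lucene's NumericUtils.sortableDoubleBits; decodeNaive is a hypothetical
// illustration of forgetting to undo it before interpreting the bits.
public class SortableDoubleSketch {
    // Flips all non-sign bits of negative values so signed long ordering of the
    // encoded form equals numeric ordering of the doubles. It is its own inverse.
    static long sortableDoubleBits(long bits) {
        return bits ^ (bits >> 63) & 0x7fffffffffffffffL;
    }

    static long encode(double v) {
        return sortableDoubleBits(Double.doubleToLongBits(v));
    }

    static double decodeCorrect(long stored) {
        return Double.longBitsToDouble(sortableDoubleBits(stored));
    }

    static double decodeNaive(long stored) {
        return Double.longBitsToDouble(stored); // wrong for negatives
    }

    public static void main(String[] args) {
        for (double v : new double[] {2.5, -2.5}) {
            long stored = encode(v);
            System.out.println(v + ": naive=" + decodeNaive(stored)
                + "  correct=" + decodeCorrect(stored));
        }
    }
}
```

For 2.5 the transform is the identity, so both decodes agree; for -2.5 the naive decode returns a different value entirely, which is why only negative floats and doubles came back wrong.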






[jira] [Commented] (SOLR-8789) CollectionAPISolrJTests is not run when running ant test

2016-04-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15255024#comment-15255024
 ] 

ASF subversion and git services commented on SOLR-8789:
---

Commit 87f99019acd8a82756ee930a1b088f9a514e0f75 in lucene-solr's branch 
refs/heads/branch_5x from anshum
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=87f9901 ]

SOLR-8789: Fix common-build.xml to run tests in classes that end in *Tests.java


> CollectionAPISolrJTests is not run when running ant test
> 
>
> Key: SOLR-8789
> URL: https://issues.apache.org/jira/browse/SOLR-8789
> Project: Solr
>  Issue Type: Bug
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Fix For: master, 6.0, 5.5.1
>
> Attachments: SOLR-8789.patch
>
>
> The pattern that is used to run the tests on Jenkins (ant test) is (from 
> lucene/common-build.xml) :
> {code}
> 
> 
> {code}
> CollectionAPISolrJTests ends in an extra 's' and so is not executed. We need 
> to either fix the pattern or the test name to make sure that this test is run.






[jira] [Commented] (SOLR-8790) Add node name back to the core level responses in OverseerMessageHandler

2016-04-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15255023#comment-15255023
 ] 

ASF subversion and git services commented on SOLR-8790:
---

Commit cbb29d2a5ae4ab8741aab6c9f0806d4236c0cad0 in lucene-solr's branch 
refs/heads/branch_5x from anshum
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=cbb29d2 ]

SOLR-8790: Add the node name to core responses in calls from the Overseer


> Add node name back to the core level responses in OverseerMessageHandler
> 
>
> Key: SOLR-8790
> URL: https://issues.apache.org/jira/browse/SOLR-8790
> Project: Solr
>  Issue Type: Bug
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Fix For: master, 6.0, 5.5.1
>
> Attachments: SOLR-8790-followup.patch, SOLR-8790.patch
>
>
> Continuing from SOLR-8789, now that this test runs, time to fix it.






[jira] [Commented] (SOLR-8838) Returning non-stored docValues is incorrect for floats and doubles

2016-04-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15255029#comment-15255029
 ] 

ASF subversion and git services commented on SOLR-8838:
---

Commit 496a7535115c13c25fe6c12b7f463477c9426098 in lucene-solr's branch 
refs/heads/branch_5x from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=496a753 ]

SOLR-8838: Remove obsolete comment


> Returning non-stored docValues is incorrect for floats and doubles
> --
>
> Key: SOLR-8838
> URL: https://issues.apache.org/jira/browse/SOLR-8838
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.5
>Reporter: Ishan Chattopadhyaya
>Assignee: Steve Rowe
>Priority: Blocker
> Fix For: 6.0, 5.5.1, 5.6
>
> Attachments: SOLR-8838.patch, SOLR-8838.patch, SOLR-8838.patch
>
>
> In SOLR-8220, we introduced returning non-stored docValues as if they were 
> regular stored fields. The handling of doubles and floats, as introduced 
> there, was incorrect for negative values.






[jira] [Commented] (SOLR-8838) Returning non-stored docValues is incorrect for floats and doubles

2016-04-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15255028#comment-15255028
 ] 

ASF subversion and git services commented on SOLR-8838:
---

Commit 404a3f995673d54bc7565dea934332e1cd37d4c3 in lucene-solr's branch 
refs/heads/branch_5x from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=404a3f9 ]

SOLR-8838: Returning non-stored docValues is incorrect for negative floats and 
doubles.


> Returning non-stored docValues is incorrect for floats and doubles
> --
>
> Key: SOLR-8838
> URL: https://issues.apache.org/jira/browse/SOLR-8838
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.5
>Reporter: Ishan Chattopadhyaya
>Assignee: Steve Rowe
>Priority: Blocker
> Fix For: 6.0, 5.5.1, 5.6
>
> Attachments: SOLR-8838.patch, SOLR-8838.patch, SOLR-8838.patch
>
>
> In SOLR-8220, we introduced returning non-stored docValues as if they were 
> regular stored fields. The handling of doubles and floats, as introduced 
> there, was incorrect for negative values.






[jira] [Commented] (SOLR-8838) Returning non-stored docValues is incorrect for floats and doubles

2016-04-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15255027#comment-15255027
 ] 

ASF subversion and git services commented on SOLR-8838:
---

Commit db853d4983f3d7b6b55f70f3025444e733d44250 in lucene-solr's branch 
refs/heads/branch_5x from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=db853d4 ]

SOLR-8838: Returning non-stored docValues is incorrect for negative floats and 
doubles.


> Returning non-stored docValues is incorrect for floats and doubles
> --
>
> Key: SOLR-8838
> URL: https://issues.apache.org/jira/browse/SOLR-8838
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.5
>Reporter: Ishan Chattopadhyaya
>Assignee: Steve Rowe
>Priority: Blocker
> Fix For: 6.0, 5.5.1, 5.6
>
> Attachments: SOLR-8838.patch, SOLR-8838.patch, SOLR-8838.patch
>
>
> In SOLR-8220, we introduced returning non-stored docValues as if they were 
> regular stored fields. The handling of doubles and floats, as introduced 
> there, was incorrect for negative values.






[JENKINS] Lucene-Solr-Tests-6.x - Build # 161 - Still Failing

2016-04-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/161/

4 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestCoreDiscovery

Error Message:
ObjectTracker found 4 object(s) that were not released!!! [MockDirectoryWrapper, SolrCore, MockDirectoryWrapper, MDCAwareThreadPoolExecutor]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 4 object(s) that were not released!!! [MockDirectoryWrapper, SolrCore, MockDirectoryWrapper, MDCAwareThreadPoolExecutor]
at __randomizedtesting.SeedInfo.seed([BD62CC45597DEBC2]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:256)
at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestCoreDiscovery

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.core.TestCoreDiscovery:
   1) Thread[id=58658, name=searcherExecutor-5293-thread-1, state=WAITING, group=TGRP-TestCoreDiscovery]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.core.TestCoreDiscovery:
   1) Thread[id=58658, name=searcherExecutor-5293-thread-1, state=WAITING, group=TGRP-TestCoreDiscovery]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
        at __randomizedtesting.SeedInfo.seed([BD62CC45597DEBC2]:0)



Re: Jira Spam - And changes made as a result.

2016-04-22 Thread Ryan Josal
Thanks Anshum!  And yeah, a whitelist like that makes sense to me too.

On Friday, April 22, 2016, Ishan Chattopadhyaya 
wrote:

> Btw, how about whitelisting everyone who has commented (a non-spam
> comment) at a Lucene/Solr issue before?
>
> On Sat, Apr 23, 2016 at 6:13 AM, Ishan Chattopadhyaya <
> ichattopadhy...@gmail.com
> > wrote:
>
>> Anshum, please add me as well. Thanks.
>>
>>
>> On Sat, Apr 23, 2016 at 6:01 AM, Anshum Gupta > > wrote:
>>
>>> Hi Ryan,
>>>
>>> I've added you to the contributors group. You should be able to comment
>>> on JIRAs now.
>>>
>>> On Thu, Apr 21, 2016 at 8:51 PM, Ryan Josal >> > wrote:
>>>
 Woah, yeah, I have filed a few bugs as well as posted patches and
 comments.  Indeed I don't seem to be able to comment anymore.  Anyone
 want to add me (rjosal) to a role that can comment or create?

 Ryan


 On Thursday, April 21, 2016, David Smiley > wrote:

> Wow!  My reading of this is that the general public (i.e. not
> committers) won't be able to really do anything other than view JIRA 
> issues
> unless we expressly add individuals to a specific project group?  :-(
>  Clearly that sucks big time.  Is anyone reading this differently?
> Assuming this is true... at this point maybe there is nothing to do but
> wait until the inevitable requests come in for people to create/comment.
> Maybe send a message to the user lists?
>
> ~ David
>
> -- Forwarded message -
> From: Gav 
> Date: Fri, Apr 22, 2016 at 12:14 AM
> Subject: Jira Spam - And changes made as a result.
> To: infrastruct...@apache.org Infrastructure <
> infrastruct...@apache.org>
>
>
> Hi All,
>
> Apologies for notifying you after the fact.
>
> Earlier today (slowing down to a halt about 1/2 hr ago due to our
> changes) we had a
> big Spam attack directed at the ASF Jira instance.
>
> Many projects were affected, including:
>
> TM, ARROW, ACCUMULO, ABDERA, JSPWIKI, QPIDIT, LOGCXX, HAWQ, AMQ, ATLAS,
> AIRFLOW, ACE, APEXCORE, RANGER, and KYLIN.
>
> During the process we ended up banning 27 IP addresses, deleting well
> over 200 tickets, and removing about two dozen user accounts.
>
> The spammers were creating accounts using the normal system and going
> through the required captchas.
>
> In addition to the ban hammer and deletions and to prevent more spam
> coming in, we changed the 'Default Permissions Scheme' so that anyone in
> the 'jira-users' group are no longer allowed to 'Create' tickets and are 
> no
> longer allowed to 'Comment' on any tickets.
>
> Obviously that affects genuine users as well as the spammers, we know
> that.
>
> Replacement auth instead of jira-users group now includes allowing
> those in the 'Administrator, PMC, Committer, Contributor and Developer'
> ROLES in jira.
>
> Projects would you please assist in making this work - anyone that is
> not in any of those roles for your project; and needs access to be able to
> create issues and comment, please do add their jira id to one of the
> available roles. (Let us know if you need assistance in this area)
>
> This is a short term solution. For the medium to long term we are
> working on providing LDAP authentication for Jira and Confluence through
> Atlassian Crowd (likely).
>
> If any projects are still being affected, please notify us as you may
> be using another permissions scheme to the one altered. Notify us via 
> INFRA
> jira ticket or reply to this mail to infrastruct...@apache.org or
> join us on hipchat (https://www.hipchat.com/gIjVtYcNy)
>
> Any project seriously adversely impacted by our changes please do come
> talk to us and we'll see what we can work out.
>
> Thanks all for your patience and understanding.
>
> Gav... (ASF Infra)
> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com
>

>>>
>>>
>>> --
>>> Anshum Gupta
>>>
>>
>>
>


Re: Jira Spam - And changes made as a result.

2016-04-22 Thread Ishan Chattopadhyaya
Btw, how about whitelisting everyone who has commented (a non-spam comment)
at a Lucene/Solr issue before?

On Sat, Apr 23, 2016 at 6:13 AM, Ishan Chattopadhyaya <
ichattopadhy...@gmail.com> wrote:

> Anshum, please add me as well. Thanks.
>
>
> On Sat, Apr 23, 2016 at 6:01 AM, Anshum Gupta 
> wrote:
>
>> Hi Ryan,
>>
>> I've added you to the contributors group. You should be able to comment
>> on JIRAs now.
>>
>> On Thu, Apr 21, 2016 at 8:51 PM, Ryan Josal  wrote:
>>
>>> Woah, yeah, I have filed a few bugs as well as posted patches and
>>> comments.  Indeed I don't seem to be able to comment anymore.  Anyone
>>> want to add me (rjosal) to a role that can comment or create?
>>>
>>> Ryan
>>>
>>>
>>> On Thursday, April 21, 2016, David Smiley 
>>> wrote:
>>>
 Wow!  My reading of this is that the general public (i.e. not
 committers) won't be able to really do anything other than view JIRA issues
 unless we expressly add individuals to a specific project group?  :-(
  Clearly that sucks big time.  Is anyone reading this differently?
 Assuming this is true... at this point maybe there is nothing to do but
 wait until the inevitable requests come in for people to create/comment.
 Maybe send a message to the user lists?

 ~ David

 -- Forwarded message -
 From: Gav 
 Date: Fri, Apr 22, 2016 at 12:14 AM
 Subject: Jira Spam - And changes made as a result.
 To: infrastruct...@apache.org Infrastructure 


 Hi All,

 Apologies for notifying you after the fact.

 Earlier today (slowing down to a halt about 1/2 hr ago due to our
 changes) we had a
 big Spam attack directed at the ASF Jira instance.

 Many projects were affected, including:

 TM, ARROW, ACCUMULO, ABDERA, JSPWIKI, QPIDIT, LOGCXX, HAWQ, AMQ, ATLAS,
 AIRFLOW, ACE, APEXCORE, RANGER, and KYLIN.

 During the process we ended up banning 27 IP addresses, deleting well
 over 200 tickets, and removing about two dozen user accounts.

 The spammers were creating accounts using the normal system and going
 through the required captchas.

 In addition to the ban hammer and deletions, and to prevent more spam
 coming in, we changed the 'Default Permissions Scheme' so that anyone in
 the 'jira-users' group is no longer allowed to 'Create' tickets or to
 'Comment' on any tickets.

 Obviously that affects genuine users as well as the spammers, we know
 that.

 As a replacement for the jira-users group, authorization now allows
 those in the 'Administrator, PMC, Committer, Contributor and Developer'
 roles in Jira.

 Projects, would you please assist in making this work: for anyone who is
 not in any of those roles for your project and needs access to be able to
 create issues and comment, please do add their Jira id to one of the
 available roles. (Let us know if you need assistance in this area.)

 This is a short-term solution. For the medium to long term we are
 working on providing LDAP authentication for Jira and Confluence through
 Atlassian Crowd (likely).

 If any projects are still being affected, please notify us, as you may
 be using a permissions scheme other than the one altered. Notify us via an
 INFRA Jira ticket, reply to this mail at infrastruct...@apache.org, or join
 us on HipChat (https://www.hipchat.com/gIjVtYcNy).

 Any project seriously adversely impacted by our changes please do come
 talk to us and we'll see what we can work out.

 Thanks all for your patience and understanding.

 Gav... (ASF Infra)
 --
 Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
 LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
 http://www.solrenterprisesearchserver.com

>>>
>>
>>
>> --
>> Anshum Gupta
>>
>
>


Re: Jira Spam - And changes made as a result.

2016-04-22 Thread Ishan Chattopadhyaya
Anshum, please add me as well. Thanks.

On Sat, Apr 23, 2016 at 6:01 AM, Anshum Gupta 
wrote:

> Hi Ryan,
>
> I've added you to the contributors group. You should be able to comment on
> JIRAs now.
>
> On Thu, Apr 21, 2016 at 8:51 PM, Ryan Josal  wrote:
>
>> Woah, yeah, I have filed a few bugs as well as posted patches and
>> comments.  Indeed I don't seem to be able to comment anymore.  Anyone
>> want to add me (rjosal) to a role that can comment or create?
>>
>> Ryan
>>
>>
>> On Thursday, April 21, 2016, David Smiley 
>> wrote:
>>
>>> Wow!  My reading of this is that the general public (i.e. not
>>> committers) won't be able to really do anything other than view JIRA issues
>>> unless we expressly add individuals to a specific project group?  :-(
>>>  Clearly that sucks big time.  Is anyone reading this differently?
>>> Assuming this is true... at this point maybe there is nothing to do but
>>> wait until the inevitable requests come in for people to create/comment.
>>> Maybe send a message to the user lists?
>>>
>>> ~ David
>>>
>>> [forwarded message from Gav trimmed; quoted in full earlier in this thread]
>>
>
>
> --
> Anshum Gupta
>


Re: Jira Spam - And changes made as a result.

2016-04-22 Thread Anshum Gupta
Hi Ryan,

I've added you to the contributors group. You should be able to comment on
JIRAs now.

On Thu, Apr 21, 2016 at 8:51 PM, Ryan Josal  wrote:

> Woah, yeah, I have filed a few bugs as well as posted patches and
> comments.  Indeed I don't seem to be able to comment anymore.  Anyone
> want to add me (rjosal) to a role that can comment or create?
>
> Ryan
>
>
> On Thursday, April 21, 2016, David Smiley 
> wrote:
>
>> Wow!  My reading of this is that the general public (i.e. not committers)
>> won't be able to really do anything other than view JIRA issues unless we
>> expressly add individuals to a specific project group?  :-(  Clearly that
>> sucks big time.  Is anyone reading this differently?  Assuming this is
>> true... at this point maybe there is nothing to do but wait until the
>> inevitable requests come in for people to create/comment.  Maybe send a
>> message to the user lists?
>>
>> ~ David
>>
>> [forwarded message from Gav trimmed; quoted in full earlier in this thread]
>


-- 
Anshum Gupta


[JENKINS] Lucene-Solr-Tests-5.5-Java8 - Build # 3 - Still Failing

2016-04-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.5-Java8/3/

3 tests failed.
FAILED:  org.apache.solr.update.AutoCommitTest.testMaxTime

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([1CE1D579B8F796DC:8615A89B266D0AE0]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:754)
at 
org.apache.solr.update.AutoCommitTest.testMaxTime(AutoCommitTest.java:243)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: 

00


request was:q=id:529&qt=standard&start=0&rows=20&version=2.2
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:747)
... 40 more


FAILED:  
org.apache.solr.update.SoftAutoCommitTest.testSoftAndHardCommitMaxTimeDelete

Error Message:
Got a soft commit we weren't expecting

Stack Trace:

[jira] [Closed] (SOLR-8017) solr.PointType can't deal with coordination in format like (0.9504547, 1.0, 1.0890503)

2016-04-22 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley closed SOLR-8017.
--
Resolution: Won't Fix

Peter, this is as-designed.  PointType (and the other spatial field types) 
accepts data in one specific format, or sometimes one or two additional 
specific formats.  It is quite common to need to do some data manipulation 
(sometimes a lot, sometimes a little) to get data from whatever format it's in 
into Solr.  Adding a URP is one way to handle this; another is to deal with it 
before handing the data to Solr.  If you go the URP route, see 
{{RegexReplaceProcessorFactory}}, which should handle your case easily.
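As an illustration (not from the issue), a minimal Java sketch of the parse failure and of the kind of regex normalization a {{RegexReplaceProcessorFactory}} could be configured to perform; the specific pattern is an assumption:

```java
public class PointTypeParseDemo {
    public static void main(String[] args) {
        String raw = "(0.9504547, 1.0, 1.0890503)";

        // Splitting on commas alone leaves "(0.9504547", which
        // Double.parseDouble rejects -- the NumberFormatException
        // described in the issue.
        try {
            Double.parseDouble(raw.split(",")[0]);
        } catch (NumberFormatException expected) {
            System.out.println("parse failed: " + expected.getMessage());
        }

        // Stripping parentheses and whitespace first (the kind of pattern a
        // RegexReplaceProcessorFactory could apply before the value reaches
        // PointType) yields a value that splits and parses cleanly.
        String cleaned = raw.replaceAll("[()\\s]", "");
        System.out.println(cleaned); // 0.9504547,1.0,1.0890503
    }
}
```

The same `[()\s]` pattern could be set as the `pattern` parameter of a URP chain entry; treat the exact configuration as a sketch rather than a tested setup.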

> solr.PointType can't deal with coordination in format like (0.9504547, 1.0, 
> 1.0890503)
> --
>
> Key: SOLR-8017
> URL: https://issues.apache.org/jira/browse/SOLR-8017
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 5.2
>Reporter: wangshanshan
>Priority: Minor
>
> In JPG picture files there will be some fields, like media_white_point and 
> media_black_point, whose values are in a format like (0.9504547, 1.0, 1.0890503).
> But solr.PointType can't deal with the "(": it just splits by comma and lets 
> Double.parseDouble deal with a string like "(0.9504547".
> In this case, a NumberFormatException will be raised.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_72) - Build # 16564 - Failure!

2016-04-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16564/
Java: 32bit/jdk1.8.0_72 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.schema.TestManagedSchemaAPI.test

Error Message:
Error from server at http://127.0.0.1:43449/solr/testschemaapi_shard1_replica1: 
ERROR: [doc=2] unknown field 'myNewField1'

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:43449/solr/testschemaapi_shard1_replica1: ERROR: 
[doc=2] unknown field 'myNewField1'
at 
__randomizedtesting.SeedInfo.seed([8067326B28E5BBCE:8330DB18619D636]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:661)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1073)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:962)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:898)
at 
org.apache.solr.schema.TestManagedSchemaAPI.testAddFieldAndDocument(TestManagedSchemaAPI.java:86)
at 
org.apache.solr.schema.TestManagedSchemaAPI.test(TestManagedSchemaAPI.java:55)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[JENKINS] Lucene-Solr-Tests-5.5-Java7 - Build # 3 - Failure

2016-04-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.5-Java7/3/

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
ObjectTracker found 4 object(s) that were not released!!! [SolrCore, 
MockDirectoryWrapper, MockDirectoryWrapper, MDCAwareThreadPoolExecutor]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 4 object(s) that were not 
released!!! [SolrCore, MockDirectoryWrapper, MockDirectoryWrapper, 
MDCAwareThreadPoolExecutor]
at __randomizedtesting.SeedInfo.seed([EEB8485A3AFFE3E3]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:228)
at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 1) 
Thread[id=13602, name=searcherExecutor-4720-thread-1, state=WAITING, 
group=TGRP-TestLazyCores] at sun.misc.Unsafe.park(Native Method)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.core.TestLazyCores: 
   1) Thread[id=13602, name=searcherExecutor-4720-thread-1, state=WAITING, 
group=TGRP-TestLazyCores]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([EEB8485A3AFFE3E3]:0)


FAILED:  

[jira] [Updated] (SOLR-9028) fix bugs in (add sanity checks for) SSL clientAuth testing

2016-04-22 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-9028:
---
Attachment: SOLR-9028.patch


bq. Fails for me on OS X 10.11.4 ...

Thanks, Steve,

The failure you're seeing seems to jibe with what was reported in SOLR-3854, 
when the choice was made to explicitly disable all clientAuth testing on OS X. 
It would be nice to get to the bottom of that, and I have some theories (see 
the nocommits still in the patch), but I'm not going to stress out about it too 
much just yet.

Here's an updated patch that resolves most of the nocommits in 
TestMiniSolrCloudClusterSSL.

I still need to review and sanity-check one expected-failure case in that class, 
and I want to write another "test the test" class that _does_ rely on 
SolrTestCaseJ4's randomization logic to initialize an SSLTestConfig, but then 
spot-checks that the clients/servers created by 
SolrTestCaseJ4/MiniSolrCloudCluster match the expectations based on what's in 
SSLTestConfig.


> fix bugs in (add sanity checks for) SSL clientAuth testing
> --
>
> Key: SOLR-9028
> URL: https://issues.apache.org/jira/browse/SOLR-9028
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: SOLR-9028.patch, SOLR-9028.patch, os.x.failure.txt
>
>
> While looking into SOLR-8970 I realized there was a whole heap of problems 
> with how clientAuth was being handled in tests.  Notably: it wasn't actually 
> being used when the randomization selects it (apparently due to a copy/paste 
> mistake in SOLR-7166).  But there are a few other misc issues (improper usage 
> of sysprop overrides for tests, misuse of keystore/truststore in test 
> clients, etc.).
> I'm working up a patch to fix all of this, and to add some much-needed tests 
> that *explicitly* verify both SSL and clientAuth, including some "false 
> positive" verifications and some "test the test" checks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: VOTE: RC2 Release apache-solr-ref-guide-6.0.pdf

2016-04-22 Thread Jan Høydahl
Cassandra, what I meant was figures/images with a caption and a number;
then, in the text, you write “See Image 14”. However, the software
can choose to put Image 14 on the next physical page if it does not fit
well right after the current paragraph.

See an explanation from TeX here: 
https://en.wikibooks.org/wiki/LaTeX/Floats,_Figures_and_Captions

However, we are limited to whatever Confluence PDF export supports...

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> On 22 Apr 2016, at 13:41, Cassandra Targett  wrote:
> 
> Thanks all for your votes - the vote passed, and I'll start the
> publication process this morning.
> 
> In regard to the issues raised:
> 
> The image split across pages was done to solve a problem with images
> overlapping text. For context, please see the thread from the 5.3
> Guide (http://markmail.org/message/jnvg7avsbl4fwznv) and the PDF
> Export Changelist
> (https://cwiki.apache.org/confluence/display/solr/PDF+Export+Changelist)
> for August 2015.
> 
> Atlassian provides a rather minimal output tool in the form of the PDF
> exporter, and in recent releases has not oriented many Confluence
> features & fixes to our use case. For as long as we use Confluence,
> the lack of control over the output of the PDF will be a persistent
> problem. IMO, the word breaks at the ends of lines are a much more
> serious problem, and there is nothing we can do about that either.
> However, all is not lost: I have some ideas for solutions that I hope
> to be able to share soon.
> 
> bq. Perhaps the feature of numbered images is not so stupid after all,
> letting the software rearrange images as it see fit to avoid splitting
> or huge open white spaces at the bottom of pages.
> 
> Jan, I'm not sure I understand your suggestion here. The idea is that
> if the images were numbered they'd fit better on the page? I'm confused
> about how those relate to each other.
> 
> On Fri, Apr 22, 2016 at 4:18 AM, Jan Høydahl  wrote:
>> +1
>> 
>> Agree with Tomás’ comments. Checked the 5.3 ref guide as well, and the image
>> split across a page break exists there too, but by coincidence most
>> screenshots were in the middle of pages :) I think for 6.0 we got the worst
>> possible placement :) Perhaps the feature of numbered images is not so
>> stupid after all, letting the software rearrange images as it sees fit to
>> avoid splitting or huge open white spaces at the bottom of pages. But
>> Confluence probably does not support that?
>> 
>> --
>> Jan Høydahl, search solution architect
>> Cominvent AS - www.cominvent.com
>> 
>> On 22 Apr 2016, at 03:00, Tomás Fernández Löbbe  wrote:
>> 
>> +1
>> 
>> Not a blocker, but it looks like images are often broken across two pages
>> (this happens for example with many of the admin UI screenshots, but also
>> with smaller images like those in the Spatial Filters section). Is there a
>> way to prevent this in Confluence?
>> Also, we should try to avoid pasting extremely long examples; some example
>> outputs take ~3 pages.
>> 
>> On Thu, Apr 21, 2016 at 1:34 PM, Joel Bernstein  wrote:
>>> 
>>> +1
>>> 
>>> Joel Bernstein
>>> http://joelsolr.blogspot.com/
>>> 
>>> On Thu, Apr 21, 2016 at 3:53 PM, Cassandra Targett 
>>> wrote:
 
 Reminder to VOTE on this thread so we can get the Ref Guide released.
 
 Thanks,
 Cassandra
 
 On Mon, Apr 18, 2016 at 6:13 PM, Steve Rowe  wrote:
> +1
> 
> --
> Steve
> www.lucidworks.com
> 
>> On Apr 18, 2016, at 5:59 PM, Cassandra Targett 
>> wrote:
>> 
>> Please VOTE to release the Apache Solr Ref Guide for 6.0:
>> 
>> https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-6.0-RC2/
>> 
>> $ cat apache-solr-ref-guide-6.0.pdf.sha1
>> 9073530b89148ce3f641a42e38249bd1fbb25136
>> apache-solr-ref-guide-6.0.pdf
>> 
>> Here's my +1.
>> 
>> * Note, RC1 was skipped because there were a few other issues to be
>> fixed right after I'd committed it.
>> 
>> Thanks,
>> Cassandra
>> 
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>> 
> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org
 
>>> 
>> 
>> 
> 
> -
> To unsubscribe, e-mail: 

[jira] [Commented] (SOLR-8715) New Admin UI's Schema screen fails for some fields

2016-04-22 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15254796#comment-15254796
 ] 

Alexandre Rafalovitch commented on SOLR-8715:
-

You could. I just did it that way because it is used in the loop later as well, 
so I wanted to make sure I definitely tested the same content.

Oh, and also: this only affects fields that have content. So the text_rev field 
is also affected *if* any content is posted to it. Maybe the JIRA issue title 
will have to be changed to something more meaningful.

> New Admin UI's Schema screen fails for some fields
> --
>
> Key: SOLR-8715
> URL: https://issues.apache.org/jira/browse/SOLR-8715
> Project: Solr
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 5.5, 6.0
> Environment: mac, firefox
>Reporter: Alexandre Rafalovitch
>Assignee: Upayavira
>  Labels: admin-interface
> Attachments: Problem shown in the released 5.5 version.png
>
>
> In techproducts example, using new Admin UI and trying to load the Schema for 
> text field causes blank screen and the Javascript error in the developer 
> console:
> {noformat}
> Error: row.flags is undefined
> getFieldProperties@http://localhost:8983/solr/js/angular/controllers/schema.js:482:40
> $scope.refresh/http://localhost:8983/solr/js/angular/controllers/schema.js:76:38
> 
> {noformat}
> Tested with 5.5rc3



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Commented] (LUCENE-7248) Interrupting IndexWriter causing unhandled ClosedChannelException

2016-04-22 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15254753#comment-15254753
 ] 

Uwe Schindler commented on LUCENE-7248:
---

bq. I disagree that it has nothing to do with LockFactory.

Sorry, the problem will still exist if you interrupt IndexWriter while it is
writing files at the same time. You have to stop interrupting IndexWriter. The
problem is not in Lucene; it is in the software that sends interrupts to threads
used by Lucene. This is a no-go, sorry. If you cannot fix this, your only option
is to use RAFDirectory, but this slows down reading from the index.
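The JDK behavior behind this can be reproduced without Lucene at all: FileChannel is an InterruptibleChannel, so a pending thread interrupt closes the channel as a side effect of any I/O call, and every later operation on it fails. A stdlib-only sketch (the temp file and class name are illustrative; this is not Lucene code):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedByInterruptException;
import java.nio.channels.ClosedChannelException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class InterruptDemo {
    /** True iff a pending interrupt closed the channel and later I/O failed. */
    static boolean demo() throws IOException {
        Path tmp = Files.createTempFile("interrupt-demo", ".bin");
        boolean closedByInterrupt = false;
        boolean laterIoFails = false;
        FileChannel ch = FileChannel.open(tmp, StandardOpenOption.WRITE);
        try {
            Thread.currentThread().interrupt();   // set a pending interrupt
            try {
                ch.write(ByteBuffer.wrap(new byte[]{1}));
            } catch (ClosedByInterruptException e) {
                closedByInterrupt = !ch.isOpen(); // the JVM closed the channel
            } finally {
                Thread.interrupted();             // clear the flag again
            }
            try {
                ch.write(ByteBuffer.wrap(new byte[]{2}));
            } catch (ClosedChannelException e) {
                laterIoFails = true;              // the channel stays dead
            }
        } finally {
            ch.close();
            Files.deleteIfExists(tmp);
        }
        return closedByInterrupt && laterIoFails;
    }

    public static void main(String[] args) throws IOException {
        System.out.println("channel killed by interrupt: " + demo());
    }
}
```

RAFDirectory avoids this only because java.io.RandomAccessFile is not an interruptible channel; any FileChannel the interrupted thread touches is closed the same way.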

> Interrupting IndexWriter causing unhandled ClosedChannelException
> -
>
> Key: LUCENE-7248
> URL: https://issues.apache.org/jira/browse/LUCENE-7248
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/store
>Affects Versions: 5.3
>Reporter: Alexandre Philbert
>  Labels: exception, interrupt, lock, nio, release
>
> When interrupting the IndexWriter, sometimes an InterruptedException is 
> correctly handled but other times it isn't. When unhandled, the IndexWriter 
> 'closes' and any other operation throws AlreadyClosedException. Here is a 
> stack trace: 
> java.nio.channels.ClosedChannelException
> at sun.nio.ch.FileLockImpl.release(FileLockImpl.java:58)
> at java.nio.channels.FileLock.close(FileLock.java:309)
> at 
> org.apache.lucene.store.NativeFSLockFactory$NativeFSLock.close(NativeFSLockFactory.java:194)
> at org.apache.lucene.util.IOUtils.close(IOUtils.java:97)
> at org.apache.lucene.util.IOUtils.close(IOUtils.java:84)
> at 
> org.apache.lucene.index.IndexWriter.rollbackInternal(IndexWriter.java:2103)
> at org.apache.lucene.index.IndexWriter.tragicEvent(IndexWriter.java:4574)
> at 
> org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1487)
> at 
> com.google.gerrit.lucene.AutoCommitWriter.updateDocument(AutoCommitWriter.java:100)
> at 
> org.apache.lucene.index.TrackingIndexWriter.updateDocument(TrackingIndexWriter.java:55)
> at com.google.gerrit.lucene.SubIndex.replace(SubIndex.java:183)
> at 
> com.google.gerrit.lucene.LuceneChangeIndex.replace(LuceneChangeIndex.java:326)
> at 
> com.google.gerrit.server.index.ChangeIndexer$IndexTask.call(ChangeIndexer.java:243)
> at 
> com.google.gerrit.server.index.ChangeIndexer$IndexTask.call(ChangeIndexer.java:1)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
> at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
> at 
> com.google.common.util.concurrent.MoreExecutors$DirectExecutorService.execute(MoreExecutors.java:310)
> at 
> java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:132)
> at 
> com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:61)
> at 
> com.google.gerrit.server.index.ChangeIndexer.submit(ChangeIndexer.java:200)
> at 
> com.google.gerrit.server.index.ChangeIndexer.indexAsync(ChangeIndexer.java:133)
> at 
> com.google.gerrit.server.change.PostReviewers.addReviewers(PostReviewers.java:246)
> at 
> com.google.gerrit.server.change.PostReviewers.putAccount(PostReviewers.java:156)
> at 
> com.google.gerrit.server.change.PostReviewers.apply(PostReviewers.java:138)
> at 
> com.google.gerrit.sshd.commands.SetReviewersCommand.modifyOne(SetReviewersCommand.java:158)
> at 
> com.google.gerrit.sshd.commands.SetReviewersCommand.run(SetReviewersCommand.java:112)
> at com.google.gerrit.sshd.SshCommand$1.run(SshCommand.java:48)
> at com.google.gerrit.sshd.BaseCommand$TaskThunk.run(BaseCommand.java:442)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
> at com.google.gerrit.server.git.WorkQueue$Task.run(WorkQueue.java:377)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> [...]
> org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed
>   at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:719)
>   

[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 472 - Still Failing

2016-04-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/472/

No tests ran.

Build Log:
[...truncated 40517 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (14.9 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-7.0.0-src.tgz...
   [smoker] 28.6 MB in 0.02 sec (1181.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.tgz...
   [smoker] 62.9 MB in 0.05 sec (1184.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.zip...
   [smoker] 73.5 MB in 0.06 sec (1198.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-7.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 5995 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 5995 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 218 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.00 sec (119.2 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-7.0.0-src.tgz...
   [smoker] 37.7 MB in 0.90 sec (42.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.0.0.tgz...
   [smoker] 132.0 MB in 2.35 sec (56.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.0.0.zip...
   [smoker] 140.6 MB in 2.99 sec (47.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-7.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-7.0.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8
   [smoker] Creating Solr home directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8/example/techproducts/solr
   [smoker] 
   [smoker] Starting up Solr on port 8983 using command:
   [smoker] bin/solr start -p 8983 -s "example/techproducts/solr"
   [smoker] 
   [smoker] Waiting up to 30 seconds to see Solr running on port 8983 [|]  
 [/]   [-]   [\]   [|]   [/]   [-]   
[\]   [|]  

[jira] [Commented] (SOLR-8716) Upgrade to Apache Tika 1.12

2016-04-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15254692#comment-15254692
 ] 

ASF GitHub Bot commented on SOLR-8716:
--

Github user lewismc commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/31#discussion_r60803373
  
--- Diff: solr/NOTICE.txt ---
@@ -396,6 +394,33 @@ https://github.com/rjohnsondev/java-libpst
 JMatIO is a JAVA library to read/write/manipulate with Matlab binary 
MAT-files.
 http://www.sourceforge.net/projects/jmatio
 
+metadata-extractor is a straightforward Java library for reading metadata 
+from image files.
+https://github.com/drewnoakes/metadata-extractor
+
+Java MP4 Parser; A Java API to read, write and create MP4 container
+https://github.com/sannies/mp4parser
+
+Jackcess; is a pure Java library for reading from and writing to MS Access 
+databases
+http://jackcess.sourceforge.net/
+
+Jackcess Encrypt; an extension library for the Jackcess project which 
+implements support for some forms of Microsoft Access and Microsoft 
+Money encryption
+http://jackcessencrypt.sourceforge.net/
+
+ROME; is a Java framework for RSS and Atom feeds
+(https://github.com/rometools/rome)
+
+VorbisJava; Ogg and Vorbis Tools for Java
+Copyright 2012 Nick Burch
+https://github.com/Gagravarr/VorbisJava
+
+SQLite JSDC Driver; is a library for accessing and creating SQLite 
--- End diff --

Updated... thanks


> Upgrade to Apache Tika 1.12
> ---
>
> Key: SOLR-8716
> URL: https://issues.apache.org/jira/browse/SOLR-8716
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - Solr Cell (Tika extraction)
>Reporter: Lewis John McGibbney
>Assignee: Uwe Schindler
> Fix For: master
>
> Attachments: LUCENE-7041.patch
>
>
> We recently released Apache Tika 1.12. In order to use the fixes provided 
> within the Tika.translate API I propose to upgrade Tika from 1.7 --> 1.12 in 
> lucene/ivy-versions.properties.
> Patch coming up.






[GitHub] lucene-solr pull request: SOLR-8716 Upgrade to Apache Tika 1.12

2016-04-22 Thread lewismc
Github user lewismc commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/31#discussion_r60803373
  
--- Diff: solr/NOTICE.txt ---
@@ -396,6 +394,33 @@ https://github.com/rjohnsondev/java-libpst
 JMatIO is a JAVA library to read/write/manipulate with Matlab binary 
MAT-files.
 http://www.sourceforge.net/projects/jmatio
 
+metadata-extractor is a straightforward Java library for reading metadata 
+from image files.
+https://github.com/drewnoakes/metadata-extractor
+
+Java MP4 Parser; A Java API to read, write and create MP4 container
+https://github.com/sannies/mp4parser
+
+Jackcess; is a pure Java library for reading from and writing to MS Access 
+databases
+http://jackcess.sourceforge.net/
+
+Jackcess Encrypt; an extension library for the Jackcess project which 
+implements support for some forms of Microsoft Access and Microsoft 
+Money encryption
+http://jackcessencrypt.sourceforge.net/
+
+ROME; is a Java framework for RSS and Atom feeds
+(https://github.com/rometools/rome)
+
+VorbisJava; Ogg and Vorbis Tools for Java
+Copyright 2012 Nick Burch
+https://github.com/Gagravarr/VorbisJava
+
+SQLite JSDC Driver; is a library for accessing and creating SQLite 
--- End diff --

Updated... thanks


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[jira] [Created] (SOLR-9032) Alias creation fails in new UI

2016-04-22 Thread Upayavira (JIRA)
Upayavira created SOLR-9032:
---

 Summary: Alias creation fails in new UI
 Key: SOLR-9032
 URL: https://issues.apache.org/jira/browse/SOLR-9032
 Project: Solr
  Issue Type: Bug
  Components: UI
Affects Versions: 6.0
Reporter: Upayavira
Assignee: Upayavira
 Fix For: 6.0.1


Using the Collections UI to create an alias makes a call like this:

http://$HOST:8983/solr/admin/collections?_=1461358635047&action=CREATEALIAS&collections=%5Bobject+Object%5D&name=assets&wt=json

The collections param is effectively "[object Object]", which is clearly wrong;
it should be a comma-separated list of collections.
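For reference, the fix is for the UI to join the selected collection names with commas before URL-encoding them, instead of stringifying a JS object. A minimal sketch of the query string it should build (the collection names are made up; this is not the actual admin UI code):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.List;

public class CreateAliasUrl {
    /** Builds a CREATEALIAS query string from a list of collection names. */
    static String createAliasQuery(String alias, List<String> collections) {
        // The bug: the UI passed a JS object here, which stringifies to
        // "[object Object]". The correct value is a comma-separated list.
        String joined = String.join(",", collections);
        return "/solr/admin/collections?action=CREATEALIAS"
                + "&name=" + URLEncoder.encode(alias, StandardCharsets.UTF_8)
                + "&collections=" + URLEncoder.encode(joined, StandardCharsets.UTF_8)
                + "&wt=json";
    }

    public static void main(String[] args) {
        // Hypothetical collection names, for illustration only.
        System.out.println(createAliasQuery("assets",
                List.of("assets_2016_03", "assets_2016_04")));
    }
}
```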






[jira] [Comment Edited] (LUCENE-7248) Interrupting IndexWriter causing unhandled ClosedChannelException

2016-04-22 Thread Alexandre Philbert (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15254638#comment-15254638
 ] 

Alexandre Philbert edited comment on LUCENE-7248 at 4/22/16 8:58 PM:
-

I disagree that it has *nothing* to do with LockFactory. According to the stack
trace, it fails when trying to release a lock using NativeFSLockFactory. Using
the two others that I mentioned, it doesn't fail anymore, probably because
there aren't any locks (in the case of NoLockFactory) or they are handled
differently (in the case of SingleInstanceLockFactory).
Although it's true that it probably is risky. :/

EDIT: What about SimpleFSLockFactory? The only downside I saw was that it
doesn't clear the "write.lock" file when the application shuts down. Are there
any other downsides?


was (Author: pheelbert):
I disagree that it has *nothing* to do with LockFactory. According to the stack 
trace, it fails when trying to release a lock using NativeFSLockFactory. Using 
the two others that I mentioned it doesn't fail anymore.. Probably because 
there aren't any locks (in the case of NoLockFactory) or they are handled 
differently (in the case of SingleInstanceFactory).
Although it's true that it probably is risky. :/

> Interrupting IndexWriter causing unhandled ClosedChannelException
> -
>
> Key: LUCENE-7248
> URL: https://issues.apache.org/jira/browse/LUCENE-7248
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/store
>Affects Versions: 5.3
>Reporter: Alexandre Philbert
>  Labels: exception, interrupt, lock, nio, release
>
> When interrupting the IndexWriter, sometimes an InterruptedException is 
> correctly handled but other times it isn't. When unhandled, the IndexWriter 
> 'closes' and any other operation throws AlreadyClosedException. Here is a 
> stack trace: 
> java.nio.channels.ClosedChannelException
> at sun.nio.ch.FileLockImpl.release(FileLockImpl.java:58)
> at java.nio.channels.FileLock.close(FileLock.java:309)
> at 
> org.apache.lucene.store.NativeFSLockFactory$NativeFSLock.close(NativeFSLockFactory.java:194)
> at org.apache.lucene.util.IOUtils.close(IOUtils.java:97)
> at org.apache.lucene.util.IOUtils.close(IOUtils.java:84)
> at 
> org.apache.lucene.index.IndexWriter.rollbackInternal(IndexWriter.java:2103)
> at org.apache.lucene.index.IndexWriter.tragicEvent(IndexWriter.java:4574)
> at 
> org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1487)
> at 
> com.google.gerrit.lucene.AutoCommitWriter.updateDocument(AutoCommitWriter.java:100)
> at 
> org.apache.lucene.index.TrackingIndexWriter.updateDocument(TrackingIndexWriter.java:55)
> at com.google.gerrit.lucene.SubIndex.replace(SubIndex.java:183)
> at 
> com.google.gerrit.lucene.LuceneChangeIndex.replace(LuceneChangeIndex.java:326)
> at 
> com.google.gerrit.server.index.ChangeIndexer$IndexTask.call(ChangeIndexer.java:243)
> at 
> com.google.gerrit.server.index.ChangeIndexer$IndexTask.call(ChangeIndexer.java:1)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
> at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
> at 
> com.google.common.util.concurrent.MoreExecutors$DirectExecutorService.execute(MoreExecutors.java:310)
> at 
> java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:132)
> at 
> com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:61)
> at 
> com.google.gerrit.server.index.ChangeIndexer.submit(ChangeIndexer.java:200)
> at 
> com.google.gerrit.server.index.ChangeIndexer.indexAsync(ChangeIndexer.java:133)
> at 
> com.google.gerrit.server.change.PostReviewers.addReviewers(PostReviewers.java:246)
> at 
> com.google.gerrit.server.change.PostReviewers.putAccount(PostReviewers.java:156)
> at 
> com.google.gerrit.server.change.PostReviewers.apply(PostReviewers.java:138)
> at 
> com.google.gerrit.sshd.commands.SetReviewersCommand.modifyOne(SetReviewersCommand.java:158)
> at 
> com.google.gerrit.sshd.commands.SetReviewersCommand.run(SetReviewersCommand.java:112)
> at com.google.gerrit.sshd.SshCommand$1.run(SshCommand.java:48)
> at com.google.gerrit.sshd.BaseCommand$TaskThunk.run(BaseCommand.java:442)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> 

[jira] [Created] (SOLR-9031) add more builders for xmlparser

2016-04-22 Thread Mikhail Khludnev (JIRA)
Mikhail Khludnev created SOLR-9031:
--

 Summary: add more builders for xmlparser
 Key: SOLR-9031
 URL: https://issues.apache.org/jira/browse/SOLR-9031
 Project: Solr
  Issue Type: Improvement
  Components: query parsers
Reporter: Mikhail Khludnev


[xmlparser|https://cwiki.apache.org/confluence/display/solr/Other+Parsers#OtherParsers-XMLQueryParser]
 is a great step forward, but it's worth adding more query types to make it
more useful:
# {{parent}} and {{child}} block joins
# query-time {{join}}
# {{EDisMax}}: it's worth providing all of its cool functionality right in the
XML (the existing
[DisjunctionMaxQueryBuilder|http://lucene.apache.org/core/6_0_0/queryparser/org/apache/lucene/queryparser/xml/builders/DisjunctionMaxQueryBuilder.html]
 is too limited).
# {{field}}, to invoke field type analysis.






[jira] [Resolved] (LUCENE-7244) Geo3d test failure

2016-04-22 Thread Karl Wright (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright resolved LUCENE-7244.
-
   Resolution: Fixed
Fix Version/s: 6.x
   master

> Geo3d test failure
> --
>
> Key: LUCENE-7244
> URL: https://issues.apache.org/jira/browse/LUCENE-7244
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Affects Versions: master
>Reporter: Karl Wright
>Assignee: Karl Wright
> Fix For: master, 6.x
>
>
> Reproduce with:
> {code}
> ant test  -Dtestcase=TestGeo3DPoint -Dtests.method=testRandomMedium 
> -Dtests.seed=D108F235D165413A -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=mk -Dtests.timezone=Europe/Luxembourg -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
> {code}






[jira] [Commented] (LUCENE-7248) Interrupting IndexWriter causing unhandled ClosedChannelException

2016-04-22 Thread Alexandre Philbert (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15254638#comment-15254638
 ] 

Alexandre Philbert commented on LUCENE-7248:


I disagree that it has *nothing* to do with LockFactory. According to the stack
trace, it fails when trying to release a lock using NativeFSLockFactory. Using
the two others that I mentioned, it doesn't fail anymore, probably because
there aren't any locks (in the case of NoLockFactory) or they are handled
differently (in the case of SingleInstanceLockFactory).
Although it's true that it probably is risky. :/

> Interrupting IndexWriter causing unhandled ClosedChannelException
> -
>
> Key: LUCENE-7248
> URL: https://issues.apache.org/jira/browse/LUCENE-7248
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/store
>Affects Versions: 5.3
>Reporter: Alexandre Philbert
>  Labels: exception, interrupt, lock, nio, release
>
> When interrupting the IndexWriter, sometimes an InterruptedException is 
> correctly handled but other times it isn't. When unhandled, the IndexWriter 
> 'closes' and any other operation throws AlreadyClosedException. Here is a 
> stack trace: 
> java.nio.channels.ClosedChannelException
> at sun.nio.ch.FileLockImpl.release(FileLockImpl.java:58)
> at java.nio.channels.FileLock.close(FileLock.java:309)
> at 
> org.apache.lucene.store.NativeFSLockFactory$NativeFSLock.close(NativeFSLockFactory.java:194)
> at org.apache.lucene.util.IOUtils.close(IOUtils.java:97)
> at org.apache.lucene.util.IOUtils.close(IOUtils.java:84)
> at 
> org.apache.lucene.index.IndexWriter.rollbackInternal(IndexWriter.java:2103)
> at org.apache.lucene.index.IndexWriter.tragicEvent(IndexWriter.java:4574)
> at 
> org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1487)
> at 
> com.google.gerrit.lucene.AutoCommitWriter.updateDocument(AutoCommitWriter.java:100)
> at 
> org.apache.lucene.index.TrackingIndexWriter.updateDocument(TrackingIndexWriter.java:55)
> at com.google.gerrit.lucene.SubIndex.replace(SubIndex.java:183)
> at 
> com.google.gerrit.lucene.LuceneChangeIndex.replace(LuceneChangeIndex.java:326)
> at 
> com.google.gerrit.server.index.ChangeIndexer$IndexTask.call(ChangeIndexer.java:243)
> at 
> com.google.gerrit.server.index.ChangeIndexer$IndexTask.call(ChangeIndexer.java:1)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
> at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
> at 
> com.google.common.util.concurrent.MoreExecutors$DirectExecutorService.execute(MoreExecutors.java:310)
> at 
> java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:132)
> at 
> com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:61)
> at 
> com.google.gerrit.server.index.ChangeIndexer.submit(ChangeIndexer.java:200)
> at 
> com.google.gerrit.server.index.ChangeIndexer.indexAsync(ChangeIndexer.java:133)
> at 
> com.google.gerrit.server.change.PostReviewers.addReviewers(PostReviewers.java:246)
> at 
> com.google.gerrit.server.change.PostReviewers.putAccount(PostReviewers.java:156)
> at 
> com.google.gerrit.server.change.PostReviewers.apply(PostReviewers.java:138)
> at 
> com.google.gerrit.sshd.commands.SetReviewersCommand.modifyOne(SetReviewersCommand.java:158)
> at 
> com.google.gerrit.sshd.commands.SetReviewersCommand.run(SetReviewersCommand.java:112)
> at com.google.gerrit.sshd.SshCommand$1.run(SshCommand.java:48)
> at com.google.gerrit.sshd.BaseCommand$TaskThunk.run(BaseCommand.java:442)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
> at com.google.gerrit.server.git.WorkQueue$Task.run(WorkQueue.java:377)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> [...]
> org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed
>   at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:719)
>   at 

[jira] [Commented] (LUCENE-7244) Geo3d test failure

2016-04-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15254627#comment-15254627
 ] 

ASF subversion and git services commented on LUCENE-7244:
-

Commit 38ebd906e830e793d7df364163f0baab049ffa47 in lucene-solr's branch 
refs/heads/master from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=38ebd90 ]

LUCENE-7244: Complain if the holes are outside the polygon.


> Geo3d test failure
> --
>
> Key: LUCENE-7244
> URL: https://issues.apache.org/jira/browse/LUCENE-7244
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Affects Versions: master
>Reporter: Karl Wright
>Assignee: Karl Wright
>
> Reproduce with:
> {code}
> ant test  -Dtestcase=TestGeo3DPoint -Dtests.method=testRandomMedium 
> -Dtests.seed=D108F235D165413A -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=mk -Dtests.timezone=Europe/Luxembourg -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
> {code}






[jira] [Commented] (LUCENE-7244) Geo3d test failure

2016-04-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15254618#comment-15254618
 ] 

ASF subversion and git services commented on LUCENE-7244:
-

Commit 38c0915572333f1f77efb43028fe91927df8464d in lucene-solr's branch 
refs/heads/branch_6x from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=38c0915 ]

LUCENE-7244: Complain if the holes are outside the polygon.


> Geo3d test failure
> --
>
> Key: LUCENE-7244
> URL: https://issues.apache.org/jira/browse/LUCENE-7244
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Affects Versions: master
>Reporter: Karl Wright
>Assignee: Karl Wright
>
> Reproduce with:
> {code}
> ant test  -Dtestcase=TestGeo3DPoint -Dtests.method=testRandomMedium 
> -Dtests.seed=D108F235D165413A -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=mk -Dtests.timezone=Europe/Luxembourg -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
> {code}






[jira] [Commented] (LUCENE-7248) Interrupting IndexWriter causing unhandled ClosedChannelException

2016-04-22 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15254611#comment-15254611
 ] 

Uwe Schindler commented on LUCENE-7248:
---

bq. I tried switching FSDirectory implementations and LockFactory and it seems 
that switching to other LockFactory implementations "fixes" the issue

This whole thing really has nothing to do with LockFactory. Changing it is just
risky and helps nothing.

bq. Are there any terrible things that can happen using 
SingleInstanceLockFactory or NoLockFactory?

Your index breaks and becomes unusable if you accidentally open two IndexWriters
at the same time.
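The protection being given up can be illustrated with plain java.nio file locks, which is what NativeFSLockFactory builds on (a stdlib-only sketch, not Lucene's actual implementation): while one channel holds the exclusive lock, a second attempt to take it is rejected.

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class WriteLockDemo {
    /** True iff a second attempt to take the exclusive lock is rejected. */
    static boolean secondWriterBlocked(Path lockFile) throws IOException {
        try (FileChannel first = FileChannel.open(lockFile,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE);
             FileLock held = first.tryLock()) {
            // Simulate a second "IndexWriter" trying to take write.lock
            // while the first one still holds it.
            try (FileChannel second = FileChannel.open(lockFile,
                    StandardOpenOption.WRITE)) {
                FileLock stolen = second.tryLock(); // null or exception => blocked
                if (stolen != null) {
                    stolen.release();
                    return false;                   // the lock was not enforced
                }
                return true;
            } catch (OverlappingFileLockException sameJvm) {
                return true;                        // same-JVM overlap rejected eagerly
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path lock = Files.createTempFile("write", ".lock");
        System.out.println("second writer blocked: " + secondWriterBlocked(lock));
        Files.deleteIfExists(lock);
    }
}
```

With NoLockFactory no such check exists, so two IndexWriters can both believe they own the index.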

> Interrupting IndexWriter causing unhandled ClosedChannelException
> -
>
> Key: LUCENE-7248
> URL: https://issues.apache.org/jira/browse/LUCENE-7248
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/store
>Affects Versions: 5.3
>Reporter: Alexandre Philbert
>  Labels: exception, interrupt, lock, nio, release
>
> When interrupting the IndexWriter, sometimes an InterruptedException is 
> correctly handled but other times it isn't. When unhandled, the IndexWriter 
> 'closes' and any other operation throws AlreadyClosedException. Here is a 
> stack trace: 
> java.nio.channels.ClosedChannelException
> at sun.nio.ch.FileLockImpl.release(FileLockImpl.java:58)
> at java.nio.channels.FileLock.close(FileLock.java:309)
> at 
> org.apache.lucene.store.NativeFSLockFactory$NativeFSLock.close(NativeFSLockFactory.java:194)
> at org.apache.lucene.util.IOUtils.close(IOUtils.java:97)
> at org.apache.lucene.util.IOUtils.close(IOUtils.java:84)
> at 
> org.apache.lucene.index.IndexWriter.rollbackInternal(IndexWriter.java:2103)
> at org.apache.lucene.index.IndexWriter.tragicEvent(IndexWriter.java:4574)
> at 
> org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1487)
> at 
> com.google.gerrit.lucene.AutoCommitWriter.updateDocument(AutoCommitWriter.java:100)
> at 
> org.apache.lucene.index.TrackingIndexWriter.updateDocument(TrackingIndexWriter.java:55)
> at com.google.gerrit.lucene.SubIndex.replace(SubIndex.java:183)
> at 
> com.google.gerrit.lucene.LuceneChangeIndex.replace(LuceneChangeIndex.java:326)
> at 
> com.google.gerrit.server.index.ChangeIndexer$IndexTask.call(ChangeIndexer.java:243)
> at 
> com.google.gerrit.server.index.ChangeIndexer$IndexTask.call(ChangeIndexer.java:1)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
> at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
> at 
> com.google.common.util.concurrent.MoreExecutors$DirectExecutorService.execute(MoreExecutors.java:310)
> at 
> java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:132)
> at 
> com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:61)
> at 
> com.google.gerrit.server.index.ChangeIndexer.submit(ChangeIndexer.java:200)
> at 
> com.google.gerrit.server.index.ChangeIndexer.indexAsync(ChangeIndexer.java:133)
> at 
> com.google.gerrit.server.change.PostReviewers.addReviewers(PostReviewers.java:246)
> at 
> com.google.gerrit.server.change.PostReviewers.putAccount(PostReviewers.java:156)
> at 
> com.google.gerrit.server.change.PostReviewers.apply(PostReviewers.java:138)
> at 
> com.google.gerrit.sshd.commands.SetReviewersCommand.modifyOne(SetReviewersCommand.java:158)
> at 
> com.google.gerrit.sshd.commands.SetReviewersCommand.run(SetReviewersCommand.java:112)
> at com.google.gerrit.sshd.SshCommand$1.run(SshCommand.java:48)
> at com.google.gerrit.sshd.BaseCommand$TaskThunk.run(BaseCommand.java:442)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
> at com.google.gerrit.server.git.WorkQueue$Task.run(WorkQueue.java:377)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> [...]
> org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed
>   at 

[jira] [Commented] (SOLR-8716) Upgrade to Apache Tika 1.12

2016-04-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15254601#comment-15254601
 ] 

ASF GitHub Bot commented on SOLR-8716:
--

Github user uschindler commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/31#discussion_r60798688
  
--- Diff: solr/NOTICE.txt ---
@@ -396,6 +394,33 @@ https://github.com/rjohnsondev/java-libpst
 JMatIO is a JAVA library to read/write/manipulate with Matlab binary 
MAT-files.
 http://www.sourceforge.net/projects/jmatio
 
+metadata-extractor is a straightforward Java library for reading metadata 
+from image files.
+https://github.com/drewnoakes/metadata-extractor
+
+Java MP4 Parser; A Java API to read, write and create MP4 container
+https://github.com/sannies/mp4parser
+
+Jackcess; is a pure Java library for reading from and writing to MS Access 
+databases
+http://jackcess.sourceforge.net/
+
+Jackcess Encrypt; an extension library for the Jackcess project which 
+implements support for some forms of Microsoft Access and Microsoft 
+Money encryption
+http://jackcessencrypt.sourceforge.net/
+
+ROME; is a Java framework for RSS and Atom feeds
+(https://github.com/rometools/rome)
+
+VorbisJava; Ogg and Vorbis Tools for Java
+Copyright 2012 Nick Burch
+https://github.com/Gagravarr/VorbisJava
+
+SQLite JSDC Driver; is a library for accessing and creating SQLite 
--- End diff --

JSDC -> JDBC


> Upgrade to Apache Tika 1.12
> ---
>
> Key: SOLR-8716
> URL: https://issues.apache.org/jira/browse/SOLR-8716
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - Solr Cell (Tika extraction)
>Reporter: Lewis John McGibbney
>Assignee: Uwe Schindler
> Fix For: master
>
> Attachments: LUCENE-7041.patch
>
>
> We recently released Apache Tika 1.12. In order to use the fixes provided 
> within the Tika.translate API I propose to upgrade Tika from 1.7 --> 1.12 in 
> lucene/ivy-versions.properties.
> Patch coming up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request: SOLR-8716 Upgrade to Apache Tika 1.12

2016-04-22 Thread uschindler
Github user uschindler commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/31#discussion_r60798688
  
--- Diff: solr/NOTICE.txt ---
@@ -396,6 +394,33 @@ https://github.com/rjohnsondev/java-libpst
 JMatIO is a JAVA library to read/write/manipulate with Matlab binary 
MAT-files.
 http://www.sourceforge.net/projects/jmatio
 
+metadata-extractor is a straightforward Java library for reading metadata 
+from image files.
+https://github.com/drewnoakes/metadata-extractor
+
+Java MP4 Parser; A Java API to read, write and create MP4 container
+https://github.com/sannies/mp4parser
+
+Jackcess; is a pure Java library for reading from and writing to MS Access 
+databases
+http://jackcess.sourceforge.net/
+
+Jackcess Encrypt; an extension library for the Jackcess project which 
+implements support for some forms of Microsoft Access and Microsoft 
+Money encryption
+http://jackcessencrypt.sourceforge.net/
+
+ROME; is a Java framework for RSS and Atom feeds
+(https://github.com/rometools/rome)
+
+VorbisJava; Ogg and Vorbis Tools for Java
+Copyright 2012 Nick Burch
+https://github.com/Gagravarr/VorbisJava
+
+SQLite JSDC Driver; is a library for accessing and creating SQLite 
--- End diff --

JSDC -> JDBC


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[jira] [Commented] (SOLR-9027) Add GraphTermsQuery to limit traversal on high frequency nodes

2016-04-22 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15254579#comment-15254579
 ] 

Joel Bernstein commented on SOLR-9027:
--

Actually, more tests have shown that I'm not collecting the docFreq properly. 
I'll need to take the same approach as CommonTermsQuery. More work to do here.

> Add GraphTermsQuery to limit traversal on high frequency nodes
> --
>
> Key: SOLR-9027
> URL: https://issues.apache.org/jira/browse/SOLR-9027
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-9027.patch, SOLR-9027.patch
>
>
> The gatherNodes() Streaming Expression is currently using a basic disjunction 
> query to perform the traversals. This ticket is to create a specific 
> GraphTermsQuery for performing the traversals. 
> The GraphTermsQuery will be based off of the TermsQuery, but will also 
> include an option for a docFreq cutoff. Terms that are above the docFreq 
> cutoff will not be included in the query. This will help users do a more 
> precise and efficient traversal.
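
The docFreq cutoff described above can be sketched without Lucene. In this toy 
model (the method name prune and the in-memory docFreq map are my own; the 
real GraphTermsQuery would read document frequencies from the index, the way 
CommonTermsQuery does), terms above the cutoff are dropped before the 
disjunction is built:

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

public class DocFreqCutoffSketch {

    /**
     * Keep only traversal terms whose document frequency is at or below the
     * cutoff; very frequent "hub" nodes are excluded from the disjunction.
     */
    static List<String> prune(Map<String, Integer> docFreq,
                              Set<String> terms, int maxDocFreq) {
        return terms.stream()
                .filter(t -> docFreq.getOrDefault(t, 0) <= maxDocFreq)
                .sorted()
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Map<String, Integer> df = Map.of("rare", 3, "common", 10_000, "mid", 42);
        System.out.println(prune(df, Set.of("rare", "common", "mid"), 100));
        // prints [mid, rare] -- "common" exceeds the cutoff and is dropped
    }
}
```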






[JENKINS] Lucene-Solr-Tests-master - Build # 1099 - Failure

2016-04-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1099/

2 tests failed.
FAILED:  
org.apache.solr.update.SoftAutoCommitTest.testSoftAndHardCommitMaxTimeRapidAdds

Error Message:
2: soft wasn't fast enough

Stack Trace:
java.lang.AssertionError: 2: soft wasn't fast enough
at 
__randomizedtesting.SeedInfo.seed([11ED354FD381E3CF:4DF89B763803A2B7]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.update.SoftAutoCommitTest.testSoftAndHardCommitMaxTimeRapidAdds(SoftAutoCommitTest.java:322)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  
org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdateWithExplicitRefresh

Error Message:
Could not find collection : c1

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : c1
at 

[jira] [Comment Edited] (LUCENE-7248) Interrupting IndexWriter causing unhandled ClosedChannelException

2016-04-22 Thread Alexandre Philbert (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15254508#comment-15254508
 ] 

Alexandre Philbert edited comment on LUCENE-7248 at 4/22/16 8:13 PM:
-

Okay thanks for the detailed responses. I'll look into Scott's 'workaround' or 
if RAFDirectory solves the problem.

EDIT: I tried switching FSDirectory implementations and LockFactory and it 
seems that switching to other LockFactory implementations "fixes" the issue... 
I'm currently testing locally so it's not confirmed yet. Are there any terrible 
things that can happen using SingleInstanceLockFactory or NoLockFactory?


was (Author: pheelbert):
Okay thanks for the detailed responses. I'll look into Scott's 'workaround' or 
if RAFDirectory solves the problem.

> Interrupting IndexWriter causing unhandled ClosedChannelException
> -
>
> Key: LUCENE-7248
> URL: https://issues.apache.org/jira/browse/LUCENE-7248
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/store
>Affects Versions: 5.3
>Reporter: Alexandre Philbert
>  Labels: exception, interrupt, lock, nio, release
>
> When interrupting the IndexWriter, sometimes an InterruptedException is 
> correctly handled but other times it isn't. When unhandled, the IndexWriter 
> 'closes' and any other operation throws AlreadyClosedException. Here is a 
> stack trace: 
> java.nio.channels.ClosedChannelException
> at sun.nio.ch.FileLockImpl.release(FileLockImpl.java:58)
> at java.nio.channels.FileLock.close(FileLock.java:309)
> at 
> org.apache.lucene.store.NativeFSLockFactory$NativeFSLock.close(NativeFSLockFactory.java:194)
> at org.apache.lucene.util.IOUtils.close(IOUtils.java:97)
> at org.apache.lucene.util.IOUtils.close(IOUtils.java:84)
> at 
> org.apache.lucene.index.IndexWriter.rollbackInternal(IndexWriter.java:2103)
> at org.apache.lucene.index.IndexWriter.tragicEvent(IndexWriter.java:4574)
> at 
> org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1487)
> at 
> com.google.gerrit.lucene.AutoCommitWriter.updateDocument(AutoCommitWriter.java:100)
> at 
> org.apache.lucene.index.TrackingIndexWriter.updateDocument(TrackingIndexWriter.java:55)
> at com.google.gerrit.lucene.SubIndex.replace(SubIndex.java:183)
> at 
> com.google.gerrit.lucene.LuceneChangeIndex.replace(LuceneChangeIndex.java:326)
> at 
> com.google.gerrit.server.index.ChangeIndexer$IndexTask.call(ChangeIndexer.java:243)
> at 
> com.google.gerrit.server.index.ChangeIndexer$IndexTask.call(ChangeIndexer.java:1)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
> at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
> at 
> com.google.common.util.concurrent.MoreExecutors$DirectExecutorService.execute(MoreExecutors.java:310)
> at 
> java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:132)
> at 
> com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:61)
> at 
> com.google.gerrit.server.index.ChangeIndexer.submit(ChangeIndexer.java:200)
> at 
> com.google.gerrit.server.index.ChangeIndexer.indexAsync(ChangeIndexer.java:133)
> at 
> com.google.gerrit.server.change.PostReviewers.addReviewers(PostReviewers.java:246)
> at 
> com.google.gerrit.server.change.PostReviewers.putAccount(PostReviewers.java:156)
> at 
> com.google.gerrit.server.change.PostReviewers.apply(PostReviewers.java:138)
> at 
> com.google.gerrit.sshd.commands.SetReviewersCommand.modifyOne(SetReviewersCommand.java:158)
> at 
> com.google.gerrit.sshd.commands.SetReviewersCommand.run(SetReviewersCommand.java:112)
> at com.google.gerrit.sshd.SshCommand$1.run(SshCommand.java:48)
> at com.google.gerrit.sshd.BaseCommand$TaskThunk.run(BaseCommand.java:442)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
> at com.google.gerrit.server.git.WorkQueue$Task.run(WorkQueue.java:377)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at 

[jira] [Commented] (SOLR-8716) Upgrade to Apache Tika 1.12

2016-04-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15254567#comment-15254567
 ] 

ASF GitHub Bot commented on SOLR-8716:
--

Github user lewismc commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/31#discussion_r60795874
  
--- Diff: solr/NOTICE.txt ---
@@ -396,6 +394,33 @@ https://github.com/rjohnsondev/java-libpst
 JMatIO is a JAVA library to read/write/manipulate with Matlab binary 
MAT-files.
 http://www.sourceforge.net/projects/jmatio
 
+metadata-extractor is a straightforward Java library for reading metadata 
+from image files.
+https://github.com/drewnoakes/metadata-extractor
+
+Java MP4 Parser; A Java API to read, write and create MP4 container
+https://github.com/sannies/mp4parser
+
+Jackcess; is a pure Java library for reading from and writing to MS Access 
+databases
+http://jackcess.sourceforge.net/
+
+Jackcess Encrypt; an extension library for the Jackcess project which 
+implements support for some forms of Microsoft Access and Microsoft 
+Money encryption
+http://jackcessencrypt.sourceforge.net/
+
+ROME; is a Java framework for RSS and Atom feeds
+(https://github.com/rometools/rome)
+
+VorbisJava; Ogg and Vorbis Tools for Java
+Copyright 2012 Nick Burch
+https://github.com/Gagravarr/VorbisJava
+
+SQLite JSDC Driver; is a library for accessing and creating SQLite 
--- End diff --

The last entry? SQLite

On Friday, April 22, 2016, Uwe Schindler  wrote:

> In solr/NOTICE.txt
> :
>
> > +databases
> > +http://jackcess.sourceforge.net/
> > +
> > +Jackcess Encrypt; an extension library for the Jackcess project which
> > +implements support for some forms of Microsoft Access and Microsoft
> > +Money encryption
> > +http://jackcessencrypt.sourceforge.net/
> > +
> > +ROME; is a Java framework for RSS and Atom feeds
> > +(https://github.com/rometools/rome)
> > +
> > +VorbisJava; Ogg and Vorbis Tools for Java
> > +Copyright 2012 Nick Burch
> > +https://github.com/Gagravarr/VorbisJava
> > +
> > +SQLite JSDC Driver; is a library for accessing and creating SQLite
>
> This is a typo, I think.
>
> —
> You are receiving this because you authored the thread.
> Reply to this email directly or view it on GitHub
> 

>


-- 
*Lewis*



> Upgrade to Apache Tika 1.12
> ---
>
> Key: SOLR-8716
> URL: https://issues.apache.org/jira/browse/SOLR-8716
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - Solr Cell (Tika extraction)
>Reporter: Lewis John McGibbney
>Assignee: Uwe Schindler
> Fix For: master
>
> Attachments: LUCENE-7041.patch
>
>
> We recently released Apache Tika 1.12. In order to use the fixes provided 
> within the Tika.translate API I propose to upgrade Tika from 1.7 --> 1.12 in 
> lucene/ivy-versions.properties.
> Patch coming up.






[GitHub] lucene-solr pull request: SOLR-8716 Upgrade to Apache Tika 1.12

2016-04-22 Thread lewismc
Github user lewismc commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/31#discussion_r60795874
  
--- Diff: solr/NOTICE.txt ---
@@ -396,6 +394,33 @@ https://github.com/rjohnsondev/java-libpst
 JMatIO is a JAVA library to read/write/manipulate with Matlab binary 
MAT-files.
 http://www.sourceforge.net/projects/jmatio
 
+metadata-extractor is a straightforward Java library for reading metadata 
+from image files.
+https://github.com/drewnoakes/metadata-extractor
+
+Java MP4 Parser; A Java API to read, write and create MP4 container
+https://github.com/sannies/mp4parser
+
+Jackcess; is a pure Java library for reading from and writing to MS Access 
+databases
+http://jackcess.sourceforge.net/
+
+Jackcess Encrypt; an extension library for the Jackcess project which 
+implements support for some forms of Microsoft Access and Microsoft 
+Money encryption
+http://jackcessencrypt.sourceforge.net/
+
+ROME; is a Java framework for RSS and Atom feeds
+(https://github.com/rometools/rome)
+
+VorbisJava; Ogg and Vorbis Tools for Java
+Copyright 2012 Nick Burch
+https://github.com/Gagravarr/VorbisJava
+
+SQLite JSDC Driver; is a library for accessing and creating SQLite 
--- End diff --

The last entry? SQLite

On Friday, April 22, 2016, Uwe Schindler  wrote:

> In solr/NOTICE.txt
> :
>
> > +databases
> > +http://jackcess.sourceforge.net/
> > +
> > +Jackcess Encrypt; an extension library for the Jackcess project which
> > +implements support for some forms of Microsoft Access and Microsoft
> > +Money encryption
> > +http://jackcessencrypt.sourceforge.net/
> > +
> > +ROME; is a Java framework for RSS and Atom feeds
> > +(https://github.com/rometools/rome)
> > +
> > +VorbisJava; Ogg and Vorbis Tools for Java
> > +Copyright 2012 Nick Burch
> > +https://github.com/Gagravarr/VorbisJava
> > +
> > +SQLite JSDC Driver; is a library for accessing and creating SQLite
>
> This is a typo, I think.
>
> —
> You are receiving this because you authored the thread.
> Reply to this email directly or view it on GitHub
> 

>


-- 
*Lewis*






[jira] [Commented] (SOLR-8716) Upgrade to Apache Tika 1.12

2016-04-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15254548#comment-15254548
 ] 

ASF GitHub Bot commented on SOLR-8716:
--

Github user uschindler commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/31#discussion_r60794497
  
--- Diff: solr/NOTICE.txt ---
@@ -396,6 +394,33 @@ https://github.com/rjohnsondev/java-libpst
 JMatIO is a JAVA library to read/write/manipulate with Matlab binary 
MAT-files.
 http://www.sourceforge.net/projects/jmatio
 
+metadata-extractor is a straightforward Java library for reading metadata 
+from image files.
+https://github.com/drewnoakes/metadata-extractor
+
+Java MP4 Parser; A Java API to read, write and create MP4 container
+https://github.com/sannies/mp4parser
+
+Jackcess; is a pure Java library for reading from and writing to MS Access 
+databases
+http://jackcess.sourceforge.net/
+
+Jackcess Encrypt; an extension library for the Jackcess project which 
+implements support for some forms of Microsoft Access and Microsoft 
+Money encryption
+http://jackcessencrypt.sourceforge.net/
+
+ROME; is a Java framework for RSS and Atom feeds
+(https://github.com/rometools/rome)
+
+VorbisJava; Ogg and Vorbis Tools for Java
+Copyright 2012 Nick Burch
+https://github.com/Gagravarr/VorbisJava
+
+SQLite JSDC Driver; is a library for accessing and creating SQLite 
--- End diff --

This is a typo, I think.


> Upgrade to Apache Tika 1.12
> ---
>
> Key: SOLR-8716
> URL: https://issues.apache.org/jira/browse/SOLR-8716
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - Solr Cell (Tika extraction)
>Reporter: Lewis John McGibbney
>Assignee: Uwe Schindler
> Fix For: master
>
> Attachments: LUCENE-7041.patch
>
>
> We recently released Apache Tika 1.12. In order to use the fixes provided 
> within the Tika.translate API I propose to upgrade Tika from 1.7 --> 1.12 in 
> lucene/ivy-versions.properties.
> Patch coming up.






[GitHub] lucene-solr pull request: SOLR-8716 Upgrade to Apache Tika 1.12

2016-04-22 Thread uschindler
Github user uschindler commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/31#discussion_r60794497
  
--- Diff: solr/NOTICE.txt ---
@@ -396,6 +394,33 @@ https://github.com/rjohnsondev/java-libpst
 JMatIO is a JAVA library to read/write/manipulate with Matlab binary 
MAT-files.
 http://www.sourceforge.net/projects/jmatio
 
+metadata-extractor is a straightforward Java library for reading metadata 
+from image files.
+https://github.com/drewnoakes/metadata-extractor
+
+Java MP4 Parser; A Java API to read, write and create MP4 container
+https://github.com/sannies/mp4parser
+
+Jackcess; is a pure Java library for reading from and writing to MS Access 
+databases
+http://jackcess.sourceforge.net/
+
+Jackcess Encrypt; an extension library for the Jackcess project which 
+implements support for some forms of Microsoft Access and Microsoft 
+Money encryption
+http://jackcessencrypt.sourceforge.net/
+
+ROME; is a Java framework for RSS and Atom feeds
+(https://github.com/rometools/rome)
+
+VorbisJava; Ogg and Vorbis Tools for Java
+Copyright 2012 Nick Burch
+https://github.com/Gagravarr/VorbisJava
+
+SQLite JSDC Driver; is a library for accessing and creating SQLite 
--- End diff --

This is a typo, I think.





[jira] [Comment Edited] (SOLR-8659) Improve Solr JDBC Driver to support more SQL Clients

2016-04-22 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15195201#comment-15195201
 ] 

Kevin Risden edited comment on SOLR-8659 at 4/22/16 7:55 PM:
-

A few more languages/clients that could be useful to test at some point:
* R - SOLR-9019 & SOLR-9021
** RJDBC - https://cran.r-project.org/web/packages/RJDBC/index.html
* Python/Jython - SOLR-9011 & SOLR-9013 & SOLR-9018
** https://wiki.python.org/jython/DatabaseExamples
** https://pypi.python.org/pypi/JayDeBeApi/
** http://www.jython.org/jythonbook/en/1.0/DatabasesAndJython.html


was (Author: risdenk):
A few more languages/clients that could be useful to test at some point:
* R
** RJDBC - https://cran.r-project.org/web/packages/RJDBC/index.html
* Python/Jython
** https://wiki.python.org/jython/DatabaseExamples
** https://pypi.python.org/pypi/JayDeBeApi/
** http://www.jython.org/jythonbook/en/1.0/DatabasesAndJython.html

> Improve Solr JDBC Driver to support more SQL Clients
> 
>
> Key: SOLR-8659
> URL: https://issues.apache.org/jira/browse/SOLR-8659
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: master
>Reporter: Kevin Risden
> Attachments: 
> iODBC_Demo__Unicode__-_Connected_to__remotesolr__and_Attach_screenshot_-_ASF_JIRA.png
>
>
> SOLR-8502 was a great start to getting JDBC support to be more complete. This 
> ticket is to track items that could further improve the JDBC support for more 
> SQL clients and their features. A few SQL clients are:
> * DbVisualizer
> * SQuirrel SQL
> * Apache Zeppelin (incubating)
> * Spark
> * Python & Jython
> * IntelliJ IDEA Database Tool
> * ODBC clients like Excel/Tableau






[jira] [Resolved] (SOLR-9025) add SolrCoreTest.testImplicitPlugins test

2016-04-22 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved SOLR-9025.
---
   Resolution: Fixed
Fix Version/s: 5.6
   6.1
   master

> add SolrCoreTest.testImplicitPlugins test
> -
>
> Key: SOLR-9025
> URL: https://issues.apache.org/jira/browse/SOLR-9025
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: master, 6.1, 5.6
>
> Attachments: SOLR-9025.patch
>
>
> Various places in the code assume that certain implicit handlers are 
> configured on certain paths (e.g. {{/replication}} is referenced by 
> {{RecoveryStrategy}} and {{IndexFetcher}}). This test tests that the 
> {{ImplicitPlugins.json}} content configures the expected paths and class 
> names.
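
As a toy model of what such a test asserts, the sketch below hard-codes a 
hypothetical subset of the implicit-handler registry (the real test reads the 
paths and class names from ImplicitPlugins.json in the Solr distribution):

```java
import java.util.Map;

public class ImplicitPluginsSketch {

    // Hypothetical subset of the implicit path -> handler-class registry;
    // the real ImplicitPlugins.json defines many more entries.
    static final Map<String, String> IMPLICIT = Map.of(
            "/replication", "solr.ReplicationHandler",
            "/get", "solr.RealTimeGetHandler");

    static boolean hasHandler(String path, String className) {
        return className.equals(IMPLICIT.get(path));
    }

    public static void main(String[] args) {
        // Code like RecoveryStrategy and IndexFetcher assumes this mapping.
        System.out.println(hasHandler("/replication", "solr.ReplicationHandler"));
    }
}
```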






[jira] [Commented] (SOLR-9025) add SolrCoreTest.testImplicitPlugins test

2016-04-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15254520#comment-15254520
 ] 

ASF subversion and git services commented on SOLR-9025:
---

Commit 5f471fb60ccab679b044a06afbdf4e0aa0f1f825 in lucene-solr's branch 
refs/heads/branch_5x from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5f471fb ]

SOLR-9025: Add SolrCoreTest.testImplicitPlugins test.


> add SolrCoreTest.testImplicitPlugins test
> -
>
> Key: SOLR-9025
> URL: https://issues.apache.org/jira/browse/SOLR-9025
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-9025.patch
>
>
> Various places in the code assume that certain implicit handlers are 
> configured on certain paths (e.g. {{/replication}} is referenced by 
> {{RecoveryStrategy}} and {{IndexFetcher}}). This test tests that the 
> {{ImplicitPlugins.json}} content configures the expected paths and class 
> names.






[jira] [Commented] (LUCENE-7248) Interrupting IndexWriter causing unhandled ClosedChannelException

2016-04-22 Thread Alexandre Philbert (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15254508#comment-15254508
 ] 

Alexandre Philbert commented on LUCENE-7248:


Okay, thanks for the detailed responses. I'll look into Scott's 'workaround' or 
check whether RAFDirectory solves the problem.

> Interrupting IndexWriter causing unhandled ClosedChannelException
> -
>
> Key: LUCENE-7248
> URL: https://issues.apache.org/jira/browse/LUCENE-7248
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/store
>Affects Versions: 5.3
>Reporter: Alexandre Philbert
>  Labels: exception, interrupt, lock, nio, release
>
> When interrupting the IndexWriter, sometimes an InterruptedException is 
> correctly handled but other times it isn't. When unhandled, the IndexWriter 
> 'closes' and any other operation throws AlreadyClosedException. Here is a 
> stack trace: 
> java.nio.channels.ClosedChannelException
> at sun.nio.ch.FileLockImpl.release(FileLockImpl.java:58)
> at java.nio.channels.FileLock.close(FileLock.java:309)
> at 
> org.apache.lucene.store.NativeFSLockFactory$NativeFSLock.close(NativeFSLockFactory.java:194)
> at org.apache.lucene.util.IOUtils.close(IOUtils.java:97)
> at org.apache.lucene.util.IOUtils.close(IOUtils.java:84)
> at 
> org.apache.lucene.index.IndexWriter.rollbackInternal(IndexWriter.java:2103)
> at org.apache.lucene.index.IndexWriter.tragicEvent(IndexWriter.java:4574)
> at 
> org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1487)
> at 
> com.google.gerrit.lucene.AutoCommitWriter.updateDocument(AutoCommitWriter.java:100)
> at 
> org.apache.lucene.index.TrackingIndexWriter.updateDocument(TrackingIndexWriter.java:55)
> at com.google.gerrit.lucene.SubIndex.replace(SubIndex.java:183)
> at 
> com.google.gerrit.lucene.LuceneChangeIndex.replace(LuceneChangeIndex.java:326)
> at 
> com.google.gerrit.server.index.ChangeIndexer$IndexTask.call(ChangeIndexer.java:243)
> at 
> com.google.gerrit.server.index.ChangeIndexer$IndexTask.call(ChangeIndexer.java:1)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
> at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
> at 
> com.google.common.util.concurrent.MoreExecutors$DirectExecutorService.execute(MoreExecutors.java:310)
> at 
> java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:132)
> at 
> com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:61)
> at 
> com.google.gerrit.server.index.ChangeIndexer.submit(ChangeIndexer.java:200)
> at 
> com.google.gerrit.server.index.ChangeIndexer.indexAsync(ChangeIndexer.java:133)
> at 
> com.google.gerrit.server.change.PostReviewers.addReviewers(PostReviewers.java:246)
> at 
> com.google.gerrit.server.change.PostReviewers.putAccount(PostReviewers.java:156)
> at 
> com.google.gerrit.server.change.PostReviewers.apply(PostReviewers.java:138)
> at 
> com.google.gerrit.sshd.commands.SetReviewersCommand.modifyOne(SetReviewersCommand.java:158)
> at 
> com.google.gerrit.sshd.commands.SetReviewersCommand.run(SetReviewersCommand.java:112)
> at com.google.gerrit.sshd.SshCommand$1.run(SshCommand.java:48)
> at com.google.gerrit.sshd.BaseCommand$TaskThunk.run(BaseCommand.java:442)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
> at com.google.gerrit.server.git.WorkQueue$Task.run(WorkQueue.java:377)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> [...]
> org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed
>   at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:719)
>   at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:733)
>   at 
> org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1471)
>   at 
> com.google.gerrit.lucene.AutoCommitWriter.updateDocument(AutoCommitWriter.java:100)
>   at 
> 

[jira] [Updated] (SOLR-9027) Add GraphTermsQuery to limit traversal on high frequency nodes

2016-04-22 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9027:
-
Attachment: SOLR-9027.patch

Simple test cases that shows the maxDocFreq param working. I'll expand on these 
and do some manual testing for performance.

> Add GraphTermsQuery to limit traversal on high frequency nodes
> --
>
> Key: SOLR-9027
> URL: https://issues.apache.org/jira/browse/SOLR-9027
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-9027.patch, SOLR-9027.patch
>
>
> The gatherNodes() Streaming Expression is currently using a basic disjunction 
> query to perform the traversals. This ticket is to create a specific 
> GraphTermsQuery for performing the traversals. 
> The GraphTermsQuery will be based off of the TermsQuery, but will also 
> include an option for a docFreq cutoff. Terms that are above the docFreq 
> cutoff will not be included in the query. This will help users do a more 
> precise and efficient traversal.
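Not from the patch itself, but the docFreq-cutoff idea can be sketched independently of Lucene (the postings map below is only a stand-in for what TermsEnum.docFreq() would report against a real index):

```java
import java.util.*;

public class DocFreqCutoff {
    // Keep only query terms whose document frequency is at or below the cutoff.
    static Set<String> termsUnderCutoff(Map<String, Set<Integer>> postings,
                                        Collection<String> queryTerms,
                                        int maxDocFreq) {
        Set<String> kept = new LinkedHashSet<>();
        for (String term : queryTerms) {
            Set<Integer> docs = postings.get(term);
            if (docs != null && docs.size() <= maxDocFreq) {
                kept.add(term);
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        Map<String, Set<Integer>> postings = new HashMap<>();
        postings.put("rare", new HashSet<>(Arrays.asList(1, 2)));
        postings.put("hub", new HashSet<>(Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8)));
        // The high-frequency "hub" node falls above the cutoff and is dropped
        // from the traversal query; only "rare" survives.
        System.out.println(termsUnderCutoff(postings, Arrays.asList("rare", "hub"), 5));
    }
}
```

Dropping a high-docFreq "hub" term keeps a graph walk from fanning out across most of the index in a single step, which is the precision/efficiency gain described above.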






[jira] [Resolved] (LUCENE-7249) LatLonPoint polygon should use tree relate()

2016-04-22 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-7249.
-
   Resolution: Fixed
Fix Version/s: 6.1
   master

> LatLonPoint polygon should use tree relate()
> 
>
> Key: LUCENE-7249
> URL: https://issues.apache.org/jira/browse/LUCENE-7249
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Fix For: master, 6.1
>
> Attachments: LUCENE-7249.patch
>
>
> Built and tested this method on LUCENE-7239 but forgot to actually cut the 
> code over to use it.
> Using our tree relation methods speeds up BKD traversal. It is not important 
> for tiny polygons but matters as complexity increases:
> Synthetic polygons from luceneUtil
> ||vertices||old QPS||new QPS||
> |5|40.9|40.5|
> |50|33.0|33.1|
> |500|31.5|31.9|
> |5000|24.6|29.4|
> |50000|7.0|20.4|
> Real polygons (33 london districts: 
> http://data.london.gov.uk/2011-boundary-files)
> ||vertices||old QPS||new QPS|
> |avg 5.6k|84.3|113.8|






[jira] [Commented] (LUCENE-7249) LatLonPoint polygon should use tree relate()

2016-04-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15254494#comment-15254494
 ] 

ASF subversion and git services commented on LUCENE-7249:
-

Commit c3f62d1a79188420989775027f40348b17c5ced2 in lucene-solr's branch 
refs/heads/branch_6x from [~rcmuir]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c3f62d1 ]

LUCENE-7249: LatLonPoint polygon should use tree relate()


> LatLonPoint polygon should use tree relate()
> 
>
> Key: LUCENE-7249
> URL: https://issues.apache.org/jira/browse/LUCENE-7249
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-7249.patch
>
>
> Built and tested this method on LUCENE-7239 but forgot to actually cut the 
> code over to use it.
> Using our tree relation methods speeds up BKD traversal. It is not important 
> for tiny polygons but matters as complexity increases:
> Synthetic polygons from luceneUtil
> ||vertices||old QPS||new QPS||
> |5|40.9|40.5|
> |50|33.0|33.1|
> |500|31.5|31.9|
> |5000|24.6|29.4|
> |50000|7.0|20.4|
> Real polygons (33 london districts: 
> http://data.london.gov.uk/2011-boundary-files)
> ||vertices||old QPS||new QPS|
> |avg 5.6k|84.3|113.8|






[jira] [Commented] (LUCENE-7249) LatLonPoint polygon should use tree relate()

2016-04-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15254491#comment-15254491
 ] 

ASF subversion and git services commented on LUCENE-7249:
-

Commit 88c9da6c899c7015f6c9a818a4a4f91984022254 in lucene-solr's branch 
refs/heads/master from [~rcmuir]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=88c9da6 ]

LUCENE-7249: LatLonPoint polygon should use tree relate()


> LatLonPoint polygon should use tree relate()
> 
>
> Key: LUCENE-7249
> URL: https://issues.apache.org/jira/browse/LUCENE-7249
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-7249.patch
>
>
> Built and tested this method on LUCENE-7239 but forgot to actually cut the 
> code over to use it.
> Using our tree relation methods speeds up BKD traversal. It is not important 
> for tiny polygons but matters as complexity increases:
> Synthetic polygons from luceneUtil
> ||vertices||old QPS||new QPS||
> |5|40.9|40.5|
> |50|33.0|33.1|
> |500|31.5|31.9|
> |5000|24.6|29.4|
> |50000|7.0|20.4|
> Real polygons (33 london districts: 
> http://data.london.gov.uk/2011-boundary-files)
> ||vertices||old QPS||new QPS|
> |avg 5.6k|84.3|113.8|






[jira] [Commented] (LUCENE-7248) Interrupting IndexWriter causing unhandled ClosedChannelException

2016-04-22 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15254490#comment-15254490
 ] 

Robert Muir commented on LUCENE-7248:
-

There isn't a "just in case". The javadocs are clear: 
https://docs.oracle.com/javase/7/docs/api/java/nio/channels/InterruptibleChannel.html

{quote}
If a thread is blocked in an I/O operation on an interruptible channel then 
another thread may invoke the blocked thread's interrupt method. This will 
cause the channel to be closed
{quote}
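For anyone who wants to see this JVM behavior in isolation, here is a small sketch (not Lucene code; it uses only the JDK and a temporary file). Setting the thread's interrupt status before touching an interruptible channel closes the channel and throws a ClosedChannelException subclass:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedChannelException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class InterruptDemo {
    // Returns whether the channel is still open after attempting a read
    // with the thread's interrupt status set.
    static boolean channelOpenAfterInterrupt() throws IOException {
        Path tmp = Files.createTempFile("interrupt-demo", ".bin");
        Files.write(tmp, new byte[] {1, 2, 3, 4});
        try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.READ)) {
            // Set this thread's interrupt status, then attempt channel I/O.
            // Per the InterruptibleChannel contract, the channel is closed
            // and a ClosedByInterruptException (a ClosedChannelException
            // subclass) is thrown.
            Thread.currentThread().interrupt();
            try {
                ch.read(ByteBuffer.allocate(4));
            } catch (ClosedChannelException expected) {
                // expected: the interrupt closed the channel
            } finally {
                Thread.interrupted(); // clear the interrupt status
            }
            return ch.isOpen();
        } finally {
            Files.deleteIfExists(tmp);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("channel open after interrupt: " + channelOpenAfterInterrupt());
    }
}
```

Once a channel is closed this way it stays closed, so every later read fails with ClosedChannelException, which is exactly the state the stack trace above shows IndexWriter ending up in.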

> Interrupting IndexWriter causing unhandled ClosedChannelException
> -
>
> Key: LUCENE-7248
> URL: https://issues.apache.org/jira/browse/LUCENE-7248
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/store
>Affects Versions: 5.3
>Reporter: Alexandre Philbert
>  Labels: exception, interrupt, lock, nio, release
>
> When interrupting the IndexWriter, sometimes an InterruptedException is 
> correctly handled but other times it isn't. When unhandled, the IndexWriter 
> 'closes' and any other operation throws AlreadyClosedException. Here is a 
> stack trace: 
> java.nio.channels.ClosedChannelException
> at sun.nio.ch.FileLockImpl.release(FileLockImpl.java:58)
> at java.nio.channels.FileLock.close(FileLock.java:309)
> at 
> org.apache.lucene.store.NativeFSLockFactory$NativeFSLock.close(NativeFSLockFactory.java:194)
> at org.apache.lucene.util.IOUtils.close(IOUtils.java:97)
> at org.apache.lucene.util.IOUtils.close(IOUtils.java:84)
> at 
> org.apache.lucene.index.IndexWriter.rollbackInternal(IndexWriter.java:2103)
> at org.apache.lucene.index.IndexWriter.tragicEvent(IndexWriter.java:4574)
> at 
> org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1487)
> at 
> com.google.gerrit.lucene.AutoCommitWriter.updateDocument(AutoCommitWriter.java:100)
> at 
> org.apache.lucene.index.TrackingIndexWriter.updateDocument(TrackingIndexWriter.java:55)
> at com.google.gerrit.lucene.SubIndex.replace(SubIndex.java:183)
> at 
> com.google.gerrit.lucene.LuceneChangeIndex.replace(LuceneChangeIndex.java:326)
> at 
> com.google.gerrit.server.index.ChangeIndexer$IndexTask.call(ChangeIndexer.java:243)
> at 
> com.google.gerrit.server.index.ChangeIndexer$IndexTask.call(ChangeIndexer.java:1)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
> at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
> at 
> com.google.common.util.concurrent.MoreExecutors$DirectExecutorService.execute(MoreExecutors.java:310)
> at 
> java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:132)
> at 
> com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:61)
> at 
> com.google.gerrit.server.index.ChangeIndexer.submit(ChangeIndexer.java:200)
> at 
> com.google.gerrit.server.index.ChangeIndexer.indexAsync(ChangeIndexer.java:133)
> at 
> com.google.gerrit.server.change.PostReviewers.addReviewers(PostReviewers.java:246)
> at 
> com.google.gerrit.server.change.PostReviewers.putAccount(PostReviewers.java:156)
> at 
> com.google.gerrit.server.change.PostReviewers.apply(PostReviewers.java:138)
> at 
> com.google.gerrit.sshd.commands.SetReviewersCommand.modifyOne(SetReviewersCommand.java:158)
> at 
> com.google.gerrit.sshd.commands.SetReviewersCommand.run(SetReviewersCommand.java:112)
> at com.google.gerrit.sshd.SshCommand$1.run(SshCommand.java:48)
> at com.google.gerrit.sshd.BaseCommand$TaskThunk.run(BaseCommand.java:442)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
> at com.google.gerrit.server.git.WorkQueue$Task.run(WorkQueue.java:377)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> [...]
> org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed
>   at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:719)
>   at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:733)
>   at 
> 

[jira] [Updated] (SOLR-8824) SolrJ JDBC - Apache Zeppelin JDBC documentation

2016-04-22 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8824:
---
Attachment: docker_local_8080___interpreter.png

> SolrJ JDBC - Apache Zeppelin JDBC documentation
> ---
>
> Key: SOLR-8824
> URL: https://issues.apache.org/jira/browse/SOLR-8824
> Project: Solr
>  Issue Type: Sub-task
>  Components: documentation, SolrJ
>Affects Versions: master
>Reporter: Kevin Risden
> Attachments: docker_local_8080___interpreter.png, 
> solr_jdbc_zeppelin_20160311.pdf
>
>
> SOLR-8786 demonstrated that the Solr JDBC driver is usable from Apache 
> Zeppelin. It would be great to have this documented like SOLR-8521






[jira] [Commented] (SOLR-8824) SolrJ JDBC - Apache Zeppelin JDBC documentation

2016-04-22 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15254459#comment-15254459
 ] 

Kevin Risden commented on SOLR-8824:


Another way to add the SolrJ JDBC driver is to use the Maven artifact 
org.apache.solr:solr-solrj:6.0.0 when creating the JDBC interpreter. I attached 
a screenshot of that approach.
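For reference, a Zeppelin JDBC interpreter configured this way might look roughly like the following (the driver class and URL scheme come from the SolrJ JDBC work; the host, port, and collection name are illustrative placeholders):

```properties
# Zeppelin JDBC interpreter properties (illustrative values)
default.driver=org.apache.solr.client.solrj.io.sql.DriverImpl
default.url=jdbc:solr://localhost:9983?collection=test

# Dependency added via the interpreter's artifact list instead of a local jar:
#   org.apache.solr:solr-solrj:6.0.0
```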

> SolrJ JDBC - Apache Zeppelin JDBC documentation
> ---
>
> Key: SOLR-8824
> URL: https://issues.apache.org/jira/browse/SOLR-8824
> Project: Solr
>  Issue Type: Sub-task
>  Components: documentation, SolrJ
>Affects Versions: master
>Reporter: Kevin Risden
> Attachments: docker_local_8080___interpreter.png, 
> solr_jdbc_zeppelin_20160311.pdf
>
>
> SOLR-8786 demonstrated that the Solr JDBC driver is usable from Apache 
> Zeppelin. It would be great to have this documented like SOLR-8521






[JENKINS] Lucene-Solr-Tests-6.x - Build # 160 - Still Failing

2016-04-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/160/

1 tests failed.
FAILED:  org.apache.solr.handler.TestReqParamsAPI.test

Error Message:
Could not get expected value  'CY val modified' for path 'response/params/y/c' 
full output: {   "responseHeader":{ "status":0, "QTime":0},   
"response":{ "znodeVersion":0, "params":{"x":{ "a":"A val", 
"b":"B val", "":{"v":0},  from server:  
http://127.0.0.1:36457/collection1

Stack Trace:
java.lang.AssertionError: Could not get expected value  'CY val modified' for 
path 'response/params/y/c' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "response":{
"znodeVersion":0,
"params":{"x":{
"a":"A val",
"b":"B val",
"":{"v":0},  from server:  http://127.0.0.1:36457/collection1
at 
__randomizedtesting.SeedInfo.seed([BBE55EE15CAE63B4:33B1613BF2520E4C]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:457)
at 
org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:195)
at 
org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-7248) Interrupting IndexWriter causing unhandled ClosedChannelException

2016-04-22 Thread Alexandre Philbert (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15254419#comment-15254419
 ] 

Alexandre Philbert commented on LUCENE-7248:


Thanks for the quick responses! I'm currently looking into this project's 
code *just in case*. :p

> Interrupting IndexWriter causing unhandled ClosedChannelException
> -
>
> Key: LUCENE-7248
> URL: https://issues.apache.org/jira/browse/LUCENE-7248
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/store
>Affects Versions: 5.3
>Reporter: Alexandre Philbert
>  Labels: exception, interrupt, lock, nio, release
>
> When interrupting the IndexWriter, sometimes an InterruptedException is 
> correctly handled but other times it isn't. When unhandled, the IndexWriter 
> 'closes' and any other operation throws AlreadyClosedException. Here is a 
> stack trace: 
> java.nio.channels.ClosedChannelException
> at sun.nio.ch.FileLockImpl.release(FileLockImpl.java:58)
> at java.nio.channels.FileLock.close(FileLock.java:309)
> at 
> org.apache.lucene.store.NativeFSLockFactory$NativeFSLock.close(NativeFSLockFactory.java:194)
> at org.apache.lucene.util.IOUtils.close(IOUtils.java:97)
> at org.apache.lucene.util.IOUtils.close(IOUtils.java:84)
> at 
> org.apache.lucene.index.IndexWriter.rollbackInternal(IndexWriter.java:2103)
> at org.apache.lucene.index.IndexWriter.tragicEvent(IndexWriter.java:4574)
> at 
> org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1487)
> at 
> com.google.gerrit.lucene.AutoCommitWriter.updateDocument(AutoCommitWriter.java:100)
> at 
> org.apache.lucene.index.TrackingIndexWriter.updateDocument(TrackingIndexWriter.java:55)
> at com.google.gerrit.lucene.SubIndex.replace(SubIndex.java:183)
> at 
> com.google.gerrit.lucene.LuceneChangeIndex.replace(LuceneChangeIndex.java:326)
> at 
> com.google.gerrit.server.index.ChangeIndexer$IndexTask.call(ChangeIndexer.java:243)
> at 
> com.google.gerrit.server.index.ChangeIndexer$IndexTask.call(ChangeIndexer.java:1)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
> at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
> at 
> com.google.common.util.concurrent.MoreExecutors$DirectExecutorService.execute(MoreExecutors.java:310)
> at 
> java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:132)
> at 
> com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:61)
> at 
> com.google.gerrit.server.index.ChangeIndexer.submit(ChangeIndexer.java:200)
> at 
> com.google.gerrit.server.index.ChangeIndexer.indexAsync(ChangeIndexer.java:133)
> at 
> com.google.gerrit.server.change.PostReviewers.addReviewers(PostReviewers.java:246)
> at 
> com.google.gerrit.server.change.PostReviewers.putAccount(PostReviewers.java:156)
> at 
> com.google.gerrit.server.change.PostReviewers.apply(PostReviewers.java:138)
> at 
> com.google.gerrit.sshd.commands.SetReviewersCommand.modifyOne(SetReviewersCommand.java:158)
> at 
> com.google.gerrit.sshd.commands.SetReviewersCommand.run(SetReviewersCommand.java:112)
> at com.google.gerrit.sshd.SshCommand$1.run(SshCommand.java:48)
> at com.google.gerrit.sshd.BaseCommand$TaskThunk.run(BaseCommand.java:442)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
> at com.google.gerrit.server.git.WorkQueue$Task.run(WorkQueue.java:377)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> [...]
> org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed
>   at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:719)
>   at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:733)
>   at 
> org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1471)
>   at 
> com.google.gerrit.lucene.AutoCommitWriter.updateDocument(AutoCommitWriter.java:100)
>   at 
> 

[jira] [Commented] (LUCENE-7249) LatLonPoint polygon should use tree relate()

2016-04-22 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15254411#comment-15254411
 ] 

Ryan Ernst commented on LUCENE-7249:


Real polygons are starting to move! +1

> LatLonPoint polygon should use tree relate()
> 
>
> Key: LUCENE-7249
> URL: https://issues.apache.org/jira/browse/LUCENE-7249
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-7249.patch
>
>
> Built and tested this method on LUCENE-7239 but forgot to actually cut the 
> code over to use it.
> Using our tree relation methods speeds up BKD traversal. It is not important 
> for tiny polygons but matters as complexity increases:
> Synthetic polygons from luceneUtil
> ||vertices||old QPS||new QPS||
> |5|40.9|40.5|
> |50|33.0|33.1|
> |500|31.5|31.9|
> |5000|24.6|29.4|
> |50000|7.0|20.4|
> Real polygons (33 london districts: 
> http://data.london.gov.uk/2011-boundary-files)
> ||vertices||old QPS||new QPS|
> |avg 5.6k|84.3|113.8|






[jira] [Commented] (LUCENE-7248) Interrupting IndexWriter causing unhandled ClosedChannelException

2016-04-22 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15254400#comment-15254400
 ] 

Robert Muir commented on LUCENE-7248:
-

The documentation on FSDirectory warns you about ClosedChannelException. 
Unfortunately, there is not really anything we can do about it in Lucene: it's a 
JVM thing.
{quote}
 * NOTE: Accessing one of the above subclasses either directly or
 * indirectly from a thread while it's interrupted can close the
 * underlying channel immediately if at the same time the thread is
 * blocked on IO. The channel will remain closed and subsequent access
 * to the index will throw a {@link ClosedChannelException}.
 * Applications using {@link Thread#interrupt()} or
 * {@link Future#cancel(boolean)} should use the slower legacy
 * {@code RAFDirectory} from the {@code misc} Lucene module instead.
{quote}


> Interrupting IndexWriter causing unhandled ClosedChannelException
> -
>
> Key: LUCENE-7248
> URL: https://issues.apache.org/jira/browse/LUCENE-7248
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/store
>Affects Versions: 5.3
>Reporter: Alexandre Philbert
>  Labels: exception, interrupt, lock, nio, release
>
> When interrupting the IndexWriter, sometimes an InterruptedException is 
> correctly handled but other times it isn't. When unhandled, the IndexWriter 
> 'closes' and any other operation throws AlreadyClosedException. Here is a 
> stack trace: 
> java.nio.channels.ClosedChannelException
> at sun.nio.ch.FileLockImpl.release(FileLockImpl.java:58)
> at java.nio.channels.FileLock.close(FileLock.java:309)
> at 
> org.apache.lucene.store.NativeFSLockFactory$NativeFSLock.close(NativeFSLockFactory.java:194)
> at org.apache.lucene.util.IOUtils.close(IOUtils.java:97)
> at org.apache.lucene.util.IOUtils.close(IOUtils.java:84)
> at org.apache.lucene.index.IndexWriter.rollbackInternal(IndexWriter.java:2103)
> at org.apache.lucene.index.IndexWriter.tragicEvent(IndexWriter.java:4574)
> at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1487)
> at com.google.gerrit.lucene.AutoCommitWriter.updateDocument(AutoCommitWriter.java:100)
> at org.apache.lucene.index.TrackingIndexWriter.updateDocument(TrackingIndexWriter.java:55)
> at com.google.gerrit.lucene.SubIndex.replace(SubIndex.java:183)
> at com.google.gerrit.lucene.LuceneChangeIndex.replace(LuceneChangeIndex.java:326)
> at com.google.gerrit.server.index.ChangeIndexer$IndexTask.call(ChangeIndexer.java:243)
> at com.google.gerrit.server.index.ChangeIndexer$IndexTask.call(ChangeIndexer.java:1)
> at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
> at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
> at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
> at com.google.common.util.concurrent.MoreExecutors$DirectExecutorService.execute(MoreExecutors.java:310)
> at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:132)
> at com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:61)
> at com.google.gerrit.server.index.ChangeIndexer.submit(ChangeIndexer.java:200)
> at com.google.gerrit.server.index.ChangeIndexer.indexAsync(ChangeIndexer.java:133)
> at com.google.gerrit.server.change.PostReviewers.addReviewers(PostReviewers.java:246)
> at com.google.gerrit.server.change.PostReviewers.putAccount(PostReviewers.java:156)
> at com.google.gerrit.server.change.PostReviewers.apply(PostReviewers.java:138)
> at com.google.gerrit.sshd.commands.SetReviewersCommand.modifyOne(SetReviewersCommand.java:158)
> at com.google.gerrit.sshd.commands.SetReviewersCommand.run(SetReviewersCommand.java:112)
> at com.google.gerrit.sshd.SshCommand$1.run(SshCommand.java:48)
> at com.google.gerrit.sshd.BaseCommand$TaskThunk.run(BaseCommand.java:442)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
> at com.google.gerrit.server.git.WorkQueue$Task.run(WorkQueue.java:377)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)

[jira] [Comment Edited] (LUCENE-7248) Interrupting IndexWriter causing unhandled ClosedChannelException

2016-04-22 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15254400#comment-15254400
 ] 

Robert Muir edited comment on LUCENE-7248 at 4/22/16 6:29 PM:
--

The documentation on FSDirectory warns you about ClosedChannelException. 
Unfortunately, there is not really anything we can do about it in Lucene: it's a 
JVM thing.
{noformat}
 * NOTE: Accessing one of the above subclasses either directly or
 * indirectly from a thread while it's interrupted can close the
 * underlying channel immediately if at the same time the thread is
 * blocked on IO. The channel will remain closed and subsequent access
 * to the index will throw a {@link ClosedChannelException}.
 * Applications using {@link Thread#interrupt()} or
 * {@link Future#cancel(boolean)} should use the slower legacy
 * {@code RAFDirectory} from the {@code misc} Lucene module instead.
{noformat}
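For anyone unfamiliar with the JVM behavior being quoted, it can be reproduced with plain NIO and no Lucene at all. A minimal, self-contained sketch (the class name is ours, purely for illustration): entering a blocking FileChannel operation with the interrupt flag already set closes the channel, and the channel then stays closed for every later access.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedByInterruptException;
import java.nio.channels.ClosedChannelException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Demonstrates the FSDirectory caveat without Lucene: an interrupted thread
// touching an interruptible FileChannel closes the channel permanently.
public class InterruptDemo {
    public static String demo() throws IOException {
        Path tmp = Files.createTempFile("interrupt-demo", ".bin");
        Files.write(tmp, new byte[] {1, 2, 3, 4});
        StringBuilder seen = new StringBuilder();
        try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.READ)) {
            ByteBuffer buf = ByteBuffer.allocate(4);
            // Entering a blocking NIO operation with interrupt status set
            // closes the channel and throws ClosedByInterruptException.
            Thread.currentThread().interrupt();
            try {
                ch.read(buf);
            } catch (ClosedByInterruptException e) {
                seen.append(e.getClass().getSimpleName());
            }
            Thread.interrupted(); // clear the interrupt status
            // The channel remains closed: subsequent access now fails too.
            try {
                ch.read(buf);
            } catch (ClosedChannelException e) {
                seen.append(',').append(e.getClass().getSimpleName());
            }
        } finally {
            Files.deleteIfExists(tmp);
        }
        return seen.toString();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(demo());
    }
}
```

This is exactly why the javadoc steers applications that use Thread#interrupt() or Future#cancel(true) toward RAFDirectory, which is built on RandomAccessFile rather than an interruptible channel.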



was (Author: rcmuir):
The documentation on FSDirectory warns you about ClosedChannelException. 
Unfortunately, there is not really anything we can do about it in Lucene: it's a 
JVM thing.
{quote}
 * NOTE: Accessing one of the above subclasses either directly or
 * indirectly from a thread while it's interrupted can close the
 * underlying channel immediately if at the same time the thread is
 * blocked on IO. The channel will remain closed and subsequent access
 * to the index will throw a {@link ClosedChannelException}.
 * Applications using {@link Thread#interrupt()} or
 * {@link Future#cancel(boolean)} should use the slower legacy
 * {@code RAFDirectory} from the {@code misc} Lucene module instead.
{quote}


> Interrupting IndexWriter causing unhandled ClosedChannelException
> -
>
> Key: LUCENE-7248
> URL: https://issues.apache.org/jira/browse/LUCENE-7248
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/store
>Affects Versions: 5.3
>Reporter: Alexandre Philbert
>  Labels: exception, interrupt, lock, nio, release
>
> When interrupting the IndexWriter, sometimes an InterruptedException is 
> correctly handled but other times it isn't. When unhandled, the IndexWriter 
> 'closes' and any other operation throws AlreadyClosedException. Here is a 
> stack trace: 
> java.nio.channels.ClosedChannelException
> at sun.nio.ch.FileLockImpl.release(FileLockImpl.java:58)
> at java.nio.channels.FileLock.close(FileLock.java:309)
> at org.apache.lucene.store.NativeFSLockFactory$NativeFSLock.close(NativeFSLockFactory.java:194)
> at org.apache.lucene.util.IOUtils.close(IOUtils.java:97)
> at org.apache.lucene.util.IOUtils.close(IOUtils.java:84)
> at org.apache.lucene.index.IndexWriter.rollbackInternal(IndexWriter.java:2103)
> at org.apache.lucene.index.IndexWriter.tragicEvent(IndexWriter.java:4574)
> at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1487)
> at com.google.gerrit.lucene.AutoCommitWriter.updateDocument(AutoCommitWriter.java:100)
> at org.apache.lucene.index.TrackingIndexWriter.updateDocument(TrackingIndexWriter.java:55)
> at com.google.gerrit.lucene.SubIndex.replace(SubIndex.java:183)
> at com.google.gerrit.lucene.LuceneChangeIndex.replace(LuceneChangeIndex.java:326)
> at com.google.gerrit.server.index.ChangeIndexer$IndexTask.call(ChangeIndexer.java:243)
> at com.google.gerrit.server.index.ChangeIndexer$IndexTask.call(ChangeIndexer.java:1)
> at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
> at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
> at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
> at com.google.common.util.concurrent.MoreExecutors$DirectExecutorService.execute(MoreExecutors.java:310)
> at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:132)
> at com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:61)
> at com.google.gerrit.server.index.ChangeIndexer.submit(ChangeIndexer.java:200)
> at com.google.gerrit.server.index.ChangeIndexer.indexAsync(ChangeIndexer.java:133)
> at com.google.gerrit.server.change.PostReviewers.addReviewers(PostReviewers.java:246)
> at com.google.gerrit.server.change.PostReviewers.putAccount(PostReviewers.java:156)
> at com.google.gerrit.server.change.PostReviewers.apply(PostReviewers.java:138)
> at com.google.gerrit.sshd.commands.SetReviewersCommand.modifyOne(SetReviewersCommand.java:158)

[jira] [Commented] (LUCENE-7248) Interrupting IndexWriter causing unhandled ClosedChannelException

2016-04-22 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15254386#comment-15254386
 ] 

Scott Blum commented on LUCENE-7248:


We fixed a bunch of cases where Solr could interrupt an IndexWriter a while 
back: https://issues.apache.org/jira/browse/SOLR-7956
It would be nice to see a fix at the Lucene level that would prevent an 
IndexWriter from becoming permanently corrupted.

> Interrupting IndexWriter causing unhandled ClosedChannelException
> -
>
> Key: LUCENE-7248
> URL: https://issues.apache.org/jira/browse/LUCENE-7248
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/store
>Affects Versions: 5.3
>Reporter: Alexandre Philbert
>  Labels: exception, interrupt, lock, nio, release
>
> When interrupting the IndexWriter, sometimes an InterruptedException is 
> correctly handled but other times it isn't. When unhandled, the IndexWriter 
> 'closes' and any other operation throws AlreadyClosedException. Here is a 
> stack trace: 
> java.nio.channels.ClosedChannelException
> at sun.nio.ch.FileLockImpl.release(FileLockImpl.java:58)
> at java.nio.channels.FileLock.close(FileLock.java:309)
> at org.apache.lucene.store.NativeFSLockFactory$NativeFSLock.close(NativeFSLockFactory.java:194)
> at org.apache.lucene.util.IOUtils.close(IOUtils.java:97)
> at org.apache.lucene.util.IOUtils.close(IOUtils.java:84)
> at org.apache.lucene.index.IndexWriter.rollbackInternal(IndexWriter.java:2103)
> at org.apache.lucene.index.IndexWriter.tragicEvent(IndexWriter.java:4574)
> at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1487)
> at com.google.gerrit.lucene.AutoCommitWriter.updateDocument(AutoCommitWriter.java:100)
> at org.apache.lucene.index.TrackingIndexWriter.updateDocument(TrackingIndexWriter.java:55)
> at com.google.gerrit.lucene.SubIndex.replace(SubIndex.java:183)
> at com.google.gerrit.lucene.LuceneChangeIndex.replace(LuceneChangeIndex.java:326)
> at com.google.gerrit.server.index.ChangeIndexer$IndexTask.call(ChangeIndexer.java:243)
> at com.google.gerrit.server.index.ChangeIndexer$IndexTask.call(ChangeIndexer.java:1)
> at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
> at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
> at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
> at com.google.common.util.concurrent.MoreExecutors$DirectExecutorService.execute(MoreExecutors.java:310)
> at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:132)
> at com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:61)
> at com.google.gerrit.server.index.ChangeIndexer.submit(ChangeIndexer.java:200)
> at com.google.gerrit.server.index.ChangeIndexer.indexAsync(ChangeIndexer.java:133)
> at com.google.gerrit.server.change.PostReviewers.addReviewers(PostReviewers.java:246)
> at com.google.gerrit.server.change.PostReviewers.putAccount(PostReviewers.java:156)
> at com.google.gerrit.server.change.PostReviewers.apply(PostReviewers.java:138)
> at com.google.gerrit.sshd.commands.SetReviewersCommand.modifyOne(SetReviewersCommand.java:158)
> at com.google.gerrit.sshd.commands.SetReviewersCommand.run(SetReviewersCommand.java:112)
> at com.google.gerrit.sshd.SshCommand$1.run(SshCommand.java:48)
> at com.google.gerrit.sshd.BaseCommand$TaskThunk.run(BaseCommand.java:442)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
> at com.google.gerrit.server.git.WorkQueue$Task.run(WorkQueue.java:377)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> [...]
> org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed
>   at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:719)
>   at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:733)
>   at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1471)

[jira] [Created] (LUCENE-7249) LatLonPoint polygon should use tree relate()

2016-04-22 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-7249:
---

 Summary: LatLonPoint polygon should use tree relate()
 Key: LUCENE-7249
 URL: https://issues.apache.org/jira/browse/LUCENE-7249
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-7249.patch

Built and tested this method on LUCENE-7239 but forgot to actually cut the code 
over to use it.

Using our tree relation methods speeds up BKD traversal. It is not important 
for tiny polygons but matters as complexity increases:

Synthetic polygons from luceneUtil
||vertices||old QPS||new QPS||
|5|40.9|40.5|
|50|33.0|33.1|
|500|31.5|31.9|
|5000|24.6|29.4|
|50000|7.0|20.4|
Real polygons (33 London districts: 
http://data.london.gov.uk/2011-boundary-files)
||vertices||old QPS||new QPS||
|avg 5.6k|84.3|113.8|




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7249) LatLonPoint polygon should use tree relate()

2016-04-22 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-7249:

Attachment: LUCENE-7249.patch

patch

> LatLonPoint polygon should use tree relate()
> 
>
> Key: LUCENE-7249
> URL: https://issues.apache.org/jira/browse/LUCENE-7249
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-7249.patch
>
>
> Built and tested this method on LUCENE-7239 but forgot to actually cut the 
> code over to use it.
> Using our tree relation methods speeds up BKD traversal. It is not important 
> for tiny polygons but matters as complexity increases:
> Synthetic polygons from luceneUtil
> ||vertices||old QPS||new QPS||
> |5|40.9|40.5|
> |50|33.0|33.1|
> |500|31.5|31.9|
> |5000|24.6|29.4|
> |50000|7.0|20.4|
> Real polygons (33 London districts: 
> http://data.london.gov.uk/2011-boundary-files)
> ||vertices||old QPS||new QPS||
> |avg 5.6k|84.3|113.8|






[jira] [Commented] (SOLR-9025) add SolrCoreTest.testImplicitPlugins test

2016-04-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15254381#comment-15254381
 ] 

ASF subversion and git services commented on SOLR-9025:
---

Commit 9b8e6f1cb0fe5dd886ff148dafc6bafeb4dcbbde in lucene-solr's branch 
refs/heads/branch_6x from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9b8e6f1 ]

SOLR-9025: Add SolrCoreTest.testImplicitPlugins test.


> add SolrCoreTest.testImplicitPlugins test
> -
>
> Key: SOLR-9025
> URL: https://issues.apache.org/jira/browse/SOLR-9025
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-9025.patch
>
>
> Various places in the code assume that certain implicit handlers are 
> configured on certain paths (e.g. {{/replication}} is referenced by 
> {{RecoveryStrategy}} and {{IndexFetcher}}). This test tests that the 
> {{ImplicitPlugins.json}} content configures the expected paths and class 
> names.






[jira] [Created] (LUCENE-7248) Interrupting IndexWriter causing unhandled ClosedChannelException

2016-04-22 Thread Alexandre Philbert (JIRA)
Alexandre Philbert created LUCENE-7248:
--

 Summary: Interrupting IndexWriter causing unhandled 
ClosedChannelException
 Key: LUCENE-7248
 URL: https://issues.apache.org/jira/browse/LUCENE-7248
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/store
Affects Versions: 5.3
Reporter: Alexandre Philbert


When interrupting the IndexWriter, sometimes an InterruptedException is 
correctly handled but other times it isn't. When unhandled, the IndexWriter 
'closes' and any other operation throws AlreadyClosedException. Here is a stack 
trace: 

java.nio.channels.ClosedChannelException
at sun.nio.ch.FileLockImpl.release(FileLockImpl.java:58)
at java.nio.channels.FileLock.close(FileLock.java:309)
at org.apache.lucene.store.NativeFSLockFactory$NativeFSLock.close(NativeFSLockFactory.java:194)
at org.apache.lucene.util.IOUtils.close(IOUtils.java:97)
at org.apache.lucene.util.IOUtils.close(IOUtils.java:84)
at org.apache.lucene.index.IndexWriter.rollbackInternal(IndexWriter.java:2103)
at org.apache.lucene.index.IndexWriter.tragicEvent(IndexWriter.java:4574)
at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1487)
at com.google.gerrit.lucene.AutoCommitWriter.updateDocument(AutoCommitWriter.java:100)
at org.apache.lucene.index.TrackingIndexWriter.updateDocument(TrackingIndexWriter.java:55)
at com.google.gerrit.lucene.SubIndex.replace(SubIndex.java:183)
at com.google.gerrit.lucene.LuceneChangeIndex.replace(LuceneChangeIndex.java:326)
at com.google.gerrit.server.index.ChangeIndexer$IndexTask.call(ChangeIndexer.java:243)
at com.google.gerrit.server.index.ChangeIndexer$IndexTask.call(ChangeIndexer.java:1)
at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
at com.google.common.util.concurrent.MoreExecutors$DirectExecutorService.execute(MoreExecutors.java:310)
at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:132)
at com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:61)
at com.google.gerrit.server.index.ChangeIndexer.submit(ChangeIndexer.java:200)
at com.google.gerrit.server.index.ChangeIndexer.indexAsync(ChangeIndexer.java:133)
at com.google.gerrit.server.change.PostReviewers.addReviewers(PostReviewers.java:246)
at com.google.gerrit.server.change.PostReviewers.putAccount(PostReviewers.java:156)
at com.google.gerrit.server.change.PostReviewers.apply(PostReviewers.java:138)
at com.google.gerrit.sshd.commands.SetReviewersCommand.modifyOne(SetReviewersCommand.java:158)
at com.google.gerrit.sshd.commands.SetReviewersCommand.run(SetReviewersCommand.java:112)
at com.google.gerrit.sshd.SshCommand$1.run(SshCommand.java:48)
at com.google.gerrit.sshd.BaseCommand$TaskThunk.run(BaseCommand.java:442)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
at com.google.gerrit.server.git.WorkQueue$Task.run(WorkQueue.java:377)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

[...]

org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed
at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:719)
at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:733)
at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1471)
at com.google.gerrit.lucene.AutoCommitWriter.updateDocument(AutoCommitWriter.java:100)
at org.apache.lucene.index.TrackingIndexWriter.updateDocument(TrackingIndexWriter.java:55)
at com.google.gerrit.lucene.SubIndex.replace(SubIndex.java:183)
at com.google.gerrit.lucene.LuceneChangeIndex.replace(LuceneChangeIndex.java:326)
at com.google.gerrit.server.index.ChangeIndexer$IndexTask.call(ChangeIndexer.java:243)
at com.google.gerrit.server.index.ChangeIndexer$IndexTask.call(ChangeIndexer.java:1)

[jira] [Commented] (SOLR-9025) add SolrCoreTest.testImplicitPlugins test

2016-04-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15254329#comment-15254329
 ] 

ASF subversion and git services commented on SOLR-9025:
---

Commit 666472b74f2063a2a894837ee3768335bcf7f36a in lucene-solr's branch 
refs/heads/master from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=666472b ]

SOLR-9025: Add SolrCoreTest.testImplicitPlugins test.


> add SolrCoreTest.testImplicitPlugins test
> -
>
> Key: SOLR-9025
> URL: https://issues.apache.org/jira/browse/SOLR-9025
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-9025.patch
>
>
> Various places in the code assume that certain implicit handlers are 
> configured on certain paths (e.g. {{/replication}} is referenced by 
> {{RecoveryStrategy}} and {{IndexFetcher}}). This test tests that the 
> {{ImplicitPlugins.json}} content configures the expected paths and class 
> names.






[jira] [Commented] (SOLR-8716) Upgrade to Apache Tika 1.12

2016-04-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15254322#comment-15254322
 ] 

ASF GitHub Bot commented on SOLR-8716:
--

Github user lewismc commented on the pull request:

https://github.com/apache/lucene-solr/pull/31#issuecomment-213529331
  
Hi Uwe, as Jira is temporarily closed, I will respond here and hopefully 
the message will be queued and posted to the issue on Jira.
I agree with your comments and have updated the PR accordingly. Thanks for 
the continued review. 


> Upgrade to Apache Tika 1.12
> ---
>
> Key: SOLR-8716
> URL: https://issues.apache.org/jira/browse/SOLR-8716
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - Solr Cell (Tika extraction)
>Reporter: Lewis John McGibbney
>Assignee: Uwe Schindler
> Fix For: master
>
> Attachments: LUCENE-7041.patch
>
>
> We recently released Apache Tika 1.12. In order to use the fixes provided 
> within the Tika.translate API I propose to upgrade Tika from 1.7 --> 1.12 in 
> lucene/ivy-versions.properties.
> Patch coming up.






[GitHub] lucene-solr pull request: SOLR-8716 Upgrade to Apache Tika 1.12

2016-04-22 Thread lewismc
Github user lewismc commented on the pull request:

https://github.com/apache/lucene-solr/pull/31#issuecomment-213529331
  
Hi Uwe, as Jira is temporarily closed, I will respond here and hopefully 
the message will be queued and posted to the issue on Jira.
I agree with your comments and have updated the PR accordingly. Thanks for 
the continued review. 


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[jira] [Commented] (SOLR-9030) The 'downnode' command can trip asserts in ZkStateWriter or cause BadVersionException in Overseer

2016-04-22 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15254280#comment-15254280
 ] 

Mark Miller commented on SOLR-9030:
---

bq. or a BadVersionException as well

And shouldn't we expect that that can happen and deal with it appropriately? (A 
retry or something?)

Not that something else might not be off, but that assert seems strange, and we 
should handle the case where the setData fails due to a version conflict - it 
seems odd to pass a version we expect to update against and then not deal with 
the failure.

> The 'downnode' command can trip asserts in ZkStateWriter or cause 
> BadVersionException in Overseer
> -
>
> Key: SOLR-9030
> URL: https://issues.apache.org/jira/browse/SOLR-9030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
> Fix For: master, 6.1
>
>
> While working on SOLR-9014 I came across a strange test failure.
> {code}
>[junit4] ERROR   16.9s | 
> AsyncCallRequestStatusResponseTest.testAsyncCallStatusResponse <<<
>[junit4]> Throwable #1: 
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=46, 
> name=OverseerStateUpdate-95769832112259076-127.0.0.1:51135_z_oeg%2Ft-n_00,
>  state=RUNNABLE, group=Overseer state updater.]
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([91F68DA7E10807C3:CBF7E84BCF328A1A]:0)
>[junit4]> Caused by: java.lang.AssertionError
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([91F68DA7E10807C3]:0)
>[junit4]>  at 
> org.apache.solr.cloud.overseer.ZkStateWriter.writePendingUpdates(ZkStateWriter.java:231)
>[junit4]>  at 
> org.apache.solr.cloud.Overseer$ClusterStateUpdater.run(Overseer.java:240)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
> {code}
> The underlying problem can manifest by tripping the above assert or a 
> BadVersionException as well. I found that this was introduced in SOLR-7281 
> where a new 'downnode' command was added.






[jira] [Commented] (SOLR-9030) The 'downnode' command can trip asserts in ZkStateWriter or cause BadVersionException in Overseer

2016-04-22 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15254288#comment-15254288
 ] 

Shalin Shekhar Mangar commented on SOLR-9030:
-

It exists to ensure that we do not update/overwrite cluster state when we have 
no idea of its previous znode version. Also, the default znode version in a 
DocCollection is -1; if left unchecked, ZK would overwrite the value in the 
state without the compare-and-set (CAS) version checks that we rely on.

bq. And shouldn't we expect that that can happen and deal with it 
appropriately? (A retry or something?)

Yes and it does recover automatically. A BadVersionException will cause the 
complete cluster state to be re-fetched from ZK and the operation is retried. 
In production environments, the BadVersionException will not be a problem but 
the overwriting of state can be.
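To make the retry behavior concrete, here is a self-contained sketch of the versioned-write pattern under discussion. The VersionedNode and BadVersionException classes below are illustrative stand-ins for a znode and ZooKeeper's exception, not Solr's actual ZkStateWriter or ZooKeeper's API:

```java
// Sketch of the compare-and-set pattern: writes carry the znode version we
// last read; a version conflict (ZooKeeper's BadVersionException) triggers a
// re-read and retry instead of blindly overwriting state.
public class CasRetrySketch {
    static class BadVersionException extends Exception {}

    // Minimal stand-in for a single znode: data plus a monotonically
    // increasing version, checked on every conditional write.
    static class VersionedNode {
        private String data = "";
        private int version = 0;

        synchronized int getVersion() { return version; }
        synchronized String getData() { return data; }

        synchronized void setData(String newData, int expectedVersion)
                throws BadVersionException {
            if (expectedVersion != version) {
                throw new BadVersionException(); // someone else wrote first
            }
            data = newData;
            version++;
        }
    }

    // Retry loop: on a version conflict, re-fetch the current state and
    // re-apply the update, mirroring the recover-and-retry behavior
    // described above. Returns the number of attempts taken.
    static int updateWithRetry(VersionedNode node, String suffix) {
        int attempts = 0;
        while (true) {
            attempts++;
            int v = node.getVersion();
            String current = node.getData();
            try {
                node.setData(current + suffix, v);
                return attempts;
            } catch (BadVersionException e) {
                // stale version: loop, re-read, try again
            }
        }
    }

    public static void main(String[] args) throws BadVersionException {
        VersionedNode node = new VersionedNode();
        node.setData("a", 0); // succeeds; version becomes 1
        int attempts = updateWithRetry(node, "b");
        System.out.println(node.getData() + " in " + attempts + " attempt(s)");
    }
}
```

The key point is the version parameter: passing the expected version turns each write into a CAS, so a concurrent writer surfaces as BadVersionException (retryable) rather than as silent state overwriting.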

> The 'downnode' command can trip asserts in ZkStateWriter or cause 
> BadVersionException in Overseer
> -
>
> Key: SOLR-9030
> URL: https://issues.apache.org/jira/browse/SOLR-9030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
> Fix For: master, 6.1
>
>
> While working on SOLR-9014 I came across a strange test failure.
> {code}
>[junit4] ERROR   16.9s | 
> AsyncCallRequestStatusResponseTest.testAsyncCallStatusResponse <<<
>[junit4]> Throwable #1: 
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=46, 
> name=OverseerStateUpdate-95769832112259076-127.0.0.1:51135_z_oeg%2Ft-n_00,
>  state=RUNNABLE, group=Overseer state updater.]
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([91F68DA7E10807C3:CBF7E84BCF328A1A]:0)
>[junit4]> Caused by: java.lang.AssertionError
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([91F68DA7E10807C3]:0)
>[junit4]>  at 
> org.apache.solr.cloud.overseer.ZkStateWriter.writePendingUpdates(ZkStateWriter.java:231)
>[junit4]>  at 
> org.apache.solr.cloud.Overseer$ClusterStateUpdater.run(Overseer.java:240)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
> {code}
> The underlying problem can manifest by tripping the above assert or a 
> BadVersionException as well. I found that this was introduced in SOLR-7281 
> where a new 'downnode' command was added.






[jira] [Commented] (SOLR-9030) The 'downnode' command can trip asserts in ZkStateWriter or cause BadVersionException in Overseer

2016-04-22 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15254262#comment-15254262
 ] 

Mark Miller commented on SOLR-9030:
---

Why does that assert even exist?

> The 'downnode' command can trip asserts in ZkStateWriter or cause 
> BadVersionException in Overseer
> -
>
> Key: SOLR-9030
> URL: https://issues.apache.org/jira/browse/SOLR-9030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
> Fix For: master, 6.1
>
>
> While working on SOLR-9014 I came across a strange test failure.
> {code}
>[junit4] ERROR   16.9s | 
> AsyncCallRequestStatusResponseTest.testAsyncCallStatusResponse <<<
>[junit4]> Throwable #1: 
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=46, 
> name=OverseerStateUpdate-95769832112259076-127.0.0.1:51135_z_oeg%2Ft-n_00,
>  state=RUNNABLE, group=Overseer state updater.]
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([91F68DA7E10807C3:CBF7E84BCF328A1A]:0)
>[junit4]> Caused by: java.lang.AssertionError
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([91F68DA7E10807C3]:0)
>[junit4]>  at 
> org.apache.solr.cloud.overseer.ZkStateWriter.writePendingUpdates(ZkStateWriter.java:231)
>[junit4]>  at 
> org.apache.solr.cloud.Overseer$ClusterStateUpdater.run(Overseer.java:240)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
> {code}
> The underlying problem can manifest by tripping the above assert or a 
> BadVersionException as well. I found that this was introduced in SOLR-7281 
> where a new 'downnode' command was added.






[jira] [Updated] (LUCENE-7247) TestCoreParser.dumpResults verbose and test-fail logging tweaks

2016-04-22 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated LUCENE-7247:

Attachment: LUCENE-7247.patch

> TestCoreParser.dumpResults verbose and test-fail logging tweaks
> ---
>
> Key: LUCENE-7247
> URL: https://issues.apache.org/jira/browse/LUCENE-7247
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Trivial
> Attachments: LUCENE-7247.patch
>
>
> To make it easier to investigate failing test cases.






[jira] [Created] (LUCENE-7247) TestCoreParser.dumpResults verbose and test-fail logging tweaks

2016-04-22 Thread Christine Poerschke (JIRA)
Christine Poerschke created LUCENE-7247:
---

 Summary: TestCoreParser.dumpResults verbose and test-fail logging 
tweaks
 Key: LUCENE-7247
 URL: https://issues.apache.org/jira/browse/LUCENE-7247
 Project: Lucene - Core
  Issue Type: Test
Reporter: Christine Poerschke
Assignee: Christine Poerschke
Priority: Trivial


To make it easier to investigate failing test cases.






[JENKINS-EA] Lucene-Solr-5.5-Linux (64bit/jdk-9-ea+114) - Build # 209 - Failure!

2016-04-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Linux/209/
Java: 64bit/jdk-9-ea+114 -XX:-UseCompressedOops -XX:+UseParallelGC

198 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.AnalysisAfterCoreReloadTest

Error Message:
Unable to access 'private final sun.nio.fs.UnixFileSystem sun.nio.fs.UnixPath.fs' to estimate memory usage

Stack Trace:
java.lang.IllegalStateException: Unable to access 'private final sun.nio.fs.UnixFileSystem sun.nio.fs.UnixPath.fs' to estimate memory usage
at __randomizedtesting.SeedInfo.seed([EFC0286B8592DCA6]:0)
at com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.createCacheEntry(RamUsageEstimator.java:602)
at com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.measureSizeOf(RamUsageEstimator.java:545)
at com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.sizeOfAll(RamUsageEstimator.java:387)
at com.carrotsearch.randomizedtesting.rules.StaticFieldsInvariantRule$1.afterAlways(StaticFieldsInvariantRule.java:127)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(java.base@9-ea/Thread.java:804)
Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make member of class sun.nio.fs.UnixPath accessible:  module java.base does not export sun.nio.fs to unnamed module @67c61efe
at sun.reflect.Reflection.throwInaccessibleObjectException(java.base@9-ea/Reflection.java:420)
at java.lang.reflect.AccessibleObject.checkCanSetAccessible(java.base@9-ea/AccessibleObject.java:174)
at java.lang.reflect.Field.checkCanSetAccessible(java.base@9-ea/Field.java:170)
at java.lang.reflect.Field.setAccessible(java.base@9-ea/Field.java:164)
at com.carrotsearch.randomizedtesting.rules.RamUsageEstimator$3.run(RamUsageEstimator.java:597)
at com.carrotsearch.randomizedtesting.rules.RamUsageEstimator$3.run(RamUsageEstimator.java:594)
at java.security.AccessController.doPrivileged(java.base@9-ea/Native Method)
at com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.createCacheEntry(RamUsageEstimator.java:594)
... 13 more


FAILED:  junit.framework.TestSuite.org.apache.solr.ConvertedLegacyTest

Error Message:
Unable to access 'private final sun.nio.fs.UnixFileSystem 
sun.nio.fs.UnixPath.fs' to estimate memory usage

Stack Trace:
java.lang.IllegalStateException: Unable to access 'private final 
sun.nio.fs.UnixFileSystem sun.nio.fs.UnixPath.fs' to estimate memory usage
at __randomizedtesting.SeedInfo.seed([EFC0286B8592DCA6]:0)
at 
com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.createCacheEntry(RamUsageEstimator.java:602)
at 
com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.measureSizeOf(RamUsageEstimator.java:545)
at 
com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.sizeOfAll(RamUsageEstimator.java:387)
at 
com.carrotsearch.randomizedtesting.rules.StaticFieldsInvariantRule$1.afterAlways(StaticFieldsInvariantRule.java:127)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(java.base@9-ea/Thread.java:804)
Caused by: 

[jira] [Commented] (LUCENE-7239) Speed up LatLonPoint's polygon queries when there are many vertices

2016-04-22 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15254237#comment-15254237
 ] 

Robert Muir commented on LUCENE-7239:
-

Just as a followup: the still-sluggish performance of the synthetic polygons in 
the benchmarks here versus the "real" ones is mostly due to the 
luceneutil polygon generation code in the benchmark itself, which is very slow. I 
will try to fix it so we can have a better understanding of when/where/how 
things degrade.

> Speed up LatLonPoint's polygon queries when there are many vertices
> ---
>
> Key: LUCENE-7239
> URL: https://issues.apache.org/jira/browse/LUCENE-7239
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: master, 6.1
>
> Attachments: LUCENE-7239.patch, LUCENE-7239.patch
>
>
> This is inspired by the "reliability and numerical stability" recommendations 
> at the end of http://www-ma2.upc.es/geoc/Schirra-pointPolygon.pdf.
> Basically our polys need to answer two questions that are slow today:
> contains(point)
> crosses(rectangle)
> Both of these ops only care about a subset of edges: the ones overlapping a y 
> interval range. We can organize these edges in an interval tree to be 
> practical and speed things up a lot. Worst case is still O(n) but those 
> solutions are more complex to do.
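The y-interval pruning described above can be illustrated outside Lucene. The sketch below is a stand-in, not LatLonPoint's actual code: instead of a real interval tree it sorts edges by their minimum y and stops scanning early, but it shows the same idea that a contains(point) test only needs the edges whose y-range overlaps the query latitude.

```java
import java.util.Arrays;
import java.util.Comparator;

public class PolygonSketch {
    // one polygon edge from (x1,y1) to (x2,y2)
    record Edge(double x1, double y1, double x2, double y2) {
        double minY() { return Math.min(y1, y2); }
        double maxY() { return Math.max(y1, y2); }
    }

    // edges sorted by minY: a flattened stand-in for the interval tree
    private final Edge[] edges;

    PolygonSketch(double[] xs, double[] ys) {
        int n = xs.length;
        edges = new Edge[n];
        for (int i = 0; i < n; i++) {
            int j = (i + 1) % n; // wrap around to close the polygon
            edges[i] = new Edge(xs[i], ys[i], xs[j], ys[j]);
        }
        Arrays.sort(edges, Comparator.comparingDouble(Edge::minY));
    }

    boolean contains(double x, double y) {
        boolean inside = false;
        for (Edge e : edges) {
            if (e.minY() > y) break;     // sorted: no later edge can overlap y
            if (e.maxY() < y) continue;  // this edge's y-interval misses the query
            // standard ray-crossing test against the surviving edge
            if ((e.y1 > y) != (e.y2 > y)
                    && x < (e.x2 - e.x1) * (y - e.y1) / (e.y2 - e.y1) + e.x1) {
                inside = !inside;
            }
        }
        return inside;
    }

    public static void main(String[] args) {
        // unit square
        PolygonSketch sq = new PolygonSketch(
                new double[]{0, 1, 1, 0}, new double[]{0, 0, 1, 1});
        System.out.println(sq.contains(0.5, 0.5)); // inside
        System.out.println(sq.contains(1.5, 0.5)); // outside
    }
}
```

A real interval tree makes the overlap lookup O(log n + k) rather than a linear scan, which is where the speedup for many-vertex polygons comes from.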






[JENKINS] Lucene-Solr-Tests-5.5-Java8 - Build # 2 - Still Failing

2016-04-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.5-Java8/2/

1 test failed.
FAILED:  
org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdateWithExplicitRefresh

Error Message:
Could not find collection : c1

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : c1
at 
__randomizedtesting.SeedInfo.seed([94FE21DB598E0A3B:8B44502C89EECCFE]:0)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:170)
at 
org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdate(ZkStateReaderTest.java:136)
at 
org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdateWithExplicitRefresh(ZkStateReaderTest.java:42)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 10638 lines...]
   [junit4] Suite: org.apache.solr.cloud.overseer.ZkStateReaderTest
   [junit4]   2> Creating dataDir: 

[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_72) - Build # 16562 - Still Failing!

2016-04-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16562/
Java: 32bit/jdk1.8.0_72 -server -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.TestDistributedSearch.test

Error Message:
Error from server at http://127.0.0.1:43960//collection1: 
java.lang.NullPointerException  at 
org.apache.solr.search.grouping.distributed.responseprocessor.TopGroupsShardResponseProcessor.process(TopGroupsShardResponseProcessor.java:105)
  at 
org.apache.solr.handler.component.QueryComponent.handleGroupedResponses(QueryComponent.java:753)
  at 
org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:736)
  at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:420)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
  at org.apache.solr.core.SolrCore.execute(SolrCore.java:2015)  at 
org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:657)  at 
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:464)  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)
  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:111)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581) 
 at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:224)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)  
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)  
at 
org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:462)  
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) 
 at org.eclipse.jetty.server.Server.handle(Server.java:518)  at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)  at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)  at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
  at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)  at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) 
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
  at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
  at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
  at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572) 
 at java.lang.Thread.run(Thread.java:745) 

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:43960//collection1: 
java.lang.NullPointerException
at 
org.apache.solr.search.grouping.distributed.responseprocessor.TopGroupsShardResponseProcessor.process(TopGroupsShardResponseProcessor.java:105)
at 
org.apache.solr.handler.component.QueryComponent.handleGroupedResponses(QueryComponent.java:753)
at 
org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:736)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:420)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2015)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:657)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:464)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:111)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:224)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
at 

[jira] [Reopened] (SOLR-8838) Returning non-stored docValues is incorrect for floats and doubles

2016-04-22 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta reopened SOLR-8838:


Reopening to port to 5x branch so we don't regress if there is a 5.6 release at 
some point.

> Returning non-stored docValues is incorrect for floats and doubles
> --
>
> Key: SOLR-8838
> URL: https://issues.apache.org/jira/browse/SOLR-8838
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.5
>Reporter: Ishan Chattopadhyaya
>Assignee: Steve Rowe
>Priority: Blocker
> Fix For: 6.0, 5.5.1, 5.6
>
> Attachments: SOLR-8838.patch, SOLR-8838.patch, SOLR-8838.patch
>
>
> In SOLR-8220, we introduced returning non-stored docValues as if they were 
> regular stored fields. The handling of doubles and floats, as introduced 
> there, was incorrect for negative values.
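For background on why negatives are the tricky case: Lucene stores numeric float docValues in "sortable" bit form (NumericUtils), where the non-sign bits of negative values are flipped so that integer order matches numeric order. The sketch below mirrors that encoding to show the failure mode; decoding with plain Float.intBitsToFloat, without undoing the flip, round-trips positives but corrupts negatives.

```java
public class SortableFloat {
    // mirror of the sortable-int encoding: flip the non-sign bits of negatives
    static int floatToSortableInt(float value) {
        int bits = Float.floatToIntBits(value);
        return bits ^ ((bits >> 31) & 0x7fffffff);
    }

    // the transform is an involution, so decoding applies the same flip
    static float sortableIntToFloat(int encoded) {
        return Float.intBitsToFloat(encoded ^ ((encoded >> 31) & 0x7fffffff));
    }

    public static void main(String[] args) {
        float v = -2.5f;
        int enc = floatToSortableInt(v);
        System.out.println(sortableIntToFloat(enc) == v); // correct round trip
        System.out.println(Float.intBitsToFloat(enc) == v); // naive decode corrupts negatives
    }
}
```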






[jira] [Updated] (SOLR-8838) Returning non-stored docValues is incorrect for floats and doubles

2016-04-22 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-8838:
---
Fix Version/s: 5.6

> Returning non-stored docValues is incorrect for floats and doubles
> --
>
> Key: SOLR-8838
> URL: https://issues.apache.org/jira/browse/SOLR-8838
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.5
>Reporter: Ishan Chattopadhyaya
>Assignee: Steve Rowe
>Priority: Blocker
> Fix For: 6.0, 5.5.1, 5.6
>
> Attachments: SOLR-8838.patch, SOLR-8838.patch, SOLR-8838.patch
>
>
> In SOLR-8220, we introduced returning non-stored docValues as if they were 
> regular stored fields. The handling of doubles and floats, as introduced 
> there, was incorrect for negative values.






[jira] [Comment Edited] (SOLR-9027) Add GraphTermsQuery to limit traversal on high frequency nodes

2016-04-22 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15254155#comment-15254155
 ] 

Joel Bernstein edited comment on SOLR-9027 at 4/22/16 4:16 PM:
---

Basic implementation. Tests still needed.

It doesn't do anything fancy to build up the matching docs. The main thing that 
it adds is the maxDocFreq param, which is the threshold for discarding query 
terms.

This essentially creates an on-the-fly stop list for high frequency nodes that 
appear during a graph traversal.


was (Author: joel.bernstein):
Basic implementation. Test still needed.

It doesn't do anything fancy to build up the matching docs. The main thing that 
it adds is the maxDocFreq param which is the threshold for discarding query 
terms.

This essentially creates an on-the-fly stop list for high frequency nodes that 
appear during a graph traversal.

> Add GraphTermsQuery to limit traversal on high frequency nodes
> --
>
> Key: SOLR-9027
> URL: https://issues.apache.org/jira/browse/SOLR-9027
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-9027.patch
>
>
> The gatherNodes() Streaming Expression is currently using a basic disjunction 
> query to perform the traversals. This ticket is to create a specific 
> GraphTermsQuery for performing the traversals. 
> The GraphTermsQuery will be based off of the TermsQuery, but will also 
> include an option for a docFreq cutoff. Terms that are above the docFreq 
> cutoff will not be included in the query. This will help users do a more 
> precise and efficient traversal.
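The docFreq cutoff amounts to an on-the-fly stop list. A hypothetical sketch of the idea, not Solr's actual GraphTermsQuery API (the names filterTerms and maxDocFreq here are illustrative):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DocFreqCutoff {
    // keep only terms whose document frequency is at or below the cutoff
    static List<String> filterTerms(Map<String, Integer> docFreqs, int maxDocFreq) {
        List<String> kept = new ArrayList<>();
        for (Map.Entry<String, Integer> e : docFreqs.entrySet()) {
            if (e.getValue() <= maxDocFreq) { // high-frequency "hub" nodes are skipped
                kept.add(e.getKey());
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        Map<String, Integer> df = new LinkedHashMap<>();
        df.put("nodeA", 12);
        df.put("hubNode", 500_000); // would dominate the traversal
        df.put("nodeB", 3);
        System.out.println(filterTerms(df, 1000)); // hubNode dropped
    }
}
```

Dropping a term before the disjunction is built is what keeps a single hub node from pulling in an enormous posting list during the traversal.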






[jira] [Updated] (SOLR-9027) Add GraphTermsQuery to limit traversal on high frequency nodes

2016-04-22 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9027:
-
Attachment: SOLR-9027.patch

Basic implementation. Tests still needed.

It doesn't do anything fancy to build up the matching docs. The main thing that 
it adds is the maxDocFreq param, which is the threshold for discarding query 
terms.

This essentially creates an on-the-fly stop list for high frequency nodes that 
appear during a graph traversal.

> Add GraphTermsQuery to limit traversal on high frequency nodes
> --
>
> Key: SOLR-9027
> URL: https://issues.apache.org/jira/browse/SOLR-9027
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-9027.patch
>
>
> The gatherNodes() Streaming Expression is currently using a basic disjunction 
> query to perform the traversals. This ticket is to create a specific 
> GraphTermsQuery for performing the traversals. 
> The GraphTermsQuery will be based off of the TermsQuery, but will also 
> include an option for a docFreq cutoff. Terms that are above the docFreq 
> cutoff will not be included in the query. This will help users do a more 
> precise and efficient traversal.






Re: EmbeddedSolrServer should set httpMethod on SolrQueryRequest before handling it.

2016-04-22 Thread Nicolas Gavalda
I just reported this issue a week ago, and included a patch (with the 
exact line you're providing, except your ".name()" is probably better 
than my ".toString()"): https://issues.apache.org/jira/browse/SOLR-8994.

Feel free to comment/vote on it when the JIRA lockdown is over!

On 22/04/2016 at 16:52, Johannes Bauer wrote:
In EmbeddedSolrServer.request(), a SolrQueryRequest (`req') is created 
and some of the information from the SolrRequest passed in as a 
parameter named `request' is copied from request to req, before req 
is passed as a parameter to core.execute().


One thing that is not copied from the SolrRequest to the 
SolrQueryRequest is the HTTP method (POST/GET).  This leads to some 
requests being handled differently by the remote and the embedded 
server.  In particular, SchemaRequests simply return a JSON 
description of the schema instead of modifying it, because they are 
treated as GET requests rather than POST requests by 
SchemaHandler.handleRequestBody().


Adding a single line:
 req.getContext().put("httpMethod", request.getMethod().name());

in EmbeddedSolrServer.request() fixes the problem for me.

I would have created a JIRA issue but it seems I can't create issues 
for the Solr project in the ASF JIRA: Solr simply doesn't appear in 
the list of projects on the issue creation dialog.


I'm not attaching a patch as the solution is trivial and I'm not sure 
this mailing list allows attachments.





[jira] [Resolved] (LUCENE-7242) LatLonTree should build a balanced tree

2016-04-22 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-7242.
-
   Resolution: Fixed
Fix Version/s: 6.1
   master

> LatLonTree should build a balanced tree
> ---
>
> Key: LUCENE-7242
> URL: https://issues.apache.org/jira/browse/LUCENE-7242
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Fix For: master, 6.1
>
> Attachments: LUCENE-7242.patch
>
>
> [~rjernst]'s idea: we create an interval tree of edges, but with randomized 
> order.
> Instead we can speed things up more by creating a balanced tree up front.
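The speedup is the classic one: building from sorted input by recursively taking the median as each subtree root guarantees O(log n) depth, instead of depending on the luck of randomized insertion order. A generic illustration of that construction, not the actual LatLonTree code:

```java
public class BalancedBuild {
    static class Node {
        final int value;
        Node left, right;
        Node(int v) { value = v; }
    }

    // build a balanced BST from a sorted array: median becomes the root
    static Node build(int[] sorted, int lo, int hi) {
        if (lo > hi) return null;
        int mid = (lo + hi) >>> 1; // median of the range
        Node n = new Node(sorted[mid]);
        n.left = build(sorted, lo, mid - 1);
        n.right = build(sorted, mid + 1, hi);
        return n;
    }

    // depth in nodes along the longest root-to-leaf path
    static int depth(Node n) {
        return n == null ? 0 : 1 + Math.max(depth(n.left), depth(n.right));
    }

    public static void main(String[] args) {
        int[] values = new int[1023];
        for (int i = 0; i < values.length; i++) values[i] = i;
        Node root = build(values, 0, values.length - 1);
        System.out.println(depth(root)); // 10 for 1023 sorted values
    }
}
```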






[jira] [Updated] (SOLR-8804) Race condition in ClusterStatus.getClusterStatus

2016-04-22 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-8804:
---
Fix Version/s: 5.5.1

> Race condition in ClusterStatus.getClusterStatus
> 
>
> Key: SOLR-8804
> URL: https://issues.apache.org/jira/browse/SOLR-8804
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3.1
>Reporter: Alexey Serba
>Assignee: Varun Thacker
>Priority: Trivial
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8804.patch, SOLR-8804.patch
>
>
> Reading cluster state information using {{/collections?action=CLUSTERSTATUS}} 
> can fail if there's a concurrent {{/collections?action=DELETE}} operation.
> The code in {{ClusterStatus.getClusterStatus}} 
> # gets collection names
> # for every collection reads its cluster state info using 
> {{ClusterState.getCollection}}
> The problem is that if there's a {{DELETE}} operation in between then 
> {{ClusterState.getCollection}} can fail thus causing the whole operation to 
> fail. It seems that it would be better to call 
> {{ClusterState.getCollectionOrNull}} and skip/ignore that collection if the 
> result is null.
> {noformat}
> 19:49:32.479 [qtp1531448569-881] ERROR org.apache.solr.core.SolrCore - 
> org.apache.solr.common.SolrException: Could not find collection : collection
> at 
> org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:165)
> at 
> org.apache.solr.handler.admin.ClusterStatus.getClusterStatus(ClusterStatus.java:110)
> at 
> org.apache.solr.handler.admin.CollectionsHandler$CollectionOperation$19.call(CollectionsHandler.java:614)
> at 
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:166)
> {noformat}
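The suggested fix can be sketched as follows. ClusterState here is a minimal stand-in for Solr's class, not the real implementation: the point is that a null-tolerant lookup lets the status call skip a collection deleted between listing the names and reading its state, instead of failing the whole request.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class ClusterStatusSketch {
    static class ClusterState {
        private final Map<String, String> collections;
        ClusterState(Map<String, String> c) { collections = c; }
        // mirrors ClusterState.getCollectionOrNull: no exception on a miss
        String getCollectionOrNull(String name) { return collections.get(name); }
    }

    static List<String> clusterStatus(ClusterState state, List<String> names) {
        List<String> status = new ArrayList<>();
        for (String name : names) {
            String coll = state.getCollectionOrNull(name);
            if (coll == null) continue; // deleted between listing and lookup: skip it
            status.add(name + ":" + coll);
        }
        return status;
    }

    public static void main(String[] args) {
        // "c2" was listed, then deleted before we read its state
        ClusterState state = new ClusterState(Map.of("c1", "active", "c3", "active"));
        System.out.println(clusterStatus(state, List.of("c1", "c2", "c3")));
    }
}
```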






[jira] [Commented] (LUCENE-7242) LatLonTree should build a balanced tree

2016-04-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15254149#comment-15254149
 ] 

ASF subversion and git services commented on LUCENE-7242:
-

Commit 3640244463c1c08b0bc97e9bd2f56a0bcd5e8ebe in lucene-solr's branch 
refs/heads/branch_6x from [~rcmuir]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3640244 ]

LUCENE-7242: LatLonTree should build a balanced tree


> LatLonTree should build a balanced tree
> ---
>
> Key: LUCENE-7242
> URL: https://issues.apache.org/jira/browse/LUCENE-7242
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Fix For: master, 6.1
>
> Attachments: LUCENE-7242.patch
>
>
> [~rjernst]'s idea: we create an interval tree of edges, but with randomized 
> order.
> Instead we can speed things up more by creating a balanced tree up front.






[jira] [Reopened] (SOLR-8804) Race condition in ClusterStatus.getClusterStatus

2016-04-22 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta reopened SOLR-8804:


back porting for 5.5.1

> Race condition in ClusterStatus.getClusterStatus
> 
>
> Key: SOLR-8804
> URL: https://issues.apache.org/jira/browse/SOLR-8804
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3.1
>Reporter: Alexey Serba
>Assignee: Varun Thacker
>Priority: Trivial
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8804.patch, SOLR-8804.patch
>
>
> Reading cluster state information using {{/collections?action=CLUSTERSTATUS}} 
> can fail if there's a concurrent {{/collections?action=DELETE}} operation.
> The code in {{ClusterStatus.getClusterStatus}} 
> # gets collection names
> # for every collection reads its cluster state info using 
> {{ClusterState.getCollection}}
> The problem is that if there's a {{DELETE}} operation in between then 
> {{ClusterState.getCollection}} can fail thus causing the whole operation to 
> fail. It seems that it would be better to call 
> {{ClusterState.getCollectionOrNull}} and skip/ignore that collection if the 
> result is null.
> {noformat}
> 19:49:32.479 [qtp1531448569-881] ERROR org.apache.solr.core.SolrCore - 
> org.apache.solr.common.SolrException: Could not find collection : collection
> at 
> org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:165)
> at 
> org.apache.solr.handler.admin.ClusterStatus.getClusterStatus(ClusterStatus.java:110)
> at 
> org.apache.solr.handler.admin.CollectionsHandler$CollectionOperation$19.call(CollectionsHandler.java:614)
> at 
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:166)
> {noformat}






[jira] [Commented] (LUCENE-7242) LatLonTree should build a balanced tree

2016-04-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15254147#comment-15254147
 ] 

ASF subversion and git services commented on LUCENE-7242:
-

Commit 776f9ec7c8f2a3a07c5ce5229c66c2f113291ba9 in lucene-solr's branch 
refs/heads/master from [~rcmuir]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=776f9ec ]

LUCENE-7242: LatLonTree should build a balanced tree


> LatLonTree should build a balanced tree
> ---
>
> Key: LUCENE-7242
> URL: https://issues.apache.org/jira/browse/LUCENE-7242
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-7242.patch
>
>
> [~rjernst]'s idea: we create an interval tree of edges, but with randomized 
> order.
> Instead we can speed things up more by creating a balanced tree up front.






[jira] [Updated] (SOLR-8789) CollectionAPISolrJTests is not run when running ant test

2016-04-22 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-8789:
---
Fix Version/s: 5.5.1

> CollectionAPISolrJTests is not run when running ant test
> 
>
> Key: SOLR-8789
> URL: https://issues.apache.org/jira/browse/SOLR-8789
> Project: Solr
>  Issue Type: Bug
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Fix For: master, 6.0, 5.5.1
>
> Attachments: SOLR-8789.patch
>
>
> The pattern that is used to run the tests on Jenkins (ant test) is (from 
> lucene/common-build.xml) :
> {code}
> 
> 
> {code}
> CollectionAPISolrJTests ends in an extra 's' and so is not executed. We need 
> to either fix the pattern or the test name to make sure that this test is run.






[jira] [Updated] (SOLR-8790) Add node name back to the core level responses in OverseerMessageHandler

2016-04-22 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-8790:
---
Fix Version/s: 5.5.1

> Add node name back to the core level responses in OverseerMessageHandler
> 
>
> Key: SOLR-8790
> URL: https://issues.apache.org/jira/browse/SOLR-8790
> Project: Solr
>  Issue Type: Bug
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Fix For: master, 6.0, 5.5.1
>
> Attachments: SOLR-8790-followup.patch, SOLR-8790.patch
>
>
> Continuing from SOLR-8789, now that this test runs, time to fix it.






[jira] [Reopened] (SOLR-8789) CollectionAPISolrJTests is not run when running ant test

2016-04-22 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta reopened SOLR-8789:

  Assignee: Anshum Gupta

backport for 5.5.1

> CollectionAPISolrJTests is not run when running ant test
> 
>
> Key: SOLR-8789
> URL: https://issues.apache.org/jira/browse/SOLR-8789
> Project: Solr
>  Issue Type: Bug
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Fix For: master, 6.0
>
> Attachments: SOLR-8789.patch
>
>
> The pattern that is used to run the tests on Jenkins (ant test) is (from 
> lucene/common-build.xml) :
> {code}
> 
> 
> {code}
> CollectionAPISolrJTests ends in an extra 's' and so is not executed. We need 
> to either fix the pattern or the test name to make sure that this test is run.






[jira] [Reopened] (SOLR-8790) Add node name back to the core level responses in OverseerMessageHandler

2016-04-22 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta reopened SOLR-8790:

  Assignee: Anshum Gupta

backport for 5.5.1.

> Add node name back to the core level responses in OverseerMessageHandler
> 
>
> Key: SOLR-8790
> URL: https://issues.apache.org/jira/browse/SOLR-8790
> Project: Solr
>  Issue Type: Bug
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Fix For: master, 6.0, 5.5.1
>
> Attachments: SOLR-8790-followup.patch, SOLR-8790.patch
>
>
> Continuing from SOLR-8789, now that this test runs, time to fix it.






[jira] [Commented] (SOLR-9014) Audit all usages of ClusterState methods which may make calls to ZK via the lazy collection reference

2016-04-22 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15254121#comment-15254121
 ] 

Shalin Shekhar Mangar commented on SOLR-9014:
-

I found SOLR-9030 while working on this issue.

> Audit all usages of ClusterState methods which may make calls to ZK via the 
> lazy collection reference
> -
>
> Key: SOLR-9014
> URL: https://issues.apache.org/jira/browse/SOLR-9014
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
> Fix For: master, 6.1
>
>
> ClusterState has a bunch of methods such as getSlice and getReplica which 
> internally call getCollectionOrNull that ends up making a call to ZK via the 
> lazy collection reference. Many classes use these methods even though a 
> DocCollection object is available. In such cases, multiple redundant calls to 
> ZooKeeper can happen if the collection is not watched locally. This is 
> especially true for Overseer classes which operate on all collections.
> We should audit all usages of these methods and replace them with calls to 
> appropriate DocCollection methods.
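A minimal, self-contained analog of the problem (hypothetical names, not Solr's actual ClusterState/DocCollection API): every convenience lookup re-resolves the lazy reference, while resolving once and reusing the object costs a single fetch.

```java
import java.util.HashMap;
import java.util.Map;

public class LazyRefDemo {
    /** Counts how often the expensive (ZK-like) fetch actually runs. */
    public static int fetches = 0;

    /** Stand-in for resolving the lazy collection reference against ZooKeeper. */
    public static Map<String, String> fetchFromZk() {
        fetches++;
        Map<String, String> m = new HashMap<>();
        m.put("shard1", "core_node1");
        return m;
    }

    /** Anti-pattern: each convenience call re-resolves the lazy reference. */
    public static String getReplicaViaState(String shard) {
        return fetchFromZk().get(shard);
    }

    public static void main(String[] args) {
        getReplicaViaState("shard1");
        getReplicaViaState("shard1");
        getReplicaViaState("shard1");
        System.out.println(fetches);  // 3: one ZK round-trip per call

        fetches = 0;
        Map<String, String> doc = fetchFromZk();  // resolve once...
        doc.get("shard1");
        doc.get("shard1");
        doc.get("shard1");
        System.out.println(fetches);  // 1: ...then reuse the resolved object
    }
}
```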






[jira] [Commented] (LUCENE-7242) LatLonTree should build a balanced tree

2016-04-22 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15254109#comment-15254109
 ] 

Ryan Ernst commented on LUCENE-7242:


Glad the idea worked out! +1

> LatLonTree should build a balanced tree
> ---
>
> Key: LUCENE-7242
> URL: https://issues.apache.org/jira/browse/LUCENE-7242
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-7242.patch
>
>
> [~rjernst]'s idea: we create an interval tree of edges, but with randomized 
> order.
> Instead we can speed things up more by creating a balanced tree up front.
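The balanced-tree idea can be sketched like this (a simplified, self-contained illustration, not LatLonTree's actual edge tree): sort once, then recursively split on the median so the tree depth stays logarithmic instead of depending on a randomized insertion order.

```java
import java.util.Arrays;

public class BalancedBuild {
    static class Node {
        final int value;
        Node left, right;
        Node(int value) { this.value = value; }
    }

    /** Build a balanced BST by always splitting on the median of the sorted range. */
    static Node build(int[] sorted, int lo, int hi) {
        if (lo > hi) return null;
        int mid = (lo + hi) >>> 1;
        Node n = new Node(sorted[mid]);
        n.left = build(sorted, lo, mid - 1);
        n.right = build(sorted, mid + 1, hi);
        return n;
    }

    public static Node buildBalanced(int[] values) {
        int[] sorted = values.clone();
        Arrays.sort(sorted);               // one up-front sort instead of random order
        return build(sorted, 0, sorted.length - 1);
    }

    public static int depth(Node n) {
        return n == null ? 0 : 1 + Math.max(depth(n.left), depth(n.right));
    }

    public static void main(String[] args) {
        int[] edges = new int[1023];       // e.g. 1023 polygon edges
        for (int i = 0; i < edges.length; i++) edges[i] = i;
        System.out.println(depth(buildBalanced(edges)));  // 10: log2(1024) levels
    }
}
```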






[jira] [Comment Edited] (SOLR-8998) JSON Facet API child roll-ups

2016-04-22 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15243133#comment-15243133
 ] 

Yonik Seeley edited comment on SOLR-8998 at 4/22/16 3:16 PM:
-

Although we don't need to implement everything all at once, we should be 
thinking ahead about everything we want to do.

h3. Existing block parent faceting example.
Incoming domain consists of children (reviews) who are then mapped to parents 
(books) before faceting is done:
{code}
q=type:review AND review_author:yonik
json.facet={
  genres : {
type : field,
field : genre,
domain: { blockParent : "type:book" }
  }
}
{code}

h3. Desirable features:
- ability to "pretend" that parent documents have all values of their child 
documents ( a union() set rollup?)
- numeric rollups (min, max, avg, etc) and the ability to use range faceting 
over these values
- an API that's sharable (to the degree that makes sense) with other places 
that need rollups (i.e. normal join)
- maximum "persistence" of rolled-up values... meaning they should be ideally 
usable in any context that other field values would be usable in.
  -- example: multiple levels of sub-facets operating on values that were 
rolled up at a higher level
  -- use in function queries
  -- use in a sort, or retrievable from topdocs (SOLR-7830)

h3. Ideas:
- we already have a syntax for rolling up values over a bucket (avg(field1), 
min(field2) etc), re-use that as much as possible
- we're going to need some sort of context based registry for information about 
rolled-up child documents (and/or about which fields were rolled up)

h3. Use case 1:

We have products, which have multiple SKUs, and we want to facet by color on 
the products.
{code}
Parent1: { type:product, name:"Solr T-Shirt" }
Child1: { type:SKU, size:L, color:Red, inStock:true}
Child2: { type:SKU, size:L, color:Blue, inStock:false}
Child3: { type:SKU, size:M, color:Red, inStock:true}
Child4: { type:SKU, size:S, color:Blue, inStock:true}
{code}
Now, we want to facet by "color" and get back numbers of products (not number 
of SKUs).  Hence if our query is inStock:true, we want Blue:1 and Red:1.
Put another way, we want a virtual "color" field on Parent1 containing all the 
colors of matching child documents.
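For this data, the desired response might look roughly like the following (a hedged sketch of the JSON Facet output shape; bucket order is not guaranteed). The query inStock:true matches three SKUs, but they all belong to one product, so each color counts one parent:

```json
"facets" : {
  "count" : 1,
  "colors" : {
    "buckets" : [
      { "val" : "Blue", "count" : 1 },
      { "val" : "Red",  "count" : 1 }
    ]
  }
}
```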

h4. Use case 1a: input domain is children
Main query finds children, and hence the root faceting domain consists of 
children.  The block join is done in a facet via {code} 
domain:{blockParent="type:product"} {code}

h4. Use case 1b: input domain is parents from previous block join
Main query selected products by including a blockJoin filter (mapping from 
children to parents).

h4. Use case 1c: input domain is parents, no previous block join
No previous block join (or an irrelevant one), but we still want to roll up 
children (all children, or a specific subset).

h3. Approach 1: specify rollups at the point of the join
Specify rollups where the child->parent join/mapping is being done.

Our basic child->parent mapping is currently specified by:
{code}
domain: { blockParent : "type:book" }
{code}
We could add rollup specifications to that in a number of different ways.
Reuse "blockParent" tag, but make it more structured, adding a "parentFilter" 
and then other rollups.
{code}
domain: { 
  blockParent : {
parentFilter : "type:book",
average_rating : "avg(rating)" 
  }
}
{code}
Downside: name collisions... say you wanted to give a rollup the same name as 
something like "parentFilter".
Advantages: the flatter structure is simpler, and since we choose the rollup 
names ourselves, the namespace issue is likely just academic.

Or, we could have a specific "rollups" tag if a unique namespace is desired:
{code}
domain: { 
  blockParent : {
parentFilter : "type:book",
rollups: {
  average_rating : "avg(rating)"
} 
  }
}
{code}

h4. Use of specified rollups:
{code}
q=type:review AND review_year:2016
json.facet={
  genres : {
type : field,
field : genre,
domain: { 
  blockParent : {
parentFilter : "type:book",
book_rating : "avg(review_rating)"
  }
},
facet : {
   // things in here are calculated per-bucket of the parent facet
   avg_rating : "avg(book_rating)",
   min_rating : "min(book_rating)"
},
sort : "avg_rating desc"
  }
}
{code}

h3. Approach 2: refer to children from the POV of the parent later
This approach does not explicitly specify any rollups at the point of the join, 
but lets one specify them later by referring to child fields using something 
like child..
Or perhaps even  (related to SOLR-7672)
Or as a function: child(child_field_name)

{code}
q=type:review AND review_year:2016
json.facet={
  genres : {
type : field,
field : genre,
domain: { blockParent :  "type:book" }
facet : {
   // things in here are calculated per-bucket of the parent 

[jira] [Commented] (LUCENE-7246) Can LRUQueryCache reuse DocIdSets that are created by some queries anyway?

2016-04-22 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15254078#comment-15254078
 ] 

Robert Muir commented on LUCENE-7246:
-

I see, I agree it is strange for an iterator. Must it really be a per-DISI 
thing? That makes things confusing (and I agree we should avoid adding impl 
details to the public api).

Why can't it be a thing on Weight somehow?

> Can LRUQueryCache reuse DocIdSets that are created by some queries anyway?
> --
>
> Key: LUCENE-7246
> URL: https://issues.apache.org/jira/browse/LUCENE-7246
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7246.patch
>
>
> Some queries need to create a DocIdSet to work. This is for instance the case 
> with TermsQuery, multi-term queries, point-in-set queries and point range 
> queries. We cache them more aggressively because these queries need to 
> evaluate all matches on a segment before they can return a Scorer. But this 
> can also be dangerous: if there is little reuse, then we keep converting the 
> doc id sets that these queries create to another DocIdSet.
> This worries me a bit eg. for point range queries: they made numeric ranges 
> faster in practice so I would not like caching to make them appear slower 
> than they are when caching is disabled.
> So I would like to somehow bring back the optimization that we had in 1.x 
> with DocIdSet.isCacheable so that we do not need to convert DocIdSet 
> instances when we could just reuse existing instances.






[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+114) - Build # 476 - Failure!

2016-04-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/476/
Java: 64bit/jdk-9-ea+114 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:
Could not get expected value  'X val' for path 'x' full output: {   
"responseHeader":{ "status":0, "QTime":0},   "params":{"wt":"json"},   
"context":{ "webapp":"/rme", "path":"/test1", "httpMethod":"GET"},  
 "class":"org.apache.solr.core.BlobStoreTestRequestHandler",   "x":null},  from 
server:  null

Stack Trace:
java.lang.AssertionError: Could not get expected value  'X val' for path 'x' 
full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "params":{"wt":"json"},
  "context":{
"webapp":"/rme",
"path":"/test1",
"httpMethod":"GET"},
  "class":"org.apache.solr.core.BlobStoreTestRequestHandler",
  "x":null},  from server:  null
at 
__randomizedtesting.SeedInfo.seed([14CF6757A2762D0F:CC824A0055AB88AF]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:457)
at 
org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:233)
at sun.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:531)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (LUCENE-7246) Can LRUQueryCache reuse DocIdSets that are created by some queries anyway?

2016-04-22 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-7246:
-
Attachment: LUCENE-7246.patch

Here is one way of doing it: it adds a new optional method 
DocIdSetIterator.getDocIdSet() that can be used to get back a DocIdSet that can 
regenerate the same iterator.

In case this method does not feel right, another option I was thinking about 
would be to just specialize the BitSet case with instanceof calls without 
adding a method. It would only work in the dense case (I would like to avoid 
making IntArrayDocIdSet, which is used in the sparse case, public) but maybe 
this is fine since doc id sets that take the most time converting are the dense 
ones.
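The instanceof-specialization option might look roughly like this (simplified stand-in types, not Lucene's real DocIdSet classes): when the set is already in the dense, cacheable form, hand back the same instance instead of converting it.

```java
import java.util.BitSet;

public class CacheReuse {
    public interface DocIdSet {
        BitSet bits();
    }

    /** Dense, already-cacheable form. */
    public static final class BitDocIdSet implements DocIdSet {
        private final BitSet set;
        public BitDocIdSet(BitSet set) { this.set = set; }
        public BitSet bits() { return set; }
    }

    public static int conversions = 0;

    /** Specialize on the dense case instead of adding a new public method. */
    public static DocIdSet cacheable(DocIdSet in) {
        if (in instanceof BitDocIdSet) {
            return in;                                    // reuse: no copy needed
        }
        conversions++;                                    // sparse/unknown: materialize a copy
        return new BitDocIdSet((BitSet) in.bits().clone());
    }

    public static void main(String[] args) {
        BitSet b = new BitSet();
        b.set(3);
        b.set(7);
        DocIdSet dense = new BitDocIdSet(b);
        System.out.println(cacheable(dense) == dense);    // true: same instance reused
        System.out.println(conversions);                  // 0
    }
}
```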

> Can LRUQueryCache reuse DocIdSets that are created by some queries anyway?
> --
>
> Key: LUCENE-7246
> URL: https://issues.apache.org/jira/browse/LUCENE-7246
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7246.patch
>
>
> Some queries need to create a DocIdSet to work. This is for instance the case 
> with TermsQuery, multi-term queries, point-in-set queries and point range 
> queries. We cache them more aggressively because these queries need to 
> evaluate all matches on a segment before they can return a Scorer. But this 
> can also be dangerous: if there is little reuse, then we keep converting the 
> doc id sets that these queries create to another DocIdSet.
> This worries me a bit eg. for point range queries: they made numeric ranges 
> faster in practice so I would not like caching to make them appear slower 
> than they are when caching is disabled.
> So I would like to somehow bring back the optimization that we had in 1.x 
> with DocIdSet.isCacheable so that we do not need to convert DocIdSet 
> instances when we could just reuse existing instances.






EmbeddedSolrServer should set httpMethod on SolrQueryRequest before handling it.

2016-04-22 Thread Johannes Bauer
In EmbeddedSolrServer.request(), a SolrQueryRequest (`req') is created, and 
some of the information from the SolrRequest passed in as a parameter named 
`request' is copied from request to req before req is passed as a parameter 
to core.execute().


One thing that is not copied from the SolrRequest to the 
SolrQueryRequest is the HTTP method (POST/GET).  This leads to some 
requests being handled differently by the remote and the embedded 
server.  In particular, SchemaRequests simply return a JSON description 
of the schema instead of modifying it, because 
SchemaHandler.handleRequestBody() treats them as GET rather than POST requests.


Adding a single line:
 req.getContext().put("httpMethod", request.getMethod().name());

in EmbeddedSolrServer.request() fixes the problem for me.

I would have created a JIRA issue but it seems I can't create issues for 
the Solr project in the ASF JIRA: Solr simply doesn't appear in the list 
of projects on the issue creation dialog.


I'm not attaching a patch as the solution is trivial and I'm not sure 
this mailing list allows attachments.





[jira] [Updated] (SOLR-9027) Add GraphTermsQuery to limit traversal on high frequency nodes

2016-04-22 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9027:
-
Issue Type: New Feature  (was: Bug)

> Add GraphTermsQuery to limit traversal on high frequency nodes
> --
>
> Key: SOLR-9027
> URL: https://issues.apache.org/jira/browse/SOLR-9027
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Priority: Minor
>
> The gatherNodes() Streaming Expression is currently using a basic disjunction 
> query to perform the traversals. This ticket is to create a specific 
> GraphTermsQuery for performing the traversals. 
> The GraphTermsQuery will be based off of the TermsQuery, but will also 
> include an option for a docFreq cutoff. Terms that are above the docFreq 
> cutoff will not be included in the query. This will help users do a more 
> precise and efficient traversal.
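The docFreq cutoff behaves roughly like this (a hedged sketch using a plain map of term to docFreq, not the actual GraphTermsQuery implementation): terms whose document frequency exceeds the cutoff are dropped before the traversal query is built.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class DocFreqCutoff {
    /** Keep only terms at or below the cutoff; high-frequency "hub" nodes are dropped. */
    public static List<String> filter(Map<String, Integer> docFreq, int maxDocFreq) {
        return docFreq.entrySet().stream()
                .filter(e -> e.getValue() <= maxDocFreq)
                .map(Map.Entry::getKey)
                .sorted()
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Map<String, Integer> df = Map.of("alice", 12, "bob", 7, "celebrity", 5_000_000);
        // "celebrity" links to millions of docs, so it is excluded from the traversal.
        System.out.println(filter(df, 1000)); // [alice, bob]
    }
}
```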






[jira] [Commented] (LUCENE-7245) Automatic warm-up of the query cache on new segments

2016-04-22 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15254026#comment-15254026
 ] 

Robert Muir commented on LUCENE-7245:
-

seems worth a try!!!

> Automatic warm-up of the query cache on new segments
> 
>
> Key: LUCENE-7245
> URL: https://issues.apache.org/jira/browse/LUCENE-7245
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7245.patch
>
>
> Thanks to the fact that we track recently-used queries, we know which ones 
> are likely to be reused and we could use this information in order to 
> automatically warm up the query cache on new segments (typically after a 
> refresh after a merge).
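A minimal sketch of the idea (assuming an access-ordered LRU of recent queries; the names are hypothetical, not LRUQueryCache's internals): track recently used queries, then replay them when a new segment appears to pre-populate the cache.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class WarmupSketch {
    public static final int CAPACITY = 3;

    /** Access-ordered LRU of "queries"; the eldest entry is evicted past capacity. */
    public static final Map<String, Integer> recent =
        new LinkedHashMap<String, Integer>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Integer> eldest) {
                return size() > CAPACITY;
            }
        };

    public static void record(String query) {
        recent.merge(query, 1, Integer::sum);
    }

    /** On a new segment, replay the recently used queries to pre-populate the cache. */
    public static int warmNewSegment() {
        int warmed = 0;
        for (String q : recent.keySet()) {
            // a real implementation would execute q against the new segment here
            warmed++;
        }
        return warmed;
    }

    public static void main(String[] args) {
        record("a");
        record("b");
        record("c");
        record("d");                           // "a" is the least recent: evicted
        System.out.println(recent.keySet());   // [b, c, d]
        System.out.println(warmNewSegment());  // 3
    }
}
```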






[jira] [Created] (LUCENE-7246) Can LRUQueryCache reuse DocIdSets that are created by some queries anyway?

2016-04-22 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-7246:


 Summary: Can LRUQueryCache reuse DocIdSets that are created by 
some queries anyway?
 Key: LUCENE-7246
 URL: https://issues.apache.org/jira/browse/LUCENE-7246
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor


Some queries need to create a DocIdSet to work. This is for instance the case 
with TermsQuery, multi-term queries, point-in-set queries and point range 
queries. We cache them more aggressively because these queries need to evaluate 
all matches on a segment before they can return a Scorer. But this can also be 
dangerous: if there is little reuse, then we keep converting the doc id sets 
that these queries create to another DocIdSet.

This worries me a bit eg. for point range queries: they made numeric ranges 
faster in practice so I would not like caching to make them appear slower than 
they are when caching is disabled.

So I would like to somehow bring back the optimization that we had in 1.x with 
DocIdSet.isCacheable so that we do not need to convert DocIdSet instances when 
we could just reuse existing instances.





