Re: [VOTE] Release Lucene/Solr 5.5.1

2016-05-02 Thread Noble Paul
I couldn't reproduce it with repeated runs. This is a mock test, and the
failure says, "no SolrAuth header present", which will not happen in a real
test. It has to be something with the mock itself. This should not hold up
the release.

On Mon, May 2, 2016 at 9:59 PM, Noble Paul  wrote:

> I shall dig into this
>
> On Mon, May 2, 2016 at 9:17 PM, Yonik Seeley  wrote:
>
>> +1
>>
>> -Yonik
>>
>>
>> On Sat, Apr 30, 2016 at 5:25 PM, Anshum Gupta 
>> wrote:
>> > Please vote for the RC1 release candidate for Lucene/Solr 5.5.1.
>> >
>> > Artifacts:
>> >
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.5.1-RC1-revc08f17bca0d9cbf516874d13d221ab100e5b7d58
>> >
>> > Smoke tester:
>> >
>> >   python3 -u dev-tools/scripts/smokeTestRelease.py
>> >
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.5.1-RC1-revc08f17bca0d9cbf516874d13d221ab100e5b7d58
>> >
>> >
>> > Here's my +1:
>> >
>> > SUCCESS! [0:26:44.452268]
>> >
>> > --
>> > Anshum Gupta
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>>
>
>
> --
> -
> Noble Paul
>



-- 
-
Noble Paul


[jira] [Assigned] (SOLR-8792) ZooKeeper ACL support broken

2016-05-02 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe reassigned SOLR-8792:


Assignee: Steve Rowe

> ZooKeeper ACL support broken
> 
>
> Key: SOLR-8792
> URL: https://issues.apache.org/jira/browse/SOLR-8792
> Project: Solr
>  Issue Type: Bug
>  Components: Authentication, documentation
>Affects Versions: 5.0
>Reporter: Esther Quansah
>Assignee: Steve Rowe
>  Labels: acl, authentication, security, zkcli, zkcli.sh, zookeeper
> Fix For: 6.1
>
> Attachments: SOLR-8792.patch, SOLR-8792.patch
>
>
> The documentation presented here: 
> https://cwiki.apache.org/confluence/display/solr/ZooKeeper+Access+Control
> details the process of securing Solr content in ZooKeeper using ACLs. In the 
> example usages, it is mentioned that access to zkcli can be restricted by 
> adding credentials to the zkcli.sh script in addition to adding the 
> appropriate classnames to solr.xml. With the scripts in zkcli.sh, another 
> machine should not be able to read or write from the host ZK without the 
> necessary credentials. At this time, machines are able to read/write from the 
> host ZK with or without these credentials.
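For context, the setup the linked page describes is a set of JVM system properties naming an ACL provider, a credentials provider, and digest credentials, which must be passed both to Solr and to the zkcli scripts. A hedged sketch follows: the provider class names are those given in the Solr reference guide, while the usernames and passwords are placeholders.

```shell
# Sketch of the JVM properties described on the ZooKeeper Access Control
# page. Provider class names are from the Solr reference guide; the
# usernames/passwords are placeholders. zkcli.sh (and zkcli.bat) must be
# started with these same properties, or the CLI connects without
# credentials -- which is the gap this issue reports.
SOLR_ZK_CREDS_AND_ACLS="-DzkACLProvider=org.apache.solr.common.cloud.VMParamsAllAndReadonlyDigestZkACLProvider \
 -DzkCredentialsProvider=org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider \
 -DzkDigestUsername=admin-user -DzkDigestPassword=ADMIN-PASSWORD \
 -DzkDigestReadonlyUsername=readonly-user -DzkDigestReadonlyPassword=READONLY-PASSWORD"
```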



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Updated] (SOLR-8792) ZooKeeper ACL support broken

2016-05-02 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-8792:
-
Attachment: SOLR-8792.patch

Patch adding support for Windows and Solr's zkcli scripts.

I'll do some manual testing before I commit.

> ZooKeeper ACL support broken
> 
>
> Key: SOLR-8792
> URL: https://issues.apache.org/jira/browse/SOLR-8792
> Project: Solr
>  Issue Type: Bug
>  Components: Authentication, documentation
>Affects Versions: 5.0
>Reporter: Esther Quansah
>  Labels: acl, authentication, security, zkcli, zkcli.sh, zookeeper
> Fix For: 6.1
>
> Attachments: SOLR-8792.patch, SOLR-8792.patch
>
>
> The documentation presented here: 
> https://cwiki.apache.org/confluence/display/solr/ZooKeeper+Access+Control
> details the process of securing Solr content in ZooKeeper using ACLs. In the 
> example usages, it is mentioned that access to zkcli can be restricted by 
> adding credentials to the zkcli.sh script in addition to adding the 
> appropriate classnames to solr.xml. With the scripts in zkcli.sh, another 
> machine should not be able to read or write from the host ZK without the 
> necessary credentials. At this time, machines are able to read/write from the 
> host ZK with or without these credentials.






[JENKINS] Lucene-Solr-Tests-5.5-Java8 - Build # 23 - Failure

2016-05-02 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.5-Java8/23/

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.schema.TestManagedSchemaAPI

Error Message:
ObjectTracker found 4 object(s) that were not released!!! 
[MDCAwareThreadPoolExecutor, MockDirectoryWrapper, TransactionLog, 
MockDirectoryWrapper]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 4 object(s) that were not 
released!!! [MDCAwareThreadPoolExecutor, MockDirectoryWrapper, TransactionLog, 
MockDirectoryWrapper]
at __randomizedtesting.SeedInfo.seed([A5C4C09C03B2B700]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:228)
at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 12245 lines...]
   [junit4] Suite: org.apache.solr.schema.TestManagedSchemaAPI
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.5-Java8/solr/build/solr-core/test/J0/temp/solr.schema.TestManagedSchemaAPI_A5C4C09C03B2B700-001/init-core-data-001
   [junit4]   2> 1895328 INFO  
(SUITE-TestManagedSchemaAPI-seed#[A5C4C09C03B2B700]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false)
   [junit4]   2> 1895330 INFO  
(TEST-TestManagedSchemaAPI.test-seed#[A5C4C09C03B2B700]) [] 
o.a.s.SolrTestCaseJ4 ###Starting test
   [junit4]   2> 1895330 INFO  
(TEST-TestManagedSchemaAPI.test-seed#[A5C4C09C03B2B700]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 1895331 INFO  (Thread-9027) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1895331 INFO  (Thread-9027) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 1895431 INFO  
(TEST-TestManagedSchemaAPI.test-seed#[A5C4C09C03B2B700]) [] 
o.a.s.c.ZkTestServer start zk server on port:53734
   [junit4]   2> 1895431 INFO  
(TEST-TestManagedSchemaAPI.test-seed#[A5C4C09C03B2B700]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 1895432 INFO  
(TEST-TestManagedSchemaAPI.test-seed#[A5C4C09C03B2B700]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 1895434 INFO  (zkCallback-1923-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@202d6979 
name:ZooKeeperConnection Watcher:127.0.0.1:53734 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 1895434 INFO  
(TEST-TestManagedSchemaAPI.test-seed#[A5C4C09C03B2B700]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 1895434 

[jira] [Resolved] (SOLR-5750) Backup/Restore API for SolrCloud

2016-05-02 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-5750.

   Resolution: Fixed
 Assignee: Varun Thacker  (was: David Smiley)
Fix Version/s: (was: 5.2)
   (was: master)
   6.1

> Backup/Restore API for SolrCloud
> 
>
> Key: SOLR-5750
> URL: https://issues.apache.org/jira/browse/SOLR-5750
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Varun Thacker
> Fix For: 6.1
>
> Attachments: SOLR-5750.patch, SOLR-5750.patch, SOLR-5750.patch, 
> SOLR-5750.patch, SOLR-5750.patch, SOLR-5750.patch, SOLR-5750.patch
>
>
> We should have an easy way to do backups and restores in SolrCloud. The 
> ReplicationHandler supports a backup command which can create snapshots of 
> the index but that is too little.
> The command should be able to backup:
> # Snapshots of all indexes or indexes from the leader or the shards
> # Config set
> # Cluster state
> # Cluster properties
> # Aliases
> # Overseer work queue?
> A restore should be able to completely restore the cloud i.e. no manual steps 
> required other than bringing nodes back up or setting up a new cloud cluster.
> SOLR-5340 will be a part of this issue.






[jira] [Commented] (SOLR-5750) Backup/Restore API for SolrCloud

2016-05-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15268021#comment-15268021
 ] 

ASF subversion and git services commented on SOLR-5750:
---

Commit dac044c94a33ebd655c1d5f5c628c83c75bf8697 in lucene-solr's branch 
refs/heads/branch_6x from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=dac044c ]

SOLR-5750: Add /admin/collections?action=BACKUP and RESTORE
(cherry picked from commit 70bcd56)
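For readers following along, the new Collections API actions can be exercised over HTTP roughly as follows. This is an illustrative sketch: the collection name, backup name, and location path are placeholders, and the location must be a filesystem path accessible to every Solr node.

```shell
# Illustrative use of the new BACKUP action (names/paths are placeholders;
# 'location' must be readable and writable by every node in the cluster).
curl 'http://localhost:8983/solr/admin/collections?action=BACKUP&name=myBackup&collection=techproducts&location=/backups'

# Restore the same backup into a new collection:
curl 'http://localhost:8983/solr/admin/collections?action=RESTORE&name=myBackup&collection=techproducts_restored&location=/backups'
```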


> Backup/Restore API for SolrCloud
> 
>
> Key: SOLR-5750
> URL: https://issues.apache.org/jira/browse/SOLR-5750
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: David Smiley
> Fix For: 5.2, master
>
> Attachments: SOLR-5750.patch, SOLR-5750.patch, SOLR-5750.patch, 
> SOLR-5750.patch, SOLR-5750.patch, SOLR-5750.patch, SOLR-5750.patch
>
>
> We should have an easy way to do backups and restores in SolrCloud. The 
> ReplicationHandler supports a backup command which can create snapshots of 
> the index but that is too little.
> The command should be able to backup:
> # Snapshots of all indexes or indexes from the leader or the shards
> # Config set
> # Cluster state
> # Cluster properties
> # Aliases
> # Overseer work queue?
> A restore should be able to completely restore the cloud i.e. no manual steps 
> required other than bringing nodes back up or setting up a new cloud cluster.
> SOLR-5340 will be a part of this issue.






[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 112 - Still Failing!

2016-05-02 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/112/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.lucene.queries.mlt.TestMoreLikeThis.testMultiFieldShouldReturnPerFieldBooleanQuery

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([25E751FED53FC993:4154093868D14F68]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.lucene.queries.mlt.TestMoreLikeThis.testMultiFieldShouldReturnPerFieldBooleanQuery(TestMoreLikeThis.java:320)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 7666 lines...]
   [junit4] Suite: org.apache.lucene.queries.mlt.TestMoreLikeThis
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestMoreLikeThis 
-Dtests.method=testMultiFieldShouldReturnPerFieldBooleanQuery 
-Dtests.seed=25E751FED53FC993 -Dtests.slow=true -Dtests.locale=sq-AL 
-Dtests.timezone=Indian/Christmas -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] FAILURE 0.11s J0 | 
TestMoreLikeThis.testMultiFieldShouldReturnPerFieldBooleanQuery <<<
   [junit4]> Throwable #1: java.lang.AssertionError
   [junit4]>at 

Re: [JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_92) - Build # 557 - Still Failing!

2016-05-02 Thread David Smiley
I pushed a fix.  The DocIdSetBuilder in IntersectsRPTVerifyQuery should be
lazily initialized in start(), which is when the terms field will be non-null.
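The shape of that fix, as a toy sketch: simplified names, not the actual Lucene classes (the real change is in IntersectsRPTVerifyQuery's visitor), but the same pattern of deferring construction until start(), by which point the needed field is guaranteed non-null.

```java
// Toy illustration of the lazy-init fix (simplified stand-in types; not
// the actual Lucene classes). Constructing the visitor no longer touches
// the builder, so construction is safe while 'terms' is still null.
public class LazyInitSketch {
    public static class Visitor {
        private StringBuilder builder; // stand-in for DocIdSetBuilder
        private String terms;          // set by the caller before start()

        public Visitor() {
            // Intentionally empty: builder is NOT created here, because
            // terms may still be null at construction time.
        }

        public void setTerms(String terms) {
            this.terms = terms;
        }

        public void start() {
            // Lazy init: by the time start() runs, terms is non-null,
            // so this cannot throw the NPE seen in the Jenkins failure.
            if (builder == null) {
                builder = new StringBuilder(terms.length());
            }
        }

        public boolean initialized() {
            return builder != null;
        }
    }
}
```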

On Mon, May 2, 2016 at 4:28 PM David Smiley 
wrote:

> Probably related to LUCENE-7262
>
>
> On Mon, May 2, 2016 at 4:23 PM Policeman Jenkins Server <
> jenk...@thetaphi.de> wrote:
>
>> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/557/
>> Java: 64bit/jdk1.8.0_92 -XX:+UseCompressedOops -XX:+UseSerialGC
>>
>> 1 tests failed.
>> FAILED:  org.apache.solr.search.TestSolr4Spatial2.testRptWithGeometryField
>>
>> Error Message:
>>
>>
>> Stack Trace:
>> java.lang.NullPointerException
>> at
>> __randomizedtesting.SeedInfo.seed([F87E22DEFDCE0F79:DE699A28C09F561B]:0)
>> at
>> org.apache.lucene.util.DocIdSetBuilder.<init>(DocIdSetBuilder.java:92)
>> at
>> org.apache.lucene.spatial.composite.IntersectsRPTVerifyQuery$IntersectsDifferentiatingQuery$IntersectsDifferentiatingVisitor.<init>(IntersectsRPTVerifyQuery.java:166)
>> at
>> org.apache.lucene.spatial.composite.IntersectsRPTVerifyQuery$IntersectsDifferentiatingQuery.compute(IntersectsRPTVerifyQuery.java:157)
>> at
>> org.apache.lucene.spatial.composite.IntersectsRPTVerifyQuery$1.scorer(IntersectsRPTVerifyQuery.java:95)
>> at org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
>> at
>> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
>> at
>> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
>> at
>> org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:260)
>> at
>> org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1810)
>> at
>> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1627)
>> at
>> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:643)
>> at
>> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:529)
>> at
>> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:293)
>> at
>> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
>> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2015)
>> at org.apache.solr.util.TestHarness.query(TestHarness.java:310)
>> at org.apache.solr.util.TestHarness.query(TestHarness.java:292)
>> at
>> org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:852)
>> at
>> org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:821)
>> at
>> org.apache.solr.search.TestSolr4Spatial2.testRptWithGeometryField(TestSolr4Spatial2.java:139)
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:498)
>> at
>> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
>> at
>> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
>> at
>> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
>> at
>> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
>> at
>> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
>> at
>> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
>> at
>> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
>> at
>> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
>> at
>> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>> at
>> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
>> at
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>> at
>> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
>> at
>> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
>> at
>> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
>> at
>> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
>> at
>> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
>> at
>> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
>> 

[jira] [Commented] (LUCENE-7262) Add back the "estimate match count" optimization

2016-05-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267972#comment-15267972
 ] 

ASF subversion and git services commented on LUCENE-7262:
-

Commit 5b51479b69ec3c52e42c9b95418ee285080311f7 in lucene-solr's branch 
refs/heads/branch_6x from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5b51479 ]

LUCENE-7262: Fix NPE, this should lazy-init in start()
(cherry picked from commit 91153b9)


> Add back the "estimate match count" optimization
> 
>
> Key: LUCENE-7262
> URL: https://issues.apache.org/jira/browse/LUCENE-7262
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: master, 6.1
>
> Attachments: LUCENE-7262.patch, LUCENE-7262.patch, LUCENE-7262.patch
>
>
> Follow-up to my last message on LUCENE-7051: I removed this optimization a 
> while ago because it made things a bit more complicated but did not seem to 
> help with point queries. However the reason why it did not seem to help was 
> that the benchmark only runs queries that match 25% of the dataset. This 
> makes the run time completely dominated by calls to FixedBitSet.set so the 
> call to FixedBitSet.cardinality() looks free. However with slightly sparser 
> queries like the geo benchmark generates (dense enough to trigger the 
> creation of a FixedBitSet but sparse enough so that FixedBitSet.set does not 
> dominate the run time), one can notice speed-ups when this call is skipped.
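To make the trade-off above concrete, here is a toy sketch (using java.util.BitSet rather than Lucene's FixedBitSet, with deliberately simplified bookkeeping): maintaining a cheap running counter while collecting doc IDs yields an estimate of the match count, so the final full-scan cardinality() call can be skipped when an upper bound suffices.

```java
import java.util.BitSet;

// Toy sketch of the "estimate match count" idea: count add() calls while
// setting bits, so the O(maxDoc/64) cardinality() scan can be skipped
// when only an estimate (an upper bound on distinct matches) is needed.
public class EstimateCountSketch {
    private final BitSet bits;
    private long counter; // over-counts duplicate docs, so it is an upper bound

    public EstimateCountSketch(int maxDoc) {
        this.bits = new BitSet(maxDoc);
    }

    public void add(int doc) {
        bits.set(doc);
        counter++; // cheap per-doc bookkeeping instead of a final scan
    }

    // O(1) upper bound on the true match count.
    public long estimatedMatchCount() {
        return counter;
    }

    // The exact count requires scanning the entire bit set.
    public long exactMatchCount() {
        return bits.cardinality();
    }
}
```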






[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+116) - Build # 559 - Still Failing!

2016-05-02 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/559/
Java: 32bit/jdk-9-ea+116 -server -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication

Error Message:
Index 0 out-of-bounds for length 0

Stack Trace:
java.lang.IndexOutOfBoundsException: Index 0 out-of-bounds for length 0
at 
__randomizedtesting.SeedInfo.seed([AA7944D4175172E5:5D0AAA8CD1B9DD03]:0)
at java.util.Objects.outOfBounds(java.base@9-ea/Objects.java:376)
at 
java.util.Objects.outOfBoundsCheckIndex(java.base@9-ea/Objects.java:386)
at java.util.Objects.checkIndex(java.base@9-ea/Objects.java:593)
at java.util.Objects.checkIndex(java.base@9-ea/Objects.java:543)
at java.util.ArrayList.get(java.base@9-ea/ArrayList.java:435)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication(TestReplicationHandler.java:1250)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:531)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-7262) Add back the "estimate match count" optimization

2016-05-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267967#comment-15267967
 ] 

ASF subversion and git services commented on LUCENE-7262:
-

Commit 91153b9627d7bd9e17dcb4762ebbaf26bc3410f4 in lucene-solr's branch 
refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=91153b9 ]

LUCENE-7262: Fix NPE, this should lazy-init in start()


> Add back the "estimate match count" optimization
> 
>
> Key: LUCENE-7262
> URL: https://issues.apache.org/jira/browse/LUCENE-7262
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: master, 6.1
>
> Attachments: LUCENE-7262.patch, LUCENE-7262.patch, LUCENE-7262.patch
>
>
> Follow-up to my last message on LUCENE-7051: I removed this optimization a 
> while ago because it made things a bit more complicated but did not seem to 
> help with point queries. However the reason why it did not seem to help was 
> that the benchmark only runs queries that match 25% of the dataset. This 
> makes the run time completely dominated by calls to FixedBitSet.set so the 
> call to FixedBitSet.cardinality() looks free. However with slightly sparser 
> queries like the geo benchmark generates (dense enough to trigger the 
> creation of a FixedBitSet but sparse enough so that FixedBitSet.set does not 
> dominate the run time), one can notice speed-ups when this call is skipped.






[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3244 - Still Failing!

2016-05-02 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3244/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.search.TestSolr4Spatial2.testRptWithGeometryField

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([18FC0C74E3CF3171:3EEBB482DE9E6813]:0)
at 
org.apache.lucene.util.DocIdSetBuilder.<init>(DocIdSetBuilder.java:92)
at 
org.apache.lucene.spatial.composite.IntersectsRPTVerifyQuery$IntersectsDifferentiatingQuery$IntersectsDifferentiatingVisitor.<init>(IntersectsRPTVerifyQuery.java:166)
at 
org.apache.lucene.spatial.composite.IntersectsRPTVerifyQuery$IntersectsDifferentiatingQuery.compute(IntersectsRPTVerifyQuery.java:157)
at 
org.apache.lucene.spatial.composite.IntersectsRPTVerifyQuery$1.scorer(IntersectsRPTVerifyQuery.java:95)
at org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
at 
org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:260)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1810)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1627)
at 
org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:643)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:529)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:292)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2016)
at org.apache.solr.util.TestHarness.query(TestHarness.java:310)
at org.apache.solr.util.TestHarness.query(TestHarness.java:292)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:851)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:820)
at 
org.apache.solr.search.TestSolr4Spatial2.testRptWithGeometryField(TestSolr4Spatial2.java:139)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[JENKINS] Lucene-Solr-Tests-master - Build # 1119 - Failure

2016-05-02 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1119/

1 tests failed.
FAILED:  org.apache.solr.search.TestSolr4Spatial2.testRptWithGeometryField

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([D72543C495094F53:F132FB32A8581631]:0)
at 
org.apache.lucene.util.DocIdSetBuilder.<init>(DocIdSetBuilder.java:92)
at 
org.apache.lucene.spatial.composite.IntersectsRPTVerifyQuery$IntersectsDifferentiatingQuery$IntersectsDifferentiatingVisitor.<init>(IntersectsRPTVerifyQuery.java:166)
at 
org.apache.lucene.spatial.composite.IntersectsRPTVerifyQuery$IntersectsDifferentiatingQuery.compute(IntersectsRPTVerifyQuery.java:157)
at 
org.apache.lucene.spatial.composite.IntersectsRPTVerifyQuery$1.scorer(IntersectsRPTVerifyQuery.java:95)
at org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
at 
org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:260)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1810)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1627)
at 
org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:643)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:529)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:292)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2016)
at org.apache.solr.util.TestHarness.query(TestHarness.java:310)
at org.apache.solr.util.TestHarness.query(TestHarness.java:292)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:851)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:820)
at 
org.apache.solr.search.TestSolr4Spatial2.testRptWithGeometryField(TestSolr4Spatial2.java:139)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)

[jira] [Commented] (SOLR-5750) Backup/Restore API for SolrCloud

2016-05-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267951#comment-15267951
 ] 

ASF subversion and git services commented on SOLR-5750:
---

Commit 70bcd562f98ede21dfc93a1ba002c61fac888b29 in lucene-solr's branch 
refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=70bcd56 ]

SOLR-5750: Add /admin/collections?action=BACKUP and RESTORE


> Backup/Restore API for SolrCloud
> 
>
> Key: SOLR-5750
> URL: https://issues.apache.org/jira/browse/SOLR-5750
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: David Smiley
> Fix For: 5.2, master
>
> Attachments: SOLR-5750.patch, SOLR-5750.patch, SOLR-5750.patch, 
> SOLR-5750.patch, SOLR-5750.patch, SOLR-5750.patch, SOLR-5750.patch
>
>
> We should have an easy way to do backups and restores in SolrCloud. The 
> ReplicationHandler supports a backup command which can create snapshots of 
> the index but that is too little.
> The command should be able to backup:
> # Snapshots of all indexes or indexes from the leader or the shards
> # Config set
> # Cluster state
> # Cluster properties
> # Aliases
> # Overseer work queue?
> A restore should be able to completely restore the cloud i.e. no manual steps 
> required other than bringing nodes back up or setting up a new cloud cluster.
> SOLR-5340 will be a part of this issue.






[JENKINS] Lucene-Solr-5.5-Linux (32bit/jdk1.7.0_80) - Build # 250 - Failure!

2016-05-02 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Linux/250/
Java: 32bit/jdk1.7.0_80 -server -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.schema.TestManagedSchemaAPI.test

Error Message:
Error from server at http://127.0.0.1:45956/solr/testschemaapi_shard1_replica2: 
ERROR: [doc=2] unknown field 'myNewField1'

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:45956/solr/testschemaapi_shard1_replica2: ERROR: 
[doc=2] unknown field 'myNewField1'
at 
__randomizedtesting.SeedInfo.seed([6FF4ACCEB03EB5E5:E7A093141EC2D81D]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:632)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:981)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at 
org.apache.solr.schema.TestManagedSchemaAPI.testAddFieldAndDocument(TestManagedSchemaAPI.java:101)
at 
org.apache.solr.schema.TestManagedSchemaAPI.test(TestManagedSchemaAPI.java:69)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)

[JENKINS] Lucene-Solr-5.5-Windows (32bit/jdk1.7.0_80) - Build # 67 - Failure!

2016-05-02 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Windows/67/
Java: 32bit/jdk1.7.0_80 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.schema.TestManagedSchemaAPI.test

Error Message:
Error from server at http://127.0.0.1:62488/solr/testschemaapi_shard1_replica2: 
ERROR: [doc=2] unknown field 'myNewField1'

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:62488/solr/testschemaapi_shard1_replica2: ERROR: 
[doc=2] unknown field 'myNewField1'
at 
__randomizedtesting.SeedInfo.seed([D653EF9AC18940A:8531012302E4F9F2]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:632)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:981)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at 
org.apache.solr.schema.TestManagedSchemaAPI.testAddFieldAndDocument(TestManagedSchemaAPI.java:101)
at 
org.apache.solr.schema.TestManagedSchemaAPI.test(TestManagedSchemaAPI.java:69)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)

[jira] [Commented] (SOLR-9055) Make collection backup/restore extensible

2016-05-02 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267808#comment-15267808
 ] 

Hrishikesh Gadre commented on SOLR-9055:


>>However if we want to keep things simple, we can choose to not provide 
>>separate APIs to configure "repositories". Instead we can just pick the same 
>>file-system used to store the indexed data. That means in case of local 
>>file-system, the backup will be stored on shared file-system using 
>>SimpleFSDirectory implementation AND for HDFS we will use HdfsDirectory impl. 
>>Make sense?

I think the main problem here is identifying the type of file-system used for a 
given collection at the Overseer. (The Solr core, on the other hand, already has 
a DirectoryFactory reference, so we can instantiate the appropriate directory in 
the snapshooter.)
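A minimal sketch of the "Repository" abstraction under discussion might look like the following. The interface and class names are hypothetical, chosen for illustration only and not Solr's eventual API: the idea is simply that backup code writes through this interface, and a local-filesystem or HDFS implementation is chosen per collection.

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URI;

// Hypothetical repository abstraction for backup storage (names are
// illustrative, not Solr's actual API).
interface BackupRepository {
  OutputStream createOutput(URI path) throws IOException;
  InputStream openInput(URI path) throws IOException;
  boolean exists(URI path);
}

// Local-filesystem implementation; an HDFS implementation could wrap
// HdfsDirectory in the same shape.
final class LocalFileSystemRepository implements BackupRepository {
  public OutputStream createOutput(URI path) throws IOException {
    File f = new File(path);
    File parent = f.getParentFile();
    if (parent != null) parent.mkdirs(); // ensure the backup directory exists
    return new FileOutputStream(f);
  }
  public InputStream openInput(URI path) throws IOException {
    return new FileInputStream(new File(path));
  }
  public boolean exists(URI path) { return new File(path).exists(); }
}

public class BackupRepositoryDemo {
  public static void main(String[] args) throws IOException {
    BackupRepository repo = new LocalFileSystemRepository();
    URI u = File.createTempFile("backup-", ".dat").toURI();
    try (OutputStream out = repo.createOutput(u)) { out.write(42); }
    try (InputStream in = repo.openInput(u)) {
      System.out.println("exists=" + repo.exists(u) + " firstByte=" + in.read());
    }
  }
}
```

Keeping the interface URI-based (rather than File-based) is what would let the same Overseer-side code target either a shared local file-system or HDFS.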

> Make collection backup/restore extensible
> -
>
> Key: SOLR-9055
> URL: https://issues.apache.org/jira/browse/SOLR-9055
> Project: Solr
>  Issue Type: Task
>Reporter: Hrishikesh Gadre
> Attachments: SOLR-9055.patch
>
>
> SOLR-5750 implemented a backup/restore API for Solr. This JIRA tracks the 
> code cleanup/refactoring. Specifically, the following improvements should be made:
> - Add Solr/Lucene version to check the compatibility between the backup 
> version and the version of Solr on which it is being restored.
> - Add a backup implementation version to check the compatibility between the 
> "restore" implementation and backup format.
> - Introduce a Strategy interface to define how the Solr index data is backed 
> up (e.g. using file copy approach).
> - Introduce a Repository interface to define the file-system used to store 
> the backup data. (currently works only with local file system but can be 
> extended). This should be enhanced to introduce support for "registering" 
> repositories (e.g. HDFS, S3 etc.)






[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_92) - Build # 16644 - Still Failing!

2016-05-02 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16644/
Java: 32bit/jdk1.8.0_92 -server -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.schema.TestManagedSchemaAPI.test

Error Message:
Error from server at http://127.0.0.1:45722/solr/testschemaapi_shard1_replica1: 
ERROR: [doc=2] unknown field 'myNewField1'

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:45722/solr/testschemaapi_shard1_replica1: ERROR: 
[doc=2] unknown field 'myNewField1'
at 
__randomizedtesting.SeedInfo.seed([E98FDCAFE118D437:61DBE3754FE4B9CF]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:661)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1073)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:962)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:898)
at 
org.apache.solr.schema.TestManagedSchemaAPI.testAddFieldAndDocument(TestManagedSchemaAPI.java:86)
at 
org.apache.solr.schema.TestManagedSchemaAPI.test(TestManagedSchemaAPI.java:55)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)

[jira] [Commented] (SOLR-8297) Allow join query over 2 sharded collections: enhance functionality and exception handling

2016-05-02 Thread Shikha Somani (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267798#comment-15267798
 ] 

Shikha Somani commented on SOLR-8297:
-

Added a test case to verify the distributed join when the secondary collection is 
not singly sharded but is sharded the same way as the primary. The PR is ready 
for merge.

> Allow join query over 2 sharded collections: enhance functionality and 
> exception handling
> -
>
> Key: SOLR-8297
> URL: https://issues.apache.org/jira/browse/SOLR-8297
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Paul Blanchaert
>
> Enhancement based on SOLR-4905. New Jira issue raised as suggested by Mikhail 
> Khludnev.
> A) exception handling:
> The exception "SolrCloud join: multiple shards not yet supported" thrown in 
> the function findLocalReplicaForFromIndex of JoinQParserPlugin is not 
> triggered correctly: in my use case, I have a join on a facet.query, and when my 
> results are found in only one shard and the facet.query with the join is 
> querying the last replica of the last slice, the exception is not thrown.
> I believe it is better to check the number of slices when verifying the 
> "multiple shards not yet supported" condition (so the exception is thrown when 
> zkController.getClusterState().getSlices(fromIndex).size() > 1).
> B) functional enhancement:
> I would expect a cross-core join over sharded collections to work without 
> problems when the following conditions are met:
> 1) both collections are sharded with the same replicationFactor and numShards
> 2) router.field of the collections is set to the same "key-field" (collection 
> of "fromindex" has router.field = "from" field and collection joined to has 
> router.field = "to" field)
> The router.field setup ensures that documents with the same "key-field" are 
> routed to the same node. 
> So the combination based on the "key-field" should always be available within 
> the same node.
> From a user perspective, I believe these assumptions seem to be a "normal" 
> use-case in the cross-core join in SolrCloud.
> Hope this helps
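The co-location argument behind conditions (1) and (2) can be illustrated with a toy routing sketch. This is not Solr's actual compositeId/murmur3 routing; it only shows the shape of the claim: any scheme of the form hash(routeKey) mod numShards sends equal key values in two collections with the same numShards to the same shard index.

```java
// Toy illustration of key-based co-location (not Solr's real hash routing).
public class RouterFieldDemo {
  static int shardOf(String routeKey, int numShards) {
    // Mask the sign bit so the modulo result is a valid shard index.
    return (routeKey.hashCode() & 0x7fffffff) % numShards;
  }

  public static void main(String[] args) {
    int numShards = 4; // identical in both collections, per condition (1)
    for (String key : new String[] {"cust-1", "cust-2", "order-77"}) {
      // router.field is "from" in the fromindex collection and "to" in the
      // collection joined to, but both carry the same key value (condition 2):
      int fromShard = shardOf(key, numShards);
      int toShard = shardOf(key, numShards);
      System.out.println(key + " -> from-shard " + fromShard + ", to-shard " + toShard);
    }
  }
}
```

Because the hash function and shard count are the same on both sides, the two lookups always agree, which is why the joinable documents end up on the same node.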






[JENKINS] Lucene-Solr-Tests-6.x - Build # 180 - Failure

2016-05-02 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/180/

1 tests failed.
FAILED:  org.apache.solr.search.TestSolr4Spatial2.testRptWithGeometryField

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([11E30B2D68347860:37F4B3DB55652102]:0)
at 
org.apache.lucene.util.DocIdSetBuilder.<init>(DocIdSetBuilder.java:92)
at 
org.apache.lucene.spatial.composite.IntersectsRPTVerifyQuery$IntersectsDifferentiatingQuery$IntersectsDifferentiatingVisitor.<init>(IntersectsRPTVerifyQuery.java:166)
at 
org.apache.lucene.spatial.composite.IntersectsRPTVerifyQuery$IntersectsDifferentiatingQuery.compute(IntersectsRPTVerifyQuery.java:157)
at 
org.apache.lucene.spatial.composite.IntersectsRPTVerifyQuery$1.scorer(IntersectsRPTVerifyQuery.java:95)
at org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
at 
org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:260)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1810)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1627)
at 
org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:643)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:529)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:293)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2015)
at org.apache.solr.util.TestHarness.query(TestHarness.java:310)
at org.apache.solr.util.TestHarness.query(TestHarness.java:292)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:852)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:821)
at 
org.apache.solr.search.TestSolr4Spatial2.testRptWithGeometryField(TestSolr4Spatial2.java:139)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 

[JENKINS] Lucene-Solr-6.x-Windows (64bit/jdk1.8.0_92) - Build # 151 - Still Failing!

2016-05-02 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/151/
Java: 64bit/jdk1.8.0_92 -XX:-UseCompressedOops -XX:+UseG1GC

3 tests failed.
FAILED:  org.apache.solr.search.TestSolr4Spatial2.testRptWithGeometryField

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([44B6183E0E121641:62A1A0C833434F23]:0)
at 
org.apache.lucene.util.DocIdSetBuilder.&lt;init&gt;(DocIdSetBuilder.java:92)
at 
org.apache.lucene.spatial.composite.IntersectsRPTVerifyQuery$IntersectsDifferentiatingQuery$IntersectsDifferentiatingVisitor.&lt;init&gt;(IntersectsRPTVerifyQuery.java:166)
at 
org.apache.lucene.spatial.composite.IntersectsRPTVerifyQuery$IntersectsDifferentiatingQuery.compute(IntersectsRPTVerifyQuery.java:157)
at 
org.apache.lucene.spatial.composite.IntersectsRPTVerifyQuery$1.scorer(IntersectsRPTVerifyQuery.java:95)
at org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
at 
org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:260)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1810)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1627)
at 
org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:643)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:529)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:293)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2015)
at org.apache.solr.util.TestHarness.query(TestHarness.java:310)
at org.apache.solr.util.TestHarness.query(TestHarness.java:292)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:852)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:821)
at 
org.apache.solr.search.TestSolr4Spatial2.testRptWithGeometryField(TestSolr4Spatial2.java:139)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Comment Edited] (LUCENE-7269) TestPointQueries failures

2016-05-02 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267724#comment-15267724
 ] 

Steve Rowe edited comment on LUCENE-7269 at 5/2/16 11:34 PM:
-

This code - added in LUCENE-7262 - is where the assertion is tripped:

{code:java|title=DocIdSetBuilder.java}
101:  DocIdSetBuilder(int maxDoc, int docCount, long valueCount) {
102:    this.maxDoc = maxDoc;
103:    this.multivalued = docCount < 0 || docCount != valueCount;
104:    this.numValuesPerDoc = (docCount < 0 || valueCount < 0)
105:        // assume one value per doc, this means the cost will be overestimated
106:        // if the docs are actually multi-valued
107:        ? 1
108:        // otherwise compute from index stats
109:        : (double) valueCount / docCount;
110:    assert numValuesPerDoc >= 1;
{code}
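As an aside for readers following along: the {{numValuesPerDoc}} computation can be reproduced in isolation to see which inputs trip the assert on line 110. The following is a standalone sketch (not the Lucene source); the class and method names are made up for illustration.

```java
// Standalone sketch of the numValuesPerDoc computation from
// DocIdSetBuilder (names here are illustrative, not Lucene's).
// The assert on line 110 fails whenever index stats report fewer
// values than docs, i.e. 0 <= valueCount < docCount.
public class NumValuesPerDocDemo {

    public static double numValuesPerDoc(int docCount, long valueCount) {
        return (docCount < 0 || valueCount < 0)
            // unknown stats: assume one value per doc (cost may be overestimated)
            ? 1
            // otherwise compute from index stats
            : (double) valueCount / docCount;
    }

    public static void main(String[] args) {
        System.out.println(numValuesPerDoc(10, 20)); // 2.0 - multi-valued, assert holds
        System.out.println(numValuesPerDoc(-1, 20)); // 1.0 - unknown stats, assert holds
        System.out.println(numValuesPerDoc(10, 5));  // 0.5 - would trip the assert
    }
}
```

In other words, the assertion can only fail if a caller passes a non-negative valueCount that is smaller than docCount.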

The same assertion was tripped in {{TestPointQueries.testRandomLongsTiny()}} on 
Policeman Jenkins 
[http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16643/]:

{noformat}
   [junit4] Suite: org.apache.lucene.search.TestPointQueries
   [junit4]   2> May 03, 2016 4:32:50 AM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
   [junit4]   2> WARNING: Uncaught exception in thread: 
Thread[T1,5,TGRP-TestPointQueries]
   [junit4]   2> java.lang.AssertionError
   [junit4]   2>at 
__randomizedtesting.SeedInfo.seed([FFA825CE713FFF2F]:0)
   [junit4]   2>at 
org.apache.lucene.util.DocIdSetBuilder.&lt;init&gt;(DocIdSetBuilder.java:110)
   [junit4]   2>at 
org.apache.lucene.util.DocIdSetBuilder.&lt;init&gt;(DocIdSetBuilder.java:98)
   [junit4]   2>at 
org.apache.lucene.search.PointRangeQuery$1.buildMatchingDocIdSet(PointRangeQuery.java:109)
   [junit4]   2>at 
org.apache.lucene.search.PointRangeQuery$1.scorer(PointRangeQuery.java:213)
   [junit4]   2>at 
org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
   [junit4]   2>at 
org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.cache(LRUQueryCache.java:683)
   [junit4]   2>at 
org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.bulkScorer(LRUQueryCache.java:766)
   [junit4]   2>at 
org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)
   [junit4]   2>at 
org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)
   [junit4]   2>at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
   [junit4]   2>at 
org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:91)
   [junit4]   2>at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
   [junit4]   2>at 
org.apache.lucene.search.TestPointQueries$1._run(TestPointQueries.java:554)
   [junit4]   2>at 
org.apache.lucene.search.TestPointQueries$1.run(TestPointQueries.java:503)
   [junit4]   2> 
   [junit4]   2> May 03, 2016 4:32:50 AM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
   [junit4]   2> WARNING: Uncaught exception in thread: 
Thread[T0,5,TGRP-TestPointQueries]
   [junit4]   2> java.lang.AssertionError
   [junit4]   2>at 
__randomizedtesting.SeedInfo.seed([FFA825CE713FFF2F]:0)
   [junit4]   2>at 
org.apache.lucene.util.DocIdSetBuilder.&lt;init&gt;(DocIdSetBuilder.java:110)
   [junit4]   2>at 
org.apache.lucene.util.DocIdSetBuilder.&lt;init&gt;(DocIdSetBuilder.java:98)
   [junit4]   2>at 
org.apache.lucene.search.PointRangeQuery$1.buildMatchingDocIdSet(PointRangeQuery.java:109)
   [junit4]   2>at 
org.apache.lucene.search.PointRangeQuery$1.scorer(PointRangeQuery.java:213)
   [junit4]   2>at 
org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
   [junit4]   2>at 
org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.cache(LRUQueryCache.java:683)
   [junit4]   2>at 
org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.bulkScorer(LRUQueryCache.java:766)
   [junit4]   2>at 
org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)
   [junit4]   2>at 
org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)
   [junit4]   2>at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
   [junit4]   2>at 
org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:91)
   [junit4]   2>at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
   [junit4]   2>at 
org.apache.lucene.search.TestPointQueries$1._run(TestPointQueries.java:554)
   [junit4]   2>at 
org.apache.lucene.search.TestPointQueries$1.run(TestPointQueries.java:503)
   [junit4]   2> 
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestPointQueries 
-Dtests.method=testRandomLongsTiny -Dtests.seed=FFA825CE713FFF2F 

[jira] [Comment Edited] (LUCENE-7269) TestPointQueries failures

2016-05-02 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267724#comment-15267724
 ] 

Steve Rowe edited comment on LUCENE-7269 at 5/2/16 11:31 PM:
-

This code - added in LUCENE-7262 - is where the assertion is tripped:

{code:java|title=DocIdSetBuilder.java}
101:  DocIdSetBuilder(int maxDoc, int docCount, long valueCount) {
102:    this.maxDoc = maxDoc;
103:    this.multivalued = docCount < 0 || docCount != valueCount;
104:    this.numValuesPerDoc = (docCount < 0 || valueCount < 0)
105:        // assume one value per doc, this means the cost will be overestimated
106:        // if the docs are actually multi-valued
107:        ? 1
108:        // otherwise compute from index stats
109:        : (double) valueCount / docCount;
110:    assert numValuesPerDoc >= 1;
{code}

The same assertion was tripped in {{TestPointQueries.testRandomLongsTiny()}} on 
Policeman Jenkins [http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16643/]:

{noformat}
   [junit4] Suite: org.apache.lucene.search.TestPointQueries
   [junit4]   2> May 03, 2016 4:32:50 AM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
   [junit4]   2> WARNING: Uncaught exception in thread: 
Thread[T1,5,TGRP-TestPointQueries]
   [junit4]   2> java.lang.AssertionError
   [junit4]   2>at 
__randomizedtesting.SeedInfo.seed([FFA825CE713FFF2F]:0)
   [junit4]   2>at 
org.apache.lucene.util.DocIdSetBuilder.&lt;init&gt;(DocIdSetBuilder.java:110)
   [junit4]   2>at 
org.apache.lucene.util.DocIdSetBuilder.&lt;init&gt;(DocIdSetBuilder.java:98)
   [junit4]   2>at 
org.apache.lucene.search.PointRangeQuery$1.buildMatchingDocIdSet(PointRangeQuery.java:109)
   [junit4]   2>at 
org.apache.lucene.search.PointRangeQuery$1.scorer(PointRangeQuery.java:213)
   [junit4]   2>at 
org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
   [junit4]   2>at 
org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.cache(LRUQueryCache.java:683)
   [junit4]   2>at 
org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.bulkScorer(LRUQueryCache.java:766)
   [junit4]   2>at 
org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)
   [junit4]   2>at 
org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)
   [junit4]   2>at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
   [junit4]   2>at 
org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:91)
   [junit4]   2>at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
   [junit4]   2>at 
org.apache.lucene.search.TestPointQueries$1._run(TestPointQueries.java:554)
   [junit4]   2>at 
org.apache.lucene.search.TestPointQueries$1.run(TestPointQueries.java:503)
   [junit4]   2> 
   [junit4]   2> May 03, 2016 4:32:50 AM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
   [junit4]   2> WARNING: Uncaught exception in thread: 
Thread[T0,5,TGRP-TestPointQueries]
   [junit4]   2> java.lang.AssertionError
   [junit4]   2>at 
__randomizedtesting.SeedInfo.seed([FFA825CE713FFF2F]:0)
   [junit4]   2>at 
org.apache.lucene.util.DocIdSetBuilder.&lt;init&gt;(DocIdSetBuilder.java:110)
   [junit4]   2>at 
org.apache.lucene.util.DocIdSetBuilder.&lt;init&gt;(DocIdSetBuilder.java:98)
   [junit4]   2>at 
org.apache.lucene.search.PointRangeQuery$1.buildMatchingDocIdSet(PointRangeQuery.java:109)
   [junit4]   2>at 
org.apache.lucene.search.PointRangeQuery$1.scorer(PointRangeQuery.java:213)
   [junit4]   2>at 
org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
   [junit4]   2>at 
org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.cache(LRUQueryCache.java:683)
   [junit4]   2>at 
org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.bulkScorer(LRUQueryCache.java:766)
   [junit4]   2>at 
org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)
   [junit4]   2>at 
org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)
   [junit4]   2>at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
   [junit4]   2>at 
org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:91)
   [junit4]   2>at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
   [junit4]   2>at 
org.apache.lucene.search.TestPointQueries$1._run(TestPointQueries.java:554)
   [junit4]   2>at 
org.apache.lucene.search.TestPointQueries$1.run(TestPointQueries.java:503)
   [junit4]   2> 
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestPointQueries 
-Dtests.method=testRandomLongsTiny -Dtests.seed=FFA825CE713FFF2F 

[jira] [Commented] (LUCENE-7262) Add back the "estimate match count" optimization

2016-05-02 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267727#comment-15267727
 ] 

Steve Rowe commented on LUCENE-7262:


TestPointQueries failures reported on LUCENE-7269 appear to be related to this 
issue.

> Add back the "estimate match count" optimization
> 
>
> Key: LUCENE-7262
> URL: https://issues.apache.org/jira/browse/LUCENE-7262
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: master, 6.1
>
> Attachments: LUCENE-7262.patch, LUCENE-7262.patch, LUCENE-7262.patch
>
>
> Follow-up to my last message on LUCENE-7051: I removed this optimization a 
> while ago because it made things a bit more complicated but did not seem to 
> help with point queries. However the reason why it did not seem to help was 
> that the benchmark only runs queries that match 25% of the dataset. This 
> makes the run time completely dominated by calls to FixedBitSet.set so the 
> call to FixedBitSet.cardinality() looks free. However with slightly sparser 
> queries like the geo benchmark generates (dense enough to trigger the 
> creation of a FixedBitSet but sparse enough so that FixedBitSet.set does not 
> dominate the run time), one can notice speed-ups when this call is skipped.
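The optimization described above can be sketched generically: maintain a running count while setting bits, so the final match count is available in O(1) rather than via a full scan of the underlying words. A minimal illustration using java.util.BitSet follows; the class and method names are made up for illustration, and this is not the Lucene implementation.

```java
import java.util.BitSet;

// Minimal illustration of the "estimate match count" idea: keep a
// running count while setting bits, so no cardinality() scan over the
// underlying words is needed once collection is finished.
public class CountedBitSet {
    private final BitSet bits;
    private int count; // number of distinct bits set so far

    public CountedBitSet(int maxDoc) {
        this.bits = new BitSet(maxDoc);
    }

    public void set(int doc) {
        if (!bits.get(doc)) { // count only newly-set bits
            bits.set(doc);
            count++;
        }
    }

    // O(1), unlike BitSet.cardinality() which scans every word
    public int cardinality() {
        return count;
    }
}
```

The trade-off is an extra get() per set() call; whether that wins depends on how dense the matches are, which is exactly the density discussion in the issue description.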



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7269) TestPointQueries failures

2016-05-02 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-7269:
---
Summary: TestPointQueries failures  (was: 
TestPointQueries.testRandomBinaryTiny() failure)

> TestPointQueries failures
> -
>
> Key: LUCENE-7269
> URL: https://issues.apache.org/jira/browse/LUCENE-7269
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: master
>Reporter: Steve Rowe
>
> My Jenkins found a reproducing seed on master:
> {noformat}
> Checking out Revision a48245a1bfbef0259d38ef36fec814f3891ab80c 
> (refs/remotes/origin/master)
> [...]
>[junit4] Suite: org.apache.lucene.search.TestPointQueries
>[junit4] IGNOR/A 0.00s J1 | TestPointQueries.testRandomBinaryBig
>[junit4]> Assumption #1: 'nightly' test group is disabled (@Nightly())
>[junit4]   2> maj 02, 2016 3:29:13 PM 
> com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
>  uncaughtException
>[junit4]   2> WARNING: Uncaught exception in thread: 
> Thread[T0,5,TGRP-TestPointQueries]
>[junit4]   2> java.lang.AssertionError
>[junit4]   2>  at 
> __randomizedtesting.SeedInfo.seed([61528898A1A30059]:0)
>[junit4]   2>  at 
> org.apache.lucene.util.DocIdSetBuilder.&lt;init&gt;(DocIdSetBuilder.java:110)
>[junit4]   2>  at 
> org.apache.lucene.util.DocIdSetBuilder.&lt;init&gt;(DocIdSetBuilder.java:98)
>[junit4]   2>  at 
> org.apache.lucene.search.PointRangeQuery$1.buildMatchingDocIdSet(PointRangeQuery.java:109)
>[junit4]   2>  at 
> org.apache.lucene.search.PointRangeQuery$1.scorer(PointRangeQuery.java:213)
>[junit4]   2>  at 
> org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
>[junit4]   2>  at 
> org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.cache(LRUQueryCache.java:683)
>[junit4]   2>  at 
> org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.bulkScorer(LRUQueryCache.java:766)
>[junit4]   2>  at 
> org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)
>[junit4]   2>  at 
> org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)
>[junit4]   2>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
>[junit4]   2>  at 
> org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:91)
>[junit4]   2>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
>[junit4]   2>  at 
> org.apache.lucene.search.TestPointQueries$2._run(TestPointQueries.java:805)
>[junit4]   2>  at 
> org.apache.lucene.search.TestPointQueries$2.run(TestPointQueries.java:758)
>[junit4]   2> 
>[junit4]   2> maj 02, 2016 3:29:13 PM 
> com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
>  uncaughtException
>[junit4]   2> WARNING: Uncaught exception in thread: 
> Thread[T1,5,TGRP-TestPointQueries]
>[junit4]   2> java.lang.AssertionError
>[junit4]   2>  at 
> __randomizedtesting.SeedInfo.seed([61528898A1A30059]:0)
>[junit4]   2>  at 
> org.apache.lucene.util.DocIdSetBuilder.&lt;init&gt;(DocIdSetBuilder.java:110)
>[junit4]   2>  at 
> org.apache.lucene.util.DocIdSetBuilder.&lt;init&gt;(DocIdSetBuilder.java:98)
>[junit4]   2>  at 
> org.apache.lucene.search.PointRangeQuery$1.buildMatchingDocIdSet(PointRangeQuery.java:109)
>[junit4]   2>  at 
> org.apache.lucene.search.PointRangeQuery$1.scorer(PointRangeQuery.java:213)
>[junit4]   2>  at 
> org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
>[junit4]   2>  at 
> org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.cache(LRUQueryCache.java:683)
>[junit4]   2>  at 
> org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.bulkScorer(LRUQueryCache.java:766)
>[junit4]   2>  at 
> org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)
>[junit4]   2>  at 
> org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)
>[junit4]   2>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
>[junit4]   2>  at 
> org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:91)
>[junit4]   2>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
>[junit4]   2>  at 
> org.apache.lucene.search.TestPointQueries$2._run(TestPointQueries.java:805)
>[junit4]   2>  at 
> org.apache.lucene.search.TestPointQueries$2.run(TestPointQueries.java:758)
>[junit4]   2> 
>[junit4]   2> maj 02, 2016 3:29:13 PM 
> com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
>  uncaughtException
>[junit4]   2> WARNING: Uncaught exception in thread: 
> Thread[T3,5,TGRP-TestPointQueries]
>

[jira] [Comment Edited] (LUCENE-7269) TestPointQueries.testRandomBinaryTiny() failure

2016-05-02 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267724#comment-15267724
 ] 

Steve Rowe edited comment on LUCENE-7269 at 5/2/16 11:27 PM:
-

This code - added in LUCENE-7262 - is where the assertion is tripped:

{code:java|title=DocIdSetBuilder.java}
101:  DocIdSetBuilder(int maxDoc, int docCount, long valueCount) {
102:    this.maxDoc = maxDoc;
103:    this.multivalued = docCount < 0 || docCount != valueCount;
104:    this.numValuesPerDoc = (docCount < 0 || valueCount < 0)
105:        // assume one value per doc, this means the cost will be overestimated
106:        // if the docs are actually multi-valued
107:        ? 1
108:        // otherwise compute from index stats
109:        : (double) valueCount / docCount;
110:    assert numValuesPerDoc >= 1;
{code}

The same assertion was tripped in 
{{TestPointQueries.testRandomLongsTiny()}} on Policeman Jenkins 
[http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16643/]:

{noformat}
   [junit4] Suite: org.apache.lucene.search.TestPointQueries
   [junit4]   2> May 03, 2016 4:32:50 AM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
   [junit4]   2> WARNING: Uncaught exception in thread: 
Thread[T1,5,TGRP-TestPointQueries]
   [junit4]   2> java.lang.AssertionError
   [junit4]   2>at 
__randomizedtesting.SeedInfo.seed([FFA825CE713FFF2F]:0)
   [junit4]   2>at 
org.apache.lucene.util.DocIdSetBuilder.&lt;init&gt;(DocIdSetBuilder.java:110)
   [junit4]   2>at 
org.apache.lucene.util.DocIdSetBuilder.&lt;init&gt;(DocIdSetBuilder.java:98)
   [junit4]   2>at 
org.apache.lucene.search.PointRangeQuery$1.buildMatchingDocIdSet(PointRangeQuery.java:109)
   [junit4]   2>at 
org.apache.lucene.search.PointRangeQuery$1.scorer(PointRangeQuery.java:213)
   [junit4]   2>at 
org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
   [junit4]   2>at 
org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.cache(LRUQueryCache.java:683)
   [junit4]   2>at 
org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.bulkScorer(LRUQueryCache.java:766)
   [junit4]   2>at 
org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)
   [junit4]   2>at 
org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)
   [junit4]   2>at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
   [junit4]   2>at 
org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:91)
   [junit4]   2>at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
   [junit4]   2>at 
org.apache.lucene.search.TestPointQueries$1._run(TestPointQueries.java:554)
   [junit4]   2>at 
org.apache.lucene.search.TestPointQueries$1.run(TestPointQueries.java:503)
   [junit4]   2> 
   [junit4]   2> May 03, 2016 4:32:50 AM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
   [junit4]   2> WARNING: Uncaught exception in thread: 
Thread[T0,5,TGRP-TestPointQueries]
   [junit4]   2> java.lang.AssertionError
   [junit4]   2>at 
__randomizedtesting.SeedInfo.seed([FFA825CE713FFF2F]:0)
   [junit4]   2>at 
org.apache.lucene.util.DocIdSetBuilder.&lt;init&gt;(DocIdSetBuilder.java:110)
   [junit4]   2>at 
org.apache.lucene.util.DocIdSetBuilder.&lt;init&gt;(DocIdSetBuilder.java:98)
   [junit4]   2>at 
org.apache.lucene.search.PointRangeQuery$1.buildMatchingDocIdSet(PointRangeQuery.java:109)
   [junit4]   2>at 
org.apache.lucene.search.PointRangeQuery$1.scorer(PointRangeQuery.java:213)
   [junit4]   2>at 
org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
   [junit4]   2>at 
org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.cache(LRUQueryCache.java:683)
   [junit4]   2>at 
org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.bulkScorer(LRUQueryCache.java:766)
   [junit4]   2>at 
org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)
   [junit4]   2>at 
org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)
   [junit4]   2>at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
   [junit4]   2>at 
org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:91)
   [junit4]   2>at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
   [junit4]   2>at 
org.apache.lucene.search.TestPointQueries$1._run(TestPointQueries.java:554)
   [junit4]   2>at 
org.apache.lucene.search.TestPointQueries$1.run(TestPointQueries.java:503)
   [junit4]   2> 
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestPointQueries 
-Dtests.method=testRandomLongsTiny -Dtests.seed=FFA825CE713FFF2F 

[jira] [Commented] (LUCENE-7269) TestPointQueries.testRandomBinaryTiny() failure

2016-05-02 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267724#comment-15267724
 ] 

Steve Rowe commented on LUCENE-7269:


This code - added in LUCENE-7262 - is where the assertion is tripped:

{code:java|title=DocIdSetBuilder.java}
101:  DocIdSetBuilder(int maxDoc, int docCount, long valueCount) {
102:    this.maxDoc = maxDoc;
103:    this.multivalued = docCount < 0 || docCount != valueCount;
104:    this.numValuesPerDoc = (docCount < 0 || valueCount < 0)
105:        // assume one value per doc, this means the cost will be overestimated
106:        // if the docs are actually multi-valued
107:        ? 1
108:        // otherwise compute from index stats
109:        : (double) valueCount / docCount;
110:    assert numValuesPerDoc >= 1;
{code}

The same assertion was tripped in 
{{TestPointQueries.testRandomLongsTiny()}} on Policeman Jenkins 
[http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16643/]:

{noformat}
   [junit4] Suite: org.apache.lucene.search.TestPointQueries
   [junit4]   2> May 03, 2016 4:32:50 AM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
   [junit4]   2> WARNING: Uncaught exception in thread: 
Thread[T1,5,TGRP-TestPointQueries]
   [junit4]   2> java.lang.AssertionError
   [junit4]   2>at 
__randomizedtesting.SeedInfo.seed([FFA825CE713FFF2F]:0)
   [junit4]   2>at 
org.apache.lucene.util.DocIdSetBuilder.&lt;init&gt;(DocIdSetBuilder.java:110)
   [junit4]   2>at 
org.apache.lucene.util.DocIdSetBuilder.&lt;init&gt;(DocIdSetBuilder.java:98)
   [junit4]   2>at 
org.apache.lucene.search.PointRangeQuery$1.buildMatchingDocIdSet(PointRangeQuery.java:109)
   [junit4]   2>at 
org.apache.lucene.search.PointRangeQuery$1.scorer(PointRangeQuery.java:213)
   [junit4]   2>at 
org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
   [junit4]   2>at 
org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.cache(LRUQueryCache.java:683)
   [junit4]   2>at 
org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.bulkScorer(LRUQueryCache.java:766)
   [junit4]   2>at 
org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)
   [junit4]   2>at 
org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)
   [junit4]   2>at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
   [junit4]   2>at 
org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:91)
   [junit4]   2>at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
   [junit4]   2>at 
org.apache.lucene.search.TestPointQueries$1._run(TestPointQueries.java:554)
   [junit4]   2>at 
org.apache.lucene.search.TestPointQueries$1.run(TestPointQueries.java:503)
   [junit4]   2> 
   [junit4]   2> May 03, 2016 4:32:50 AM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
   [junit4]   2> WARNING: Uncaught exception in thread: 
Thread[T0,5,TGRP-TestPointQueries]
   [junit4]   2> java.lang.AssertionError
   [junit4]   2>at 
__randomizedtesting.SeedInfo.seed([FFA825CE713FFF2F]:0)
   [junit4]   2>at 
org.apache.lucene.util.DocIdSetBuilder.&lt;init&gt;(DocIdSetBuilder.java:110)
   [junit4]   2>at 
org.apache.lucene.util.DocIdSetBuilder.&lt;init&gt;(DocIdSetBuilder.java:98)
   [junit4]   2>at 
org.apache.lucene.search.PointRangeQuery$1.buildMatchingDocIdSet(PointRangeQuery.java:109)
   [junit4]   2>at 
org.apache.lucene.search.PointRangeQuery$1.scorer(PointRangeQuery.java:213)
   [junit4]   2>at 
org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
   [junit4]   2>at 
org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.cache(LRUQueryCache.java:683)
   [junit4]   2>at 
org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.bulkScorer(LRUQueryCache.java:766)
   [junit4]   2>at 
org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)
   [junit4]   2>at 
org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)
   [junit4]   2>at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
   [junit4]   2>at 
org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:91)
   [junit4]   2>at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
   [junit4]   2>at 
org.apache.lucene.search.TestPointQueries$1._run(TestPointQueries.java:554)
   [junit4]   2>at 
org.apache.lucene.search.TestPointQueries$1.run(TestPointQueries.java:503)
   [junit4]   2> 
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestPointQueries 
-Dtests.method=testRandomLongsTiny -Dtests.seed=FFA825CE713FFF2F 
-Dtests.multiplier=3 -Dtests.slow=true 

[jira] [Created] (LUCENE-7269) TestPointQueries.testRandomBinaryTiny() failure

2016-05-02 Thread Steve Rowe (JIRA)
Steve Rowe created LUCENE-7269:
--

 Summary: TestPointQueries.testRandomBinaryTiny() failure
 Key: LUCENE-7269
 URL: https://issues.apache.org/jira/browse/LUCENE-7269
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: master
Reporter: Steve Rowe


My Jenkins found a reproducing seed on master:

{noformat}
Checking out Revision a48245a1bfbef0259d38ef36fec814f3891ab80c 
(refs/remotes/origin/master)
[...]
   [junit4] Suite: org.apache.lucene.search.TestPointQueries
   [junit4] IGNOR/A 0.00s J1 | TestPointQueries.testRandomBinaryBig
   [junit4]> Assumption #1: 'nightly' test group is disabled (@Nightly())
   [junit4]   2> maj 02, 2016 3:29:13 PM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
   [junit4]   2> WARNING: Uncaught exception in thread: 
Thread[T0,5,TGRP-TestPointQueries]
   [junit4]   2> java.lang.AssertionError
   [junit4]   2>at 
__randomizedtesting.SeedInfo.seed([61528898A1A30059]:0)
   [junit4]   2>at 
org.apache.lucene.util.DocIdSetBuilder.<init>(DocIdSetBuilder.java:110)
   [junit4]   2>at 
org.apache.lucene.util.DocIdSetBuilder.<init>(DocIdSetBuilder.java:98)
   [junit4]   2>at 
org.apache.lucene.search.PointRangeQuery$1.buildMatchingDocIdSet(PointRangeQuery.java:109)
   [junit4]   2>at 
org.apache.lucene.search.PointRangeQuery$1.scorer(PointRangeQuery.java:213)
   [junit4]   2>at 
org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
   [junit4]   2>at 
org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.cache(LRUQueryCache.java:683)
   [junit4]   2>at 
org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.bulkScorer(LRUQueryCache.java:766)
   [junit4]   2>at 
org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)
   [junit4]   2>at 
org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)
   [junit4]   2>at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
   [junit4]   2>at 
org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:91)
   [junit4]   2>at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
   [junit4]   2>at 
org.apache.lucene.search.TestPointQueries$2._run(TestPointQueries.java:805)
   [junit4]   2>at 
org.apache.lucene.search.TestPointQueries$2.run(TestPointQueries.java:758)
   [junit4]   2> 
   [junit4]   2> maj 02, 2016 3:29:13 PM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
   [junit4]   2> WARNING: Uncaught exception in thread: 
Thread[T1,5,TGRP-TestPointQueries]
   [junit4]   2> java.lang.AssertionError
   [junit4]   2>at 
__randomizedtesting.SeedInfo.seed([61528898A1A30059]:0)
   [junit4]   2>at 
org.apache.lucene.util.DocIdSetBuilder.<init>(DocIdSetBuilder.java:110)
   [junit4]   2>at 
org.apache.lucene.util.DocIdSetBuilder.<init>(DocIdSetBuilder.java:98)
   [junit4]   2>at 
org.apache.lucene.search.PointRangeQuery$1.buildMatchingDocIdSet(PointRangeQuery.java:109)
   [junit4]   2>at 
org.apache.lucene.search.PointRangeQuery$1.scorer(PointRangeQuery.java:213)
   [junit4]   2>at 
org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
   [junit4]   2>at 
org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.cache(LRUQueryCache.java:683)
   [junit4]   2>at 
org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.bulkScorer(LRUQueryCache.java:766)
   [junit4]   2>at 
org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)
   [junit4]   2>at 
org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)
   [junit4]   2>at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
   [junit4]   2>at 
org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:91)
   [junit4]   2>at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
   [junit4]   2>at 
org.apache.lucene.search.TestPointQueries$2._run(TestPointQueries.java:805)
   [junit4]   2>at 
org.apache.lucene.search.TestPointQueries$2.run(TestPointQueries.java:758)
   [junit4]   2> 
   [junit4]   2> maj 02, 2016 3:29:13 PM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
   [junit4]   2> WARNING: Uncaught exception in thread: 
Thread[T3,5,TGRP-TestPointQueries]
   [junit4]   2> java.lang.AssertionError
   [junit4]   2>at 
__randomizedtesting.SeedInfo.seed([61528898A1A30059]:0)
   [junit4]   2>at 
org.apache.lucene.util.DocIdSetBuilder.<init>(DocIdSetBuilder.java:110)
   [junit4]   2>at 
org.apache.lucene.util.DocIdSetBuilder.<init>(DocIdSetBuilder.java:98)
   [junit4]   2> 

[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_92) - Build # 558 - Still Failing!

2016-05-02 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/558/
Java: 32bit/jdk1.8.0_92 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.search.TestSolr4Spatial2.testRptWithGeometryField

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([60F6AAA8EDC29D99:46E1125ED093C4FB]:0)
at 
org.apache.lucene.util.DocIdSetBuilder.<init>(DocIdSetBuilder.java:92)
at 
org.apache.lucene.spatial.composite.IntersectsRPTVerifyQuery$IntersectsDifferentiatingQuery$IntersectsDifferentiatingVisitor.<init>(IntersectsRPTVerifyQuery.java:166)
at 
org.apache.lucene.spatial.composite.IntersectsRPTVerifyQuery$IntersectsDifferentiatingQuery.compute(IntersectsRPTVerifyQuery.java:157)
at 
org.apache.lucene.spatial.composite.IntersectsRPTVerifyQuery$1.scorer(IntersectsRPTVerifyQuery.java:95)
at org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
at 
org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:260)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1810)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1627)
at 
org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:643)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:529)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:293)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2015)
at org.apache.solr.util.TestHarness.query(TestHarness.java:310)
at org.apache.solr.util.TestHarness.query(TestHarness.java:292)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:852)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:821)
at 
org.apache.solr.search.TestSolr4Spatial2.testRptWithGeometryField(TestSolr4Spatial2.java:139)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_92) - Build # 5813 - Failure!

2016-05-02 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/5813/
Java: 32bit/jdk1.8.0_92 -client -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.search.TestSolr4Spatial2.testRptWithGeometryField

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([7D2E1CA0312FBB6A:5B39A4560C7EE208]:0)
at 
org.apache.lucene.util.DocIdSetBuilder.<init>(DocIdSetBuilder.java:92)
at 
org.apache.lucene.spatial.composite.IntersectsRPTVerifyQuery$IntersectsDifferentiatingQuery$IntersectsDifferentiatingVisitor.<init>(IntersectsRPTVerifyQuery.java:166)
at 
org.apache.lucene.spatial.composite.IntersectsRPTVerifyQuery$IntersectsDifferentiatingQuery.compute(IntersectsRPTVerifyQuery.java:157)
at 
org.apache.lucene.spatial.composite.IntersectsRPTVerifyQuery$1.scorer(IntersectsRPTVerifyQuery.java:95)
at org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
at 
org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:260)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1810)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1627)
at 
org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:643)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:529)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:292)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2016)
at org.apache.solr.util.TestHarness.query(TestHarness.java:310)
at org.apache.solr.util.TestHarness.query(TestHarness.java:292)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:851)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:820)
at 
org.apache.solr.search.TestSolr4Spatial2.testRptWithGeometryField(TestSolr4Spatial2.java:139)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-7475) Value of Heap Memory Usage display

2016-05-02 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267615#comment-15267615
 ] 

Shawn Heisey commented on SOLR-7475:


Even when this was working, it was showing a very low number.  What exactly is 
this number intended to reflect?  It definitely isn't the total memory 
allocated for everything related to the core, which IMHO makes the number 
useless to most Solr users.

I've got a Solr 4.7.2 server with four cores that show non-zero heap memory -- 
all the other cores show zero.  Here are those numbers:

51667601
122208433
97413812
97369157

The jconsole memory graph for this server shows the heap allocation bouncing 
between 5GB and 7GB.  If I click the "Perform GC" button in jconsole, then the 
heap drops to a little over 3GB.  Adding up all the "heap memory" numbers on 
the core overview pages reaches less than 400MB ... definitely nowhere near the 
total heap usage, even with all garbage removed.

In my opinion there are three viable options:
 * Rename the statistic to reflect what's actually being counted (probably very 
low-level Lucene structures)
 * Remove it entirely
 * Fix it so it counts all heap memory used by the core -- including Solr 
caches and other large memory structures.
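For comparison with the per-core figures, the whole-JVM heap numbers that jconsole graphs can also be read programmatically through the JDK's standard MemoryMXBean. This is just a stdlib sketch, independent of Solr's own statistic:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapSnapshot {
    // Whole-JVM heap usage, i.e. what jconsole graphs -- not the per-core
    // "heap memory" statistic shown on the Solr core overview page.
    static MemoryUsage heapUsage() {
        return ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
    }

    public static void main(String[] args) {
        MemoryUsage heap = heapUsage();
        System.out.println("used      = " + heap.getUsed());
        System.out.println("committed = " + heap.getCommitted());
        System.out.println("max       = " + heap.getMax());
    }
}
```

Summing the per-core "heap memory" values and comparing against getUsed() makes the gap described above easy to demonstrate on any running JVM.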


> Value of Heap Memory Usage display
> --
>
> Key: SOLR-7475
> URL: https://issues.apache.org/jira/browse/SOLR-7475
> Project: Solr
>  Issue Type: Bug
>  Components: UI, web gui
>Affects Versions: 5.0
> Environment: Windows 7 operating system, Solr-5.0, zookeeper-3.4.6
>Reporter: Yeo Zheng Lin
>  Labels: memory, solr, ui
> Attachments: Heap Memory Usage.png
>
>
> In the Solr-5.0 admin UI, select a collection, click on Overview. This will 
> show the statistics of the collection. 
> For the Heap Memory Usage, it is showing the value -1 instead of the Heap 
> Memory Usage for that collection. This reportedly worked in previous 
> versions of Solr, but in version 5.0 it shows -1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9034) Atomic updates not work with CopyField

2016-05-02 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-9034:
-
Fix Version/s: 6.1

> Atomic updates not work with CopyField
> --
>
> Key: SOLR-9034
> URL: https://issues.apache.org/jira/browse/SOLR-9034
> Project: Solr
>  Issue Type: Bug
>  Components: Server
>Affects Versions: 5.5
>Reporter: Karthik Ramachandran
>Assignee: Yonik Seeley
>  Labels: atomicupdate
> Fix For: 6.1
>
> Attachments: SOLR-9034.patch, SOLR-9034.patch, SOLR-9034.patch
>
>
> Atomic updates do not work when a copyField target has docValues enabled.  Below is 
> the sample schema
> {code:xml|title:schema.xml}
> <field ... indexed="true" stored="true" />
> <field ... indexed="true" stored="true" />
> <field ... indexed="true" stored="true" />
> <field ... docValues="true" indexed="true" stored="false" useDocValuesAsStored="false" />
> <field ... docValues="true" indexed="true" stored="false" useDocValuesAsStored="false" />
> <field ... docValues="true" indexed="true" stored="false" useDocValuesAsStored="false" />
> {code}
> Below is the exception
> {noformat}
> Caused by: java.lang.IllegalArgumentException: DocValuesField
>  "copy_single_i_dvn" appears more than once in this document 
> (only one value is allowed per field)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9055) Make collection backup/restore extensible

2016-05-02 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267585#comment-15267585
 ] 

Hrishikesh Gadre commented on SOLR-9055:


>>Nitpick: In your editor, if it has this feature (IntelliJ does), configure it 
>>to strip trailing whitespace only on Modified Lines. IntelliJ: 
>>Editor/General/Other "Strip trailing spaces on Save".

Sorry about that. Let me resubmit the patch without this noise.

>>Does this API enable the possibility of a hard-link based copy (applicable 
>>for both backup & restore). It doesn't seem so but I'm unsure?

The current "IndexBackupStrategy" API works at the Overseer level and not at 
the "core" level. Since "hard-link" based copy needs to be done at the "core" 
level, it doesn't handle this use-case.

>>Before committing to this API, it would be good to have it implement 
>>something useful (HDFS or whatever), otherwise we very well may miss problems 
>>with the API – we probably will. I'm not saying this issue needs to implement 
>>HDFS, but at least the proposed patch might have an implementation specific 
>>part in some separate files that wouldn't be committed with this issue. I 
>>suppose this isn't strictly required but it would help.

My primary motivation was just to make the code modular (instead of having one 
gigantic method incorporating all logic). But I agree that delaying the 
interface definition would probably be better. So I can remove the 
"IndexBackupStrategy" interface and have BackupManager use "CopyFilesStrategy" 
by default. Would that be sufficient?
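To make the discussion concrete, the strategy/repository split proposed on this issue might look roughly like the sketch below; all names here (BackupRepository, LocalFsRepository, the method signatures) are hypothetical illustrations, not the API from the attached patch:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URI;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical sketch of the proposed Repository abstraction: backup code
// talks only to this interface; HDFS, S3, or local-FS supply implementations.
interface BackupRepository {
    OutputStream createOutput(URI path) throws IOException;
    InputStream openInput(URI path) throws IOException;
    boolean exists(URI path);
}

// Local-filesystem implementation via java.nio -- the only backend the
// current code supports, shown here just to prove the shape is workable.
class LocalFsRepository implements BackupRepository {
    public OutputStream createOutput(URI path) throws IOException {
        Path p = Paths.get(path);
        Files.createDirectories(p.getParent());
        return Files.newOutputStream(p);
    }
    public InputStream openInput(URI path) throws IOException {
        return Files.newInputStream(Paths.get(path));
    }
    public boolean exists(URI path) {
        return Files.exists(Paths.get(path));
    }
}

public class RepoDemo {
    public static void main(String[] args) throws IOException {
        BackupRepository repo = new LocalFsRepository();
        Path dir = Files.createTempDirectory("backup-demo");
        URI seg = dir.resolve("segments_1").toUri();
        try (OutputStream out = repo.createOutput(seg)) {
            out.write("demo".getBytes());
        }
        System.out.println("stored: " + repo.exists(seg));
    }
}
```

The point of the interface is that a hard-link-based strategy or an HDFS backend would only swap the implementation, not the backup logic that calls it.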


> Make collection backup/restore extensible
> -
>
> Key: SOLR-9055
> URL: https://issues.apache.org/jira/browse/SOLR-9055
> Project: Solr
>  Issue Type: Task
>Reporter: Hrishikesh Gadre
> Attachments: SOLR-9055.patch
>
>
> SOLR-5750 implemented backup/restore API for Solr. This JIRA is to track the 
> code cleanup/refactoring. Specifically following improvements should be made,
> - Add Solr/Lucene version to check the compatibility between the backup 
> version and the version of Solr on which it is being restored.
> - Add a backup implementation version to check the compatibility between the 
> "restore" implementation and backup format.
> - Introduce a Strategy interface to define how the Solr index data is backed 
> up (e.g. using file copy approach).
> - Introduce a Repository interface to define the file-system used to store 
> the backup data. (currently works only with local file system but can be 
> extended). This should be enhanced to introduce support for "registering" 
> repositories (e.g. HDFS, S3 etc.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9054) The new GUI is using hardcoded paths

2016-05-02 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267571#comment-15267571
 ] 

Shawn Heisey commented on SOLR-9054:


bq. The location where Solr is mounted is not to be considered a 
user-configurable value

+1 from me.

Echoing my comment on SOLR-9000: I'm all for configurability ... but if 
*everything* is configurable, our job as developers gets a lot harder.

I once read a really interesting message on the OpenNMS user list that comes to 
mind on this issue.  Somebody wanted to know why the IP address of a node was 
used as the primary key in the database, instead of something more useful to 
them, like a hostname, and I thought the response was particularly insightful 
regarding foundations and flexibility:

https://sourceforge.net/p/opennms/mailman/message/18840361/


> The new GUI is using hardcoded paths
> 
>
> Key: SOLR-9054
> URL: https://issues.apache.org/jira/browse/SOLR-9054
> Project: Solr
>  Issue Type: Bug
>  Components: web gui
>Affects Versions: 6.0
>Reporter: Valerio Di Cagno
>
> If Apache Solr 6.0 is started without using the default context root "/solr",
> none of the admin services work properly, and it is not possible to use the 
> provided links to go back to the old GUI.
> In the javascript files the parameter config.solr_path is sometimes ignored
> or replaced with the value /solr, resulting in a 404 on access.
> Affected files: 
> solr-webapp/webapp/js/services.js
> Suggested solution:
> Complete the integration with /js/scripts/app.js



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Refactoring effects

2016-05-02 Thread Miles Teg
Hello Everyone,

I'm part of a team trying to analyse the effects of refactoring on large
codebases. In this regard, we are analysing the Lucene/Solr project and its
JIRA tickets.

We would like to know:
i) Do you follow particular conventions for changes that are
refactorings, e.g. a special ticket type or commit messages?
ii) Are there specific tickets that are examples of large-scale
refactorings done with the intention of improving maintainability?

Would appreciate any pointers in this regard :)

Thank you in advance,
Miles


[jira] [Updated] (SOLR-8208) DocTransformer executes sub-queries

2016-05-02 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-8208:
---
Description: 
The initial idea was to return the "from" side of a query-time join via a 
doctransformer. It isn't really query-time-join specific, though: we can let the 
user specify any query and parameters for it, so let's call it a sub-query. But 
it might be problematic to escape subquery parameters, including local ones, 
e.g. what if the subquery needs to specify its own doctransformer in fl=\[..\]?
I suppose we can specify a subquery parameter prefix:
{code}
q=name_s:john&fl=*,depts:[subquery fromIndex=departments]&
depts.q={!term f=dept_id_s 
v=$row.dept_ss_dv}&depts.fl=text_t,dept_id_s_dv&depts.rows=12&depts.sort=id desc
{code}   
response is like
{code}   

...


1
john
..


Engineering
These guys develop stuff


Support
These guys help users





{code}   

* {{fl=depts:\[subquery]}} executes a separate request for every query result 
row, and adds the results into the document as a separate result list. The given 
field name (here it's 'depts') is used as a prefix to shift subquery parameters 
from the main query's parameters, e.g. {{depts.q}} turns into {{q}} for the 
subquery, {{depts.rows}} into {{rows}}.
* document fields are available as implicit parameters with the prefix {{row.}}, 
e.g. if a result document has a field {{dept_id}} it can be referenced as 
{{v=$row.dept_id}}; this combines well with the \{!terms} query parser
* {{separator=','}} is used when multiple field values are combined into a 
parameter, e.g. if a document has a multivalued field {code}dept_ids={2,3}{code}, 
referring to it via {code}..&q={!terms f=id v=$row.dept_ids}&..{code} executes 
the subquery {code}{!terms f=id}2,3{code}. When omitted, the separator defaults 
to a comma.
* {{fromIndex=othercore}} is an optional param that runs the subquery on another 
core, as in query-time join.
It doesn't work in a cloud setup (and will tell you so); instead it's proposed 
to use regular params ({{collection}}, {{shards}} - whatever, with the subquery 
prefix as below) to issue the subquery to a collection
{code}
q=name_s:dave&indent=true&fl=*,depts:[subquery]&rows=20&
depts.q={!terms f=dept_id_s v=$row.dept_ss_dv}&depts.fl=text_t&
depts.indent=true&
depts.collection=departments&
depts.rows=10&depts.logParamsList=q,fl,rows,row.dept_ss_dv
{code}

Caveat: it can be quite slow; it handles only the search result page, not the 
entire result set. 
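The prefix-shifting rule above ({{depts.q}} becomes the subquery's {{q}}, {{depts.rows}} its {{rows}}) is essentially a map transformation; a hypothetical standalone sketch of that rule, not the patch's actual code:

```java
import java.util.HashMap;
import java.util.Map;

public class SubqueryParams {
    // Shift parameters for a subquery: every "prefix.name" entry of the main
    // request becomes "name" for the subquery, e.g. "depts.q" -> "q".
    static Map<String, String> shift(Map<String, String> params, String prefix) {
        Map<String, String> sub = new HashMap<>();
        String dotted = prefix + ".";
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (e.getKey().startsWith(dotted)) {
                sub.put(e.getKey().substring(dotted.length()), e.getValue());
            }
        }
        return sub;
    }

    public static void main(String[] args) {
        Map<String, String> p = new HashMap<>();
        p.put("q", "name_s:john");                 // main-query param: not shifted
        p.put("depts.q", "{!term f=dept_id_s v=$row.dept_ss_dv}");
        p.put("depts.rows", "12");
        System.out.println(shift(p, "depts"));     // only the depts.* entries, unprefixed
    }
}
```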

  was:
The initial idea was to return "from" side of query time join via 
doctransformer. I suppose it isn't  query-time join specific, thus let to 
specify any query and parameters for them, let's call them sub-query. But it 
might be problematic to escape subquery parameters, including local ones, e.g. 
what if the subquery needs to specify its own doctransformer in fl=\[..\]?
I suppose we can allow specifying a subquery parameter prefix:
{code}
q=name_s:john&fl=*,depts:[subquery fromIndex=departments]&
depts.q={!term f=dept_id_s 
v=$row.dept_ss_dv}&depts.fl=text_t,dept_id_s_dv&depts.rows=12&depts.sort=id desc
{code}   
response is like
{code}   

...


1
john
..


Engineering
These guys develop stuff


Support
These guys help users





{code}   


* {{paramPrefix=subq1.}} shifts parameters for subquery: {{subq1.q}} turns to 
{{q}} for subquery, {{subq1.rows}} to {{rows}}
* {{separator=','}} 
* {{fromIndex=othercore}} optional param allows to run subquery on other core, 
like it works on query time join
However, it doesn't work on cloud setup (and let you know), but it's proposed 
to regular params (collection, shards - whatever, with subquery prefix as below 
) to issue subquery to a collection
{code}
q=name_s:dave&indent=true&fl=*,depts:[subquery]&rows=20&
depts.q={!terms f=dept_id_s v=$row.dept_ss_dv}&depts.fl=text_t&
depts.indent=true&
depts.collection=departments&
depts.rows=10&depts.logParamsList=q,fl,rows,row.dept_ss_dv
{code}

* the itchiest one is to reference to document field from subquery parameters, 
here I propose to use local param {{v}} and param deference {{v=$param}} thus 
every document field implicitly introduces parameter for subquery 
$\{paramPrefix\}row.$\{fieldName\}, thus above subquery is 
q=child_id:, presumably we can drop "row." in the middle 
(reducing to v=$subq1.id), until someone deal with {{rows}}, {{sort}} fields. 
* \[subquery\], or \[query\], or ? 

Caveat: it should be a way slow; it handles only search result page, not entire 
result set. 


> DocTransformer executes sub-queries
> ---
>
> Key: SOLR-8208
> URL: https://issues.apache.org/jira/browse/SOLR-8208
> Project: 

[jira] [Commented] (LUCENE-7241) Improve performance of geo3d for polygons with very large numbers of points

2016-05-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267510#comment-15267510
 ] 

ASF subversion and git services commented on LUCENE-7241:
-

Commit 45feaf3f88b99ccd561b47d3b8d82dda6655bcc3 in lucene-solr's branch 
refs/heads/branch_6x from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=45feaf3 ]

LUCENE-7241: Fix intersection bounding so we don't get spurious non-matching 
crossings.


> Improve performance of geo3d for polygons with very large numbers of points
> ---
>
> Key: LUCENE-7241
> URL: https://issues.apache.org/jira/browse/LUCENE-7241
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial3d
>Affects Versions: master
>Reporter: Karl Wright
>Assignee: Karl Wright
> Attachments: LUCENE-7241.patch, LUCENE-7241.patch, LUCENE-7241.patch, 
> LUCENE-7241.patch, LUCENE-7241.patch
>
>
> This ticket corresponds to LUCENE-7239, except it's for geo3d polygons.
> The trick here is to organize edges by some criteria, e.g. z value range, and 
> use that to avoid needing to go through all edges and/or tile large irregular 
> polygons.  Then we use the ability to quickly determine intersections to 
> figure out whether a point is within the polygon, or not.
> The current way geo3d polygons are constructed involves finding a single 
> point, or "pole", which all polygon points circle.  This point is known to be 
> either "in" or "out" based on the direction of the points.  So we have one 
> place of "truth" on the globe that is known at polygon setup time.
> If edges are organized by z value, where the z values for an edge are 
> computed by the standard way of computing bounds for a plane, then we can 
> readily organize edges into a tree structure such that it is easy to find all 
> edges we need to check for a given z value.  Then, we merely need to compute 
> how many intersections to consider as we navigate from the "truth" point to 
> the point being tested.  In practice, this means both having a tree that is 
> organized by z, and a tree organized by (x,y), since we need to navigate in 
> both directions.  But then we can cheaply count the number of intersections, 
> and once we do that, we know whether our point is "in" or "out".
> The other performance improvement we need is whether a given plane intersects 
> the polygon within provided bounds.  This can be done using the same two 
> trees (z and (x,y)), by virtue of picking which tree to use based on the 
> plane's minimum bounds in z or (x,y).  And, in practice, we might well use 
> three trees: one in x, one in y, and one in z, which would mean we didn't 
> have to compute longitudes ever.
> An implementation like this trades off the cost of finding point membership 
> in near O\(log\(n)) time vs. the extra expense per step of finding that 
> membership.  Setup of the query is O\(n) in this scheme, rather than O\(n^2) 
> in the current implementation, but once again each individual step is more 
> expensive.  Therefore I would expect we'd want to use the current 
> implementation for simpler polygons and this sort of implementation for 
> tougher polygons.  Choosing which to use is a topic for another ticket.
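The crossing-count idea described in this ticket is the classic ray-casting parity test. Geo3d works with planes on the sphere in (x,y,z) rather than flat edges, so the following 2D sketch only illustrates the parity principle; it is not geo3d code:

```java
public class CrossingCount {
    // Ray-casting parity test (2D analogue): cast a ray from the test point
    // toward +x and count edge crossings; an odd count means "inside".
    // Geo3d applies the same parity idea, navigating from a known "pole"
    // point and counting edge intersections along the way.
    static boolean inside(double[] xs, double[] ys, double px, double py) {
        boolean in = false;
        for (int i = 0, j = xs.length - 1; i < xs.length; j = i++) {
            boolean straddles = (ys[i] > py) != (ys[j] > py);
            if (straddles
                    && px < (xs[j] - xs[i]) * (py - ys[i]) / (ys[j] - ys[i]) + xs[i]) {
                in = !in;  // each crossing flips the parity
            }
        }
        return in;
    }

    public static void main(String[] args) {
        double[] xs = {0, 1, 1, 0}, ys = {0, 0, 1, 1};  // unit square
        System.out.println(inside(xs, ys, 0.5, 0.5));   // true  (center)
        System.out.println(inside(xs, ys, 1.5, 0.5));   // false (outside)
    }
}
```

The z-tree / (x,y)-tree organization in the ticket is about making the "which edges can this ray cross?" lookup sub-linear, rather than scanning every edge as this naive loop does.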



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9055) Make collection backup/restore extensible

2016-05-02 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267507#comment-15267507
 ] 

Hrishikesh Gadre commented on SOLR-9055:


>>I have a general question about HDFS; I have no real experience with it: I 
>>wonder if Java's NIO file abstractions could be used so we don't have to have 
>>separate code? If so it would be wonderful – simpler and less code to 
>>maintain. See https://github.com/damiencarol/jsr203-hadoop What do you think?

Although integrating HDFS with the Java NIO API sounds interesting, I would 
prefer that it be provided directly by the [HDFS client 
library|https://issues.apache.org/jira/browse/HADOOP-3518] rather than by a 
third-party library which may or may not be supported in the future. Also, since 
Solr already provides an HDFS-backed Directory implementation, it probably makes 
sense to reuse it.

However, if we want to keep things simple, we can choose not to provide separate 
APIs to configure "repositories". Instead we can just use the same file-system 
that stores the indexed data. That means for a local file-system the backup will 
be stored on a shared file-system using the SimpleFSDirectory implementation, 
AND for HDFS we will use the HdfsDirectory impl. Does that make sense?


> Make collection backup/restore extensible
> -
>
> Key: SOLR-9055
> URL: https://issues.apache.org/jira/browse/SOLR-9055
> Project: Solr
>  Issue Type: Task
>Reporter: Hrishikesh Gadre
> Attachments: SOLR-9055.patch
>
>
> SOLR-5750 implemented backup/restore API for Solr. This JIRA is to track the 
> code cleanup/refactoring. Specifically following improvements should be made,
> - Add Solr/Lucene version to check the compatibility between the backup 
> version and the version of Solr on which it is being restored.
> - Add a backup implementation version to check the compatibility between the 
> "restore" implementation and backup format.
> - Introduce a Strategy interface to define how the Solr index data is backed 
> up (e.g. using file copy approach).
> - Introduce a Repository interface to define the file-system used to store 
> the backup data. (currently works only with local file system but can be 
> extended). This should be enhanced to introduce support for "registering" 
> repositories (e.g. HDFS, S3 etc.)






[jira] [Updated] (SOLR-8208) DocTransformer executes sub-queries

2016-05-02 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-8208:
---
Description: 
The initial idea was to return the "from" side of a query-time join via a 
doc transformer. I suppose it isn't query-time-join specific, so let's allow 
specifying any query and parameters for it; let's call it a sub-query. But it 
might be problematic to escape subquery parameters, including local ones, e.g. 
what if the subquery needs to specify its own doc transformer in fl=\[..\] ?
I suppose we can allow specifying a subquery parameter prefix:
{code}
..q=name_s:john&fl=*,depts:[subquery fromIndex=departments]&
depts.q={!term f=dept_id_s 
v=$row.dept_ss_dv}&depts.fl=text_t,dept_id_s_dv&depts.rows=12&depts.sort=id desc
{code}   
response is like
{code}
<result name="response">
...
  <doc>
    <str name="id">1</str>
    <str name="name_s">john</str>
    ..
    <result name="depts">
      <doc>
        <str>Engineering</str>
        <str>These guys develop stuff</str>
      </doc>
      <doc>
        <str>Support</str>
        <str>These guys help users</str>
      </doc>
    </result>
  </doc>
</result>
{code}   


* {{paramPrefix=subq1.}} shifts parameters for the subquery: {{subq1.q}} turns into 
{{q}} for the subquery, {{subq1.rows}} into {{rows}}
* {{separator=','}} 
* {{fromIndex=othercore}} optional param allows running the subquery on another core, 
like it works in query-time join
However, that doesn't work on a cloud setup (and it lets you know); instead it's 
proposed to pass regular params (collection, shards - whatever, with the subquery 
prefix as below) to issue the subquery to a collection
{code}
q=name_s:dave&indent=true&fl=*,depts:[subquery]&rows=20&
depts.q={!terms f=dept_id_s v=$row.dept_ss_dv}&depts.fl=text_t&
depts.indent=true&
depts.collection=departments&
depts.rows=10&depts.logParamsList=q,fl,rows,row.dept_ss_dv
{code}

* the itchiest one is referencing a document field from subquery parameters; here I 
propose to use the local param {{v}} and param dereferencing {{v=$param}}, thus 
every document field implicitly introduces a parameter for the subquery 
$\{paramPrefix\}row.$\{fieldName\}, thus the above subquery is 
q=child_id:, presumably we can drop "row." in the middle 
(reducing to v=$subq1.id), until someone deals with {{rows}}, {{sort}} fields. 
* \[subquery\], or \[query\], or ? 

Caveat: it is likely to be slow; it handles only the search result page, not the 
entire result set. 

  was:
The initial idea was to return "from" side of query time join via 
doctransformer. I suppose it isn't  query-time join specific, thus let to 
specify any query and parameters for them, let's call them sub-query. But it 
might be problematic to escape subquery parameters, including local ones, e.g. 
what if subquery needs to specify own doctransformer in =\[..\] ?
I suppose we can allow to specify subquery parameter prefix:
{code}
..=id,[subquery paramPrefix=subq1. 
fromIndex=othercore],score,..={!term f=child_id 
v=$subq1.row.id}=3=price&..
{code}   
* {{paramPrefix=subq1.}} shifts parameters for subquery: {{subq1.q}} turns to 
{{q}} for subquery, {{subq1.rows}} to {{rows}}
* {{fromIndex=othercore}} optional param allows to run subquery on other core, 
like it works on query time join
* the itchiest one is to reference to document field from subquery parameters, 
here I propose to use local param {{v}} and param deference {{v=$param}} thus 
every document field implicitly introduces parameter for subquery 
$\{paramPrefix\}row.$\{fieldName\}, thus above subquery is 
q=child_id:, presumably we can drop "row." in the middle 
(reducing to v=$subq1.id), until someone deal with {{rows}}, {{sort}} fields. 
* \[subquery\], or \[query\], or ? 

Caveat: it should be a way slow; it handles only search result page, not entire 
result set. 


> DocTransformer executes sub-queries
> ---
>
> Key: SOLR-8208
> URL: https://issues.apache.org/jira/browse/SOLR-8208
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>  Labels: features, newbie
> Attachments: SOLR-8208.diff, SOLR-8208.patch, SOLR-8208.patch, 
> SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, 
> SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, 
> SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, 
> SOLR-8208.patch
>
>
> The initial idea was to return "from" side of query time join via 
> doctransformer. I suppose it isn't  query-time join specific, thus let to 
> specify any query and parameters for them, let's call them sub-query. But it 
> might be problematic to escape subquery parameters, including local ones, 
> e.g. what if subquery needs to specify own doctransformer in =\[..\] ?
> I suppose we can allow to specify subquery parameter prefix:
> {code}
> ..=name_s:john=*,depts:[subquery fromIndex=departments]&
> depts.q={!term f=dept_id_s 
> 

[jira] [Commented] (LUCENE-7241) Improve performance of geo3d for polygons with very large numbers of points

2016-05-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267503#comment-15267503
 ] 

ASF subversion and git services commented on LUCENE-7241:
-

Commit d7752408dbf94fa3fd1391bf1c37efa3da27eabb in lucene-solr's branch 
refs/heads/master from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d775240 ]

LUCENE-7241: Fix intersection bounding so we don't get spurious non-matching 
crossings.


> Improve performance of geo3d for polygons with very large numbers of points
> ---
>
> Key: LUCENE-7241
> URL: https://issues.apache.org/jira/browse/LUCENE-7241
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial3d
>Affects Versions: master
>Reporter: Karl Wright
>Assignee: Karl Wright
> Attachments: LUCENE-7241.patch, LUCENE-7241.patch, LUCENE-7241.patch, 
> LUCENE-7241.patch, LUCENE-7241.patch
>
>
> This ticket corresponds to LUCENE-7239, except it's for geo3d polygons.
> The trick here is to organize edges by some criteria, e.g. z value range, and 
> use that to avoid needing to go through all edges and/or tile large irregular 
> polygons.  Then we use the ability to quickly determine intersections to 
> figure out whether a point is within the polygon, or not.
> The current way geo3d polygons are constructed involves finding a single 
> point, or "pole", which all polygon points circle.  This point is known to be 
> either "in" or "out" based on the direction of the points.  So we have one 
> place of "truth" on the globe that is known at polygon setup time.
> If edges are organized by z value, where the z values for an edge are 
> computed by the standard way of computing bounds for a plane, then we can 
> readily organize edges into a tree structure such that it is easy to find all 
> edges we need to check for a given z value.  Then, we merely need to compute 
> how many intersections to consider as we navigate from the "truth" point to 
> the point being tested.  In practice, this means both having a tree that is 
> organized by z, and a tree organized by (x,y), since we need to navigate in 
> both directions.  But then we can cheaply count the number of intersections, 
> and once we do that, we know whether our point is "in" or "out".
> The other performance improvement we need is whether a given plane intersects 
> the polygon within provided bounds.  This can be done using the same two 
> trees (z and (x,y)), by virtue of picking which tree to use based on the 
> plane's minimum bounds in z or (x,y).  And, in practice, we might well use 
> three trees: one in x, one in y, and one in z, which would mean we didn't 
> have to compute longitudes ever.
> An implementation like this trades off the cost of finding point membership 
> in near O\(log\(n)) time vs. the extra expense per step of finding that 
> membership.  Setup of the query is O\(n) in this scheme, rather than O\(n^2) 
> in the current implementation, but once again each individual step is more 
> expensive.  Therefore I would expect we'd want to use the current 
> implementation for simpler polygons and this sort of implementation for 
> tougher polygons.  Choosing which to use is a topic for another ticket.
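The crossing-count membership test described above has a familiar flat 2D analogue (classic ray casting), which may help illustrate the parity argument; the geo3d version instead counts plane intersections on the sphere while navigating from the known "truth" point, but the in/out toggling is the same. This sketch is illustrative only, not the geo3d code:

```java
// 2D analogue of the crossing-count membership test: walk from a point of
// known membership (here: infinity, which is "out") toward the test point
// and flip in/out at every edge crossing.
class PolygonDemo {
  static boolean contains(double[][] poly, double x, double y) {
    boolean in = false;
    for (int i = 0, j = poly.length - 1; i < poly.length; j = i++) {
      double xi = poly[i][0], yi = poly[i][1];
      double xj = poly[j][0], yj = poly[j][1];
      // does edge (j, i) cross the horizontal ray extending left of (x, y)?
      if ((yi > y) != (yj > y)
          && x < (xj - xi) * (y - yi) / (yj - yi) + xi) {
        in = !in;   // parity flip: each crossing toggles membership
      }
    }
    return in;
  }
}
```

The edge trees described above exist to avoid the O(n) loop here: with edges organized by z and (x,y), only the edges that can possibly cross the navigation path need to be visited.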






[jira] [Updated] (SOLR-8208) DocTransformer executes sub-queries

2016-05-02 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-8208:
---
Attachment: SOLR-8208.patch

I think it's almost ready. I'll post the final syntax in the description above. 
A note for reviewers: it introduces a thread pool executor, but uses it for 
sequential invocations for now. The change in MLT is just a line move, no impact 
at all.
The last test I would like to add just demonstrates how \[subquery] can be 
used instead of \[child].
My plan is to commit it next week. Concerns? 
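For reviewers unfamiliar with the prefix convention, the parameter shift itself is mechanical, roughly like this (an illustrative sketch, not the patch's actual code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of the parameter shift: "depts.q" becomes "q" in the
// parameter map handed to the [subquery] transformer's nested request.
class SubqueryParams {
  static Map<String, String> shiftPrefix(Map<String, String> params, String prefix) {
    Map<String, String> sub = new LinkedHashMap<>();
    for (Map.Entry<String, String> e : params.entrySet()) {
      if (e.getKey().startsWith(prefix)) {
        // strip the prefix; everything else is ignored by the subquery
        sub.put(e.getKey().substring(prefix.length()), e.getValue());
      }
    }
    return sub;
  }
}
```

So with the prefix {{depts.}}, the top-level {{depts.q}} and {{depts.rows}} become {{q}} and {{rows}} for the nested request, while the outer {{q}} is left untouched.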

> DocTransformer executes sub-queries
> ---
>
> Key: SOLR-8208
> URL: https://issues.apache.org/jira/browse/SOLR-8208
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>  Labels: features, newbie
> Attachments: SOLR-8208.diff, SOLR-8208.patch, SOLR-8208.patch, 
> SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, 
> SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, 
> SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, 
> SOLR-8208.patch
>
>
> The initial idea was to return "from" side of query time join via 
> doctransformer. I suppose it isn't  query-time join specific, thus let to 
> specify any query and parameters for them, let's call them sub-query. But it 
> might be problematic to escape subquery parameters, including local ones, 
> e.g. what if subquery needs to specify own doctransformer in =\[..\] ?
> I suppose we can allow to specify subquery parameter prefix:
> {code}
> ..=id,[subquery paramPrefix=subq1. 
> fromIndex=othercore],score,..={!term f=child_id 
> v=$subq1.row.id}=3=price&..
> {code}   
> * {{paramPrefix=subq1.}} shifts parameters for subquery: {{subq1.q}} turns to 
> {{q}} for subquery, {{subq1.rows}} to {{rows}}
> * {{fromIndex=othercore}} optional param allows to run subquery on other 
> core, like it works on query time join
> * the itchiest one is to reference to document field from subquery 
> parameters, here I propose to use local param {{v}} and param deference 
> {{v=$param}} thus every document field implicitly introduces parameter for 
> subquery $\{paramPrefix\}row.$\{fieldName\}, thus above subquery is 
> q=child_id:, presumably we can drop "row." in the middle 
> (reducing to v=$subq1.id), until someone deal with {{rows}}, {{sort}} fields. 
> * \[subquery\], or \[query\], or ? 
> Caveat: it should be a way slow; it handles only search result page, not 
> entire result set. 






[jira] [Updated] (SOLR-9055) Make collection backup/restore extensible

2016-05-02 Thread Hrishikesh Gadre (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hrishikesh Gadre updated SOLR-9055:
---
Summary: Make collection backup/restore extensible  (was: Cleanup 
backup/restore implementation)

> Make collection backup/restore extensible
> -
>
> Key: SOLR-9055
> URL: https://issues.apache.org/jira/browse/SOLR-9055
> Project: Solr
>  Issue Type: Task
>Reporter: Hrishikesh Gadre
> Attachments: SOLR-9055.patch
>
>
> SOLR-5750 implemented backup/restore API for Solr. This JIRA is to track the 
> code cleanup/refactoring. Specifically following improvements should be made,
> - Add Solr/Lucene version to check the compatibility between the backup 
> version and the version of Solr on which it is being restored.
> - Add a backup implementation version to check the compatibility between the 
> "restore" implementation and backup format.
> - Introduce a Strategy interface to define how the Solr index data is backed 
> up (e.g. using file copy approach).
> - Introduce a Repository interface to define the file-system used to store 
> the backup data. (currently works only with local file system but can be 
> extended). This should be enhanced to introduce support for "registering" 
> repositories (e.g. HDFS, S3 etc.)
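A minimal sketch of the version check suggested in the first two bullets above (purely illustrative; the method name and the exact policy are assumptions, not what the patch implements):

```java
// Hypothetical compatibility check: the backup records the (major, minor)
// version that wrote it, and restore refuses backups from a newer version.
class BackupVersionCheck {
  static boolean canRestore(int[] backupVersion, int[] currentVersion) {
    for (int i = 0; i < 2; i++) {               // compare major, then minor
      if (backupVersion[i] != currentVersion[i]) {
        return backupVersion[i] < currentVersion[i];  // only older backups restore
      }
    }
    return true;  // identical version is always fine
  }
}
```

A separate backup-format version, checked the same way, would guard against changes in the backup layout itself rather than in the index format.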






[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+116) - Build # 16643 - Still Failing!

2016-05-02 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16643/
Java: 32bit/jdk-9-ea+116 -client -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.lucene.search.TestPointQueries.testRandomLongsTiny

Error Message:
Captured an uncaught exception in thread: Thread[id=489, name=T0, 
state=RUNNABLE, group=TGRP-TestPointQueries]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=489, name=T0, state=RUNNABLE, 
group=TGRP-TestPointQueries]
Caused by: java.lang.AssertionError
at __randomizedtesting.SeedInfo.seed([FFA825CE713FFF2F]:0)
at 
org.apache.lucene.util.DocIdSetBuilder.&lt;init&gt;(DocIdSetBuilder.java:110)
at 
org.apache.lucene.util.DocIdSetBuilder.&lt;init&gt;(DocIdSetBuilder.java:98)
at 
org.apache.lucene.search.PointRangeQuery$1.buildMatchingDocIdSet(PointRangeQuery.java:109)
at 
org.apache.lucene.search.PointRangeQuery$1.scorer(PointRangeQuery.java:213)
at org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
at 
org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.cache(LRUQueryCache.java:683)
at 
org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.bulkScorer(LRUQueryCache.java:766)
at 
org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)
at 
org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
at 
org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:91)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
at 
org.apache.lucene.search.TestPointQueries$1._run(TestPointQueries.java:554)
at 
org.apache.lucene.search.TestPointQueries$1.run(TestPointQueries.java:503)




Build Log:
[...truncated 658 lines...]
   [junit4] Suite: org.apache.lucene.search.TestPointQueries
   [junit4]   2> May 03, 2016 4:32:50 AM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
   [junit4]   2> WARNING: Uncaught exception in thread: 
Thread[T1,5,TGRP-TestPointQueries]
   [junit4]   2> java.lang.AssertionError
   [junit4]   2>at 
__randomizedtesting.SeedInfo.seed([FFA825CE713FFF2F]:0)
   [junit4]   2>at 
org.apache.lucene.util.DocIdSetBuilder.&lt;init&gt;(DocIdSetBuilder.java:110)
   [junit4]   2>at 
org.apache.lucene.util.DocIdSetBuilder.&lt;init&gt;(DocIdSetBuilder.java:98)
   [junit4]   2>at 
org.apache.lucene.search.PointRangeQuery$1.buildMatchingDocIdSet(PointRangeQuery.java:109)
   [junit4]   2>at 
org.apache.lucene.search.PointRangeQuery$1.scorer(PointRangeQuery.java:213)
   [junit4]   2>at 
org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
   [junit4]   2>at 
org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.cache(LRUQueryCache.java:683)
   [junit4]   2>at 
org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.bulkScorer(LRUQueryCache.java:766)
   [junit4]   2>at 
org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)
   [junit4]   2>at 
org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)
   [junit4]   2>at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
   [junit4]   2>at 
org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:91)
   [junit4]   2>at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
   [junit4]   2>at 
org.apache.lucene.search.TestPointQueries$1._run(TestPointQueries.java:554)
   [junit4]   2>at 
org.apache.lucene.search.TestPointQueries$1.run(TestPointQueries.java:503)
   [junit4]   2> 
   [junit4]   2> May 03, 2016 4:32:50 AM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
   [junit4]   2> WARNING: Uncaught exception in thread: 
Thread[T0,5,TGRP-TestPointQueries]
   [junit4]   2> java.lang.AssertionError
   [junit4]   2>at 
__randomizedtesting.SeedInfo.seed([FFA825CE713FFF2F]:0)
   [junit4]   2>at 
org.apache.lucene.util.DocIdSetBuilder.&lt;init&gt;(DocIdSetBuilder.java:110)
   [junit4]   2>at 
org.apache.lucene.util.DocIdSetBuilder.&lt;init&gt;(DocIdSetBuilder.java:98)
   [junit4]   2>at 
org.apache.lucene.search.PointRangeQuery$1.buildMatchingDocIdSet(PointRangeQuery.java:109)
   [junit4]   2>at 
org.apache.lucene.search.PointRangeQuery$1.scorer(PointRangeQuery.java:213)
   [junit4]   2>at 
org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
   [junit4]   2>at 
org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.cache(LRUQueryCache.java:683)
   [junit4]   2>at 
org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.bulkScorer(LRUQueryCache.java:766)
   [junit4]   2>at 

Re: [JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_92) - Build # 557 - Still Failing!

2016-05-02 Thread David Smiley
Probably related to LUCENE-7262

On Mon, May 2, 2016 at 4:23 PM Policeman Jenkins Server 
wrote:

> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/557/
> Java: 64bit/jdk1.8.0_92 -XX:+UseCompressedOops -XX:+UseSerialGC
>
> 1 tests failed.
> FAILED:  org.apache.solr.search.TestSolr4Spatial2.testRptWithGeometryField
>
> Error Message:
>
>
> Stack Trace:
> java.lang.NullPointerException
> at
> __randomizedtesting.SeedInfo.seed([F87E22DEFDCE0F79:DE699A28C09F561B]:0)
> at
> org.apache.lucene.util.DocIdSetBuilder.&lt;init&gt;(DocIdSetBuilder.java:92)
> at
> org.apache.lucene.spatial.composite.IntersectsRPTVerifyQuery$IntersectsDifferentiatingQuery$IntersectsDifferentiatingVisitor.&lt;init&gt;(IntersectsRPTVerifyQuery.java:166)
> at
> org.apache.lucene.spatial.composite.IntersectsRPTVerifyQuery$IntersectsDifferentiatingQuery.compute(IntersectsRPTVerifyQuery.java:157)
> at
> org.apache.lucene.spatial.composite.IntersectsRPTVerifyQuery$1.scorer(IntersectsRPTVerifyQuery.java:95)
> at org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
> at
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
> at
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
> at
> org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:260)
> at
> org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1810)
> at
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1627)
> at
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:643)
> at
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:529)
> at
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:293)
> at
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2015)
> at org.apache.solr.util.TestHarness.query(TestHarness.java:310)
> at org.apache.solr.util.TestHarness.query(TestHarness.java:292)
> at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:852)
> at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:821)
> at
> org.apache.solr.search.TestSolr4Spatial2.testRptWithGeometryField(TestSolr4Spatial2.java:139)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
> at
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
> at
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
> at
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
> at
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
> at
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
> at
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
> at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
> at
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
> at
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
> at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
> 

[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 107 - Failure!

2016-05-02 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/107/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.lucene.queries.TermsQueryTest.testRamBytesUsed

Error Message:
expected:<28088.0> but was:<26680.0>

Stack Trace:
java.lang.AssertionError: expected:<28088.0> but was:<26680.0>
at 
__randomizedtesting.SeedInfo.seed([703B104F6C19730:F5A0A3443CBE8866]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:443)
at org.junit.Assert.assertEquals(Assert.java:512)
at 
org.apache.lucene.queries.TermsQueryTest.testRamBytesUsed(TermsQueryTest.java:234)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 7692 lines...]
   [junit4] Suite: org.apache.lucene.queries.TermsQueryTest
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TermsQueryTest 
-Dtests.method=testRamBytesUsed -Dtests.seed=703B104F6C19730 -Dtests.slow=true 
-Dtests.locale=tr -Dtests.timezone=Canada/Newfoundland -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] FAILURE 0.03s J0 | TermsQueryTest.testRamBytesUsed <<<
   [junit4]> Throwable #1: java.lang.AssertionError: expected:<28088.0> but 
was:<26680.0>
   [junit4]

[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_92) - Build # 557 - Still Failing!

2016-05-02 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/557/
Java: 64bit/jdk1.8.0_92 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.search.TestSolr4Spatial2.testRptWithGeometryField

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([F87E22DEFDCE0F79:DE699A28C09F561B]:0)
at 
org.apache.lucene.util.DocIdSetBuilder.&lt;init&gt;(DocIdSetBuilder.java:92)
at 
org.apache.lucene.spatial.composite.IntersectsRPTVerifyQuery$IntersectsDifferentiatingQuery$IntersectsDifferentiatingVisitor.&lt;init&gt;(IntersectsRPTVerifyQuery.java:166)
at 
org.apache.lucene.spatial.composite.IntersectsRPTVerifyQuery$IntersectsDifferentiatingQuery.compute(IntersectsRPTVerifyQuery.java:157)
at 
org.apache.lucene.spatial.composite.IntersectsRPTVerifyQuery$1.scorer(IntersectsRPTVerifyQuery.java:95)
at org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
at 
org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:260)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1810)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1627)
at 
org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:643)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:529)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:293)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2015)
at org.apache.solr.util.TestHarness.query(TestHarness.java:310)
at org.apache.solr.util.TestHarness.query(TestHarness.java:292)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:852)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:821)
at 
org.apache.solr.search.TestSolr4Spatial2.testRptWithGeometryField(TestSolr4Spatial2.java:139)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-8323) Add CollectionWatcher API to ZkStateReader

2016-05-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267426#comment-15267426
 ] 

ASF GitHub Bot commented on SOLR-8323:
--

Github user dragonsinth commented on the pull request:

https://github.com/apache/lucene-solr/pull/32#issuecomment-216351404
  
BTW, here's an implementation of waitForState() that does the work on the 
calling thread.  This passes your tests:

```
  public void waitForState(final String collection, long wait, TimeUnit unit,
                           CollectionStatePredicate predicate)
      throws InterruptedException, TimeoutException {
    long stop = System.nanoTime() + unit.toNanos(wait);

    if (predicate.matches(this.liveNodes,
        clusterState.getCollectionOrNull(collection))) {
      return;
    }

    LinkedBlockingQueue<Pair<Set<String>, DocCollection>> queue =
        new LinkedBlockingQueue<>();
    CollectionStateWatcher watcher = new CollectionStateWatcher() {
      @Override
      public void onStateChanged(Set<String> liveNodes,
                                 DocCollection collectionState) {
        queue.add(new Pair<>(liveNodes, collectionState));
        registerCollectionStateWatcher(collection, this);
      }
    };

    registerCollectionStateWatcher(collection, watcher);
    try {
      while (true) {
        Pair<Set<String>, DocCollection> pair =
            queue.poll(stop - System.nanoTime(), TimeUnit.NANOSECONDS);
        if (pair == null) {
          throw new TimeoutException();
        }
        if (predicate.matches(pair.getKey(), pair.getValue())) {
          return;
        }
      }
    } finally {
      removeCollectionStateWatcher(collection, watcher);
    }
  }
```

One thing I noticed in writing this is that it's uncertain whether you'll 
miss any states or not.  I kind of like the idea that you could have your 
watcher return true or false to decide whether to keep watching, as it would 
ensure we could get all updates without missing any.
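The "return true/false to decide whether to keep watching" idea can be sketched as below. This is a hedged illustration only: `RemovableStateWatcher`, `WatcherRegistry`, and their methods are hypothetical names, not part of the actual patch, and collection state is simplified to a `String`.

```java
import java.util.List;
import java.util.Set;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical watcher variant whose return value controls registration:
// true = keep watching, false = deregister. Because removal happens
// atomically with notification, no update can be missed between a
// "fire" and a separate re-register call.
interface RemovableStateWatcher {
  boolean onStateChanged(Set<String> liveNodes, String collectionState);
}

class WatcherRegistry {
  private final List<RemovableStateWatcher> watchers = new CopyOnWriteArrayList<>();

  void register(RemovableStateWatcher w) {
    watchers.add(w);
  }

  // Notify every watcher; drop those that return false.
  void fireStateChanged(Set<String> liveNodes, String state) {
    for (RemovableStateWatcher w : watchers) {
      if (!w.onStateChanged(liveNodes, state)) {
        watchers.remove(w);
      }
    }
  }

  int watcherCount() {
    return watchers.size();
  }
}
```

A watcher that wants exactly two updates simply returns `false` on the second call, with no window in which a third update could slip through unobserved.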


> Add CollectionWatcher API to ZkStateReader
> --
>
> Key: SOLR-8323
> URL: https://issues.apache.org/jira/browse/SOLR-8323
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: master
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-8323.patch, SOLR-8323.patch, SOLR-8323.patch, 
> SOLR-8323.patch
>
>
> An API to watch for changes to collection state would be a generally useful 
> thing, both internally and for client use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org




[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 555 - Still Failing!

2016-05-02 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/555/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  
org.apache.solr.cloud.OverriddenZkACLAndCredentialsProvidersTest.testReadonlyCredentialsSolrZkClientFactoryUsingCompletelyNewProviders

Error Message:
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 
127.0.0.1:44863 within 3 ms

Stack Trace:
org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: 
Could not connect to ZooKeeper 127.0.0.1:44863 within 3 ms
at 
__randomizedtesting.SeedInfo.seed([D2BF6A450C04C9F2:304322BCEE2CF1CE]:0)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:181)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:115)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:110)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:97)
at 
org.apache.solr.cloud.OverriddenZkACLAndCredentialsProvidersTest$SolrZkClientFactoryUsingCompletelyNewProviders$1.(OverriddenZkACLAndCredentialsProvidersTest.java:220)
at 
org.apache.solr.cloud.OverriddenZkACLAndCredentialsProvidersTest$SolrZkClientFactoryUsingCompletelyNewProviders.getSolrZkClient(OverriddenZkACLAndCredentialsProvidersTest.java:220)
at 
org.apache.solr.cloud.OverriddenZkACLAndCredentialsProvidersTest.setUp(OverriddenZkACLAndCredentialsProvidersTest.java:83)
at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:905)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Commented] (SOLR-9056) Add ZkConnectionListener interface

2016-05-02 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267398#comment-15267398
 ] 

Alan Woodward commented on SOLR-9056:
-

Right, one major internal ZK refactoring at a time...

> Add ZkConnectionListener interface
> --
>
> Key: SOLR-9056
> URL: https://issues.apache.org/jira/browse/SOLR-9056
> Project: Solr
>  Issue Type: New Feature
>Affects Versions: master, 6.1
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-9056.patch
>
>
> Zk connection management is currently split among a few classes in 
> not-very-helpful ways.  There's SolrZkClient, which manages general 
> interaction with zookeeper; ZkClientConnectionStrategy, which is a sort-of 
> connection factory, but one that's heavily intertwined with SolrZkClient; and 
> ConnectionManager, which doesn't actually manage connections at all, but 
> instead is a ZK watcher that calls back into SolrZkClient and 
> ZkClientConnectionStrategy.
> We also have a number of classes that need to be notified about ZK session 
> changes - ZkStateReader sets up a bunch of watches for cluster state updates, 
> Overseer and ZkController use ephemeral nodes for elections and service 
> registry, CoreContainer needs to register cores and deal with recoveries, and 
> so on.  At the moment, these are mostly handled via ZkController, which 
> therefore needs to know about the internals of all these different 
> classes.  There are a few other places where this co-ordination is 
> duplicated, though, for example in CloudSolrClient.  And, as is always the 
> case with duplicated code, things are slightly different in each location.
> I'd like to try and rationalize this, by refactoring the connection 
> management and adding a ZkConnectionListener interface.  Any class that needs 
> to be notified when a zk session has expired or a new session has been 
> established can register itself with the SolrZkClient.  And we can remove a 
> whole bunch of abstraction leakage out of ZkController, and back into the 
> classes that actually need to deal with session changes.  Plus, it makes 
> things a lot easier to test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request: SOLR-8323

2016-05-02 Thread romseygeek
Github user romseygeek commented on the pull request:

https://github.com/apache/lucene-solr/pull/32#issuecomment-216347644
  
Feedback is good :-)

I'll pull CSW back out and make it public again.  I think keeping it 
separate from the Predicate is still a useful distinction though.  I'll try 
adding in an executor as well.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org




[jira] [Commented] (SOLR-9056) Add ZkConnectionListener interface

2016-05-02 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267392#comment-15267392
 ] 

Scott Blum commented on SOLR-9056:
--

Interested, but no bandwidth until we finish the other one. :)

> Add ZkConnectionListener interface
> --
>
> Key: SOLR-9056
> URL: https://issues.apache.org/jira/browse/SOLR-9056
> Project: Solr
>  Issue Type: New Feature
>Affects Versions: master, 6.1
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-9056.patch
>
>
> Zk connection management is currently split among a few classes in 
> not-very-helpful ways.  There's SolrZkClient, which manages general 
> interaction with zookeeper; ZkClientConnectionStrategy, which is a sort-of 
> connection factory, but one that's heavily intertwined with SolrZkClient; and 
> ConnectionManager, which doesn't actually manage connections at all, but 
> instead is a ZK watcher that calls back into SolrZkClient and 
> ZkClientConnectionStrategy.
> We also have a number of classes that need to be notified about ZK session 
> changes - ZkStateReader sets up a bunch of watches for cluster state updates, 
> Overseer and ZkController use ephemeral nodes for elections and service 
> registry, CoreContainer needs to register cores and deal with recoveries, and 
> so on.  At the moment, these are mostly handled via ZkController, which 
> therefore needs to know about the internals of all these different 
> classes.  There are a few other places where this co-ordination is 
> duplicated, though, for example in CloudSolrClient.  And, as is always the 
> case with duplicated code, things are slightly different in each location.
> I'd like to try and rationalize this, by refactoring the connection 
> management and adding a ZkConnectionListener interface.  Any class that needs 
> to be notified when a zk session has expired or a new session has been 
> established can register itself with the SolrZkClient.  And we can remove a 
> whole bunch of abstraction leakage out of ZkController, and back into the 
> classes that actually need to deal with session changes.  Plus, it makes 
> things a lot easier to test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.5-Linux (32bit/jdk1.7.0_80) - Build # 248 - Failure!

2016-05-02 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Linux/248/
Java: 32bit/jdk1.7.0_80 -server -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.solr.core.TestNRTOpen.testSharedCores

Error Message:
expected:<3> but was:<1>

Stack Trace:
java.lang.AssertionError: expected:<3> but was:<1>
at 
__randomizedtesting.SeedInfo.seed([E9127C99FBAB5A5A:F1136D6735D7755]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.core.TestNRTOpen.testSharedCores(TestNRTOpen.java:116)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.schema.TestManagedSchemaAPI.test

Error Message:
Error from server at http://127.0.0.1:35635/solr/testschemaapi_shard1_replica1: 
ERROR: [doc=2] unknown 

[jira] [Updated] (SOLR-9056) Add ZkConnectionListener interface

2016-05-02 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated SOLR-9056:

Attachment: SOLR-9056.patch

Patch.  It's a bit of a biggie, so I can open a pull request if anyone's 
interested in commenting that way.

* removes ConnectionManager and ZkClientConnectionStrategy, and replaces them 
with a ZkConnectionFactory.  This just has a createSolrZookeeper(Watcher 
watcher) method.
* SolrZkClient now exposes a registerConnectionListener() method
* When a listener is registered, if the client is already connected, it will 
call the listener's onConnect() method.
* SolrZkClient has its own internal zk Watcher which it passes to 
ZkConnectionFactory.createSolrZookeeper().  This means that SZK can now manage 
its own connections.
* When a session expires, SolrZkClient calls the onSessionExpiry() method of 
all its registered listeners
* When a session has been re-established, SolrZkClient calls the onConnect() 
method of all its registered listeners
* Network hiccups that don't cause session expiry are handled internally, and 
don't call out to listeners at all.
* ZkController now implements ZkConnectionListener, and registers itself with 
its internal client
* ZkStateReader now implements ZkConnectionListener

There are a whole bunch of other things to do in follow-up (Overseer and 
CoreContainer should be ZkConnectionListeners, migrate the ZkController 
listeners, etc), but this patch is big enough as it is.
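The listener contract described in the bullet points above can be sketched roughly as follows. This is a hedged illustration of the described behavior, not the patch itself: `ConnectionNotifier` and its method names are invented for the example.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Sketch of the described contract: listeners hear about session
// establishment and expiry; transient network hiccups are not surfaced.
interface ZkConnectionListener {
  void onConnect();        // new session established (or already connected at registration)
  void onSessionExpiry();  // session expired; ephemeral nodes are gone
}

class ConnectionNotifier {
  private final List<ZkConnectionListener> listeners = new CopyOnWriteArrayList<>();
  private volatile boolean connected = false;

  // Per the patch notes: registering while already connected
  // immediately fires the listener's onConnect().
  void registerConnectionListener(ZkConnectionListener l) {
    listeners.add(l);
    if (connected) {
      l.onConnect();
    }
  }

  void sessionEstablished() {
    connected = true;
    listeners.forEach(ZkConnectionListener::onConnect);
  }

  void sessionExpired() {
    connected = false;
    listeners.forEach(ZkConnectionListener::onSessionExpiry);
  }
}
```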

> Add ZkConnectionListener interface
> --
>
> Key: SOLR-9056
> URL: https://issues.apache.org/jira/browse/SOLR-9056
> Project: Solr
>  Issue Type: New Feature
>Affects Versions: master, 6.1
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-9056.patch
>
>
> Zk connection management is currently split among a few classes in 
> not-very-helpful ways.  There's SolrZkClient, which manages general 
> interaction with zookeeper; ZkClientConnectionStrategy, which is a sort-of 
> connection factory, but one that's heavily intertwined with SolrZkClient; and 
> ConnectionManager, which doesn't actually manage connections at all, but 
> instead is a ZK watcher that calls back into SolrZkClient and 
> ZkClientConnectionStrategy.
> We also have a number of classes that need to be notified about ZK session 
> changes - ZkStateReader sets up a bunch of watches for cluster state updates, 
> Overseer and ZkController use ephemeral nodes for elections and service 
> registry, CoreContainer needs to register cores and deal with recoveries, and 
> so on.  At the moment, these are mostly handled via ZkController, which 
> therefore needs to know about the internals of all these different 
> classes.  There are a few other places where this co-ordination is 
> duplicated, though, for example in CloudSolrClient.  And, as is always the 
> case with duplicated code, things are slightly different in each location.
> I'd like to try and rationalize this, by refactoring the connection 
> management and adding a ZkConnectionListener interface.  Any class that needs 
> to be notified when a zk session has expired or a new session has been 
> established can register itself with the SolrZkClient.  And we can remove a 
> whole bunch of abstraction leakage out of ZkController, and back into the 
> classes that actually need to deal with session changes.  Plus, it makes 
> things a lot easier to test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8323) Add CollectionWatcher API to ZkStateReader

2016-05-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267309#comment-15267309
 ] 

ASF GitHub Bot commented on SOLR-8323:
--

Github user dragonsinth commented on the pull request:

https://github.com/apache/lucene-solr/pull/32#issuecomment-216333963
  
@romseygeek nice job on the changes so far, and sorry to have so much 
feedback and so many asks.  This is a pretty complicated change so I feel like 
it merits the attention to detail.

I feel like we're at a fork in the road with this patch at the moment 
though, and we need to get more people involved to proceed.  Let me explain.

Even having fixed the "calling watchers while holding locks issue", the one 
thing that makes me most nervous about the current state is that we're still 
potentially executing user-provided predicates on threads that belong to a 
variety of other people-- e.g. the caller of forceUpdateCollection() or even 
the Zk event callback thread.  We could make a tactical fix to the 
implementation of waitForState() by turning that method into a loop and running 
the predicate on the actual thread that called waitForState(), such that the 
onStateChanged() handler doesn't dip into client code.

But honestly, I feel like having privatized CollectionStateWatcher and the 
ability to register / unregister is a missed opportunity.  I can think of uses 
for the feature, like in some cases Overseer operations could watch a 
collection for the duration of an operation to prevent having to re-query ZK.  
To make that solid, we'd need to either introduce an Executor in ZkStateReader 
for publishing events, or else require the watch registration to provide an 
executor, the way Guava's ListenableFuture does.

Thoughts?  I'd also like to hear from others.
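The "require the watch registration to provide an executor" option mentioned above can be sketched as below. The names are illustrative, not from the pull request; the point is only that user callbacks never run inline on the ZK event thread.

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executor;

// Sketch of executor-supplied watch registration, in the style of
// Guava's ListenableFuture.addListener(listener, executor). Each
// watcher runs on the executor its registrant provided, so state
// updates never execute user code on the ZK callback thread.
class ExecutorWatchRegistry {
  interface StateWatcher {
    void onStateChanged(Set<String> liveNodes, String collectionState);
  }

  private final Map<StateWatcher, Executor> watchers = new ConcurrentHashMap<>();

  void register(StateWatcher watcher, Executor executor) {
    watchers.put(watcher, executor);
  }

  void remove(StateWatcher watcher) {
    watchers.remove(watcher);
  }

  // Called from the ZK callback thread; hands each notification off
  // to the registrant's executor instead of running it inline.
  void fire(Set<String> liveNodes, String state) {
    watchers.forEach((w, ex) -> ex.execute(() -> w.onStateChanged(liveNodes, state)));
  }
}
```

A caller who genuinely wants same-thread delivery can still pass a direct executor such as `Runnable::run`, which keeps the simple case simple while making the threading choice explicit.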


> Add CollectionWatcher API to ZkStateReader
> --
>
> Key: SOLR-8323
> URL: https://issues.apache.org/jira/browse/SOLR-8323
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: master
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-8323.patch, SOLR-8323.patch, SOLR-8323.patch, 
> SOLR-8323.patch
>
>
> An API to watch for changes to collection state would be a generally useful 
> thing, both internally and for client use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org




[jira] [Created] (SOLR-9056) Add ZkConnectionListener interface

2016-05-02 Thread Alan Woodward (JIRA)
Alan Woodward created SOLR-9056:
---

 Summary: Add ZkConnectionListener interface
 Key: SOLR-9056
 URL: https://issues.apache.org/jira/browse/SOLR-9056
 Project: Solr
  Issue Type: New Feature
Affects Versions: master, 6.1
Reporter: Alan Woodward
Assignee: Alan Woodward


Zk connection management is currently split among a few classes in 
not-very-helpful ways.  There's SolrZkClient, which manages general interaction 
with zookeeper; ZkClientConnectionStrategy, which is a sort-of connection 
factory, but one that's heavily intertwined with SolrZkClient; and 
ConnectionManager, which doesn't actually manage connections at all, but 
instead is a ZK watcher that calls back into SolrZkClient and 
ZkClientConnectionStrategy.

We also have a number of classes that need to be notified about ZK session 
changes - ZkStateReader sets up a bunch of watches for cluster state updates, 
Overseer and ZkController use ephemeral nodes for elections and service 
registry, CoreContainer needs to register cores and deal with recoveries, and 
so on.  At the moment, these are mostly handled via ZkController, which 
therefore needs to know about the internals of all these different classes. 
 There are a few other places where this co-ordination is duplicated, though, 
for example in CloudSolrClient.  And, as is always the case with duplicated 
code, things are slightly different in each location.

I'd like to try and rationalize this, by refactoring the connection management 
and adding a ZkConnectionListener interface.  Any class that needs to be 
notified when a zk session has expired or a new session has been established 
can register itself with the SolrZkClient.  And we can remove a whole bunch of 
abstraction leakage out of ZkController, and back into the classes that 
actually need to deal with session changes.  Plus, it makes things a lot easier 
to test.
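A minimal sketch of what the proposed listener idea might look like; the interface and class names here are illustrative only, not a committed API:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical listener interface, roughly as proposed in this issue.
interface ZkConnectionListener {
    void onSessionExpired();
    void onSessionEstablished();
}

// Sketch of how a client such as SolrZkClient might fan session events
// out to registered listeners, removing that plumbing from ZkController.
class ConnectionNotifier {
    private final List<ZkConnectionListener> listeners = new CopyOnWriteArrayList<>();

    void addListener(ZkConnectionListener listener) {
        listeners.add(listener);
    }

    void fireSessionExpired() {
        listeners.forEach(ZkConnectionListener::onSessionExpired);
    }

    void fireSessionEstablished() {
        listeners.forEach(ZkConnectionListener::onSessionEstablished);
    }

    public static void main(String[] args) {
        ConnectionNotifier notifier = new ConnectionNotifier();
        notifier.addListener(new ZkConnectionListener() {
            public void onSessionExpired()     { System.out.println("session expired"); }
            public void onSessionEstablished() { System.out.println("session established"); }
        });
        notifier.fireSessionEstablished(); // prints "session established"
    }
}
```

Classes like ZkStateReader or Overseer would then implement the interface and register themselves, instead of ZkController coordinating them directly.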



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Commented] (SOLR-9055) Cleanup backup/restore implementation

2016-05-02 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267215#comment-15267215
 ] 

David Smiley commented on SOLR-9055:


Thanks for contributing, [~hgadre].  I suggest renaming this to: "Make 
collection backup/restore extensible".

Does this API enable the possibility of a hard-link based copy (applicable for 
both backup & restore)? It doesn't seem so, but I'm unsure.

I have a general question about HDFS; I have no real experience with it: I 
wonder if Java's NIO file abstractions could be used so we don't have to have 
separate code?  If so it would be wonderful -- simpler and less code to 
maintain.  See https://github.com/damiencarol/jsr203-hadoop   What do you think?

Nitpick: In your editor, if it has this feature (IntelliJ does), configure it 
to strip trailing whitespace _only on Modified Lines_.  IntelliJ: 
Editor/General/Other "Strip trailing spaces on Save".

Before committing to this API, it would be good to have it implement something 
useful (HDFS or whatever), otherwise we very well may miss problems with the 
API -- we probably will.  I'm not saying this issue needs to implement HDFS, 
but at least the proposed patch might have an implementation specific part in 
some separate files that wouldn't be committed with this issue.  I suppose this 
isn't strictly required but it would help.

> Cleanup backup/restore implementation
> -
>
> Key: SOLR-9055
> URL: https://issues.apache.org/jira/browse/SOLR-9055
> Project: Solr
>  Issue Type: Task
>Reporter: Hrishikesh Gadre
> Attachments: SOLR-9055.patch
>
>
> SOLR-5750 implemented backup/restore API for Solr. This JIRA is to track the 
> code cleanup/refactoring. Specifically following improvements should be made,
> - Add Solr/Lucene version to check the compatibility between the backup 
> version and the version of Solr on which it is being restored.
> - Add a backup implementation version to check the compatibility between the 
> "restore" implementation and backup format.
> - Introduce a Strategy interface to define how the Solr index data is backed 
> up (e.g. using file copy approach).
> - Introduce a Repository interface to define the file-system used to store 
> the backup data. (currently works only with local file system but can be 
> extended). This should be enhanced to introduce support for "registering" 
> repositories (e.g. HDFS, S3 etc.)
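The Repository abstraction proposed in the list above might be sketched as follows; all names are hypothetical, and the in-memory backend merely stands in for the local-FS/HDFS/S3 implementations being discussed:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical repository interface for backup storage backends.
// Not the API that was eventually committed; names are illustrative.
interface BackupRepository {
    void write(String path, byte[] data);
    byte[] read(String path);
    boolean exists(String path);
}

// Trivial in-memory implementation, standing in for a real backend.
class InMemoryRepository implements BackupRepository {
    private final Map<String, byte[]> files = new HashMap<>();

    public void write(String path, byte[] data) { files.put(path, data.clone()); }
    public byte[] read(String path)             { return files.get(path); }
    public boolean exists(String path)          { return files.containsKey(path); }

    public static void main(String[] args) {
        BackupRepository repo = new InMemoryRepository();
        repo.write("backups/collection1/segments_1", new byte[]{1, 2, 3});
        System.out.println(repo.exists("backups/collection1/segments_1")); // prints true
    }
}
```

With a seam like this, the Strategy (how to copy) and Repository (where to store) concerns stay independent, which is the separation the description asks for.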






[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_92) - Build # 16642 - Still Failing!

2016-05-02 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16642/
Java: 64bit/jdk1.8.0_92 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.handler.TestReqParamsAPI.test

Error Message:
Could not get expected value  'CY val' for path 'params/c' full output: {   
"responseHeader":{ "status":0, "QTime":0},   "params":{ "a":"A 
val", "b":"B val", "wt":"json", "useParams":""},   "context":{ 
"webapp":"/_ye/wk", "path":"/dump1", "httpMethod":"GET"}},  from 
server:  http://127.0.0.1:43609/_ye/wk/collection1

Stack Trace:
java.lang.AssertionError: Could not get expected value  'CY val' for path 
'params/c' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "params":{
"a":"A val",
"b":"B val",
"wt":"json",
"useParams":""},
  "context":{
"webapp":"/_ye/wk",
"path":"/dump1",
"httpMethod":"GET"}},  from server:  
http://127.0.0.1:43609/_ye/wk/collection1
at 
__randomizedtesting.SeedInfo.seed([A1E02935D39998CF:29B416EF7D65F537]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:457)
at 
org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:172)
at 
org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Resolved] (SOLR-4912) Configurable numRecordsToKeep for sync during recovery

2016-05-02 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker resolved SOLR-4912.
-
Resolution: Duplicate

> Configurable numRecordsToKeep for sync during recovery
> --
>
> Key: SOLR-4912
> URL: https://issues.apache.org/jira/browse/SOLR-4912
> Project: Solr
>  Issue Type: New Feature
>  Components: replication (java)
>Affects Versions: 4.3
>Reporter: Neelesh Shastry
>Priority: Minor
> Attachments: SOLR-4912.patch
>
>
> During a replica reboot, a full copy is triggered if the number of updates is 
> more than a constant 100 (UpdateLog.numRecordsToKeep). This feature adds a new 
> property to the updateLog configuration to make it configurable.
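For reference, since this issue was resolved as a duplicate, the equivalent knob that shipped is configurable in solrconfig.xml roughly as follows (the values shown are illustrative):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <updateLog>
    <str name="dir">${solr.ulog.dir:}</str>
    <!-- keep more records in the tlog so peer sync can avoid a full copy -->
    <int name="numRecordsToKeep">500</int>
    <int name="maxNumLogsToKeep">20</int>
  </updateLog>
</updateHandler>
```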






[jira] [Commented] (LUCENE-7258) Tune DocIdSetBuilder allocation rate

2016-05-02 Thread Jeff Wartes (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267176#comment-15267176
 ] 

Jeff Wartes commented on LUCENE-7258:
-

Ok, yeah, that’s a reasonable thing to assume. We usually think of it in terms 
of cpu work, but filter caches would be an equally great way to mitigate 
allocations. But a cache is really only useful when you’ve got non-uniform 
query distributions, or enough time-locality at your query rate that your rare 
queries haven’t faced a cache eviction yet. 

I’m indexing address-type data. Not uncommon. I think that if my typical 
geospatial search were based on some hyper-local phone location, we’d be done 
talking, since a filter cache would be useless.  

So maybe we should assume I’m not doing that.

Let’s assume I can get away with something coarse. Let’s assume I can convert 
all location based queries to the center point of a city. Let’s further assume 
that I only care about one radius per city. Finally, let’s assume I’m only 
searching in the US. There are some 40,000 cities in the US, so those 
assumptions yield 40,000 possible queries. That’s not too bad. 

With a 100M-doc core, I think that’s about 12.5 MB per filter cache entry. It 
could be less, I think, particularly with the changes in SOLR-8922, but since 
we’re only going with coarse queries, it’s reasonable to assume there’s going 
to be a lot of hits. 
I don’t need every city in the cache, of course, so maybe… 5%? That’s only some 
25G of heap. 
Doable, especially since it saves allocation size and you could probably trade 
in more of the eden space. (Although this would make warmup more of a pain) I’d 
probably have to cross the CompressedOops boundary at 32G of heap to do that 
too though, so add another 16G to get back to baseline.

Fortunately, the top 5% of cities probably maps to more than 5% of queries. 
More populated cities are also more likely targets for searching in most query 
corpuses. So assuming it’s the biggest 5% that are in the cache, maybe we can 
assume a 15% hit rate? 20%?

Ok, so now I’ve spent something like 41G of heap, and I’ve reduced allocations 
by 20%. Is this pretty good?

I suppose it’s worth noting that this also assumes a perfect cache eviction 
policy, (I’m pretty interested in SOLR-8241) and that there’s no other filter 
cache pressure. (At the least, I’m using facets - SOLR-8171)
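The heap arithmetic in the message above checks out; a quick sanity-check sketch (numbers only, no Solr involved):

```java
public class FilterCacheMath {
    // A FixedBitSet-style filter cache entry needs one bit per document.
    static long entryBytes(long numDocs) {
        return numDocs / 8;
    }

    public static void main(String[] args) {
        long perEntry = entryBytes(100_000_000L);   // 12_500_000 bytes, i.e. 12.5 MB
        long entries = 40_000 * 5 / 100;            // cache 5% of ~40,000 US cities
        long totalBytes = perEntry * entries;       // 25_000_000_000 bytes, i.e. 25 GB
        System.out.println(perEntry + " bytes/entry x " + entries
                + " entries = " + totalBytes / 1_000_000_000L + " GB");
    }
}
```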


> Tune DocIdSetBuilder allocation rate
> 
>
> Key: LUCENE-7258
> URL: https://issues.apache.org/jira/browse/LUCENE-7258
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial
>Reporter: Jeff Wartes
> Attachments: 
> LUCENE-7258-Tune-memory-allocation-rate-for-Intersec.patch, 
> LUCENE-7258-Tune-memory-allocation-rate-for-Intersec.patch, 
> allocation_plot.jpg
>
>
> LUCENE-7211 converted IntersectsPrefixTreeQuery to use DocIdSetBuilder, but 
> didn't actually reduce garbage generation for my Solr index.
> Since something like 40% of my garbage (by space) is now attributed to 
> DocIdSetBuilder.growBuffer, I charted a few different allocation strategies 
> to see if I could tune things more. 
> See here: http://i.imgur.com/7sXLAYv.jpg 
> The jump-then-flatline at the right would be where DocIdSetBuilder gives up 
> and allocates a FixedBitSet for a 100M-doc index. (The 1M-doc index 
> curve/cutoff looked similar)
> Perhaps unsurprisingly, the 1/8th growth factor in ArrayUtil.oversize is 
> terrible from an allocation standpoint if you're doing a lot of expansions, 
> and is especially terrible when used to build a short-lived data structure 
> like this one.
> By the time it goes with the FBS, it's allocated around twice as much memory 
> for the buffer as it would have needed for just the FBS.
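The cumulative cost of a small growth factor can be sanity-checked with a rough simulation. This is not Lucene's exact ArrayUtil.oversize logic (which also pads and rounds), just the geometric-series effect the description points at:

```java
public class GrowthCost {
    // Total bytes handed out by the allocator while growing an int[] from
    // 'start' elements until capacity reaches 'target', with the given factor.
    static long totalAllocated(long start, long target, double factor) {
        long total = 0;
        long size = start;
        while (size < target) {
            total += size * 4L;                         // 4 bytes per int
            size = Math.max(size + 1, (long) (size * factor));
        }
        return total + size * 4L;                       // plus the final buffer
    }

    public static void main(String[] args) {
        long target = 6_250_000L;                       // ~maxDoc/16 ints for a 100M-doc index
        long oneEighth = totalAllocated(16, target, 1.125);  // 1/8th growth
        long doubling  = totalAllocated(16, target, 2.0);    // classic doubling
        System.out.printf("1/8 growth allocates %.1fx what doubling does%n",
                oneEighth / (double) doubling);
    }
}
```

With ratio r, the cumulative allocation approaches finalSize * r/(r-1): about 9x the final buffer for r = 1.125, versus about 2x for doubling, which is why a short-lived buffer built with 1/8th growth is so allocation-heavy.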






[jira] [Commented] (SOLR-8323) Add CollectionWatcher API to ZkStateReader

2016-05-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267156#comment-15267156
 ] 

ASF GitHub Bot commented on SOLR-8323:
--

Github user romseygeek commented on the pull request:

https://github.com/apache/lucene-solr/pull/32#issuecomment-216315281
  
OK, latest push moves all notifications out of synchronized blocks.


> Add CollectionWatcher API to ZkStateReader
> --
>
> Key: SOLR-8323
> URL: https://issues.apache.org/jira/browse/SOLR-8323
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: master
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-8323.patch, SOLR-8323.patch, SOLR-8323.patch, 
> SOLR-8323.patch
>
>
> An API to watch for changes to collection state would be a generally useful 
> thing, both internally and for client use.






[GitHub] lucene-solr pull request: SOLR-8323

2016-05-02 Thread romseygeek
Github user romseygeek commented on the pull request:

https://github.com/apache/lucene-solr/pull/32#issuecomment-216315281
  
OK, latest push moves all notifications out of synchronized blocks.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[jira] [Commented] (SOLR-8323) Add CollectionWatcher API to ZkStateReader

2016-05-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267144#comment-15267144
 ] 

ASF GitHub Bot commented on SOLR-8323:
--

Github user romseygeek commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/32#discussion_r61777874
  
--- Diff: 
solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java ---
@@ -1066,32 +1079,201 @@ public static String getCollectionPath(String 
coll) {
 return COLLECTIONS_ZKNODE+"/"+coll + "/state.json";
   }
 
-  public void addCollectionWatch(String coll) {
-if (interestingCollections.add(coll)) {
-  LOG.info("addZkWatch [{}]", coll);
-  new StateWatcher(coll).refreshAndWatch(false);
+  /**
+   * Notify this reader that a local Core is a member of a collection, and 
so that collection
+   * state should be watched.
+   *
+   * Not a public API.  This method should only be called from 
ZkController.
+   *
+   * The number of cores per-collection is tracked, and adding multiple 
cores from the same
+   * collection does not increase the number of watches.
+   *
+   * @param collection the collection that the core is a member of
+   *
+   * @see ZkStateReader#unregisterCore(String)
+   */
+  public void registerCore(String collection) {
+AtomicBoolean reconstructState = new AtomicBoolean(false);
+collectionWatches.compute(collection, (k, v) -> {
+  if (v == null) {
+reconstructState.set(true);
+v = new CollectionWatch();
+  }
+  v.coreRefCount++;
+  return v;
+});
+if (reconstructState.get()) {
+  new StateWatcher(collection).refreshAndWatch();
+  synchronized (getUpdateLock()) {
+constructState();
+  }
+}
+  }
+
+  /**
+   * Notify this reader that a local core that is a member of a collection 
has been closed.
+   *
+   * Not a public API.  This method should only be called from 
ZkController.
+   *
+   * If no cores are registered for a collection, and there are no {@link 
CollectionStateWatcher}s
+   * for that collection either, the collection watch will be removed.
+   *
+   * @param collection the collection that the core belongs to
+   */
+  public void unregisterCore(String collection) {
+AtomicBoolean reconstructState = new AtomicBoolean(false);
+collectionWatches.compute(collection, (k, v) -> {
+  if (v == null)
+return null;
+  if (v.coreRefCount > 0)
+v.coreRefCount--;
+  if (v.canBeRemoved()) {
+watchedCollectionStates.remove(collection);
+lazyCollectionStates.put(collection, new 
LazyCollectionRef(collection));
+reconstructState.set(true);
+return null;
+  }
+  return v;
+});
+if (reconstructState.get()) {
+  synchronized (getUpdateLock()) {
+constructState();
+  }
+}
+  }
+
+  /**
+   * Register a CollectionStateWatcher to be called when the state of a 
collection changes
+   *
+   * A given CollectionStateWatcher will be only called once.  If you want 
to have a persistent watcher,
+   * it should register itself again in its {@link 
CollectionStateWatcher#onStateChanged(Set, DocCollection)}
+   * method.
+   */
+  public void registerCollectionStateWatcher(String collection, 
CollectionStateWatcher stateWatcher) {
+AtomicBoolean watchSet = new AtomicBoolean(false);
+collectionWatches.compute(collection, (k, v) -> {
+  if (v == null) {
+v = new CollectionWatch();
+watchSet.set(true);
+  }
+  v.stateWatchers.add(stateWatcher);
+  return v;
+});
+if (watchSet.get()) {
+  new StateWatcher(collection).refreshAndWatch();
   synchronized (getUpdateLock()) {
 constructState();
   }
 }
   }
 
+  /**
+   * Block until a CollectionStatePredicate returns true, or the wait 
times out
+   *
+   * Note that the predicate may be called again even after it has 
returned true, so
+   * implementors should avoid changing state within the predicate call 
itself.
--- End diff --

Yeah, I think this can be done in a follow-up issue, if need be?
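The one-shot watcher semantics documented in the diff above (a watcher must re-register itself to stay active) can be sketched with simplified stand-ins; DocCollection and the registry here are toy versions for illustration, not the real ZkStateReader plumbing:

```java
import java.util.HashSet;
import java.util.Set;

// Toy stand-in for org.apache.solr.common.cloud.DocCollection.
class DocCollection { }

// Mirrors the interface shape shown in the diff above.
interface CollectionStateWatcher {
    void onStateChanged(Set<String> liveNodes, DocCollection state);
}

// Simplified registry with one-shot notification semantics.
class Registry {
    final Set<CollectionStateWatcher> watchers = new HashSet<>();

    void register(CollectionStateWatcher w) { watchers.add(w); }

    // Each notification clears the registrations: watchers fire only once.
    void fire(Set<String> liveNodes, DocCollection state) {
        Set<CollectionStateWatcher> current = new HashSet<>(watchers);
        watchers.clear();
        current.forEach(w -> w.onStateChanged(liveNodes, state));
    }
}

// A persistent watcher keeps watching by re-registering inside the callback.
class PersistentWatcher implements CollectionStateWatcher {
    final Registry registry;
    int notifications = 0;

    PersistentWatcher(Registry registry) { this.registry = registry; }

    public void onStateChanged(Set<String> liveNodes, DocCollection state) {
        notifications++;
        registry.register(this);  // re-register to stay subscribed
    }

    public static void main(String[] args) {
        Registry registry = new Registry();
        PersistentWatcher watcher = new PersistentWatcher(registry);
        registry.register(watcher);
        registry.fire(new HashSet<>(), new DocCollection());
        registry.fire(new HashSet<>(), new DocCollection());
        System.out.println(watcher.notifications); // prints 2
    }
}
```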


> Add CollectionWatcher API to ZkStateReader
> --
>
> Key: SOLR-8323
> URL: https://issues.apache.org/jira/browse/SOLR-8323
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: master
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-8323.patch, SOLR-8323.patch, 

[GitHub] lucene-solr pull request: SOLR-8323

2016-05-02 Thread romseygeek
Github user romseygeek commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/32#discussion_r61777874
  
--- Diff: 
solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java ---
@@ -1066,32 +1079,201 @@ public static String getCollectionPath(String 
coll) {
 return COLLECTIONS_ZKNODE+"/"+coll + "/state.json";
   }
 
-  public void addCollectionWatch(String coll) {
-if (interestingCollections.add(coll)) {
-  LOG.info("addZkWatch [{}]", coll);
-  new StateWatcher(coll).refreshAndWatch(false);
+  /**
+   * Notify this reader that a local Core is a member of a collection, and 
so that collection
+   * state should be watched.
+   *
+   * Not a public API.  This method should only be called from 
ZkController.
+   *
+   * The number of cores per-collection is tracked, and adding multiple 
cores from the same
+   * collection does not increase the number of watches.
+   *
+   * @param collection the collection that the core is a member of
+   *
+   * @see ZkStateReader#unregisterCore(String)
+   */
+  public void registerCore(String collection) {
+AtomicBoolean reconstructState = new AtomicBoolean(false);
+collectionWatches.compute(collection, (k, v) -> {
+  if (v == null) {
+reconstructState.set(true);
+v = new CollectionWatch();
+  }
+  v.coreRefCount++;
+  return v;
+});
+if (reconstructState.get()) {
+  new StateWatcher(collection).refreshAndWatch();
+  synchronized (getUpdateLock()) {
+constructState();
+  }
+}
+  }
+
+  /**
+   * Notify this reader that a local core that is a member of a collection 
has been closed.
+   *
+   * Not a public API.  This method should only be called from 
ZkController.
+   *
+   * If no cores are registered for a collection, and there are no {@link 
CollectionStateWatcher}s
+   * for that collection either, the collection watch will be removed.
+   *
+   * @param collection the collection that the core belongs to
+   */
+  public void unregisterCore(String collection) {
+AtomicBoolean reconstructState = new AtomicBoolean(false);
+collectionWatches.compute(collection, (k, v) -> {
+  if (v == null)
+return null;
+  if (v.coreRefCount > 0)
+v.coreRefCount--;
+  if (v.canBeRemoved()) {
+watchedCollectionStates.remove(collection);
+lazyCollectionStates.put(collection, new 
LazyCollectionRef(collection));
+reconstructState.set(true);
+return null;
+  }
+  return v;
+});
+if (reconstructState.get()) {
+  synchronized (getUpdateLock()) {
+constructState();
+  }
+}
+  }
+
+  /**
+   * Register a CollectionStateWatcher to be called when the state of a 
collection changes
+   *
+   * A given CollectionStateWatcher will be only called once.  If you want 
to have a persistent watcher,
+   * it should register itself again in its {@link 
CollectionStateWatcher#onStateChanged(Set, DocCollection)}
+   * method.
+   */
+  public void registerCollectionStateWatcher(String collection, 
CollectionStateWatcher stateWatcher) {
+AtomicBoolean watchSet = new AtomicBoolean(false);
+collectionWatches.compute(collection, (k, v) -> {
+  if (v == null) {
+v = new CollectionWatch();
+watchSet.set(true);
+  }
+  v.stateWatchers.add(stateWatcher);
+  return v;
+});
+if (watchSet.get()) {
+  new StateWatcher(collection).refreshAndWatch();
   synchronized (getUpdateLock()) {
 constructState();
   }
 }
   }
 
+  /**
+   * Block until a CollectionStatePredicate returns true, or the wait 
times out
+   *
+   * Note that the predicate may be called again even after it has 
returned true, so
+   * implementors should avoid changing state within the predicate call 
itself.
--- End diff --

Yeah, I think this can be done in a follow-up issue, if need be?





[jira] [Commented] (SOLR-8996) Add Random Streaming Expression

2016-05-02 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267119#comment-15267119
 ] 

Joel Bernstein commented on SOLR-8996:
--

No problem! This is going to be a fun release!

> Add Random Streaming Expression
> ---
>
> Key: SOLR-8996
> URL: https://issues.apache.org/jira/browse/SOLR-8996
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.1
>
> Attachments: RandomStream.java, SOLR-8996.patch
>
>
> The random Streaming Expression will return a *limited* random stream of 
> Tuples that match a query. This will be useful in many different scenarios 
> where random data sets are needed.
> Proposed syntax:
> {code}
> random(baskets, q="productID:productX", rows="100", fl="basketID") 
> {code}
> The sample code above will query the *baskets* collection and return 100 
> random *basketIDs* where the productID is productX.
> The underlying implementation will rely on Solr's random field type.






[jira] [Resolved] (SOLR-8925) Add gatherNodes Streaming Expression to support breadth first traversals

2016-05-02 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-8925.
--
Resolution: Implemented

> Add gatherNodes Streaming Expression to support breadth first traversals
> 
>
> Key: SOLR-8925
> URL: https://issues.apache.org/jira/browse/SOLR-8925
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.1
>
> Attachments: SOLR-8925.patch, SOLR-8925.patch, SOLR-8925.patch, 
> SOLR-8925.patch, SOLR-8925.patch, SOLR-8925.patch, SOLR-8925.patch, 
> SOLR-8925.patch
>
>
> The gatherNodes Streaming Expression is a flexible general purpose breadth 
> first graph traversal. It uses the same parallel join under the covers as 
> (SOLR-) but is much more generalized and can be used for a wide range of 
> use cases.
> Sample syntax:
> {code}
>  gatherNodes(friends,
>              gatherNodes(friends,
>                          search(articles, q="body:(queryA)", fl="author"),
>                          walk="author->user",
>                          gather="friend"),
>              walk="friend->user",
>              gather="friend",
>              scatter="branches, leaves")
> {code}
> The expression above is evaluated as follows:
> 1) The inner search() expression is evaluated on the *articles* collection, 
> emitting a Stream of Tuples with the author field populated.
> 2) The inner gatherNodes() expression reads the Tuples from the search() 
> stream and traverses to the *friends* collection by performing a distributed 
> join between the articles.author and friends.user fields. It gathers the value 
> from the *friend* field during the join.
> 3) The inner gatherNodes() expression then emits the *friend* Tuples. By 
> default the gatherNodes function emits only the leaves which in this case are 
> the *friend* tuples.
> 4) The outer gatherNodes() expression reads the *friend* Tuples and traverses 
> again in the "friends" collection, this time performing the join between the 
> *friend* Tuples emitted in step 3. This collects the friends of friends.
> 5) The outer gatherNodes() expression emits the entire graph that was 
> collected. This is controlled by the "scatter" parameter. In the example the 
> *root* nodes are the authors, the *branches* are the author's friends and the 
> *leaves* are the friend of friends.
> This traversal is fully distributed and cross collection.
> *Aggregations* are also supported during the traversal. This can be useful 
> for making recommendations based on co-occurrence counts. Sample syntax:
> {code}
> top(
>   gatherNodes(baskets,
>               search(baskets, q="prodid:X", fl="basketid", rows="500",
>                      sort="random_7897987 asc"),
>               walk="basketid->basketid",
>               gather="prodid",
>               fl="prodid, price",
>               count(*),
>               avg(price)),
>   n=4,
>   sort="count(*) desc, avg(price) asc")
> {code}
> In the expression above, the inner search() function searches the *baskets* 
> collection for 500 random basketIds that have the prodid X.
> gatherNodes then traverses the *baskets* collection and gathers all the 
> prodids for the selected basketIds.
> It also aggregates the counts and average price for each prodid collected. 
> The count reflects the co-occurrence count between each prodid gathered and 
> prodid X. The outer *top* expression selects the top 4 prodids emitted from 
> gatherNodes, based on the co-occurrence count and average price.
> Like all streaming expressions the gatherNodes expression can be combined 
> with other streaming expressions. For example the following expression uses a 
> hashJoin to intersect the network of friends rooted to authors found with 
> different queries:
> {code}
> hashInnerJoin(
>   gatherNodes(friends,
>               gatherNodes(friends,
>                           search(articles, q="body:(queryA)", fl="author"),
>                           walk="author->user",
>                           gather="friend"),
>               walk="friend->user",
>               gather="friend",
>               scatter="branches, leaves"),
>   gatherNodes(friends,
>               gatherNodes(friends,
>                           search(articles, q="body:(queryB)", fl="author"),
>                           walk="author->user",
>                           gather="friend"),
>               walk="friend->user",
>   

[jira] [Resolved] (SOLR-9027) Add GraphTermsQuery to limit traversal on high frequency nodes

2016-05-02 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-9027.
--
Resolution: Implemented

> Add GraphTermsQuery to limit traversal on high frequency nodes
> --
>
> Key: SOLR-9027
> URL: https://issues.apache.org/jira/browse/SOLR-9027
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-9027.patch, SOLR-9027.patch, SOLR-9027.patch, 
> SOLR-9027.patch
>
>
> The gatherNodes() Streaming Expression is currently using a basic disjunction 
> query to perform the traversals. This ticket is to create a specific 
> GraphTermsQuery for performing the traversals. 
> The GraphTermsQuery will be based off of the TermsQuery, but will also 
> include an option for a docFreq cutoff. Terms that are above the docFreq 
> cutoff will not be included in the query. This will help users do a more 
> precise and efficient traversal.
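The docFreq cutoff idea described above can be illustrated without Lucene; filterTerms below is a toy stand-in for the cutoff logic, not the GraphTermsQuery API itself:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DocFreqCutoff {
    // Keep only terms whose document frequency is at or below the cutoff,
    // so high-frequency "hub" nodes are skipped during traversal.
    static List<String> filterTerms(Map<String, Integer> docFreqs, int maxDocFreq) {
        List<String> kept = new ArrayList<>();
        for (Map.Entry<String, Integer> e : docFreqs.entrySet()) {
            if (e.getValue() <= maxDocFreq) {
                kept.add(e.getKey());
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        Map<String, Integer> freqs = new LinkedHashMap<>();
        freqs.put("alice", 12);            // low-frequency node: traversed
        freqs.put("celebrity", 2_000_000); // high-frequency hub: skipped
        freqs.put("bob", 40);
        System.out.println(filterTerms(freqs, 1000)); // prints [alice, bob]
    }
}
```

In the real query the frequencies would come from the index's term statistics rather than a map, but the pruning decision is the same.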






[jira] [Closed] (SOLR-8996) Add Random Streaming Expression

2016-05-02 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein closed SOLR-8996.

Resolution: Implemented

> Add Random Streaming Expression
> ---
>
> Key: SOLR-8996
> URL: https://issues.apache.org/jira/browse/SOLR-8996
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.1
>
> Attachments: RandomStream.java, SOLR-8996.patch
>
>
> The random Streaming Expression will return a *limited* random stream of 
> Tuples that match a query. This will be useful in many different scenarios 
> where random data sets are needed.
> Proposed syntax:
> {code}
> random(baskets, q="productID:productX", rows="100", fl="basketID") 
> {code}
> The sample code above will query the *baskets* collection and return 100 
> random *basketIDs* where the productID is productX.
> The underlying implementation will rely on Solr's random field type.






[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_92) - Build # 556 - Failure!

2016-05-02 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/556/
Java: 64bit/jdk1.8.0_92 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.search.TestSolr4Spatial2.testRptWithGeometryField

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([34C88D55E9D32DEA:12DF35A3D4827488]:0)
at 
org.apache.lucene.util.DocIdSetBuilder.&lt;init&gt;(DocIdSetBuilder.java:92)
at 
org.apache.lucene.spatial.composite.IntersectsRPTVerifyQuery$IntersectsDifferentiatingQuery$IntersectsDifferentiatingVisitor.&lt;init&gt;(IntersectsRPTVerifyQuery.java:166)
at 
org.apache.lucene.spatial.composite.IntersectsRPTVerifyQuery$IntersectsDifferentiatingQuery.compute(IntersectsRPTVerifyQuery.java:157)
at 
org.apache.lucene.spatial.composite.IntersectsRPTVerifyQuery$1.scorer(IntersectsRPTVerifyQuery.java:95)
at org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
at 
org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:260)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1810)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1627)
at 
org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:643)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:529)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:293)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2015)
at org.apache.solr.util.TestHarness.query(TestHarness.java:310)
at org.apache.solr.util.TestHarness.query(TestHarness.java:292)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:852)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:821)
at 
org.apache.solr.search.TestSolr4Spatial2.testRptWithGeometryField(TestSolr4Spatial2.java:139)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-9055) Cleanup backup/restore implementation

2016-05-02 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267004#comment-15267004
 ] 

Hrishikesh Gadre commented on SOLR-9055:


The repository interface defined as part of this patch could be used while 
defining APIs in SOLR-7374

> Cleanup backup/restore implementation
> -
>
> Key: SOLR-9055
> URL: https://issues.apache.org/jira/browse/SOLR-9055
> Project: Solr
>  Issue Type: Task
>Reporter: Hrishikesh Gadre
> Attachments: SOLR-9055.patch
>
>
> SOLR-5750 implemented a backup/restore API for Solr. This JIRA is to track the 
> code cleanup/refactoring. Specifically, the following improvements should be made:
> - Add Solr/Lucene version to check the compatibility between the backup 
> version and the version of Solr on which it is being restored.
> - Add a backup implementation version to check the compatibility between the 
> "restore" implementation and backup format.
> - Introduce a Strategy interface to define how the Solr index data is backed 
> up (e.g. using file copy approach).
> - Introduce a Repository interface to define the file-system used to store 
> the backup data. (currently works only with local file system but can be 
> extended). This should be enhanced to introduce support for "registering" 
> repositories (e.g. HDFS, S3 etc.)






[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 111 - Failure!

2016-05-02 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/111/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.solr.schema.TestManagedSchemaAPI.test

Error Message:
Error from server at http://127.0.0.1:49182/solr/testschemaapi_shard1_replica1: 
ERROR: [doc=2] unknown field 'myNewField1'

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:49182/solr/testschemaapi_shard1_replica1: ERROR: 
[doc=2] unknown field 'myNewField1'
at 
__randomizedtesting.SeedInfo.seed([4F6A02F8A4DD5559:C73E3D220A2138A1]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:661)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1073)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:962)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:898)
at 
org.apache.solr.schema.TestManagedSchemaAPI.testAddFieldAndDocument(TestManagedSchemaAPI.java:86)
at 
org.apache.solr.schema.TestManagedSchemaAPI.test(TestManagedSchemaAPI.java:55)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Commented] (SOLR-9054) The new GUI is using hardcoded paths

2016-05-02 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266966#comment-15266966
 ] 

Upayavira commented on SOLR-9054:
-

Yes, it is a duplicate. 

When I implemented the new UI, I deliberately did NOT support diverting away 
from /solr. The location where Solr is mounted is not to be considered a 
user-configurable value - this, really, is part of the move away from Solr 
being a WAR distributable to being an application in its own right.

Why do you want to replace the /solr portion of the URL?

> The new GUI is using hardcoded paths
> 
>
> Key: SOLR-9054
> URL: https://issues.apache.org/jira/browse/SOLR-9054
> Project: Solr
>  Issue Type: Bug
>  Components: web gui
>Affects Versions: 6.0
>Reporter: Valerio Di Cagno
>
> If Apache Solr 6.0 is started without using the default context root "/solr",
> every admin service will not work properly and it is not possible to use the 
> provided links to go back to the old GUI.
> In the JavaScript files the parameter config.solr_path is sometimes ignored
> or replaced with the value /solr, returning 404 on access.
> Affected files: 
> solr-webapp/webapp/js/services.js
> Suggested solution:
> Complete the integration with /js/scripts/app.js






[jira] [Commented] (SOLR-9053) Upgrade fileupload-commons to 1.3.1

2016-05-02 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266950#comment-15266950
 ] 

Mike Drob commented on SOLR-9053:
-

I get two test failures with this patch, but they are reproducible before 
applying the patch as well.

> Upgrade fileupload-commons to 1.3.1
> ---
>
> Key: SOLR-9053
> URL: https://issues.apache.org/jira/browse/SOLR-9053
> Project: Solr
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 4.6, 5.5, trunk
>Reporter: Jeff Field
>  Labels: commons-file-upload
> Attachments: SOLR-9053.patch
>
>
> The project appears to pull in FileUpload 1.2.1. According to CVE-2014-0050:
> "MultipartStream.java in Apache Commons FileUpload before 1.3.1, as used in 
> Apache Tomcat, JBoss Web, and other products, allows remote attackers to 
> cause a denial of service (infinite loop and CPU consumption) via a crafted 
> Content-Type header that bypasses a loop's intended exit conditions."
> [Source|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0050]






Re: [VOTE] Release Lucene/Solr 5.5.1

2016-05-02 Thread Noble Paul
I shall dig into this

On Mon, May 2, 2016 at 9:17 PM, Yonik Seeley  wrote:

> +1
>
> -Yonik
>
>
> On Sat, Apr 30, 2016 at 5:25 PM, Anshum Gupta 
> wrote:
> > Please vote for the RC1 release candidate for Lucene/Solr 5.5.1.
> >
> > Artifacts:
> >
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.5.1-RC1-revc08f17bca0d9cbf516874d13d221ab100e5b7d58
> >
> > Smoke tester:
> >
> >   python3 -u dev-tools/scripts/smokeTestRelease.py
> >
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.5.1-RC1-revc08f17bca0d9cbf516874d13d221ab100e5b7d58
> >
> >
> > Here's my +1:
> >
> > SUCCESS! [0:26:44.452268]
> >
> > --
> > Anshum Gupta
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


-- 
-
Noble Paul


[jira] [Commented] (SOLR-8988) Improve facet.method=fcs performance in SolrCloud

2016-05-02 Thread Keith Laban (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266916#comment-15266916
 ] 

Keith Laban commented on SOLR-8988:
---

[~hossman] how does the updated patch look?

> Improve facet.method=fcs performance in SolrCloud
> -
>
> Key: SOLR-8988
> URL: https://issues.apache.org/jira/browse/SOLR-8988
> Project: Solr
>  Issue Type: Improvement
>Reporter: Keith Laban
> Attachments: SOLR-8988.patch, SOLR-8988.patch, Screen Shot 2016-04-25 
> at 2.54.47 PM.png, Screen Shot 2016-04-25 at 2.55.00 PM.png
>
>
> This relates to SOLR-8559 -- which improves the algorithm used by fcs 
> faceting when {{facet.mincount=1}}
> This patch allows {{facet.mincount}} to be sent as 1 for distributed queries. 
> As far as I can tell there is no reason to set {{facet.mincount=0}} for 
> refinement purposes. After trying to make sense of all the refinement logic, 
> I can't see how the difference between _no value_ and _value=0_ would have a 
> negative effect.
> *Test perf:*
> - ~15million unique terms
> - query matches ~3million documents
> *Params:*
> {code}
> facet.mincount=1
> facet.limit=500
> facet.method=fcs
> facet.sort=count
> {code}
> *Average Time Per Request:*
> - Before patch:  ~20seconds
> - After patch: <1 second
> *Note*: all tests pass and in my test, the output was identical before and 
> after patch.
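A toy illustration (plain Java, not Solr code) of why the mincount setting matters so much for fcs faceting over a high-cardinality field: with mincount=0 the facet response must enumerate every unique term in the field, even terms matching zero documents, while mincount=1 only touches terms that actually occur in the matching documents.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Simplified facet counter showing the mincount=0 vs mincount=1
// difference discussed above. With ~15M unique terms and ~3M matching
// docs, seeding every term is the expensive part.
class FacetMincountSketch {
    public static Map<String, Integer> facet(String[] matchingDocTerms,
                                             String[] allFieldTerms,
                                             int mincount) {
        Map<String, Integer> counts = new HashMap<>();
        if (mincount == 0) {
            // Seed every term in the field with a zero count.
            for (String t : allFieldTerms) counts.put(t, 0);
        }
        for (String t : matchingDocTerms) counts.merge(t, 1, Integer::sum);
        Map<String, Integer> out = new TreeMap<>();
        counts.forEach((t, c) -> { if (c >= mincount) out.put(t, c); });
        return out;
    }
}
```

This is only the counting side; the patch's actual claim is about the refinement phase of distributed faceting, where sending mincount=1 instead of 0 avoids the zero-seeding work on every shard.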






[jira] [Commented] (SOLR-8986) Windows solr.cmd seems to require -p 8983

2016-05-02 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266902#comment-15266902
 ] 

Joel Bernstein commented on SOLR-8986:
--

The CHANGES.txt commits were actually for SOLR-8996. Sorry for the noise on this 
ticket.

> Windows solr.cmd seems to require -p 8983
> -
>
> Key: SOLR-8986
> URL: https://issues.apache.org/jira/browse/SOLR-8986
> Project: Solr
>  Issue Type: Bug
>Reporter: Bill Bell
> Attachments: start-solr-on-windows.png
>
>







[jira] [Commented] (LUCENE-7268) Remove ArrayUtil.timSort?

2016-05-02 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266900#comment-15266900
 ] 

Adrien Grand commented on LUCENE-7268:
--

That is right for our TimSort too, my bad. I did the test with a max temporary 
storage of array.length above, but it would work the same with a maximum 
temporary storage of about array.length/2, and it would still not merge in 
place.

> Remove ArrayUtil.timSort?
> -
>
> Key: LUCENE-7268
> URL: https://issues.apache.org/jira/browse/LUCENE-7268
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Attachments: LUCENE-7268_mods.patch
>
>
> Is there some workload where our timSort is better than the JDK one? Should 
> we just remove ours if it's slower?
> Not that it's a great test, but I switched Polygon2D edge sorting (just the 
> one where it says "sort the edges then build a balanced tree from them") from 
> Arrays.sort to ArrayUtil.timSort and was surprised when performance was much 
> slower for an enormous polygon 
> (http://people.apache.org/~mikemccand/geobench/cleveland.poly.txt.gz)
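A dependency-free harness of the kind such a comparison needs. Only the JDK sort is wired in below so the snippet stays self-contained; `ArrayUtil.timSort` requires lucene-core on the classpath, but any implementation could be passed in as a second candidate via the same functional interface. `SortBench` and its method names are illustrative, not from the ticket.

```java
import java.util.Arrays;
import java.util.Random;
import java.util.function.Consumer;

// Minimal sort-timing harness. Arrays.sort on an Object[] is the JDK's
// TimSort; with lucene-core available, a lambda wrapping
// ArrayUtil.timSort could be timed the same way.
class SortBench {
    public static long timeSort(Consumer<Integer[]> sorter, Integer[] data) {
        Integer[] copy = Arrays.copyOf(data, data.length);  // sort a copy
        long start = System.nanoTime();
        sorter.accept(copy);
        long elapsed = System.nanoTime() - start;
        // Sanity check: the candidate must actually sort.
        for (int i = 1; i < copy.length; i++) {
            if (copy[i - 1] > copy[i]) throw new AssertionError("not sorted");
        }
        return elapsed;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        Integer[] data = new Integer[200_000];
        for (int i = 0; i < data.length; i++) data[i] = rnd.nextInt();
        // Warm up the JIT, then take one measurement.
        for (int i = 0; i < 3; i++) timeSort(Arrays::sort, data);
        System.out.println("JDK TimSort: " + timeSort(Arrays::sort, data) + " ns");
    }
}
```

A single wall-clock run like this is only a rough signal (JIT warm-up and GC noise matter); a JMH benchmark would be the rigorous way to settle the question raised in the ticket.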






[jira] [Commented] (SOLR-8925) Add gatherNodes Streaming Expression to support breadth first traversals

2016-05-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266889#comment-15266889
 ] 

ASF subversion and git services commented on SOLR-8925:
---

Commit df72df1c58a5884774d003eec2f71c27a4737896 in lucene-solr's branch 
refs/heads/branch_6x from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=df72df1 ]

SOLR-8986, SOLR-8925, SOLR-9027: Update CHANGES.txt

Conflicts:
solr/CHANGES.txt


> Add gatherNodes Streaming Expression to support breadth first traversals
> 
>
> Key: SOLR-8925
> URL: https://issues.apache.org/jira/browse/SOLR-8925
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.1
>
> Attachments: SOLR-8925.patch, SOLR-8925.patch, SOLR-8925.patch, 
> SOLR-8925.patch, SOLR-8925.patch, SOLR-8925.patch, SOLR-8925.patch, 
> SOLR-8925.patch
>
>
> The gatherNodes Streaming Expression is a flexible general purpose breadth 
> first graph traversal. It uses the same parallel join under the covers as 
> (SOLR-) but is much more generalized and can be used for a wide range of 
> use cases.
> Sample syntax:
> {code}
>  gatherNodes(friends,
>  gatherNodes(friends,
>  search(articles, q="body:(queryA)", fl="author"),
>  walk="author->user",
>  gather="friend"),
>  walk="friend->user",
>  gather="friend",
>  scatter="branches, leaves")
> {code}
> The expression above is evaluated as follows:
> 1) The inner search() expression is evaluated on the *articles* collection, 
> emitting a Stream of Tuples with the author field populated.
> 2) The inner gatherNodes() expression reads the Tuples from the search() 
> stream and traverses to the *friends* collection by performing a distributed 
> join between the articles.author and friends.user fields. It gathers the value 
> from the *friend* field during the join.
> 3) The inner gatherNodes() expression then emits the *friend* Tuples. By 
> default the gatherNodes function emits only the leaves which in this case are 
> the *friend* tuples.
> 4) The outer gatherNodes() expression reads the *friend* Tuples and traverses 
> again in the "friends" collection, this time performing the join with the 
> *friend* Tuples emitted in step 3. This collects the friends of friends.
> 5) The outer gatherNodes() expression emits the entire graph that was 
> collected. This is controlled by the "scatter" parameter. In the example the 
> *root* nodes are the authors, the *branches* are the author's friends and the 
> *leaves* are the friend of friends.
> This traversal is fully distributed and cross collection.
> *Aggregations* are also supported during the traversal. This can be useful 
> for making recommendations based on co-occurrence counts. Sample syntax:
> {code}
> top(
>   gatherNodes(baskets,
>   search(baskets, q="prodid:X", fl="basketid", rows="500", 
> sort="random_7897987 asc"),
>   walk="basketid->basketid",
>   gather="prodid",
>   fl="prodid, price",
>   count(*),
>   avg(price)),
>   n=4,
>   sort="count(*) desc, avg(price) asc")
> {code}
> In the expression above, the inner search() function searches the basket 
> collection for 500 random basketId's that have the prodid X.
> gatherNodes then traverses the basket collection and gathers all the prodid's 
> for the selected basketIds.
> It also aggregates the counts and average price for each prodid collected. 
> The count reflects the co-occurrence count for each prodid gathered with prodid 
> X. The outer *top* expression selects the top 4 prodid's emitted from 
> gatherNodes, based on the co-occurrence count and avg price.
> Like all streaming expressions the gatherNodes expression can be combined 
> with other streaming expressions. For example the following expression uses a 
> hashJoin to intersect the network of friends rooted to authors found with 
> different queries:
> {code}
> hashInnerJoin(
>   gatherNodes(friends,
>   gatherNodes(friends,
>   search(articles, 
> q="body:(queryA)", fl="author"),
>   walk="author->user",
>   gather="friend"),
>   walk="friend->user",
>   gather="friend",
>   scatter="branches, leaves"),
>gatherNodes(friends,
>   gatherNodes(friends,
>

[jira] [Commented] (SOLR-9027) Add GraphTermsQuery to limit traversal on high frequency nodes

2016-05-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266890#comment-15266890
 ] 

ASF subversion and git services commented on SOLR-9027:
---

Commit df72df1c58a5884774d003eec2f71c27a4737896 in lucene-solr's branch 
refs/heads/branch_6x from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=df72df1 ]

SOLR-8986, SOLR-8925, SOLR-9027: Update CHANGES.txt

Conflicts:
solr/CHANGES.txt


> Add GraphTermsQuery to limit traversal on high frequency nodes
> --
>
> Key: SOLR-9027
> URL: https://issues.apache.org/jira/browse/SOLR-9027
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-9027.patch, SOLR-9027.patch, SOLR-9027.patch, 
> SOLR-9027.patch
>
>
> The gatherNodes() Streaming Expression is currently using a basic disjunction 
> query to perform the traversals. This ticket is to create a specific 
> GraphTermsQuery for performing the traversals. 
> The GraphTermsQuery will be based off of the TermsQuery, but will also 
> include an option for a docFreq cutoff. Terms that are above the docFreq 
> cutoff will not be included in the query. This will help users do a more 
> precise and efficient traversal.
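A toy sketch of the docFreq cutoff idea (hypothetical names, plain Java — not the GraphTermsQuery implementation, which works against Lucene's term dictionary): before building the traversal query, drop any term whose document frequency exceeds the cutoff so that extremely common nodes do not dominate the expansion.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Conceptual docFreq-cutoff filter. In the real query, docFreq would
// come from the index via TermsEnum rather than a precomputed map.
class DocFreqCutoffSketch {
    public static List<String> filterTerms(Map<String, Integer> docFreqByTerm,
                                           int maxDocFreq) {
        List<String> kept = new ArrayList<>();
        for (Map.Entry<String, Integer> e : docFreqByTerm.entrySet()) {
            if (e.getValue() <= maxDocFreq) kept.add(e.getKey());
        }
        return kept;
    }
}
```

In a social-graph traversal, for example, this is what keeps a "celebrity" node with millions of edges from blowing up the frontier while ordinary nodes still expand normally.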






[jira] [Commented] (SOLR-8986) Windows solr.cmd seems to require -p 8983

2016-05-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266888#comment-15266888
 ] 

ASF subversion and git services commented on SOLR-8986:
---

Commit df72df1c58a5884774d003eec2f71c27a4737896 in lucene-solr's branch 
refs/heads/branch_6x from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=df72df1 ]

SOLR-8986, SOLR-8925, SOLR-9027: Update CHANGES.txt

Conflicts:
solr/CHANGES.txt


> Windows solr.cmd seems to require -p 8983
> -
>
> Key: SOLR-8986
> URL: https://issues.apache.org/jira/browse/SOLR-8986
> Project: Solr
>  Issue Type: Bug
>Reporter: Bill Bell
> Attachments: start-solr-on-windows.png
>
>







[jira] [Assigned] (SOLR-9035) New cwiki page: IndexUpgrader

2016-05-02 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett reassigned SOLR-9035:
---

Assignee: Cassandra Targett

> New cwiki page: IndexUpgrader
> -
>
> Key: SOLR-9035
> URL: https://issues.apache.org/jira/browse/SOLR-9035
> Project: Solr
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 6.0
>Reporter: Bram Van Dam
>Assignee: Cassandra Targett
>  Labels: documentation
> Attachments: indexupgrader.html
>
>
> The cwiki does not contain any IndexUpgrader documentation, but it is 
> mentioned in passing in the "Major Changes" pages.
> I'm attaching a file containing some basic usage instructions and admonitions 
> found in the IndexUpgrader javadoc. 
> Once the page is created, it would ideally be linked to from the Major 
> Changes page as well as the Upgrading Solr page.






[jira] [Commented] (SOLR-9035) New cwiki page: IndexUpgrader

2016-05-02 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266864#comment-15266864
 ] 

Cassandra Targett commented on SOLR-9035:
-

I made a page from the HTML file at 
https://cwiki.apache.org/confluence/display/solr/IndexUpgrader+Tool.

For now, it's in the INTERNAL section to be moved into the published part of 
the Guide when we settle on where it should go. I'll take a look at that 
question at a later point in time; for now I just wanted to get the page 
created.

> New cwiki page: IndexUpgrader
> -
>
> Key: SOLR-9035
> URL: https://issues.apache.org/jira/browse/SOLR-9035
> Project: Solr
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 6.0
>Reporter: Bram Van Dam
>  Labels: documentation
> Attachments: indexupgrader.html
>
>
> The cwiki does not contain any IndexUpgrader documentation, but it is 
> mentioned in passing in the "Major Changes" pages.
> I'm attaching a file containing some basic usage instructions and admonitions 
> found in the IndexUpgrader javadoc. 
> Once the page is created, it would ideally be linked to from the Major 
> Changes page as well as the Upgrading Solr page.






[JENKINS] Lucene-Solr-SmokeRelease-5.5 - Build # 10 - Still Failing

2016-05-02 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.5/10/

No tests ran.

Build Log:
[...truncated 29038 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.5/build.xml:529: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.5/solr/build.xml:520:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.5/solr/build.xml:607:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.5/lucene/common-build.xml:2606:
 Can't get https://issues.apache.org/jira/rest/api/2/project/SOLR to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.5/solr/build/docs/changes/jiraVersionList.json

Total time: 6 minutes 46 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8





Re: [VOTE] Release Lucene/Solr 5.5.1

2016-05-02 Thread Yonik Seeley
+1

-Yonik


On Sat, Apr 30, 2016 at 5:25 PM, Anshum Gupta  wrote:
> Please vote for the RC1 release candidate for Lucene/Solr 5.5.1.
>
> Artifacts:
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.5.1-RC1-revc08f17bca0d9cbf516874d13d221ab100e5b7d58
>
> Smoke tester:
>
>   python3 -u dev-tools/scripts/smokeTestRelease.py
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.5.1-RC1-revc08f17bca0d9cbf516874d13d221ab100e5b7d58
>
>
> Here's my +1:
>
> SUCCESS! [0:26:44.452268]
>
> --
> Anshum Gupta




[jira] [Updated] (SOLR-9053) Upgrade fileupload-commons to 1.3.1

2016-05-02 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated SOLR-9053:

Attachment: SOLR-9053.patch

Patch to update the version to 1.3.1

> Upgrade fileupload-commons to 1.3.1
> ---
>
> Key: SOLR-9053
> URL: https://issues.apache.org/jira/browse/SOLR-9053
> Project: Solr
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 4.6, 5.5, trunk
>Reporter: Jeff Field
>  Labels: commons-file-upload
> Attachments: SOLR-9053.patch
>
>
> The project appears to pull in FileUpload 1.2.1. According to CVE-2014-0050:
> "MultipartStream.java in Apache Commons FileUpload before 1.3.1, as used in 
> Apache Tomcat, JBoss Web, and other products, allows remote attackers to 
> cause a denial of service (infinite loop and CPU consumption) via a crafted 
> Content-Type header that bypasses a loop's intended exit conditions."
> [Source|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0050]






Re: [VOTE] Release Lucene/Solr 5.5.1

2016-05-02 Thread Steve Rowe
+1

Docs, changes and javadocs look good.

The smoke tester passed for me (with java8): SUCCESS! [0:45:46.037606]

--
Steve
www.lucidworks.com

> On Apr 30, 2016, at 5:25 PM, Anshum Gupta  wrote:
> 
> Please vote for the RC1 release candidate for Lucene/Solr 5.5.1.
> 
> Artifacts:
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.5.1-RC1-revc08f17bca0d9cbf516874d13d221ab100e5b7d58
> 
> Smoke tester:
> 
>   python3 -u dev-tools/scripts/smokeTestRelease.py 
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.5.1-RC1-revc08f17bca0d9cbf516874d13d221ab100e5b7d58
> 
> 
> Here's my +1:
> 
> SUCCESS! [0:26:44.452268]
> 
> -- 
> Anshum Gupta





Re: [VOTE] Release Lucene/Solr 5.5.1

2016-05-02 Thread Timothy Potter
+1 SUCCESS! [0:51:00.367685]

On Sat, Apr 30, 2016 at 3:25 PM, Anshum Gupta  wrote:
> Please vote for the RC1 release candidate for Lucene/Solr 5.5.1.
>
> Artifacts:
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.5.1-RC1-revc08f17bca0d9cbf516874d13d221ab100e5b7d58
>
> Smoke tester:
>
>   python3 -u dev-tools/scripts/smokeTestRelease.py
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.5.1-RC1-revc08f17bca0d9cbf516874d13d221ab100e5b7d58
>
>
> Here's my +1:
>
> SUCCESS! [0:26:44.452268]
>
> --
> Anshum Gupta




Re: [VOTE] Release Lucene/Solr 5.5.1

2016-05-02 Thread Anshum Gupta
I can't reproduce that failure even with the seed. I don't think it's a
blocker and we should be good to go.

@Noble: Considering you have a better grip on this test, can you confirm?

I think we should track this in a JIRA. Perhaps just an epic with sub-tasks
about failing tests is what we need right now to get a grip on the tests
situation.

On Mon, May 2, 2016 at 8:07 AM, Shai Erera  wrote:

> When I ran the smoke tester for the first time, I encountered this test
> failure:
>
> [junit4] Suite: org.apache.solr.security.TestPKIAuthenticationPlugin
> [junit4] 2> Creating dataDir:
> /tmp/smoke_lucene_5.5.1_c08f17bca0d9cbf516874d13d221ab100e5b7d58_3/unpack/solr-5.5.1/solr/build/solr-core/test/J3/temp/solr.security.TestPKIAuthenticationPlugin_4643E7DFA3C28AD5-001/init-core-data-001
> [junit4] 2> 48028 INFO
> (SUITE-TestPKIAuthenticationPlugin-seed#[4643E7DFA3C28AD5]-worker) [ ]
> o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false)
> [junit4] 2> 48031 INFO
> (TEST-TestPKIAuthenticationPlugin.test-seed#[4643E7DFA3C28AD5]) [ ]
> o.a.s.SolrTestCaseJ4 ###Starting test
> [junit4] 2> 48323 ERROR
> (TEST-TestPKIAuthenticationPlugin.test-seed#[4643E7DFA3C28AD5]) [ ]
> o.a.s.s.PKIAuthenticationPlugin No SolrAuth header present
> [junit4] 2> 48377 ERROR
> (TEST-TestPKIAuthenticationPlugin.test-seed#[4643E7DFA3C28AD5]) [ ]
> o.a.s.s.PKIAuthenticationPlugin Invalid key
> [junit4] 2> 48377 INFO
> (TEST-TestPKIAuthenticationPlugin.test-seed#[4643E7DFA3C28AD5]) [ ]
> o.a.s.SolrTestCaseJ4 ###Ending test
> [junit4] 2> NOTE: reproduce with: ant test
> -Dtestcase=TestPKIAuthenticationPlugin -Dtests.method=test
> -Dtests.seed=4643E7DFA3C28AD5 -Dtests.locale=ja-JP
> -Dtests.timezone=Australia/Lindeman -Dtests.asserts=true
> -Dtests.file.encoding=US-ASCII
> [junit4] ERROR 0.35s J3 | TestPKIAuthenticationPlugin.test <<<
> [junit4] > Throwable #1: java.lang.NullPointerException
> [junit4] > at
> __randomizedtesting.SeedInfo.seed([4643E7DFA3C28AD5:CE17D8050D3EE72D]:0)
> [junit4] > at
> org.apache.solr.security.TestPKIAuthenticationPlugin.test(TestPKIAuthenticationPlugin.java:156)
> [junit4] > at java.lang.Thread.run(Thread.java:745)
> [junit4] 2> 48379 INFO
> (SUITE-TestPKIAuthenticationPlugin-seed#[4643E7DFA3C28AD5]-worker) [ ]
> o.a.s.SolrTestCaseJ4 ###deleteCore
> [junit4] 2> NOTE: leaving temporary files on disk at:
> /tmp/smoke_lucene_5.5.1_c08f17bca0d9cbf516874d13d221ab100e5b7d58_3/unpack/solr-5.5.1/solr/build/solr-core/test/J3/temp/solr.security.TestPKIAuthenticationPlugin_4643E7DFA3C28AD5-001
> [junit4] 2> NOTE: test params are: codec=Asserting(Lucene54): {},
> docValues:{}, sim=DefaultSimilarity, locale=ja-JP,
> timezone=Australia/Lindeman
> [junit4] 2> NOTE: Linux 4.2.0-30-generic amd64/Oracle Corporation 1.7.0_80
> (64-bit)/cpus=8,threads=1,free=161219560,total=432537600
> [junit4] 2> NOTE: All tests run in this JVM: [TestAtomicUpdateErrorCases,
> TestDefaultStatsCache, TestFiltering, PluginInfoTest,
> HdfsWriteToMultipleCollectionsTest, DistributedFacetPivotSmallAdvancedTest,
> ConnectionManagerTest, TestJoin, ShardRoutingTest,
> WrapperMergePolicyFactoryTest, IndexSchemaRuntimeFieldTest,
> TestClassNameShortening, SimpleCollectionCreateDeleteTest,
> TestManagedResource, BigEndianAscendingWordDeserializerTest,
> HdfsRestartWhileUpdatingTest, TestSolrDeletionPolicy1, TestConfigReload,
> TestSolrJ, TestIndexingPerformance, TestInitQParser,
> AlternateDirectoryTest, TestConfigOverlay, TestCSVResponseWriter,
> SpatialRPTFieldTypeTest, SolrIndexSplitterTest, DistributedVersionInfoTest,
> TestSmileRequest, TestPKIAuthenticationPlugin]
>
> Second time it passed. I didn't have time to dig into the failure, so I
> can't tell if it should hold off the release. What do you think?
>
> Shai
>
> On Sun, May 1, 2016 at 12:26 AM Anshum Gupta 
> wrote:
>
>> Please vote for the RC1 release candidate for Lucene/Solr 5.5.1.
>>
>> Artifacts:
>>
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.5.1-RC1-revc08f17bca0d9cbf516874d13d221ab100e5b7d58
>>
>> Smoke tester:
>>
>>   python3 -u dev-tools/scripts/smokeTestRelease.py
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.5.1-RC1-revc08f17bca0d9cbf516874d13d221ab100e5b7d58
>>
>>
>> Here's my +1:
>>
>> SUCCESS! [0:26:44.452268]
>>
>>
>> --
>> Anshum Gupta
>>
>


-- 
Anshum Gupta


[jira] [Commented] (SOLR-9027) Add GraphTermsQuery to limit traversal on high frequency nodes

2016-05-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15266798#comment-15266798
 ] 

ASF subversion and git services commented on SOLR-9027:
---

Commit 62a28dd0c7dc8f41e43d5c37e28c968556b8e9d2 in lucene-solr's branch 
refs/heads/master from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=62a28dd ]

SOLR-8986, SOLR-8925, SOLR-9027: Update CHANGES.txt


> Add GraphTermsQuery to limit traversal on high frequency nodes
> --
>
> Key: SOLR-9027
> URL: https://issues.apache.org/jira/browse/SOLR-9027
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-9027.patch, SOLR-9027.patch, SOLR-9027.patch, 
> SOLR-9027.patch
>
>
> The gatherNodes() Streaming Expression is currently using a basic disjunction 
> query to perform the traversals. This ticket is to create a specific 
> GraphTermsQuery for performing the traversals. 
> The GraphTermsQuery will be based on the TermsQuery, but will also 
> include an option for a docFreq cutoff. Terms that are above the docFreq 
> cutoff will not be included in the query. This will help users do a more 
> precise and efficient traversal.
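The docFreq cutoff described above can be sketched outside Lucene with plain collections; GraphTermsQuery itself reads term statistics from the index, but the filtering rule is the same. The class and method names below are illustrative, not Solr APIs:

```java
import java.util.*;
import java.util.stream.*;

public class DocFreqCutoffSketch {
    // Keep only terms whose document frequency is at or below the cutoff.
    // Terms above the cutoff (high-frequency "hub" nodes) are dropped so a
    // traversal does not fan out across their huge posting lists.
    static List<String> filterByDocFreq(Map<String, Integer> docFreq,
                                        Collection<String> terms,
                                        int maxDocFreq) {
        return terms.stream()
                    .filter(t -> docFreq.getOrDefault(t, 0) <= maxDocFreq)
                    .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Map<String, Integer> df = Map.of("alice", 3, "bob", 120, "carol", 9);
        List<String> kept = filterByDocFreq(df, List.of("alice", "bob", "carol"), 10);
        System.out.println(kept); // [alice, carol]
    }
}
```

Dropping the high-frequency terms bounds the fan-out of each traversal step, at the cost of ignoring edges that pass through hub nodes.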






[jira] [Commented] (SOLR-8986) Windows solr.cmd seems to require -p 8983

2016-05-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15266796#comment-15266796
 ] 

ASF subversion and git services commented on SOLR-8986:
---

Commit 62a28dd0c7dc8f41e43d5c37e28c968556b8e9d2 in lucene-solr's branch 
refs/heads/master from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=62a28dd ]

SOLR-8986, SOLR-8925, SOLR-9027: Update CHANGES.txt


> Windows solr.cmd seems to require -p 8983
> -
>
> Key: SOLR-8986
> URL: https://issues.apache.org/jira/browse/SOLR-8986
> Project: Solr
>  Issue Type: Bug
>Reporter: Bill Bell
> Attachments: start-solr-on-windows.png
>
>







[jira] [Commented] (SOLR-8925) Add gatherNodes Streaming Expression to support breadth first traversals

2016-05-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15266797#comment-15266797
 ] 

ASF subversion and git services commented on SOLR-8925:
---

Commit 62a28dd0c7dc8f41e43d5c37e28c968556b8e9d2 in lucene-solr's branch 
refs/heads/master from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=62a28dd ]

SOLR-8986, SOLR-8925, SOLR-9027: Update CHANGES.txt


> Add gatherNodes Streaming Expression to support breadth first traversals
> 
>
> Key: SOLR-8925
> URL: https://issues.apache.org/jira/browse/SOLR-8925
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.1
>
> Attachments: SOLR-8925.patch, SOLR-8925.patch, SOLR-8925.patch, 
> SOLR-8925.patch, SOLR-8925.patch, SOLR-8925.patch, SOLR-8925.patch, 
> SOLR-8925.patch
>
>
> The gatherNodes Streaming Expression is a flexible general purpose breadth 
> first graph traversal. It uses the same parallel join under the covers as 
> (SOLR-) but is much more generalized and can be used for a wide range of 
> use cases.
> Sample syntax:
> {code}
>  gatherNodes(friends,
>  gatherNodes(friends,
>  search(articles, q=“body:(queryA)”, fl=“author”),
>  walk ="author->user”,
>  gather="friend"),
>  walk=“friend->user”,
>  gather="friend",
>  scatter=“branches, leaves”)
> {code}
> The expression above is evaluated as follows:
> 1) The inner search() expression is evaluated on the *articles* collection, 
> emitting a Stream of Tuples with the author field populated.
> 2) The inner gatherNodes() expression reads the Tuples from the search() 
> stream and traverses to the *friends* collection by performing a distributed 
> join between articles.author and friends.user field.  It gathers the value 
> from the *friend* field during the join.
> 3) The inner gatherNodes() expression then emits the *friend* Tuples. By 
> default the gatherNodes function emits only the leaves which in this case are 
> the *friend* tuples.
> 4) The outer gatherNodes() expression reads the *friend* Tuples and traverses 
> again in the "friends" collection, this time performing the join with the 
> *friend* Tuples emitted in step 3. This collects the friends of friends.
> 5) The outer gatherNodes() expression emits the entire graph that was 
> collected. This is controlled by the "scatter" parameter. In the example the 
> *root* nodes are the authors, the *branches* are the authors' friends, and the 
> *leaves* are the friends of friends.
> This traversal is fully distributed and cross collection.
> *Aggregations* are also supported during the traversal. This can be useful 
> for making recommendations based on co-occurrence counts. Sample syntax:
> {code}
> top(
>   gatherNodes(baskets,
>   search(baskets, q=“prodid:X”, fl=“basketid”, rows=“500”, 
> sort=“random_7897987 asc”),
>   walk =“basketid->basketid”,
>   gather=“prodid”,
>   fl=“prodid, price”,
>   count(*),
>   avg(price)),
>   n=4,
>   sort=“count(*) desc, avg(price) asc”)
> {code}
> In the expression above, the inner search() function searches the basket 
> collection for 500 random basketId's that have the prodid X.
> gatherNodes then traverses the basket collection and gathers all the prodid's 
> for the selected basketIds.
> It also aggregates the counts and average price for each productid collected. 
> The count reflects the co-occurrence count for each prodid gathered and prodid 
> X. The outer *top* expression selects the top 4 prodid's emitted from 
> gatherNodes, based on the co-occurrence count and avg price.
> Like all streaming expressions the gatherNodes expression can be combined 
> with other streaming expressions. For example the following expression uses a 
> hashJoin to intersect the network of friends rooted to authors found with 
> different queries:
> {code}
> hashInnerJoin(
>   gatherNodes(friends,
>   gatherNodes(friends,
>   search(articles, 
> q=“body:(queryA)”, fl=“author”),
>   walk ="author->user”,
>   gather="friend"),
>   walk=“friend->user”,
>   gather="friend",
>   scatter=“branches, leaves”),
>gatherNodes(friends,
>   gatherNodes(friends,
>   search(articles, 
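The two-hop friends traversal described in steps 1-5 above is, at heart, a breadth-first join. A minimal in-memory sketch follows; the data, class, and method names are invented for illustration, and the real implementation performs a distributed, cross-collection join rather than a map lookup:

```java
import java.util.*;

public class GatherNodesSketch {
    // One hop of a gatherNodes-style traversal: for each node in the frontier,
    // join on the "walk" field and collect the values of the "gather" field.
    static Set<String> gather(Map<String, List<String>> edges, Set<String> frontier) {
        Set<String> out = new LinkedHashSet<>();
        for (String node : frontier) {
            out.addAll(edges.getOrDefault(node, List.of()));
        }
        return out;
    }

    public static void main(String[] args) {
        // The friends collection flattened as user -> [friend, ...]
        Map<String, List<String>> friends = Map.of(
            "authorA", List.of("u1", "u2"),
            "u1", List.of("u3"),
            "u2", List.of("u3", "u4"));

        Set<String> roots = Set.of("authorA");     // from the inner search()
        Set<String> hop1 = gather(friends, roots); // the author's friends
        Set<String> hop2 = gather(friends, hop1);  // friends of friends
        System.out.println(hop1);                  // [u1, u2]
        System.out.println(hop2);                  // [u3, u4]
    }
}
```

The scatter parameter then decides which of roots, hop1 (branches), and hop2 (leaves) are emitted.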

[jira] [Assigned] (SOLR-8744) SplitShard needs finer-grained mutual exclusion

2016-05-02 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul reassigned SOLR-8744:


Assignee: Noble Paul

> SplitShard needs finer-grained mutual exclusion
> ---
>
> Key: SOLR-8744
> URL: https://issues.apache.org/jira/browse/SOLR-8744
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 5.4.1
>Reporter: Scott Blum
>Assignee: Noble Paul
>  Labels: sharding, solrcloud
>
> SplitShard creates a mutex over the whole collection, but in practice this is 
> a big scaling problem.  Multiple split shard operations could happen at the 
> same time, as long as different shards are being split.  In practice, those 
> shards often reside on different machines, so there's no I/O bottleneck in 
> those cases, just the mutex in Overseer forcing the operations to be done 
> serially.
> Given that a single split can take many minutes on a large collection, this 
> is a bottleneck at scale.






[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3243 - Failure!

2016-05-02 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3243/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.search.TestSolr4Spatial2.testRptWithGeometryField

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([1D3E1D8A502FFD7B:3B29A57C6D7EA419]:0)
at 
org.apache.lucene.util.DocIdSetBuilder.<init>(DocIdSetBuilder.java:92)
at 
org.apache.lucene.spatial.composite.IntersectsRPTVerifyQuery$IntersectsDifferentiatingQuery$IntersectsDifferentiatingVisitor.<init>(IntersectsRPTVerifyQuery.java:166)
at 
org.apache.lucene.spatial.composite.IntersectsRPTVerifyQuery$IntersectsDifferentiatingQuery.compute(IntersectsRPTVerifyQuery.java:157)
at 
org.apache.lucene.spatial.composite.IntersectsRPTVerifyQuery$1.scorer(IntersectsRPTVerifyQuery.java:95)
at org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
at 
org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:260)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1810)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1627)
at 
org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:643)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:529)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:292)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2016)
at org.apache.solr.util.TestHarness.query(TestHarness.java:310)
at org.apache.solr.util.TestHarness.query(TestHarness.java:292)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:851)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:820)
at 
org.apache.solr.search.TestSolr4Spatial2.testRptWithGeometryField(TestSolr4Spatial2.java:139)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_92) - Build # 16641 - Failure!

2016-05-02 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16641/
Java: 64bit/jdk1.8.0_92 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.lucene.search.TestPointQueries

Error Message:
The test or suite printed 13886 bytes to stdout and stderr, even though the 
limit was set to 8192 bytes. Increase the limit with @Limit, ignore it 
completely with @SuppressSysoutChecks or run with -Dtests.verbose=true

Stack Trace:
java.lang.AssertionError: The test or suite printed 13886 bytes to stdout and 
stderr, even though the limit was set to 8192 bytes. Increase the limit with 
@Limit, ignore it completely with @SuppressSysoutChecks or run with 
-Dtests.verbose=true
at __randomizedtesting.SeedInfo.seed([853BD8FA355081B0]:0)
at 
org.apache.lucene.util.TestRuleLimitSysouts.afterIfSuccessful(TestRuleLimitSysouts.java:211)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterIfSuccessful(TestRuleAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:37)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.lucene.search.TestPointQueries.testRandomBinaryTiny

Error Message:
Captured an uncaught exception in thread: Thread[id=673, name=T4, 
state=RUNNABLE, group=TGRP-TestPointQueries]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=673, name=T4, state=RUNNABLE, 
group=TGRP-TestPointQueries]
Caused by: java.lang.AssertionError
at __randomizedtesting.SeedInfo.seed([853BD8FA355081B0]:0)
at 
org.apache.lucene.util.DocIdSetBuilder.<init>(DocIdSetBuilder.java:110)
at 
org.apache.lucene.util.DocIdSetBuilder.<init>(DocIdSetBuilder.java:98)
at 
org.apache.lucene.search.PointRangeQuery$1.buildMatchingDocIdSet(PointRangeQuery.java:109)
at 
org.apache.lucene.search.PointRangeQuery$1.scorer(PointRangeQuery.java:213)
at org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
at 
org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.bulkScorer(LRUQueryCache.java:754)
at 
org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)
at 
org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
at 
org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:91)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
at 
org.apache.lucene.search.TestPointQueries$2._run(TestPointQueries.java:805)
at 
org.apache.lucene.search.TestPointQueries$2.run(TestPointQueries.java:758)




Build Log:
[...truncated 632 lines...]
   [junit4] Suite: org.apache.lucene.search.TestPointQueries
   [junit4] IGNOR/A 0.00s J2 | TestPointQueries.testRandomLongsBig
   [junit4]> Assumption #1: 'nightly' test group is disabled (@Nightly())
   [junit4]   2> may 02, 2016 12:09:59 PM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
   [junit4]   2> WARNING: Uncaught exception in thread: 
Thread[T0,5,TGRP-TestPointQueries]
   [junit4]   2> java.lang.AssertionError
   [junit4]   2>at 
__randomizedtesting.SeedInfo.seed([853BD8FA355081B0]:0)
   [junit4]   2>at 
org.apache.lucene.util.DocIdSetBuilder.<init>(DocIdSetBuilder.java:110)
   [junit4]   2>at 
org.apache.lucene.util.DocIdSetBuilder.<init>(DocIdSetBuilder.java:98)
   [junit4]   2>at 
org.apache.lucene.search.PointRangeQuery$1.buildMatchingDocIdSet(PointRangeQuery.java:109)
   [junit4]   2>at 
org.apache.lucene.search.PointRangeQuery$1.scorer(PointRangeQuery.java:213)
   [junit4]   2>at 
org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
   [junit4]   2>at 
org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.bulkScorer(LRUQueryCache.java:769)
   [junit4]   2>at 
org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)
   [junit4]   2>at 
org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)
   [junit4]   2>at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
   [junit4]  

[jira] [Commented] (LUCENE-7253) Sparse data in doc values and segments merging

2016-05-02 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15266764#comment-15266764
 ] 

Shai Erera commented on LUCENE-7253:


To add to the sparsity discussion, when I did the numeric DV updates I already 
wrote (somewhere) that I think if we could cater for sparse DV fields better, 
it might also improve the numeric DV updates case. Today when you update a 
numeric DV field, we rewrite the entire DV for that field in the "stacked" DV. 
This works well if you perform many updates before you flush/commit, but if you 
only update the value of one document, that's costly. If we could write just 
that one update to a stack, we could _collapse_ the stacks at read time.

Of course, that _collapsing_ might slow searches down, so the whole idea of 
writing just the updated values needs to be benchmarked before we actually do 
it, so I'm not proposing that here. Just wanted to give another (potential) use 
case for sparse DV fields.

And FWIW, I do agree with [~yo...@apache.org] and [~dsmiley] about sparse DV 
not being an abuse case, as I'm seeing them very often too. That's of course 
unless you mean something else by abuse case...

> Sparse data in doc values and segments merging 
> ---
>
> Key: LUCENE-7253
> URL: https://issues.apache.org/jira/browse/LUCENE-7253
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 5.5, 6.0
>Reporter: Pawel Rog
>  Labels: performance
>
> Doc Values were optimized recently to efficiently store sparse data. 
> Unfortunately there is still a big problem with Doc Values merges for sparse 
> fields. In a 1-billion-document index it doesn't matter whether all documents 
> have a value for the field or only 1 document does; segment merge time is the 
> same in both cases. In most cases this is 
> not a problem but there are several cases in which one can expect having many 
> fields with sparse doc values.
> I can describe an example. During performance tests of a system with a large 
> number of sparse fields I realized that Doc Values merges were a bottleneck. I 
> had hundreds of different numeric fields. Each document contained only a small 
> subset of all fields; the average document contained 5-7 different numeric 
> values. As you can see, data was very sparse in these fields. It turned out that 
> ingestion process was CPU-bound. Most of CPU time was spent in DocValues 
> related methods (SingletonSortedNumericDocValues#setDocument, 
> DocValuesConsumer$10$1#next, DocValuesConsumer#isSingleValued, 
> DocValuesConsumer$4$1#setNext, ...) - mostly during merging segments.
> Adrien Grand suggested reducing the number of sparse fields and replacing them 
> with a smaller number of denser fields. This helped a lot but complicated 
> field naming. 
> I am not very familiar with the Doc Values source code, but I have a small 
> suggestion for improving Doc Values merges for sparse fields. I realized 
> that Doc Values producers and consumers use Iterators. Let's take the example 
> of numeric Doc Values. Would it be possible to replace the Iterator which 
> "travels" through all documents with an Iterator over the collection of 
> non-empty values? Of course this would require storing an object (instead of 
> a numeric) which contains the value and document ID. Such an iterator could 
> significantly improve merge time for sparse Doc Values fields. IMHO this 
> won't cause big overhead for dense structures, but it can be a game changer 
> for sparse structures.
> This is what happens in NumericDocValuesWriter on flush
> {code}
> dvConsumer.addNumericField(fieldInfo,
>                            new Iterable<Number>() {
>                              @Override
>                              public Iterator<Number> iterator() {
>                                return new NumericIterator(maxDoc, values, docsWithField);
>                              }
>                            });
> {code}
> Before this happens during addValue, this loop is executed to fill holes.
> {code}
> // Fill in any holes:
> for (int i = (int)pending.size(); i < docID; ++i) {
>   pending.add(MISSING);
> }
> {code}
> It turns out that the variable called pending is used only internally in 
> NumericDocValuesWriter. I know pending is a PackedLongValues and it wouldn't 
> be good to replace it with a different class (some kind of list) because this 
> may break DV performance for dense fields. I hope someone can suggest 
> interesting solutions for this problem :).
> It would be great if discussion about sparse Doc Values merge performance can 
> start here.
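To make the cost difference concrete, here is a toy model contrasting a sparse list of (docID, value) pairs with the dense, hole-filled representation the writer currently materializes. This is illustration only, not the Lucene codec API:

```java
import java.util.*;

public class SparseIteratorSketch {
    // A non-empty value paired with its document ID, as the issue proposes.
    record DocValue(int docID, long value) {}

    // The current dense style: one slot per document up to maxDoc, with holes
    // filled by a sentinel, regardless of how few documents carry a value.
    static long[] dense(List<DocValue> values, int maxDoc, long missing) {
        long[] out = new long[maxDoc];
        Arrays.fill(out, missing);
        for (DocValue dv : values) out[dv.docID()] = dv.value();
        return out;
    }

    public static void main(String[] args) {
        // Only 2 of 1000 docs carry a value: the sparse list stays at 2 entries,
        // while the dense form must materialize (and later merge) all 1000 slots.
        List<DocValue> sparse = List.of(new DocValue(3, 42L), new DocValue(997, 7L));
        long[] denseForm = dense(sparse, 1000, Long.MIN_VALUE);
        System.out.println(sparse.size() + " vs " + denseForm.length); // 2 vs 1000
    }
}
```

Merging sparse lists is proportional to the number of values present, whereas merging the dense form is proportional to maxDoc, which is the bottleneck described above.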




Re: [VOTE] Release Lucene/Solr 5.5.1

2016-05-02 Thread Shai Erera
When I ran the smoke tester for the first time, I encountered this test
failure:

[junit4] Suite: org.apache.solr.security.TestPKIAuthenticationPlugin
[junit4] 2> Creating dataDir:
/tmp/smoke_lucene_5.5.1_c08f17bca0d9cbf516874d13d221ab100e5b7d58_3/unpack/solr-5.5.1/solr/build/solr-core/test/J3/temp/solr.security.TestPKIAuthenticationPlugin_4643E7DFA3C28AD5-001/init-core-data-001
[junit4] 2> 48028 INFO
(SUITE-TestPKIAuthenticationPlugin-seed#[4643E7DFA3C28AD5]-worker) [ ]
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false)
[junit4] 2> 48031 INFO
(TEST-TestPKIAuthenticationPlugin.test-seed#[4643E7DFA3C28AD5]) [ ]
o.a.s.SolrTestCaseJ4 ###Starting test
[junit4] 2> 48323 ERROR
(TEST-TestPKIAuthenticationPlugin.test-seed#[4643E7DFA3C28AD5]) [ ]
o.a.s.s.PKIAuthenticationPlugin No SolrAuth header present
[junit4] 2> 48377 ERROR
(TEST-TestPKIAuthenticationPlugin.test-seed#[4643E7DFA3C28AD5]) [ ]
o.a.s.s.PKIAuthenticationPlugin Invalid key
[junit4] 2> 48377 INFO
(TEST-TestPKIAuthenticationPlugin.test-seed#[4643E7DFA3C28AD5]) [ ]
o.a.s.SolrTestCaseJ4 ###Ending test
[junit4] 2> NOTE: reproduce with: ant test
-Dtestcase=TestPKIAuthenticationPlugin -Dtests.method=test
-Dtests.seed=4643E7DFA3C28AD5 -Dtests.locale=ja-JP
-Dtests.timezone=Australia/Lindeman -Dtests.asserts=true
-Dtests.file.encoding=US-ASCII
[junit4] ERROR 0.35s J3 | TestPKIAuthenticationPlugin.test <<<
[junit4] > Throwable #1: java.lang.NullPointerException
[junit4] > at
__randomizedtesting.SeedInfo.seed([4643E7DFA3C28AD5:CE17D8050D3EE72D]:0)
[junit4] > at
org.apache.solr.security.TestPKIAuthenticationPlugin.test(TestPKIAuthenticationPlugin.java:156)
[junit4] > at java.lang.Thread.run(Thread.java:745)
[junit4] 2> 48379 INFO
(SUITE-TestPKIAuthenticationPlugin-seed#[4643E7DFA3C28AD5]-worker) [ ]
o.a.s.SolrTestCaseJ4 ###deleteCore
[junit4] 2> NOTE: leaving temporary files on disk at:
/tmp/smoke_lucene_5.5.1_c08f17bca0d9cbf516874d13d221ab100e5b7d58_3/unpack/solr-5.5.1/solr/build/solr-core/test/J3/temp/solr.security.TestPKIAuthenticationPlugin_4643E7DFA3C28AD5-001
[junit4] 2> NOTE: test params are: codec=Asserting(Lucene54): {},
docValues:{}, sim=DefaultSimilarity, locale=ja-JP,
timezone=Australia/Lindeman
[junit4] 2> NOTE: Linux 4.2.0-30-generic amd64/Oracle Corporation 1.7.0_80
(64-bit)/cpus=8,threads=1,free=161219560,total=432537600
[junit4] 2> NOTE: All tests run in this JVM: [TestAtomicUpdateErrorCases,
TestDefaultStatsCache, TestFiltering, PluginInfoTest,
HdfsWriteToMultipleCollectionsTest, DistributedFacetPivotSmallAdvancedTest,
ConnectionManagerTest, TestJoin, ShardRoutingTest,
WrapperMergePolicyFactoryTest, IndexSchemaRuntimeFieldTest,
TestClassNameShortening, SimpleCollectionCreateDeleteTest,
TestManagedResource, BigEndianAscendingWordDeserializerTest,
HdfsRestartWhileUpdatingTest, TestSolrDeletionPolicy1, TestConfigReload,
TestSolrJ, TestIndexingPerformance, TestInitQParser,
AlternateDirectoryTest, TestConfigOverlay, TestCSVResponseWriter,
SpatialRPTFieldTypeTest, SolrIndexSplitterTest, DistributedVersionInfoTest,
TestSmileRequest, TestPKIAuthenticationPlugin]

Second time it passed. I didn't have time to dig into the failure, so I
can't tell if it should hold off the release. What do you think?

Shai

On Sun, May 1, 2016 at 12:26 AM Anshum Gupta  wrote:

> Please vote for the RC1 release candidate for Lucene/Solr 5.5.1.
>
> Artifacts:
>
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.5.1-RC1-revc08f17bca0d9cbf516874d13d221ab100e5b7d58
>
> Smoke tester:
>
>   python3 -u dev-tools/scripts/smokeTestRelease.py
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.5.1-RC1-revc08f17bca0d9cbf516874d13d221ab100e5b7d58
>
>
> Here's my +1:
>
> SUCCESS! [0:26:44.452268]
>
>
> --
> Anshum Gupta
>


[jira] [Commented] (SOLR-9054) The new GUI is using hardcoded paths

2016-05-02 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15266740#comment-15266740
 ] 

Cassandra Targett commented on SOLR-9054:
-

Is this a duplicate of SOLR-9000?

> The new GUI is using hardcoded paths
> 
>
> Key: SOLR-9054
> URL: https://issues.apache.org/jira/browse/SOLR-9054
> Project: Solr
>  Issue Type: Bug
>  Components: web gui
>Affects Versions: 6.0
>Reporter: Valerio Di Cagno
>
> If Apache Solr 6.0 is started without the default context root "/solr",
> none of the admin services work properly and it is not possible to use the 
> provided links to go back to the old GUI.
> In the JavaScript files the parameter config.solr_path is sometimes ignored
> or replaced with the value /solr, returning 404 on access.
> Affected files: 
> solr-webapp/webapp/js/services.js
> Suggested solution:
> Complete the integration with /js/scripts/app.js






[jira] [Commented] (SOLR-8996) Add Random Streaming Expression

2016-05-02 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15266725#comment-15266725
 ] 

Dennis Gove commented on SOLR-8996:
---

Looks good to me. Thank you for putting these in.

> Add Random Streaming Expression
> ---
>
> Key: SOLR-8996
> URL: https://issues.apache.org/jira/browse/SOLR-8996
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.1
>
> Attachments: RandomStream.java, SOLR-8996.patch
>
>
> The random Streaming Expression will return a *limited* random stream of 
> Tuples that match a query. This will be useful in many different scenarios 
> where random data sets are needed.
> Proposed syntax:
> {code}
> random(baskets, q="productID:productX", rows="100", fl="basketID") 
> {code}
> The sample code above will query the *baskets* collection and return 100 
> random *basketID's* where the productID is productX.
> The underlying implementation will rely on Solr's random field type.
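One way to model what a random field type does is to sort matching documents by a seeded hash of their key and keep the first rows results; a different seed yields a different, but stable, sample. The sketch below is plain Java with invented names, not SolrJ or the actual field type implementation:

```java
import java.util.*;
import java.util.stream.*;

public class RandomSampleSketch {
    // Mimic sorting on a seeded random field: derive a per-request key by
    // hashing each document id with the seed, sort by it, keep `rows` docs.
    static List<String> sample(List<String> matchingIds, long seed, int rows) {
        return matchingIds.stream()
                .sorted(Comparator.comparingLong(id -> Objects.hash(id, seed)))
                .limit(rows)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> baskets = IntStream.range(0, 10)
                .mapToObj(i -> "basket" + i)
                .collect(Collectors.toList());
        System.out.println(sample(baskets, 7897987L, 3).size()); // 3
    }
}
```

Because the sort key is derived from the id and seed rather than stored state, the same seed reproduces the same sample across shards.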





