[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3237 - Failure!

2016-04-28 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3237/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.handler.TestReqParamsAPI.test

Error Message:
Could not get expected value  'null' for path 'response/params/y/p' full 
output: {   "responseHeader":{ "status":0, "QTime":0},   "response":{   
  "znodeVersion":4, "params":{   "x":{ "a":"A val", 
"b":"B val", "_appends_":{"add":"first"}, 
"_invariants_":{"fixed":"f"}, "":{"v":1}},   "y":{ "p":"P 
val", "q":"Q val", "":{"v":2},  from server:  
http://127.0.0.1:54137/collection1

Stack Trace:
java.lang.AssertionError: Could not get expected value  'null' for path 
'response/params/y/p' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "response":{
"znodeVersion":4,
"params":{
  "x":{
"a":"A val",
"b":"B val",
"_appends_":{"add":"first"},
"_invariants_":{"fixed":"f"},
"":{"v":1}},
  "y":{
"p":"P val",
"q":"Q val",
"":{"v":2},  from server:  http://127.0.0.1:54137/collection1
at 
__randomizedtesting.SeedInfo.seed([2AF64E9075DE3853:A2A2714ADB2255AB]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:457)
at 
org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:259)
at 
org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)

[JENKINS] Lucene-Solr-NightlyTests-5.5 - Build # 6 - Still Failing

2016-04-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.5/6/

5 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=19709, name=Thread-10623, 
state=RUNNABLE, group=TGRP-FullSolrCloudDistribCmdsTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=19709, name=Thread-10623, state=RUNNABLE, 
group=TGRP-FullSolrCloudDistribCmdsTest]
Caused by: java.lang.RuntimeException: 
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:34603/cpg/collection1
at __randomizedtesting.SeedInfo.seed([1B8F3C17723282CC]:0)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest$1IndexThread.run(FullSolrCloudDistribCmdsTest.java:644)
Caused by: org.apache.solr.client.solrj.SolrServerException: Timeout occured 
while waiting response from server at: http://127.0.0.1:34603/cpg/collection1
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:586)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest$1IndexThread.run(FullSolrCloudDistribCmdsTest.java:642)
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
at 
org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
at 
org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
at 
org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:685)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:487)
at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:482)
... 5 more


FAILED:  org.apache.solr.cloud.hdfs.StressHdfsTest.test

Error Message:
Could not find collection:delete_data_dir

Stack Trace:
java.lang.AssertionError: Could not find collection:delete_data_dir
at 
__randomizedtesting.SeedInfo.seed([1B8F3C17723282CC:93DB03CDDCCEEF34]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:151)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:135)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:130)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:832)
at 
org.apache.solr.cloud.hdfs.StressHdfsTest.createAndDeleteCollection(StressHdfsTest.java:155)
at 
org.apache.solr.cloud.hdfs.StressHdfsTest.test(StressHdfsTest.java:99)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 

[jira] [Commented] (SOLR-9038) Ability to create/delete/list snapshots for a solr collection

2016-04-28 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15263440#comment-15263440
 ] 

David Smiley commented on SOLR-9038:


I _think_ we may be understanding each other again.  We might not want to call 
this a snapshot simply because there are remnants of that naming within 
replication & backup (e.g. "Snapshooter").  Instead I propose naming it closer 
to what it is actually implemented as -- like a "commit lease" or "snapshot 
commit" (the operative word being "commit").  Of course others may want to 
comment; I have no strong conviction.  For now let's continue with "snapshot 
commit", as it retains both words and is a decent name, I think.

bq. As you mentioned in your earlier comments, we can use the "commit" workflow 
to create a named snapshot.
bq. Does that make sense?

Yes!

bq. How would the "list snapshots" and "delete snapshot" APIs look like? Do we 
need to provide them just at the core level or at the collection level as well?

I think the data to be listed fundamentally lives at the core, so certainly the 
core level.  But a collection-level API is needed too -- it could simply take 
the distinct union of the lists returned by each leader.  It could also report 
the snapshot commits _not_ common to all leaders in a separate list, if there's 
any utility in that.
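The distinct-union listing described above can be sketched as follows (a minimal sketch with made-up shard and snapshot names, not actual Solr API code):

```python
def collection_snapshots(per_leader_snapshots):
    """Combine per-core snapshot lists into a collection-level view.

    per_leader_snapshots maps each shard leader to the set of snapshot
    names its core reports.  Returns (all_snapshots, not_common), where
    not_common holds snapshots missing from at least one leader.
    """
    lists = list(per_leader_snapshots.values())
    union = set().union(*lists)
    common = set.intersection(*lists) if lists else set()
    return sorted(union), sorted(union - common)

# Hypothetical example: "nightly-42" exists only on shard1's leader.
leaders = {
    "shard1": {"pre-upgrade", "nightly-42"},
    "shard2": {"pre-upgrade"},
}
all_snaps, partial = collection_snapshots(leaders)
print(all_snaps)  # ['nightly-42', 'pre-upgrade']
print(partial)    # ['nightly-42']
```

The second list is the "not common to all" set mentioned above; whether it is worth surfacing is exactly the open question in the comment.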

bq. Would we allow "destructive" operations (e.g. delete replica/shard) when we 
have one or more snapshots?

I think so.  Not doing so might be a pain, and it's not evident to me that 
it's important to worry about.

bq. It seems to me that the "commit" request will be executed by all replicas 
for a given collection. What should happen when a "commit" request can not be 
processed by a replica (since it may be down) ? We may need to ensure that 
during the replica "recovery" it also fetches the information about commit 
metadata.

Hmm; good point. :-(  That might be a PITA, unfortunately.  Perhaps a snapshot 
commit needs to block until no replica is in recovery first?  That seems much 
easier than trying to get replicas in recovery to somehow obtain IndexCommit 
data, which I think is basically infeasible.  However, another bad situation is 
when there are already successful snapshot commits and then, for whatever 
reason, a replica goes into recovery -- full recovery -- and thus only grabs 
the latest commit (which might not even be a snapshot commit).  So perhaps 
recovering replicas need to ask to replicate not just the latest commit but 
all snapshot commits as well.  Seems pretty doable.  One would hope that the 
commits would share lots of big segments, but they might not.  I don't think 
this scenario should block an initial release; it's possible, but too bad.
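The "block until no replica is in recovery" idea could look roughly like this (a sketch polling a made-up status callable; the real implementation would consult ZooKeeper cluster state, and the names here are hypothetical):

```python
import time

def wait_until_not_recovering(get_replica_states, timeout_s=5.0, poll_s=0.01):
    """Poll a state source until no replica reports 'recovering'.

    get_replica_states is a stand-in for reading cluster state; it
    returns {replica_name: state}.  Raises TimeoutError if replicas are
    still recovering when the deadline passes.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        states = get_replica_states()
        recovering = [r for r, s in states.items() if s == "recovering"]
        if not recovering:
            return states
        if time.monotonic() >= deadline:
            raise TimeoutError("still recovering: %s" % recovering)
        time.sleep(poll_s)

# Simulated source: replica2 finishes recovery after a couple of polls.
calls = {"n": 0}
def fake_states():
    calls["n"] += 1
    s = "recovering" if calls["n"] < 3 else "active"
    return {"replica1": "active", "replica2": s}

print(wait_until_not_recovering(fake_states))
# -> {'replica1': 'active', 'replica2': 'active'}
```

A snapshot-commit request could run such a wait before distributing the commit, trading extra latency for not having to teach recovery about IndexCommit metadata.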


> Ability to create/delete/list snapshots for a solr collection
> -------------------------------------------------------------
>
> Key: SOLR-9038
> URL: https://issues.apache.org/jira/browse/SOLR-9038
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Hrishikesh Gadre
>
> Currently work is under-way to implement backup/restore API for Solr cloud 
> (SOLR-5750). SOLR-5750 is about providing an ability to "copy" index files 
> and collection metadata to a configurable location. 
> In addition to this, we should also provide a facility to create "named" 
> snapshots for Solr collection. Here by "snapshot" I mean configuring the 
> underlying Lucene IndexDeletionPolicy to not delete a specific commit point 
> (e.g. using PersistentSnapshotIndexDeletionPolicy). This should not be 
> confused with SOLR-5340 which implements core level "backup" functionality.
> The primary motivation of this feature is to decouple recording/preserving a 
> known consistent state of a collection from actually "copying" the relevant 
> files to a physically separate location. This decoupling has a number of 
> advantages:
> - We can use specialized data-copying tools for transferring Solr index 
> files. e.g. in Hadoop environment, typically 
> [distcp|https://hadoop.apache.org/docs/r1.2.1/distcp2.html] tool is used to 
> copy files from one location to other. This tool provides various options to 
> configure degree of parallelism, bandwidth usage as well as integration with 
> different types and versions of file systems (e.g. AWS S3, Azure Blob store 
> etc.)
> - This separation of concern would also help Solr to focus on the key 
> functionality (i.e. querying and indexing) while delegating the copy 
> operation to the tools built for that purpose.
> - Users can decide if/when to copy the data files as against creating a 
> snapshot. e.g. a user may want to create a snapshot of a collection before 
> making an experimental change (e.g. updating/deleting docs, schema change 
> etc.). If the experiment is successful, he can delete the snapshot (without 
> having to copy the files). If the experiment fails, then he 

Re: Lucene/Solr 5.5.1

2016-04-28 Thread Anshum Gupta
I've updated the "Update Version Numbers in the Source Code" section on the
ReleaseToDo page. It'd be good to have someone else also take a look at it.

Here is what I've changed (only bug fix release):
* Only bump the version on the release branch, using addVersion.py.
* Don't bump it on the non-release branches in the case of a bug-fix release.
* As part of the post-release process, use the commit hash from the
release-branch version bump to increment the version on the non-release
branches.

I thought we could do this for non-bug-fix releases too, but I was wrong.
Minor versions need to be bumped on the stable branches (and trunk) because
during the release process for, say, version 6.1, there might be commits for
6.2, and we'd need both the stable branches and master to support those
commits. We could debate whether something like this is needed for major
versions, but I don't think it's worth the pain of having a different release
process for each branch -- though I'm not wedded to this.


On Thu, Apr 28, 2016 at 5:31 PM, Anshum Gupta 
wrote:

> That's fixed (about to commit the fix from LUCENE-7265) though.
>
> While discussing the release process, Steve mentioned that we should
> document the failing back-compat index test on the non-release branches due
> to the missing index for the unreleased version.
> On discussing further, he suggested that we instead move the process of
> adding the version to non-release branches as a post-release task. This
> way, we wouldn't have failing tests until the release goes through and the
> back-compat indexes are checked in.
>
> We still would have failing tests for the release branch but there's no
> way around that.
>
> So, I'll change the documentation to move those steps as post-release
> tasks.
>
>
> On Thu, Apr 28, 2016 at 11:40 AM, Anshum Gupta 
> wrote:
>
>> Seems like LUCENE-6938 removed the merge logic that used the change id.
>> Now the merge doesn't happen, and there's no logic that replaces it.
>>
>> I certainly can do with some help on this one.
>>
>> On Thu, Apr 28, 2016 at 11:24 AM, Anshum Gupta 
>> wrote:
>>
>>> Just wanted to make sure I wasn't missing something here again. While
>>> trying to update the version on 5x, after having done that on 5.5, using
>>> the addVersion.py script and following the instructions, the command
>>> consistently fails. Here's what I've been trying to do:
>>>
>>> python3 -u dev-tools/scripts/addVersion.py --changeid 49ba147 5.5.1
>>>
>>>
>>> Seems like addVersion.py is broken for minor version releases, so I'd
>>> need some help from someone who has a better understanding of Python than
>>> I do. I observed that the 5.5.1 Version constant gets added to
>>> Version.java but is also marked as deprecated.
>>>
>>>
>>>
>>> On Thu, Apr 28, 2016 at 9:27 AM, Anshum Gupta 
>>> wrote:
>>>
 Too much going on! Thanks Yonik.
 I'll start working on the RC now.

 NOTE: Please don't back port any more issues right now. In case of
 exceptions, please raise them here.

 On Thu, Apr 28, 2016 at 9:09 AM, Yonik Seeley 
 wrote:

> On Thu, Apr 28, 2016 at 12:04 PM, Anshum Gupta 
> wrote:
> > Thanks. I'm waiting for the last back port of SOLR-8865.
>
> It should already be there... I closed it yesterday.
> -Yonik
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


 --
 Anshum Gupta

>>>
>>>
>>>
>>> --
>>> Anshum Gupta
>>>
>>
>>
>>
>> --
>> Anshum Gupta
>>
>
>
>
> --
> Anshum Gupta
>



-- 
Anshum Gupta


[JENKINS] Lucene-Solr-5.5-Windows (64bit/jdk1.8.0_92) - Build # 60 - Failure!

2016-04-28 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Windows/60/
Java: 64bit/jdk1.8.0_92 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.lucene.replicator.IndexAndTaxonomyReplicationClientTest.testConsistencyOnExceptions

Error Message:
Captured an uncaught exception in thread: Thread[id=14, 
name=ReplicationThread-indexAndTaxo, state=RUNNABLE, 
group=TGRP-IndexAndTaxonomyReplicationClientTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=14, name=ReplicationThread-indexAndTaxo, 
state=RUNNABLE, group=TGRP-IndexAndTaxonomyReplicationClientTest]
at 
__randomizedtesting.SeedInfo.seed([BF9025008024E69A:301EC2A092481565]:0)
Caused by: java.lang.AssertionError: handler failed too many times: -1
at __randomizedtesting.SeedInfo.seed([BF9025008024E69A]:0)
at 
org.apache.lucene.replicator.IndexAndTaxonomyReplicationClientTest$4.handleUpdateException(IndexAndTaxonomyReplicationClientTest.java:451)
at 
org.apache.lucene.replicator.ReplicationClient$ReplicationThread.run(ReplicationClient.java:77)




Build Log:
[...truncated 8363 lines...]
   [junit4] Suite: 
org.apache.lucene.replicator.IndexAndTaxonomyReplicationClientTest
   [junit4]   2> tra 28, 2016 11:29:31 PM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
   [junit4]   2> WARNING: Uncaught exception in thread: 
Thread[ReplicationThread-indexAndTaxo,5,TGRP-IndexAndTaxonomyReplicationClientTest]
   [junit4]   2> java.lang.AssertionError: handler failed too many times: -1
   [junit4]   2>at 
__randomizedtesting.SeedInfo.seed([BF9025008024E69A]:0)
   [junit4]   2>at 
org.apache.lucene.replicator.IndexAndTaxonomyReplicationClientTest$4.handleUpdateException(IndexAndTaxonomyReplicationClientTest.java:451)
   [junit4]   2>at 
org.apache.lucene.replicator.ReplicationClient$ReplicationThread.run(ReplicationClient.java:77)
   [junit4]   2> 
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=IndexAndTaxonomyReplicationClientTest 
-Dtests.method=testConsistencyOnExceptions -Dtests.seed=BF9025008024E69A 
-Dtests.slow=true -Dtests.locale=hr-HR 
-Dtests.timezone=America/Argentina/ComodRivadavia -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII
   [junit4] ERROR   4.04s J0 | 
IndexAndTaxonomyReplicationClientTest.testConsistencyOnExceptions <<<
   [junit4]> Throwable #1: 
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=14, name=ReplicationThread-indexAndTaxo, 
state=RUNNABLE, group=TGRP-IndexAndTaxonomyReplicationClientTest]
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([BF9025008024E69A:301EC2A092481565]:0)
   [junit4]> Caused by: java.lang.AssertionError: handler failed too many 
times: -1
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([BF9025008024E69A]:0)
   [junit4]>at 
org.apache.lucene.replicator.IndexAndTaxonomyReplicationClientTest$4.handleUpdateException(IndexAndTaxonomyReplicationClientTest.java:451)
   [junit4]>at 
org.apache.lucene.replicator.ReplicationClient$ReplicationThread.run(ReplicationClient.java:77)
   [junit4]   2> NOTE: test params are: 
codec=FastCompressingStoredFields(storedFieldsFormat=CompressingStoredFieldsFormat(compressionMode=FAST,
 chunkSize=1868, maxDocsPerChunk=3, blockSize=1), 
termVectorsFormat=CompressingTermVectorsFormat(compressionMode=FAST, 
chunkSize=1868, blockSize=1)), 
sim=RandomSimilarity(queryNorm=true,coord=crazy): {}, locale=hr-HR, 
timezone=America/Argentina/ComodRivadavia
   [junit4]   2> NOTE: Windows 10 10.0 amd64/Oracle Corporation 1.8.0_92 
(64-bit)/cpus=3,threads=1,free=48745720,total=72351744
   [junit4]   2> NOTE: All tests run in this JVM: 
[IndexAndTaxonomyReplicationClientTest]
   [junit4] Completed [6/7 (1!)] on J0 in 7.76s, 5 tests, 1 error <<< FAILURES!

[...truncated 13 lines...]
BUILD FAILED
C:\Users\jenkins\workspace\Lucene-Solr-5.5-Windows\build.xml:750: The following 
error occurred while executing this line:
C:\Users\jenkins\workspace\Lucene-Solr-5.5-Windows\build.xml:694: The following 
error occurred while executing this line:
C:\Users\jenkins\workspace\Lucene-Solr-5.5-Windows\build.xml:59: The following 
error occurred while executing this line:
C:\Users\jenkins\workspace\Lucene-Solr-5.5-Windows\lucene\build.xml:475: The 
following error occurred while executing this line:
C:\Users\jenkins\workspace\Lucene-Solr-5.5-Windows\lucene\common-build.xml:2273:
 The following error occurred while executing this line:
C:\Users\jenkins\workspace\Lucene-Solr-5.5-Windows\lucene\module-build.xml:58: 
The following error occurred while executing this line:
C:\Users\jenkins\workspace\Lucene-Solr-5.5-Windows\lucene\common-build.xml:1477:
 The following error occurred while executing this line:

[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+115) - Build # 16609 - Failure!

2016-04-28 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16609/
Java: 32bit/jdk-9-ea+115 -server -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslAndClientAuth

Error Message:
Unexpected exception type, expected SSLHandshakeException

Stack Trace:
junit.framework.AssertionFailedError: Unexpected exception type, expected 
SSLHandshakeException
at 
__randomizedtesting.SeedInfo.seed([783A6734C5A9C72C:ABBE8EF957649CD0]:0)
at 
org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2682)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterJettys(TestMiniSolrCloudClusterSSL.java:283)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterWithCollectionCreations(TestMiniSolrCloudClusterSSL.java:185)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterWithNodeReplacement(TestMiniSolrCloudClusterSSL.java:147)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslAndClientAuth(TestMiniSolrCloudClusterSSL.java:129)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:531)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)

[JENKINS] Lucene-Solr-6.x-Windows (64bit/jdk1.8.0_92) - Build # 144 - Still Failing!

2016-04-28 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/144/
Java: 64bit/jdk1.8.0_92 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.security.BasicAuthIntegrationTest.testBasics

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([E717A1FB0ECEE35A]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.security.BasicAuthIntegrationTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([E717A1FB0ECEE35A]:0)




Build Log:
[...truncated 12205 lines...]
   [junit4] Suite: org.apache.solr.security.BasicAuthIntegrationTest
   [junit4]   2> 383501 INFO  
(TEST-BasicAuthIntegrationTest.testBasics-seed#[E717A1FB0ECEE35A]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 383505 INFO  (Thread-1011) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 383506 INFO  (Thread-1011) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 383602 INFO  
(TEST-BasicAuthIntegrationTest.testBasics-seed#[E717A1FB0ECEE35A]) [] 
o.a.s.c.ZkTestServer start zk server on port:60157
   [junit4]   2> 383602 INFO  
(TEST-BasicAuthIntegrationTest.testBasics-seed#[E717A1FB0ECEE35A]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 383607 INFO  
(TEST-BasicAuthIntegrationTest.testBasics-seed#[E717A1FB0ECEE35A]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 383613 INFO  (zkCallback-453-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@2bdc0b19 
name:ZooKeeperConnection Watcher:127.0.0.1:60157 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 383614 INFO  
(TEST-BasicAuthIntegrationTest.testBasics-seed#[E717A1FB0ECEE35A]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 383614 INFO  
(TEST-BasicAuthIntegrationTest.testBasics-seed#[E717A1FB0ECEE35A]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2> 383614 INFO  
(TEST-BasicAuthIntegrationTest.testBasics-seed#[E717A1FB0ECEE35A]) [] 
o.a.s.c.c.SolrZkClient makePath: /solr/solr.xml
   [junit4]   2> 383629 WARN  (NIOServerCxn.Factory:0.0.0.0/0.0.0.0:0) [] 
o.a.z.s.NIOServerCnxn caught end of stream exception
   [junit4]   2> EndOfStreamException: Unable to read additional data from 
client sessionid 0x1545f496fc3, likely client has closed socket
   [junit4]   2>at 
org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
   [junit4]   2>at 
org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
   [junit4]   2>at java.lang.Thread.run(Thread.java:745)
   [junit4]   2> 383642 INFO  (jetty-launcher-452-thread-2) [] 
o.e.j.s.Server jetty-9.3.8.v20160314
   [junit4]   2> 383643 INFO  (jetty-launcher-452-thread-3) [] 
o.e.j.s.Server jetty-9.3.8.v20160314
   [junit4]   2> 383644 INFO  (jetty-launcher-452-thread-2) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@64ea80e{/solr,null,AVAILABLE}
   [junit4]   2> 383644 INFO  (jetty-launcher-452-thread-3) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@3613c55c{/solr,null,AVAILABLE}
   [junit4]   2> 383645 INFO  (jetty-launcher-452-thread-3) [] 
o.e.j.s.ServerConnector Started 
ServerConnector@3f78fae9{HTTP/1.1,[http/1.1]}{127.0.0.1:60161}
   [junit4]   2> 383645 INFO  (jetty-launcher-452-thread-3) [] 
o.e.j.s.Server Started @387530ms
   [junit4]   2> 383645 INFO  (jetty-launcher-452-thread-2) [] 
o.e.j.s.ServerConnector Started 
ServerConnector@78053da7{HTTP/1.1,[http/1.1]}{127.0.0.1:60163}
   [junit4]   2> 383645 INFO  (jetty-launcher-452-thread-3) [] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=60161}
   [junit4]   2> 383645 INFO  (jetty-launcher-452-thread-2) [] 
o.e.j.s.Server Started @387530ms
   [junit4]   2> 383645 INFO  (jetty-launcher-452-thread-2) [] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=60163}
   [junit4]   2> 383645 INFO  (jetty-launcher-452-thread-3) [] 
o.a.s.s.SolrDispatchFilter SolrDispatchFilter.init(): 
sun.misc.Launcher$AppClassLoader@73d16e93
   [junit4]   2> 383645 INFO  (jetty-launcher-452-thread-3) [] 
o.a.s.c.SolrResourceLoader new SolrResourceLoader for directory: 
'C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.security.BasicAuthIntegrationTest_E717A1FB0ECEE35A-001\tempDir-001\node3'
   [junit4]   2> 383645 INFO  (jetty-launcher-452-thread-2) [] 
o.a.s.s.SolrDispatchFilter SolrDispatchFilter.init(): 

[jira] [Updated] (SOLR-9047) zkcli should allow alternative locations for log4j configuration

2016-04-28 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated SOLR-9047:
-
Attachment: SOLR-9047.patch

Here's a patch that matches the caps style (I had been copying the style from 
solr.cmd).

> zkcli should allow alternative locations for log4j configuration
> 
>
> Key: SOLR-9047
> URL: https://issues.apache.org/jira/browse/SOLR-9047
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools, SolrCloud
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Minor
> Attachments: SOLR-9047.patch, SOLR-9047.patch
>
>
> zkcli uses the log4j configuration in the local directory:
> {code}
> sdir="`dirname \"$0\"`"
> PATH=$JAVA_HOME/bin:$PATH $JVM -Dlog4j.configuration=file:$sdir/log4j.properties -classpath "$sdir/../../solr-webapp/webapp/WEB-INF/lib/*:$sdir/../../lib/ext/*" org.apache.solr.cloud.ZkCLI ${1+"$@"}
> {code}
> which is a reasonable default, but often people want to use a "global" log4j 
> configuration.  For example, one may define a log4j configuration that writes 
> to an external log directory and want to point to this rather than copying it 
> to each source checkout.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-04-28 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15262895#comment-15262895
 ] 

Kevin Risden edited comment on SOLR-8593 at 4/29/16 1:23 AM:
-

Ok made a bunch of progress on the jira/solr-8593 branch:
* cleaned up the tests so that there are only a few remaining items to address 
(outlined below)
* added support for float/double types
* fixed a CloudSolrClient resource leak

Left to do:
* Add support for aggregationMode (facets and map_reduce) and their parameters
* ensure the pushdown to facets/map_reduce works correctly
* figure out the CloudSolrClient cache (currently not caching and creating new 
per stream)
* Push down aggregates to Solr
* add tests to ensure the proper plan is being generated by Calcite
* figure out avg(int) problem in tests.
** avg(int) returns int by design. need to figure out if casting is right for 
the tests
* figure out sort asc by default in tests
** this currently doesn't sort properly even though I thought that the right 
approach was sort on _version_.
* handle added dependencies properly -and upgrade to latest Calcite/Avatica-
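The avg(int) item above comes down to Java integer division truncating toward zero, so whether the tests should cast is really a question of which result they want. A tiny self-contained illustration (a hypothetical demo, not SQLHandler or Calcite code):

```java
public class AvgIntDemo {
    public static void main(String[] args) {
        int[] vals = {7, 8};
        int sum = 0;
        for (int v : vals) {
            sum += v;
        }
        // Integer arithmetic truncates: (7 + 8) / 2 == 7, not 7.5.
        int intAvg = sum / vals.length;
        // Casting before the division keeps the fractional part.
        double doubleAvg = (double) sum / vals.length;
        System.out.println(intAvg + " vs " + doubleAvg); // prints "7 vs 7.5"
    }
}
```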


was (Author: risdenk):
Ok made a bunch of progress on the jira/solr-8593 branch:
* cleaned up the tests so that there are only a few remaining items to address 
(outlined below)
* added support for float/double types
* fixed a CloudSolrClient resource leak

Left to do:
* Add support for aggregationMode (facets and map_reduce) and their parameters
* ensure the pushdown to facets/map_reduce works correctly
* figure out the CloudSolrClient cache (currently not caching and creating new 
per stream)
* Push down aggregates to Solr
* add tests to ensure the proper plan is being generated by Calcite
* figure out avg(int) problem in tests.
** avg(int) returns int by design. need to figure out if casting is right for 
the tests
* figure out sort asc by default in tests
** this currently doesn't sort properly even though I thought that the right 
approach was sort on _version_.
* handle added dependencies properly and upgrade to latest Calcite/Avatica?

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
> Fix For: master
>
>
> The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle tested cost based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9047) zkcli should allow alternative locations for log4j configuration

2016-04-28 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263369#comment-15263369
 ] 

Mark Miller commented on SOLR-9047:
---

Looks okay to me, though the caps style doesn't match the existing style in the batch file.

> zkcli should allow alternative locations for log4j configuration
> 
>
> Key: SOLR-9047
> URL: https://issues.apache.org/jira/browse/SOLR-9047
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools, SolrCloud
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Minor
> Attachments: SOLR-9047.patch
>
>
> zkcli uses the log4j configuration in the local directory:
> {code}
> sdir="`dirname \"$0\"`"
> PATH=$JAVA_HOME/bin:$PATH $JVM -Dlog4j.configuration=file:$sdir/log4j.properties -classpath "$sdir/../../solr-webapp/webapp/WEB-INF/lib/*:$sdir/../../lib/ext/*" org.apache.solr.cloud.ZkCLI ${1+"$@"}
> {code}
> which is a reasonable default, but often people want to use a "global" log4j 
> configuration.  For example, one may define a log4j configuration that writes 
> to an external log directory and want to point to this rather than copying it 
> to each source checkout.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7266) QueryNode#cloneTree produces a new tree where parents are not correctly set

2016-04-28 Thread Trejkaz (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trejkaz updated LUCENE-7266:

Affects Version/s: 5.4.1

> QueryNode#cloneTree produces a new tree where parents are not correctly set
> ---
>
> Key: LUCENE-7266
> URL: https://issues.apache.org/jira/browse/LUCENE-7266
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/queryparser
>Affects Versions: 5.4.1
>Reporter: Trejkaz
>
> The following unit test performs a sanity check on the QueryNode tree, 
> checking that each node has the parent set to the same node it was retrieved 
> from. After calling cloneTree, this check fails on the returned node, as the 
> parents in the cloned node still point back into the original tree.
> {code}
> import java.util.Arrays;
> import java.util.List;
> import org.apache.lucene.queryparser.flexible.core.nodes.BooleanQueryNode;
> import org.apache.lucene.queryparser.flexible.core.nodes.FieldQueryNode;
> import org.apache.lucene.queryparser.flexible.core.nodes.QueryNode;
> import org.junit.Test;
> public class TestCloneTree {
>     @Test
>     public void testCloneTree() throws Exception {
>         QueryNode original = new BooleanQueryNode(Arrays.asList(
>                 new FieldQueryNode(null, "a", 0, 0),
>                 new FieldQueryNode(null, "b", 0, 0)));
>         sanityCheckQueryTree(original);
>         QueryNode cloned = original.cloneTree();
>         sanityCheckQueryTree(cloned);
>     }
>     private void sanityCheckQueryTree(QueryNode node) {
>         List<QueryNode> children = node.getChildren();
>         if (children != null) {
>             for (QueryNode child : children) {
>                 // Matching what Lucene is using in QueryNodeImpl itself.
>                 //noinspection ObjectEquality
>                 if (child.getParent() != node) {
>                     throw new IllegalStateException("Sanity check failed for child: " + child + '\n'
>                             + "  Parent is: " + child.getParent() + '\n'
>                             + "  But we got to it via: " + node);
>                 }
>                 sanityCheckQueryTree(child);
>             }
>         }
>     }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7266) QueryNode#cloneTree produces a new tree where parents are not correctly set

2016-04-28 Thread Trejkaz (JIRA)
Trejkaz created LUCENE-7266:
---

 Summary: QueryNode#cloneTree produces a new tree where parents are 
not correctly set
 Key: LUCENE-7266
 URL: https://issues.apache.org/jira/browse/LUCENE-7266
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/queryparser
Reporter: Trejkaz


The following unit test performs a sanity check on the QueryNode tree, checking 
that each node has the parent set to the same node it was retrieved from. After 
calling cloneTree, this check fails on the returned node, as the parents in the 
cloned node still point back into the original tree.

{code}
import java.util.Arrays;
import java.util.List;

import org.apache.lucene.queryparser.flexible.core.nodes.BooleanQueryNode;
import org.apache.lucene.queryparser.flexible.core.nodes.FieldQueryNode;
import org.apache.lucene.queryparser.flexible.core.nodes.QueryNode;
import org.junit.Test;

public class TestCloneTree {
    @Test
    public void testCloneTree() throws Exception {
        QueryNode original = new BooleanQueryNode(Arrays.asList(
                new FieldQueryNode(null, "a", 0, 0),
                new FieldQueryNode(null, "b", 0, 0)));

        sanityCheckQueryTree(original);

        QueryNode cloned = original.cloneTree();

        sanityCheckQueryTree(cloned);
    }

    private void sanityCheckQueryTree(QueryNode node) {
        List<QueryNode> children = node.getChildren();
        if (children != null) {
            for (QueryNode child : children) {
                // Matching what Lucene is using in QueryNodeImpl itself.
                //noinspection ObjectEquality
                if (child.getParent() != node) {
                    throw new IllegalStateException("Sanity check failed for child: " + child + '\n'
                            + "  Parent is: " + child.getParent() + '\n'
                            + "  But we got to it via: " + node);
                }

                sanityCheckQueryTree(child);
            }
        }
    }
}
{code}
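The invariant the test checks (every child's parent pointer must reference the node it was reached from) can be reproduced outside Lucene with a minimal stand-in tree. The `Node` class below is hypothetical, not QueryNodeImpl's actual internals; it just shows a clone that re-points each copied child's parent at the copied node, which is what the report expects cloneTree to do:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for a query-node tree: parent is maintained by add(),
// so cloneTree() gets correct parent links for free by re-adding cloned children.
class Node {
    final String label;
    Node parent;
    final List<Node> children = new ArrayList<>();

    Node(String label) { this.label = label; }

    void add(Node child) {
        child.parent = this; // parent bookkeeping lives in one place
        children.add(child);
    }

    Node cloneTree() {
        Node copy = new Node(label);
        for (Node child : children) {
            copy.add(child.cloneTree()); // add() re-points the clone's parent
        }
        return copy;
    }
}

public class CloneParentDemo {
    static void sanityCheck(Node node) {
        for (Node child : node.children) {
            if (child.parent != node) {
                throw new IllegalStateException("parent points outside the clone");
            }
            sanityCheck(child);
        }
    }

    public static void main(String[] args) {
        Node original = new Node("bool");
        original.add(new Node("a"));
        original.add(new Node("b"));
        Node cloned = original.cloneTree();
        sanityCheck(cloned); // passes: parents were re-pointed during cloning
        System.out.println("ok"); // prints "ok"
    }
}
```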




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9047) zkcli should allow alternative locations for log4j configuration

2016-04-28 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated SOLR-9047:
-
Attachment: SOLR-9047.patch

Here's a patch that lets you specify the log4j configuration file via the 
LOG4J_PROPS environment variable.

I'd appreciate someone looking at the Windows code since I can't test it.

> zkcli should allow alternative locations for log4j configuration
> 
>
> Key: SOLR-9047
> URL: https://issues.apache.org/jira/browse/SOLR-9047
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools, SolrCloud
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Minor
> Attachments: SOLR-9047.patch
>
>
> zkcli uses the log4j configuration in the local directory:
> {code}
> sdir="`dirname \"$0\"`"
> PATH=$JAVA_HOME/bin:$PATH $JVM -Dlog4j.configuration=file:$sdir/log4j.properties -classpath "$sdir/../../solr-webapp/webapp/WEB-INF/lib/*:$sdir/../../lib/ext/*" org.apache.solr.cloud.ZkCLI ${1+"$@"}
> {code}
> which is a reasonable default, but often people want to use a "global" log4j 
> configuration.  For example, one may define a log4j configuration that writes 
> to an external log directory and want to point to this rather than copying it 
> to each source checkout.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7265) Fix addVersion to merge downstream changes by using the change id

2016-04-28 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263327#comment-15263327
 ] 

Anshum Gupta commented on LUCENE-7265:
--

Thanks Steve!

> Fix addVersion to merge downstream changes by using the change id
> -
>
> Key: LUCENE-7265
> URL: https://issues.apache.org/jira/browse/LUCENE-7265
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Attachments: LUCENE-7265.patch
>
>
> LUCENE-6938 led to the removal of code that merges the downstream changes for 
> the addition of a new version. That seems like an accidental removal and we 
> should add it back with a few changes so that it now uses git instead of svn.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7265) Fix addVersion to merge downstream changes by using the change id

2016-04-28 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263326#comment-15263326
 ] 

Anshum Gupta commented on LUCENE-7265:
--

We wouldn't need this particular fix for a release branch (branch_5_5, 
branch_6_0), so I'm not sure whether we should port this change to those 
branches. We could, perhaps, for the sake of keeping things in sync?

> Fix addVersion to merge downstream changes by using the change id
> -
>
> Key: LUCENE-7265
> URL: https://issues.apache.org/jira/browse/LUCENE-7265
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Attachments: LUCENE-7265.patch
>
>
> LUCENE-6938 led to the removal of code that merges the downstream changes for 
> the addition of a new version. That seems like an accidental removal and we 
> should add it back with a few changes so that it now uses git instead of svn.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7265) Fix addVersion to merge downstream changes by using the change id

2016-04-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263322#comment-15263322
 ] 

ASF subversion and git services commented on LUCENE-7265:
-

Commit 3ad0201e3ec6e3e4a509f566383f36493d7ad902 in lucene-solr's branch 
refs/heads/branch_6x from anshum
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3ad0201 ]

LUCENE-7265: Fix addVersion to cherry-pick downstream changes by using the 
change id


> Fix addVersion to merge downstream changes by using the change id
> -
>
> Key: LUCENE-7265
> URL: https://issues.apache.org/jira/browse/LUCENE-7265
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Attachments: LUCENE-7265.patch
>
>
> LUCENE-6938 led to the removal of code that merges the downstream changes for 
> the addition of a new version. That seems like an accidental removal and we 
> should add it back with a few changes so that it now uses git instead of svn.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7265) Fix addVersion to merge downstream changes by using the change id

2016-04-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263317#comment-15263317
 ] 

ASF subversion and git services commented on LUCENE-7265:
-

Commit 1c1ad5e54c09f425b528cfab543f3973e4ef11a2 in lucene-solr's branch 
refs/heads/branch_5x from anshum
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1c1ad5e ]

LUCENE-7265: Fix addVersion to cherry-pick downstream changes by using the 
change id


> Fix addVersion to merge downstream changes by using the change id
> -
>
> Key: LUCENE-7265
> URL: https://issues.apache.org/jira/browse/LUCENE-7265
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Attachments: LUCENE-7265.patch
>
>
> LUCENE-6938 led to the removal of code that merges the downstream changes for 
> the addition of a new version. That seems like an accidental removal and we 
> should add it back with a few changes so that it now uses git instead of svn.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7265) Fix addVersion to merge downstream changes by using the change id

2016-04-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263318#comment-15263318
 ] 

ASF subversion and git services commented on LUCENE-7265:
-

Commit 54b873c2f9b401687c18010aee31c35bcab9660e in lucene-solr's branch 
refs/heads/master from anshum
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=54b873c ]

LUCENE-7265: Fix addVersion to cherry-pick downstream changes by using the 
change id


> Fix addVersion to merge downstream changes by using the change id
> -
>
> Key: LUCENE-7265
> URL: https://issues.apache.org/jira/browse/LUCENE-7265
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Attachments: LUCENE-7265.patch
>
>
> LUCENE-6938 led to the removal of code that merges the downstream changes for 
> the addition of a new version. That seems like an accidental removal and we 
> should add it back with a few changes so that it now uses git instead of svn.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 5.5.1

2016-04-28 Thread Anshum Gupta
That's fixed (about to commit the fix from LUCENE-7265) though.

While discussing the release process, Steve mentioned that we should
document the failing back-compat index test on the non-release branches due
to the missing index for the unreleased version.
On discussing further, he suggested that we instead move the process of
adding the version to non-release branches as a post-release task. This
way, we wouldn't have failing tests until the release goes through and the
back-compat indexes are checked in.

We still would have failing tests for the release branch but there's no way
around that.

So, I'll change the documentation to make those steps post-release tasks.


On Thu, Apr 28, 2016 at 11:40 AM, Anshum Gupta 
wrote:

> Seems like LUCENE-6938 removed the merge logic that used the change id.
> Now the merge doesn't happen, and there's no logic that replaces it.
>
> I could certainly do with some help on this one.
>
> On Thu, Apr 28, 2016 at 11:24 AM, Anshum Gupta 
> wrote:
>
>> Just wanted to make sure I wasn't missing something here again. While
>> trying to update the version on 5x, after having done that on 5.5, using
>> the addVersion.py script and following the instructions, the command
>> consistently fails. Here's what I've been trying to do:
>>
>> python3 -u dev-tools/scripts/addVersion.py --changeid 49ba147 5.5.1
>>
>>
>> Seems like addVersion.py is broken for minor version releases, so I'd need
>> some help from someone who has a better understanding of Python than I do.
>> I observed that 5.5.1 Version gets added to Version.java but also gets
>> marked as deprecated.
>>
>>
>>
>> On Thu, Apr 28, 2016 at 9:27 AM, Anshum Gupta 
>> wrote:
>>
>>> Too much going on! Thanks Yonik.
>>> I'll start working on the RC now.
>>>
>>> NOTE: Please don't back port any more issues right now. In case of
>>> exceptions, please raise them here.
>>>
>>> On Thu, Apr 28, 2016 at 9:09 AM, Yonik Seeley  wrote:
>>>
 On Thu, Apr 28, 2016 at 12:04 PM, Anshum Gupta 
 wrote:
 > Thanks. I'm waiting for the last back port of SOLR-8865.

 It should already be there... I closed it yesterday.
 -Yonik

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org


>>>
>>>
>>> --
>>> Anshum Gupta
>>>
>>
>>
>>
>> --
>> Anshum Gupta
>>
>
>
>
> --
> Anshum Gupta
>



-- 
Anshum Gupta


[jira] [Commented] (LUCENE-7265) Fix addVersion to merge downstream changes by using the change id

2016-04-28 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263314#comment-15263314
 ] 

Steve Rowe commented on LUCENE-7265:


+1

> Fix addVersion to merge downstream changes by using the change id
> -
>
> Key: LUCENE-7265
> URL: https://issues.apache.org/jira/browse/LUCENE-7265
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Attachments: LUCENE-7265.patch
>
>
> LUCENE-6938 led to the removal of code that merges the downstream changes for 
> the addition of a new version. That seems like an accidental removal and we 
> should add it back with a few changes so that it now uses git instead of svn.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7265) Fix addVersion to merge downstream changes by using the change id

2016-04-28 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated LUCENE-7265:
-
Attachment: LUCENE-7265.patch

> Fix addVersion to merge downstream changes by using the change id
> -
>
> Key: LUCENE-7265
> URL: https://issues.apache.org/jira/browse/LUCENE-7265
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Attachments: LUCENE-7265.patch
>
>
> LUCENE-6938 led to the removal of code that merges the downstream changes for 
> the addition of a new version. That seems like an accidental removal and we 
> should add it back with a few changes so that it now uses git instead of svn.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7241) Improve performance of geo3d for polygons with very large numbers of points

2016-04-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263294#comment-15263294
 ] 

ASF subversion and git services commented on LUCENE-7241:
-

Commit 7bc50ec1ee0333d2a294ad163ed0f4f3a9c453b6 in lucene-solr's branch 
refs/heads/master from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7bc50ec ]

Merge branch 'LUCENE-7241'


> Improve performance of geo3d for polygons with very large numbers of points
> ---
>
> Key: LUCENE-7241
> URL: https://issues.apache.org/jira/browse/LUCENE-7241
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial3d
>Affects Versions: master
>Reporter: Karl Wright
>Assignee: Karl Wright
> Attachments: LUCENE-7241.patch, LUCENE-7241.patch, LUCENE-7241.patch, 
> LUCENE-7241.patch, LUCENE-7241.patch
>
>
> This ticket corresponds to LUCENE-7239, except it's for geo3d polygons.
> The trick here is to organize edges by some criteria, e.g. z value range, and 
> use that to avoid needing to go through all edges and/or tile large irregular 
> polygons.  Then we use the ability to quickly determine intersections to 
> figure out whether a point is within the polygon, or not.
> The current way geo3d polygons are constructed involves finding a single 
> point, or "pole", which all polygon points circle.  This point is known to be 
> either "in" or "out" based on the direction of the points.  So we have one 
> place of "truth" on the globe that is known at polygon setup time.
> If edges are organized by z value, where the z values for an edge are 
> computed by the standard way of computing bounds for a plane, then we can 
> readily organize edges into a tree structure such that it is easy to find all 
> edges we need to check for a given z value.  Then, we merely need to compute 
> how many intersections to consider as we navigate from the "truth" point to 
> the point being tested.  In practice, this means both having a tree that is 
> organized by z, and a tree organized by (x,y), since we need to navigate in 
> both directions.  But then we can cheaply count the number of intersections, 
> and once we do that, we know whether our point is "in" or "out".
> The other performance improvement we need is whether a given plane intersects 
> the polygon within provided bounds.  This can be done using the same two 
> trees (z and (x,y)), by virtue of picking which tree to use based on the 
> plane's minimum bounds in z or (x,y).  And, in practice, we might well use 
> three trees: one in x, one in y, and one in z, which would mean we didn't 
> have to compute longitudes ever.
> An implementation like this trades off the cost of finding point membership 
> in near O\(log\(n)) time vs. the extra expense per step of finding that 
> membership.  Setup of the query is O\(n) in this scheme, rather than O\(n^2) 
> in the current implementation, but once again each individual step is more 
> expensive.  Therefore I would expect we'd want to use the current 
> implementation for simpler polygons and this sort of implementation for 
> tougher polygons.  Choosing which to use is a topic for another ticket.
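The intersection-counting idea in the description is a 3D generalization of the classic 2D even-odd (crossing-number) point-in-polygon test: walk a ray from a point of known membership and flip in/out at each edge crossing. A minimal self-contained 2D sketch (plain arrays, no edge trees; this is the textbook algorithm, not the geo3d code):

```java
public class CrossingNumberDemo {
    // Even-odd point-in-polygon: cast a ray from (px, py) toward +x and count
    // how many polygon edges it crosses; an odd count means the point is inside.
    static boolean contains(double[] xs, double[] ys, double px, double py) {
        boolean inside = false;
        for (int i = 0, j = xs.length - 1; i < xs.length; j = i++) {
            // Only edges that straddle the ray's y level can be crossed.
            if ((ys[i] > py) != (ys[j] > py)) {
                // x coordinate where edge (j -> i) meets the horizontal line y = py.
                double xCross = xs[j] + (py - ys[j]) * (xs[i] - xs[j]) / (ys[i] - ys[j]);
                if (px < xCross) {
                    inside = !inside; // ray toward +x crosses this edge
                }
            }
        }
        return inside;
    }

    public static void main(String[] args) {
        // A 4x4 axis-aligned square with corners at (0,0) and (4,4).
        double[] xs = {0, 4, 4, 0};
        double[] ys = {0, 0, 4, 4};
        System.out.println(contains(xs, ys, 2, 2)); // prints "true"  (inside)
        System.out.println(contains(xs, ys, 5, 2)); // prints "false" (outside)
    }
}
```

The trees described above exist to prune which edges this loop visits; the per-edge crossing logic stays the same.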



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7265) Fix addVersion to merge downstream changes by using the change id

2016-04-28 Thread Anshum Gupta (JIRA)
Anshum Gupta created LUCENE-7265:


 Summary: Fix addVersion to merge downstream changes by using the 
change id
 Key: LUCENE-7265
 URL: https://issues.apache.org/jira/browse/LUCENE-7265
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Anshum Gupta
Assignee: Anshum Gupta


LUCENE-6938 led to the removal of code that merges the downstream changes for 
the addition of a new version. That seems like an accidental removal and we should 
add it back with a few changes so that it now uses git instead of svn.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7241) Improve performance of geo3d for polygons with very large numbers of points

2016-04-28 Thread Karl Wright (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright updated LUCENE-7241:

Attachment: LUCENE-7241.patch

Debugged and partly tested patch.  I had to rework the tree structure due to a 
massive oversight on my part, and also had some confusion about picking the 
right path.  It seems to work for basic polygons now, though.

> Improve performance of geo3d for polygons with very large numbers of points
> ---
>
> Key: LUCENE-7241
> URL: https://issues.apache.org/jira/browse/LUCENE-7241
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial3d
>Affects Versions: master
>Reporter: Karl Wright
>Assignee: Karl Wright
> Attachments: LUCENE-7241.patch, LUCENE-7241.patch, LUCENE-7241.patch, 
> LUCENE-7241.patch, LUCENE-7241.patch
>
>
> This ticket corresponds to LUCENE-7239, except it's for geo3d polygons.
> The trick here is to organize edges by some criteria, e.g. z value range, and 
> use that to avoid needing to go through all edges and/or tile large irregular 
> polygons.  Then we use the ability to quickly determine intersections to 
> figure out whether a point is within the polygon, or not.
> The current way geo3d polygons are constructed involves finding a single 
> point, or "pole", which all polygon points circle.  This point is known to be 
> either "in" or "out" based on the direction of the points.  So we have one 
> place of "truth" on the globe that is known at polygon setup time.
> If edges are organized by z value, where the z values for an edge are 
> computed by the standard way of computing bounds for a plane, then we can 
> readily organize edges into a tree structure such that it is easy to find all 
> edges we need to check for a given z value.  Then, we merely need to compute 
> how many intersections to consider as we navigate from the "truth" point to 
> the point being tested.  In practice, this means both having a tree that is 
> organized by z, and a tree organized by (x,y), since we need to navigate in 
> both directions.  But then we can cheaply count the number of intersections, 
> and once we do that, we know whether our point is "in" or "out".
> The other performance improvement we need is whether a given plane intersects 
> the polygon within provided bounds.  This can be done using the same two 
> trees (z and (x,y)), by virtue of picking which tree to use based on the 
> plane's minimum bounds in z or (x,y).  And, in practice, we might well use 
> three trees: one in x, one in y, and one in z, which would mean we didn't 
> have to compute longitudes ever.
> An implementation like this trades off the cost of finding point membership 
> in near O\(log\(n)) time vs. the extra expense per step of finding that 
> membership.  Setup of the query is O\(n) in this scheme, rather than O\(n^2) 
> in the current implementation, but once again each individual step is more 
> expensive.  Therefore I would expect we'd want to use the current 
> implementation for simpler polygons and this sort of implementation for 
> tougher polygons.  Choosing which to use is a topic for another ticket.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_92) - Build # 5806 - Still Failing!

2016-04-28 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/5806/
Java: 64bit/jdk1.8.0_92 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslAndClientAuth

Error Message:
Unexpected exception type, expected SSLHandshakeException

Stack Trace:
junit.framework.AssertionFailedError: Unexpected exception type, expected 
SSLHandshakeException
at 
__randomizedtesting.SeedInfo.seed([92AB971A6B3DF963:412F7ED7F9F0A29F]:0)
at 
org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2682)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterJettys(TestMiniSolrCloudClusterSSL.java:283)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterWithCollectionCreations(TestMiniSolrCloudClusterSSL.java:185)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterWithNodeReplacement(TestMiniSolrCloudClusterSSL.java:147)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslAndClientAuth(TestMiniSolrCloudClusterSSL.java:129)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Commented] (SOLR-9028) fix bugs in (add sanity checks for) SSL clientAuth testing

2016-04-28 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263218#comment-15263218
 ] 

Steve Rowe commented on SOLR-9028:
--

bq. Working on backport now ... some significant changes in the HttpClient 
config stuff between 6x and master due to SOLR-4509, so this won't be trivial.

Maybe SOLR-4509 will be backported to 6.x?  If so, couldn't backporting this 
issue wait for that?

> fix bugs in (add sanity checks for) SSL clientAuth testing
> --
>
> Key: SOLR-9028
> URL: https://issues.apache.org/jira/browse/SOLR-9028
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: SOLR-9028.patch, SOLR-9028.patch, SOLR-9028.patch, 
> SOLR-9028.patch, SOLR-9028.patch, SOLR-9028.patch, os.x.failure.txt
>
>
> While looking into SOLR-8970 I realized there was a whole heap of problems 
> with how clientAuth was being handled in tests.  Notably: it wasn't actually 
> being used when the randomization selects it (apparently due to a copy/paste 
> mistake in SOLR-7166).  But there are a few other misc issues (improper use 
> of sysprop overrides for tests, misuse of keystore/truststore in test 
> clients, etc.)
> I'm working up a patch to fix all of this, and add some much needed tests to 
> *explicitly* verify both SSL and clientAuth that will include some "false 
> positive" verifications, and some "test the test" checks.
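The "test the test" style of check described above can be illustrated with a standalone re-implementation of the expectThrows idiom (the same one the Jenkins failure report above exercises). This is a sketch, not Lucene's actual LuceneTestCase code; the names here are illustrative.

```java
/**
 * Minimal standalone sketch of the expectThrows idiom: assert that an
 * action fails with a specific exception type, rather than merely
 * "fails somehow". A "false positive" check verifies both directions:
 * the right type is returned, and a wrong type trips an AssertionError.
 */
class ExpectThrows {
  interface ThrowingRunnable { void run() throws Throwable; }

  static <T extends Throwable> T expectThrows(Class<T> expected, ThrowingRunnable action) {
    try {
      action.run();
    } catch (Throwable t) {
      if (expected.isInstance(t)) return expected.cast(t);
      // Wrong exception type: surface it as a test failure
      throw new AssertionError("Unexpected exception type, expected " + expected.getSimpleName(), t);
    }
    throw new AssertionError("Expected " + expected.getSimpleName() + " but nothing was thrown");
  }

  public static void main(String[] args) {
    // The expected type is caught and returned for further inspection...
    IllegalStateException e = expectThrows(IllegalStateException.class,
        () -> { throw new IllegalStateException("handshake failed"); });
    System.out.println(e.getMessage());
    // ...while an unexpected type is rethrown as an assertion failure.
    try {
      expectThrows(IllegalStateException.class, () -> { throw new RuntimeException("other"); });
    } catch (AssertionError expectedFailure) {
      System.out.println("caught: " + expectedFailure.getMessage());
    }
  }
}
```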






[jira] [Commented] (SOLR-9030) The 'downnode' command can trip asserts in ZkStateWriter or cause BadVersionException in Overseer

2016-04-28 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263124#comment-15263124
 ] 

Scott Blum commented on SOLR-9030:
--

Glad you wrote back to clarify; this got me to do some reading as well, and I 
agree with your conclusions.

> The 'downnode' command can trip asserts in ZkStateWriter or cause 
> BadVersionException in Overseer
> -
>
> Key: SOLR-9030
> URL: https://issues.apache.org/jira/browse/SOLR-9030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
> Fix For: master, 6.1
>
>
> While working on SOLR-9014 I came across a strange test failure.
> {code}
>[junit4] ERROR   16.9s | 
> AsyncCallRequestStatusResponseTest.testAsyncCallStatusResponse <<<
>[junit4]> Throwable #1: 
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=46, 
> name=OverseerStateUpdate-95769832112259076-127.0.0.1:51135_z_oeg%2Ft-n_00,
>  state=RUNNABLE, group=Overseer state updater.]
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([91F68DA7E10807C3:CBF7E84BCF328A1A]:0)
>[junit4]> Caused by: java.lang.AssertionError
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([91F68DA7E10807C3]:0)
>[junit4]>  at 
> org.apache.solr.cloud.overseer.ZkStateWriter.writePendingUpdates(ZkStateWriter.java:231)
>[junit4]>  at 
> org.apache.solr.cloud.Overseer$ClusterStateUpdater.run(Overseer.java:240)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
> {code}
> The underlying problem can manifest by tripping the above assert or a 
> BadVersionException as well. I found that this was introduced in SOLR-7281 
> where a new 'downnode' command was added.






[jira] [Commented] (SOLR-8323) Add CollectionWatcher API to ZkStateReader

2016-04-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263122#comment-15263122
 ] 

ASF GitHub Bot commented on SOLR-8323:
--

Github user dragonsinth commented on the pull request:

https://github.com/apache/lucene-solr/pull/32#issuecomment-215576790
  
Looking good, a little more high-level feedback.  @shalinmangar I think you 
should take a look also.

Have you run the tests extensively?  The first time I ran them I got a 
failure; since then they've been fairly reliable, though I haven't beasted.


> Add CollectionWatcher API to ZkStateReader
> --
>
> Key: SOLR-8323
> URL: https://issues.apache.org/jira/browse/SOLR-8323
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: master
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-8323.patch, SOLR-8323.patch, SOLR-8323.patch, 
> SOLR-8323.patch
>
>
> An API to watch for changes to collection state would be a generally useful 
> thing, both internally and for client use.






[GitHub] lucene-solr pull request: SOLR-8323

2016-04-28 Thread dragonsinth
Github user dragonsinth commented on the pull request:

https://github.com/apache/lucene-solr/pull/32#issuecomment-215576790
  
Looking good, a little more high-level feedback.  @shalinmangar I think you 
should take a look also.

Have you run the tests extensively?  The first time I ran them I got a 
failure; since then they've been fairly reliable, though I haven't beasted.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[jira] [Commented] (SOLR-8323) Add CollectionWatcher API to ZkStateReader

2016-04-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263120#comment-15263120
 ] 

ASF GitHub Bot commented on SOLR-8323:
--

Github user dragonsinth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/32#discussion_r61510208
  
--- Diff: 
solr/test-framework/src/java/org/apache/solr/cloud/MiniSolrCloudCluster.java ---
@@ -348,7 +358,13 @@ public JettySolrRunner stopJettySolrRunner(int index) 
throws Exception {
 return jetty;
   }
 
-  protected JettySolrRunner startJettySolrRunner(JettySolrRunner jetty) 
throws Exception {
+  /**
+   * Add a previously stopped node back to the cluster
+   * @param jetty a {@link JettySolrRunner} previously returned by {@link 
#stopJettySolrRunner(int)}
+   * @return the started node
+   * @throws Exception on error
+   */
+  public JettySolrRunner startJettySolrRunner(JettySolrRunner jetty) 
throws Exception {
--- End diff --

Are the changes in this file related to this PR?


> Add CollectionWatcher API to ZkStateReader
> --
>
> Key: SOLR-8323
> URL: https://issues.apache.org/jira/browse/SOLR-8323
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: master
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-8323.patch, SOLR-8323.patch, SOLR-8323.patch, 
> SOLR-8323.patch
>
>
> An API to watch for changes to collection state would be a generally useful 
> thing, both internally and for client use.






[jira] [Commented] (SOLR-8323) Add CollectionWatcher API to ZkStateReader

2016-04-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263119#comment-15263119
 ] 

ASF GitHub Bot commented on SOLR-8323:
--

Github user dragonsinth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/32#discussion_r61510100
  
--- Diff: 
solr/solrj/src/java/org/apache/solr/common/cloud/CollectionStateWatcher.java ---
@@ -0,0 +1,42 @@
+package org.apache.solr.common.cloud;
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+*/
+
+import java.util.Set;
+
+/**
+ * Callback registered with {@link 
ZkStateReader#registerCollectionStateWatcher(String, CollectionStateWatcher)}
+ * and called whenever the collection state changes.
+ */
--- End diff --

If we're not going to be firing events on all watchers whenever live_nodes 
changes, we should be very clear about this.


> Add CollectionWatcher API to ZkStateReader
> --
>
> Key: SOLR-8323
> URL: https://issues.apache.org/jira/browse/SOLR-8323
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: master
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-8323.patch, SOLR-8323.patch, SOLR-8323.patch, 
> SOLR-8323.patch
>
>
> An API to watch for changes to collection state would be a generally useful 
> thing, both internally and for client use.






[GitHub] lucene-solr pull request: SOLR-8323

2016-04-28 Thread dragonsinth
Github user dragonsinth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/32#discussion_r61510208
  
--- Diff: 
solr/test-framework/src/java/org/apache/solr/cloud/MiniSolrCloudCluster.java ---
@@ -348,7 +358,13 @@ public JettySolrRunner stopJettySolrRunner(int index) 
throws Exception {
 return jetty;
   }
 
-  protected JettySolrRunner startJettySolrRunner(JettySolrRunner jetty) 
throws Exception {
+  /**
+   * Add a previously stopped node back to the cluster
+   * @param jetty a {@link JettySolrRunner} previously returned by {@link 
#stopJettySolrRunner(int)}
+   * @return the started node
+   * @throws Exception on error
+   */
+  public JettySolrRunner startJettySolrRunner(JettySolrRunner jetty) 
throws Exception {
--- End diff --

Are the changes in this file related to this PR?





[GitHub] lucene-solr pull request: SOLR-8323

2016-04-28 Thread dragonsinth
Github user dragonsinth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/32#discussion_r61510100
  
--- Diff: 
solr/solrj/src/java/org/apache/solr/common/cloud/CollectionStateWatcher.java ---
@@ -0,0 +1,42 @@
+package org.apache.solr.common.cloud;
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+*/
+
+import java.util.Set;
+
+/**
+ * Callback registered with {@link 
ZkStateReader#registerCollectionStateWatcher(String, CollectionStateWatcher)}
+ * and called whenever the collection state changes.
+ */
--- End diff --

If we're not going to be firing events on all watchers whenever live_nodes 
changes, we should be very clear about this.





[jira] [Commented] (SOLR-8323) Add CollectionWatcher API to ZkStateReader

2016-04-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263112#comment-15263112
 ] 

ASF GitHub Bot commented on SOLR-8323:
--

Github user dragonsinth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/32#discussion_r61509937
  
--- Diff: 
solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java ---
@@ -1066,32 +1079,201 @@ public static String getCollectionPath(String 
coll) {
 return COLLECTIONS_ZKNODE+"/"+coll + "/state.json";
   }
 
-  public void addCollectionWatch(String coll) {
-if (interestingCollections.add(coll)) {
-  LOG.info("addZkWatch [{}]", coll);
-  new StateWatcher(coll).refreshAndWatch(false);
+  /**
+   * Notify this reader that a local Core is a member of a collection, and 
so that collection
+   * state should be watched.
+   *
+   * Not a public API.  This method should only be called from 
ZkController.
+   *
+   * The number of cores per-collection is tracked, and adding multiple 
cores from the same
+   * collection does not increase the number of watches.
+   *
+   * @param collection the collection that the core is a member of
+   *
+   * @see ZkStateReader#unregisterCore(String)
+   */
+  public void registerCore(String collection) {
+AtomicBoolean reconstructState = new AtomicBoolean(false);
+collectionWatches.compute(collection, (k, v) -> {
+  if (v == null) {
+reconstructState.set(true);
+v = new CollectionWatch();
+  }
+  v.coreRefCount++;
+  return v;
+});
+if (reconstructState.get()) {
+  new StateWatcher(collection).refreshAndWatch();
+  synchronized (getUpdateLock()) {
+constructState();
+  }
+}
+  }
+
+  /**
+   * Notify this reader that a local core that is a member of a collection 
has been closed.
+   *
+   * Not a public API.  This method should only be called from 
ZkController.
+   *
+   * If no cores are registered for a collection, and there are no {@link 
CollectionStateWatcher}s
+   * for that collection either, the collection watch will be removed.
+   *
+   * @param collection the collection that the core belongs to
+   */
+  public void unregisterCore(String collection) {
+AtomicBoolean reconstructState = new AtomicBoolean(false);
+collectionWatches.compute(collection, (k, v) -> {
+  if (v == null)
+return null;
+  if (v.coreRefCount > 0)
+v.coreRefCount--;
+  if (v.canBeRemoved()) {
+watchedCollectionStates.remove(collection);
+lazyCollectionStates.put(collection, new 
LazyCollectionRef(collection));
+reconstructState.set(true);
+return null;
+  }
+  return v;
+});
+if (reconstructState.get()) {
+  synchronized (getUpdateLock()) {
+constructState();
+  }
+}
+  }
+
+  /**
+   * Register a CollectionStateWatcher to be called when the state of a 
collection changes
+   *
+   * A given CollectionStateWatcher will be only called once.  If you want 
to have a persistent watcher,
+   * it should register itself again in its {@link 
CollectionStateWatcher#onStateChanged(Set, DocCollection)}
+   * method.
+   */
+  public void registerCollectionStateWatcher(String collection, 
CollectionStateWatcher stateWatcher) {
+AtomicBoolean watchSet = new AtomicBoolean(false);
+collectionWatches.compute(collection, (k, v) -> {
+  if (v == null) {
+v = new CollectionWatch();
+watchSet.set(true);
+  }
+  v.stateWatchers.add(stateWatcher);
+  return v;
+});
+if (watchSet.get()) {
+  new StateWatcher(collection).refreshAndWatch();
   synchronized (getUpdateLock()) {
 constructState();
   }
 }
   }
 
+  /**
+   * Block until a CollectionStatePredicate returns true, or the wait 
times out
+   *
+   * Note that the predicate may be called again even after it has 
returned true, so
+   * implementors should avoid changing state within the predicate call 
itself.
+   *
+   * @param collection the collection to watch
+   * @param wait   how long to wait
+   * @param unit   the units of the wait parameter
+   * @param predicate  the predicate to call on state changes
+   * @throws InterruptedException on interrupt
+   * @throws TimeoutException on timeout
+   */
+  public void waitForState(final String collection, long wait, TimeUnit 
unit, CollectionStatePredicate predicate)
+  throws InterruptedException, 
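The compute-based refcounting in the registerCore/unregisterCore diff above can be sketched standalone. CollectionWatch here is a simplified stand-in for Solr's inner class, and the boolean return values replace the AtomicBoolean-plus-constructState dance; this is an illustration of the pattern, not the actual ZkStateReader code.

```java
import java.util.concurrent.ConcurrentHashMap;

/**
 * Sketch of per-collection watch refcounting: ConcurrentHashMap.compute
 * atomically creates, increments, decrements, or removes an entry, so no
 * external lock is needed for the count itself. Heavier work (setting up
 * or tearing down ZK watches) happens outside the compute lambda.
 */
class WatchRegistry {
  static class CollectionWatch {
    int coreRefCount = 0;
    boolean canBeRemoved() { return coreRefCount <= 0; }
  }

  private final ConcurrentHashMap<String, CollectionWatch> collectionWatches = new ConcurrentHashMap<>();

  /** @return true if this call created the watch (caller would set up the real watcher). */
  boolean registerCore(String collection) {
    final boolean[] created = {false};
    collectionWatches.compute(collection, (k, v) -> {
      if (v == null) { created[0] = true; v = new CollectionWatch(); }
      v.coreRefCount++;
      return v;
    });
    return created[0];
  }

  /** @return true if this call removed the watch (caller would tear it down). */
  boolean unregisterCore(String collection) {
    final boolean[] removed = {false};
    collectionWatches.compute(collection, (k, v) -> {
      if (v == null) return null;
      if (v.coreRefCount > 0) v.coreRefCount--;
      if (v.canBeRemoved()) { removed[0] = true; return null; }  // returning null deletes the entry
      return v;
    });
    return removed[0];
  }

  public static void main(String[] args) {
    WatchRegistry r = new WatchRegistry();
    System.out.println(r.registerCore("c1"));   // true: first core creates the watch
    System.out.println(r.registerCore("c1"));   // false: second core only bumps the refcount
    System.out.println(r.unregisterCore("c1")); // false: one core still registered
    System.out.println(r.unregisterCore("c1")); // true: last core removes the watch
  }
}
```

The same refcount-under-compute shape extends naturally to also counting registered state watchers, which is what `canBeRemoved()` checks in the real patch.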

[GitHub] lucene-solr pull request: SOLR-8323

2016-04-28 Thread dragonsinth
Github user dragonsinth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/32#discussion_r61509937
  
--- Diff: 
solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java ---
@@ -1066,32 +1079,201 @@ public static String getCollectionPath(String 
coll) {
 return COLLECTIONS_ZKNODE+"/"+coll + "/state.json";
   }
 
-  public void addCollectionWatch(String coll) {
-if (interestingCollections.add(coll)) {
-  LOG.info("addZkWatch [{}]", coll);
-  new StateWatcher(coll).refreshAndWatch(false);
+  /**
+   * Notify this reader that a local Core is a member of a collection, and 
so that collection
+   * state should be watched.
+   *
+   * Not a public API.  This method should only be called from 
ZkController.
+   *
+   * The number of cores per-collection is tracked, and adding multiple 
cores from the same
+   * collection does not increase the number of watches.
+   *
+   * @param collection the collection that the core is a member of
+   *
+   * @see ZkStateReader#unregisterCore(String)
+   */
+  public void registerCore(String collection) {
+AtomicBoolean reconstructState = new AtomicBoolean(false);
+collectionWatches.compute(collection, (k, v) -> {
+  if (v == null) {
+reconstructState.set(true);
+v = new CollectionWatch();
+  }
+  v.coreRefCount++;
+  return v;
+});
+if (reconstructState.get()) {
+  new StateWatcher(collection).refreshAndWatch();
+  synchronized (getUpdateLock()) {
+constructState();
+  }
+}
+  }
+
+  /**
+   * Notify this reader that a local core that is a member of a collection 
has been closed.
+   *
+   * Not a public API.  This method should only be called from 
ZkController.
+   *
+   * If no cores are registered for a collection, and there are no {@link 
CollectionStateWatcher}s
+   * for that collection either, the collection watch will be removed.
+   *
+   * @param collection the collection that the core belongs to
+   */
+  public void unregisterCore(String collection) {
+AtomicBoolean reconstructState = new AtomicBoolean(false);
+collectionWatches.compute(collection, (k, v) -> {
+  if (v == null)
+return null;
+  if (v.coreRefCount > 0)
+v.coreRefCount--;
+  if (v.canBeRemoved()) {
+watchedCollectionStates.remove(collection);
+lazyCollectionStates.put(collection, new 
LazyCollectionRef(collection));
+reconstructState.set(true);
+return null;
+  }
+  return v;
+});
+if (reconstructState.get()) {
+  synchronized (getUpdateLock()) {
+constructState();
+  }
+}
+  }
+
+  /**
+   * Register a CollectionStateWatcher to be called when the state of a 
collection changes
+   *
+   * A given CollectionStateWatcher will be only called once.  If you want 
to have a persistent watcher,
+   * it should register itself again in its {@link 
CollectionStateWatcher#onStateChanged(Set, DocCollection)}
+   * method.
+   */
+  public void registerCollectionStateWatcher(String collection, 
CollectionStateWatcher stateWatcher) {
+AtomicBoolean watchSet = new AtomicBoolean(false);
+collectionWatches.compute(collection, (k, v) -> {
+  if (v == null) {
+v = new CollectionWatch();
+watchSet.set(true);
+  }
+  v.stateWatchers.add(stateWatcher);
+  return v;
+});
+if (watchSet.get()) {
+  new StateWatcher(collection).refreshAndWatch();
   synchronized (getUpdateLock()) {
 constructState();
   }
 }
   }
 
+  /**
+   * Block until a CollectionStatePredicate returns true, or the wait 
times out
+   *
+   * Note that the predicate may be called again even after it has 
returned true, so
+   * implementors should avoid changing state within the predicate call 
itself.
+   *
+   * @param collection the collection to watch
+   * @param wait   how long to wait
+   * @param unit   the units of the wait parameter
+   * @param predicate  the predicate to call on state changes
+   * @throws InterruptedException on interrupt
+   * @throws TimeoutException on timeout
+   */
+  public void waitForState(final String collection, long wait, TimeUnit 
unit, CollectionStatePredicate predicate)
+  throws InterruptedException, TimeoutException {
+
+final CountDownLatch latch = new CountDownLatch(1);
+
+CollectionStateWatcher watcher = new CollectionStateWatcher() {
+  @Override
+  public void onStateChanged(Set liveNodes, 
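The latch-based waitForState pattern the diff above is building toward can be sketched with simplified stand-in types (a String state instead of DocCollection and live nodes, and a plain list of one-shot predicates instead of the watcher registry). Names are illustrative, not Solr's actual API.

```java
import java.util.concurrent.*;
import java.util.function.Predicate;

/**
 * Sketch of waitForState: register a one-shot watcher that counts down a
 * latch when the predicate holds, then block on the latch with a timeout.
 * After registering, the current state is re-checked so a change between
 * "check" and "register" isn't missed; this is one reason the predicate
 * may be called more than once, as the quoted javadoc warns.
 */
class StateWaiter {
  private final CopyOnWriteArrayList<Predicate<String>> watchers = new CopyOnWriteArrayList<>();
  private volatile String state = "down";

  void setState(String newState) {
    state = newState;
    watchers.removeIf(w -> w.test(newState));  // one-shot: satisfied watchers are removed
  }

  boolean waitForState(Predicate<String> predicate, long wait, TimeUnit unit)
      throws InterruptedException {
    CountDownLatch latch = new CountDownLatch(1);
    Predicate<String> watcher = s -> {
      if (predicate.test(s)) { latch.countDown(); return true; }
      return false;
    };
    watchers.add(watcher);
    if (predicate.test(state)) { watchers.remove(watcher); return true; }  // re-check to close the race
    return latch.await(wait, unit);
  }

  public static void main(String[] args) throws Exception {
    StateWaiter w = new StateWaiter();
    ScheduledExecutorService ex = Executors.newSingleThreadScheduledExecutor();
    ex.schedule(() -> w.setState("active"), 50, TimeUnit.MILLISECONDS);
    System.out.println(w.waitForState("active"::equals, 5, TimeUnit.SECONDS));          // true
    System.out.println(w.waitForState("recovering"::equals, 100, TimeUnit.MILLISECONDS)); // false: times out
    ex.shutdown();
  }
}
```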

[jira] [Commented] (SOLR-8323) Add CollectionWatcher API to ZkStateReader

2016-04-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263108#comment-15263108
 ] 

ASF GitHub Bot commented on SOLR-8323:
--

Github user dragonsinth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/32#discussion_r61509699
  
--- Diff: 
solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java ---
@@ -1066,32 +1079,201 @@ public static String getCollectionPath(String 
coll) {
 return COLLECTIONS_ZKNODE+"/"+coll + "/state.json";
   }
 
-  public void addCollectionWatch(String coll) {
-if (interestingCollections.add(coll)) {
-  LOG.info("addZkWatch [{}]", coll);
-  new StateWatcher(coll).refreshAndWatch(false);
+  /**
+   * Notify this reader that a local Core is a member of a collection, and 
so that collection
+   * state should be watched.
+   *
+   * Not a public API.  This method should only be called from 
ZkController.
+   *
+   * The number of cores per-collection is tracked, and adding multiple 
cores from the same
+   * collection does not increase the number of watches.
+   *
+   * @param collection the collection that the core is a member of
+   *
+   * @see ZkStateReader#unregisterCore(String)
+   */
+  public void registerCore(String collection) {
+AtomicBoolean reconstructState = new AtomicBoolean(false);
+collectionWatches.compute(collection, (k, v) -> {
+  if (v == null) {
+reconstructState.set(true);
+v = new CollectionWatch();
+  }
+  v.coreRefCount++;
+  return v;
+});
+if (reconstructState.get()) {
+  new StateWatcher(collection).refreshAndWatch();
+  synchronized (getUpdateLock()) {
+constructState();
+  }
+}
+  }
+
+  /**
+   * Notify this reader that a local core that is a member of a collection 
has been closed.
+   *
+   * Not a public API.  This method should only be called from 
ZkController.
+   *
+   * If no cores are registered for a collection, and there are no {@link 
CollectionStateWatcher}s
+   * for that collection either, the collection watch will be removed.
+   *
+   * @param collection the collection that the core belongs to
+   */
+  public void unregisterCore(String collection) {
+AtomicBoolean reconstructState = new AtomicBoolean(false);
+collectionWatches.compute(collection, (k, v) -> {
+  if (v == null)
+return null;
+  if (v.coreRefCount > 0)
+v.coreRefCount--;
+  if (v.canBeRemoved()) {
+watchedCollectionStates.remove(collection);
+lazyCollectionStates.put(collection, new 
LazyCollectionRef(collection));
+reconstructState.set(true);
+return null;
+  }
+  return v;
+});
+if (reconstructState.get()) {
+  synchronized (getUpdateLock()) {
+constructState();
+  }
+}
+  }
+
+  /**
+   * Register a CollectionStateWatcher to be called when the state of a 
collection changes
+   *
+   * A given CollectionStateWatcher will be only called once.  If you want 
to have a persistent watcher,
+   * it should register itself again in its {@link 
CollectionStateWatcher#onStateChanged(Set, DocCollection)}
+   * method.
+   */
+  public void registerCollectionStateWatcher(String collection, 
CollectionStateWatcher stateWatcher) {
+AtomicBoolean watchSet = new AtomicBoolean(false);
+collectionWatches.compute(collection, (k, v) -> {
+  if (v == null) {
+v = new CollectionWatch();
+watchSet.set(true);
+  }
+  v.stateWatchers.add(stateWatcher);
+  return v;
+});
+if (watchSet.get()) {
+  new StateWatcher(collection).refreshAndWatch();
   synchronized (getUpdateLock()) {
 constructState();
   }
 }
   }
 
+  /**
+   * Block until a CollectionStatePredicate returns true, or the wait 
times out
+   *
+   * Note that the predicate may be called again even after it has 
returned true, so
+   * implementors should avoid changing state within the predicate call 
itself.
--- End diff --

I think we could tighten this code up to ensure the predicate never gets 
called concurrently from two different threads; that would simplify things 
for clients and handle the case of it being called twice when it succeeds 
immediately.
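One way to get the serialization suggested here is to funnel every predicate evaluation through a single-threaded notifier executor. A sketch with illustrative names and types, not Solr's actual code:

```java
import java.util.concurrent.*;
import java.util.function.Predicate;

/**
 * Sketch: state changes may arrive from any thread, but each queued
 * evaluation runs on the single "state-notifier" thread, so a predicate
 * is never invoked from two threads at once.
 */
class SerializedNotifier {
  private final ExecutorService notifier = Executors.newSingleThreadExecutor(r -> {
    Thread t = new Thread(r, "state-notifier");
    t.setDaemon(true);
    return t;
  });

  /** Queue a state change; the predicate runs only on the notifier thread. */
  void onStateChanged(String newState, Predicate<String> predicate, Runnable onMatch) {
    notifier.execute(() -> {
      if (predicate.test(newState)) onMatch.run();
    });
  }

  public static void main(String[] args) throws Exception {
    SerializedNotifier n = new SerializedNotifier();
    CountDownLatch done = new CountDownLatch(1);
    // Fire state changes from several threads; evaluations are still serialized.
    for (String s : new String[]{"down", "recovering", "active"}) {
      new Thread(() -> n.onStateChanged(s, "active"::equals, done::countDown)).start();
    }
    System.out.println(done.await(5, TimeUnit.SECONDS)); // true
  }
}
```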


> Add CollectionWatcher API to ZkStateReader
> --
>
> Key: SOLR-8323
> URL: https://issues.apache.org/jira/browse/SOLR-8323
> Project: Solr
>  

[GitHub] lucene-solr pull request: SOLR-8323

2016-04-28 Thread dragonsinth
Github user dragonsinth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/32#discussion_r61509699
  
--- Diff: 
solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java ---
@@ -1066,32 +1079,201 @@ public static String getCollectionPath(String 
coll) {
 return COLLECTIONS_ZKNODE+"/"+coll + "/state.json";
   }
 
-  public void addCollectionWatch(String coll) {
-if (interestingCollections.add(coll)) {
-  LOG.info("addZkWatch [{}]", coll);
-  new StateWatcher(coll).refreshAndWatch(false);
+  /**
+   * Notify this reader that a local Core is a member of a collection, and 
so that collection
+   * state should be watched.
+   *
+   * Not a public API.  This method should only be called from 
ZkController.
+   *
+   * The number of cores per-collection is tracked, and adding multiple 
cores from the same
+   * collection does not increase the number of watches.
+   *
+   * @param collection the collection that the core is a member of
+   *
+   * @see ZkStateReader#unregisterCore(String)
+   */
+  public void registerCore(String collection) {
+AtomicBoolean reconstructState = new AtomicBoolean(false);
+collectionWatches.compute(collection, (k, v) -> {
+  if (v == null) {
+reconstructState.set(true);
+v = new CollectionWatch();
+  }
+  v.coreRefCount++;
+  return v;
+});
+if (reconstructState.get()) {
+  new StateWatcher(collection).refreshAndWatch();
+  synchronized (getUpdateLock()) {
+constructState();
+  }
+}
+  }
+
+  /**
+   * Notify this reader that a local core that is a member of a collection 
has been closed.
+   *
+   * Not a public API.  This method should only be called from 
ZkController.
+   *
+   * If no cores are registered for a collection, and there are no {@link 
CollectionStateWatcher}s
+   * for that collection either, the collection watch will be removed.
+   *
+   * @param collection the collection that the core belongs to
+   */
+  public void unregisterCore(String collection) {
+AtomicBoolean reconstructState = new AtomicBoolean(false);
+collectionWatches.compute(collection, (k, v) -> {
+  if (v == null)
+return null;
+  if (v.coreRefCount > 0)
+v.coreRefCount--;
+  if (v.canBeRemoved()) {
+watchedCollectionStates.remove(collection);
+lazyCollectionStates.put(collection, new 
LazyCollectionRef(collection));
+reconstructState.set(true);
+return null;
+  }
+  return v;
+});
+if (reconstructState.get()) {
+  synchronized (getUpdateLock()) {
+constructState();
+  }
+}
+  }
+
+  /**
+   * Register a CollectionStateWatcher to be called when the state of a 
collection changes
+   *
+   * A given CollectionStateWatcher will be only called once.  If you want 
to have a persistent watcher,
+   * it should register itself again in its {@link 
CollectionStateWatcher#onStateChanged(Set, DocCollection)}
+   * method.
+   */
+  public void registerCollectionStateWatcher(String collection, 
CollectionStateWatcher stateWatcher) {
+AtomicBoolean watchSet = new AtomicBoolean(false);
+collectionWatches.compute(collection, (k, v) -> {
+  if (v == null) {
+v = new CollectionWatch();
+watchSet.set(true);
+  }
+  v.stateWatchers.add(stateWatcher);
+  return v;
+});
+if (watchSet.get()) {
+  new StateWatcher(collection).refreshAndWatch();
   synchronized (getUpdateLock()) {
 constructState();
   }
 }
   }
 
+  /**
+   * Block until a CollectionStatePredicate returns true, or the wait 
times out
+   *
+   * Note that the predicate may be called again even after it has 
returned true, so
+   * implementors should avoid changing state within the predicate call 
itself.
--- End diff --

I think we could tighten this code up to ensure that the predicate never gets 
called concurrently from two different threads. That would simplify things for 
clients and handle the case of it being called twice when it succeeds 
immediately.
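To make the suggestion concrete, here is a minimal stand-alone sketch (illustrative names only, not the Solr implementation) of serializing predicate calls under one lock and short-circuiting after the first success; `BiPredicate<Set<String>, String>` stands in for `CollectionStatePredicate`:

```java
import java.util.Set;
import java.util.function.BiPredicate;

// Hypothetical sketch: wrap a predicate so calls are serialized under a
// single lock, guaranteeing it never runs concurrently on two threads,
// and is never re-invoked once it has returned true.
class SerializedPredicate {
  private final BiPredicate<Set<String>, String> delegate;
  private final Object lock = new Object();
  private boolean matched = false; // guarded by lock

  SerializedPredicate(BiPredicate<Set<String>, String> delegate) {
    this.delegate = delegate;
  }

  boolean test(Set<String> liveNodes, String state) {
    synchronized (lock) {
      if (matched) return true;   // short-circuit: no second call after success
      matched = delegate.test(liveNodes, state);
      return matched;
    }
  }
}
```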


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 101 - Failure!

2016-04-28 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/101/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:
Could not get expected value  'X val' for path 'x' full output: {   
"responseHeader":{ "status":0, "QTime":0},   "params":{"wt":"json"},   
"context":{ "webapp":"", "path":"/test1", "httpMethod":"GET"},   
"class":"org.apache.solr.core.BlobStoreTestRequestHandler",   "x":null},  from 
server:  null

Stack Trace:
java.lang.AssertionError: Could not get expected value  'X val' for path 'x' 
full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "params":{"wt":"json"},
  "context":{
"webapp":"",
"path":"/test1",
"httpMethod":"GET"},
  "class":"org.apache.solr.core.BlobStoreTestRequestHandler",
  "x":null},  from server:  null
at 
__randomizedtesting.SeedInfo.seed([EA0D46E28961177D:32406BB57EBCB2DD]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:457)
at 
org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:233)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-8323) Add CollectionWatcher API to ZkStateReader

2016-04-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263094#comment-15263094
 ] 

ASF GitHub Bot commented on SOLR-8323:
--

Github user dragonsinth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/32#discussion_r61508998
  
--- Diff: 
solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java ---
@@ -1066,32 +1079,201 @@ public static String getCollectionPath(String 
coll) {
 return COLLECTIONS_ZKNODE+"/"+coll + "/state.json";
   }
 
-  public void addCollectionWatch(String coll) {
-if (interestingCollections.add(coll)) {
-  LOG.info("addZkWatch [{}]", coll);
-  new StateWatcher(coll).refreshAndWatch(false);
+  /**
+   * Notify this reader that a local Core is a member of a collection, and 
so that collection
+   * state should be watched.
+   *
+   * Not a public API.  This method should only be called from 
ZkController.
+   *
+   * The number of cores per-collection is tracked, and adding multiple 
cores from the same
+   * collection does not increase the number of watches.
+   *
+   * @param collection the collection that the core is a member of
+   *
+   * @see ZkStateReader#unregisterCore(String)
+   */
+  public void registerCore(String collection) {
+AtomicBoolean reconstructState = new AtomicBoolean(false);
+collectionWatches.compute(collection, (k, v) -> {
+  if (v == null) {
+reconstructState.set(true);
+v = new CollectionWatch();
+  }
+  v.coreRefCount++;
+  return v;
+});
+if (reconstructState.get()) {
+  new StateWatcher(collection).refreshAndWatch();
+  synchronized (getUpdateLock()) {
+constructState();
+  }
+}
+  }
+
+  /**
+   * Notify this reader that a local core that is a member of a collection 
has been closed.
+   *
+   * Not a public API.  This method should only be called from 
ZkController.
+   *
+   * If no cores are registered for a collection, and there are no {@link 
CollectionStateWatcher}s
+   * for that collection either, the collection watch will be removed.
+   *
+   * @param collection the collection that the core belongs to
+   */
+  public void unregisterCore(String collection) {
+AtomicBoolean reconstructState = new AtomicBoolean(false);
+collectionWatches.compute(collection, (k, v) -> {
+  if (v == null)
+return null;
+  if (v.coreRefCount > 0)
+v.coreRefCount--;
+  if (v.canBeRemoved()) {
+watchedCollectionStates.remove(collection);
+lazyCollectionStates.put(collection, new 
LazyCollectionRef(collection));
+reconstructState.set(true);
+return null;
+  }
+  return v;
+});
+if (reconstructState.get()) {
+  synchronized (getUpdateLock()) {
+constructState();
+  }
+}
+  }
+
+  /**
+   * Register a CollectionStateWatcher to be called when the state of a 
collection changes
+   *
+   * A given CollectionStateWatcher will be only called once.  If you want 
to have a persistent watcher,
+   * it should register itself again in its {@link 
CollectionStateWatcher#onStateChanged(Set, DocCollection)}
+   * method.
+   */
+  public void registerCollectionStateWatcher(String collection, 
CollectionStateWatcher stateWatcher) {
+AtomicBoolean watchSet = new AtomicBoolean(false);
+collectionWatches.compute(collection, (k, v) -> {
+  if (v == null) {
+v = new CollectionWatch();
+watchSet.set(true);
+  }
+  v.stateWatchers.add(stateWatcher);
+  return v;
+});
+if (watchSet.get()) {
+  new StateWatcher(collection).refreshAndWatch();
   synchronized (getUpdateLock()) {
 constructState();
   }
 }
   }
 
+  /**
+   * Block until a CollectionStatePredicate returns true, or the wait 
times out
+   *
+   * Note that the predicate may be called again even after it has 
returned true, so
+   * implementors should avoid changing state within the predicate call 
itself.
+   *
+   * @param collection the collection to watch
+   * @param wait   how long to wait
+   * @param unit   the units of the wait parameter
+   * @param predicate  the predicate to call on state changes
+   * @throws InterruptedException on interrupt
+   * @throws TimeoutException on timeout
+   */
+  public void waitForState(final String collection, long wait, TimeUnit 
unit, CollectionStatePredicate predicate)
--- End diff --

@shalinmangar this is what I was referring to. I know you're working on 
getting Overseer to not peg ZK with polling for state changes on unwatched 
collections; this PR provides an easy mechanism to temporarily watch 
collections of interest.

[GitHub] lucene-solr pull request: SOLR-8323

2016-04-28 Thread dragonsinth
Github user dragonsinth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/32#discussion_r61508998
  
--- Diff: 
solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java ---
@@ -1066,32 +1079,201 @@ public static String getCollectionPath(String 
coll) {
 return COLLECTIONS_ZKNODE+"/"+coll + "/state.json";
   }
 
-  public void addCollectionWatch(String coll) {
-if (interestingCollections.add(coll)) {
-  LOG.info("addZkWatch [{}]", coll);
-  new StateWatcher(coll).refreshAndWatch(false);
+  /**
+   * Notify this reader that a local Core is a member of a collection, and 
so that collection
+   * state should be watched.
+   *
+   * Not a public API.  This method should only be called from 
ZkController.
+   *
+   * The number of cores per-collection is tracked, and adding multiple 
cores from the same
+   * collection does not increase the number of watches.
+   *
+   * @param collection the collection that the core is a member of
+   *
+   * @see ZkStateReader#unregisterCore(String)
+   */
+  public void registerCore(String collection) {
+AtomicBoolean reconstructState = new AtomicBoolean(false);
+collectionWatches.compute(collection, (k, v) -> {
+  if (v == null) {
+reconstructState.set(true);
+v = new CollectionWatch();
+  }
+  v.coreRefCount++;
+  return v;
+});
+if (reconstructState.get()) {
+  new StateWatcher(collection).refreshAndWatch();
+  synchronized (getUpdateLock()) {
+constructState();
+  }
+}
+  }
+
+  /**
+   * Notify this reader that a local core that is a member of a collection 
has been closed.
+   *
+   * Not a public API.  This method should only be called from 
ZkController.
+   *
+   * If no cores are registered for a collection, and there are no {@link 
CollectionStateWatcher}s
+   * for that collection either, the collection watch will be removed.
+   *
+   * @param collection the collection that the core belongs to
+   */
+  public void unregisterCore(String collection) {
+AtomicBoolean reconstructState = new AtomicBoolean(false);
+collectionWatches.compute(collection, (k, v) -> {
+  if (v == null)
+return null;
+  if (v.coreRefCount > 0)
+v.coreRefCount--;
+  if (v.canBeRemoved()) {
+watchedCollectionStates.remove(collection);
+lazyCollectionStates.put(collection, new 
LazyCollectionRef(collection));
+reconstructState.set(true);
+return null;
+  }
+  return v;
+});
+if (reconstructState.get()) {
+  synchronized (getUpdateLock()) {
+constructState();
+  }
+}
+  }
+
+  /**
+   * Register a CollectionStateWatcher to be called when the state of a 
collection changes
+   *
+   * A given CollectionStateWatcher will be only called once.  If you want 
to have a persistent watcher,
+   * it should register itself again in its {@link 
CollectionStateWatcher#onStateChanged(Set, DocCollection)}
+   * method.
+   */
+  public void registerCollectionStateWatcher(String collection, 
CollectionStateWatcher stateWatcher) {
+AtomicBoolean watchSet = new AtomicBoolean(false);
+collectionWatches.compute(collection, (k, v) -> {
+  if (v == null) {
+v = new CollectionWatch();
+watchSet.set(true);
+  }
+  v.stateWatchers.add(stateWatcher);
+  return v;
+});
+if (watchSet.get()) {
+  new StateWatcher(collection).refreshAndWatch();
   synchronized (getUpdateLock()) {
 constructState();
   }
 }
   }
 
+  /**
+   * Block until a CollectionStatePredicate returns true, or the wait 
times out
+   *
+   * Note that the predicate may be called again even after it has 
returned true, so
+   * implementors should avoid changing state within the predicate call 
itself.
+   *
+   * @param collection the collection to watch
+   * @param wait   how long to wait
+   * @param unit   the units of the wait parameter
+   * @param predicate  the predicate to call on state changes
+   * @throws InterruptedException on interrupt
+   * @throws TimeoutException on timeout
+   */
+  public void waitForState(final String collection, long wait, TimeUnit 
unit, CollectionStatePredicate predicate)
--- End diff --

@shalinmangar this is what I was referring to. I know you're working on 
getting Overseer to not peg ZK with polling for state changes on unwatched 
collections; this PR provides an easy mechanism to temporarily watch 
collections of interest.
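As a rough, self-contained sketch of the `waitForState` contract under discussion (stand-in types and names, not Solr code): each published state is offered to the predicate, and the caller returns on the first match or throws on timeout.

```java
import java.util.Set;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.function.BiPredicate;

// Hypothetical mini version of the API: a String stands in for
// DocCollection, and state updates arrive via a queue.
class MiniStateReader {
  private final BlockingQueue<String> updates = new LinkedBlockingQueue<>();

  void publish(String newState) { updates.offer(newState); }

  String waitForState(long wait, TimeUnit unit,
                      BiPredicate<Set<String>, String> predicate,
                      Set<String> liveNodes)
      throws InterruptedException, TimeoutException {
    long deadline = System.nanoTime() + unit.toNanos(wait);
    while (true) {
      long remaining = deadline - System.nanoTime();
      if (remaining <= 0) throw new TimeoutException();
      // block for the next state change, bounded by the remaining wait
      String state = updates.poll(remaining, TimeUnit.NANOSECONDS);
      if (state == null) throw new TimeoutException();
      if (predicate.test(liveNodes, state)) return state;
    }
  }
}
```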



[jira] [Commented] (SOLR-8323) Add CollectionWatcher API to ZkStateReader

2016-04-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263087#comment-15263087
 ] 

ASF GitHub Bot commented on SOLR-8323:
--

Github user dragonsinth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/32#discussion_r61507961
  
--- Diff: 
solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java ---
@@ -131,6 +132,19 @@
 
   private final Runnable securityNodeListener;
 
+  private Map<String, CollectionWatch> collectionWatches = new 
ConcurrentHashMap<>();
--- End diff --

The reason I made a comment about using the concrete type here is that it 
makes it much easier to work with as a developer.  When you can see that the 
static type of this variable is ConcurrentHashMap, that helps you evaluate the 
code that touches it.

For example, when you use IDE features to 'click through' a method call or 
view the javadoc on a called method, you get the ConcurrentHashMap version of 
the javadoc/method instead of the Map version, which helps you more easily 
evaluate the correctness.


> Add CollectionWatcher API to ZkStateReader
> --
>
> Key: SOLR-8323
> URL: https://issues.apache.org/jira/browse/SOLR-8323
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: master
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-8323.patch, SOLR-8323.patch, SOLR-8323.patch, 
> SOLR-8323.patch
>
>
> An API to watch for changes to collection state would be a generally useful 
> thing, both internally and for client use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[GitHub] lucene-solr pull request: SOLR-8323

2016-04-28 Thread dragonsinth
Github user dragonsinth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/32#discussion_r61507961
  
--- Diff: 
solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java ---
@@ -131,6 +132,19 @@
 
   private final Runnable securityNodeListener;
 
+  private Map<String, CollectionWatch> collectionWatches = new 
ConcurrentHashMap<>();
--- End diff --

The reason I made a comment about using the concrete type here is that it 
makes it much easier to work with as a developer.  When you can see that the 
static type of this variable is ConcurrentHashMap, that helps you evaluate the 
code that touches it.

For example, when you use IDE features to 'click through' a method call or 
view the javadoc on a called method, you get the ConcurrentHashMap version of 
the javadoc/method instead of the Map version, which helps you more easily 
evaluate the correctness.
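A sketch of the point (illustrative names, not the actual patch): declaring the field with its concrete type surfaces `ConcurrentHashMap`'s per-key atomicity of `compute()` at every use site.

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for the collectionWatches field. With the concrete
// ConcurrentHashMap type, "click through" on compute() lands on the
// ConcurrentHashMap javadoc, whose remapping function runs atomically per key.
class WatchRegistry {
  static class CollectionWatch { int coreRefCount; }

  private final ConcurrentHashMap<String, CollectionWatch> collectionWatches =
      new ConcurrentHashMap<>();

  int registerCore(String collection) {
    // the whole remapping function executes atomically for this key
    return collectionWatches.compute(collection, (k, v) -> {
      if (v == null) v = new CollectionWatch();
      v.coreRefCount++;
      return v;
    }).coreRefCount;
  }
}
```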






[GitHub] lucene-solr pull request: SOLR-8323

2016-04-28 Thread dragonsinth
Github user dragonsinth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/32#discussion_r61507382
  
--- Diff: 
solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java ---
@@ -484,6 +498,12 @@ private void refreshLegacyClusterState(Watcher watcher)
 }
 this.legacyCollectionStates = loadedData.getCollectionStates();
 this.legacyClusterStateVersion = stat.getVersion();
+for (Map.Entry<String, ClusterState.CollectionRef> entry : 
this.legacyCollectionStates.entrySet()) {
+  if (entry.getValue().isLazilyLoaded() == false) {
+// a watched collection - trigger notifications
+notifyStateWatchers(entry.getKey(), entry.getValue().get());
+  }
+}
--- End diff --

I think it would add a lot of value here to actually check differences.  
There really wouldn't be much computational work since you could restrict it to 
watched collections.  Something like:

```
for (Map.Entry<String, CollectionWatch> watchEntry : 
this.collectionWatches.entrySet()) {
  String coll = watchEntry.getKey();
  CollectionWatch collWatch = watchEntry.getValue();
  DocCollection newState = 
this.legacyCollectionStates.get(coll).get();
  if (!collWatch.stateWatchers.isEmpty()
  && !Objects.equals(oldCollectionStates.get(coll).get(), 
newState)) {
notifyStateWatchers(coll, newState);
  }
}
```







[jira] [Commented] (SOLR-8323) Add CollectionWatcher API to ZkStateReader

2016-04-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263080#comment-15263080
 ] 

ASF GitHub Bot commented on SOLR-8323:
--

Github user dragonsinth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/32#discussion_r61507382
  
--- Diff: 
solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java ---
@@ -484,6 +498,12 @@ private void refreshLegacyClusterState(Watcher watcher)
 }
 this.legacyCollectionStates = loadedData.getCollectionStates();
 this.legacyClusterStateVersion = stat.getVersion();
+for (Map.Entry<String, ClusterState.CollectionRef> entry : 
this.legacyCollectionStates.entrySet()) {
+  if (entry.getValue().isLazilyLoaded() == false) {
+// a watched collection - trigger notifications
+notifyStateWatchers(entry.getKey(), entry.getValue().get());
+  }
+}
--- End diff --

I think it would add a lot of value here to actually check differences.  
There really wouldn't be much computational work since you could restrict it to 
watched collections.  Something like:

```
for (Map.Entry<String, CollectionWatch> watchEntry : 
this.collectionWatches.entrySet()) {
  String coll = watchEntry.getKey();
  CollectionWatch collWatch = watchEntry.getValue();
  DocCollection newState = 
this.legacyCollectionStates.get(coll).get();
  if (!collWatch.stateWatchers.isEmpty()
  && !Objects.equals(oldCollectionStates.get(coll).get(), 
newState)) {
notifyStateWatchers(coll, newState);
  }
}
```



> Add CollectionWatcher API to ZkStateReader
> --
>
> Key: SOLR-8323
> URL: https://issues.apache.org/jira/browse/SOLR-8323
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: master
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-8323.patch, SOLR-8323.patch, SOLR-8323.patch, 
> SOLR-8323.patch
>
>
> An API to watch for changes to collection state would be a generally useful 
> thing, both internally and for client use.







[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+115) - Build # 16607 - Failure!

2016-04-28 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16607/
Java: 64bit/jdk-9-ea+115 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslAndClientAuth

Error Message:
Unexpected exception type, expected SSLHandshakeException

Stack Trace:
junit.framework.AssertionFailedError: Unexpected exception type, expected 
SSLHandshakeException
at 
__randomizedtesting.SeedInfo.seed([6F2E79906F4E6D38:BCAA905DFD8336C4]:0)
at 
org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2682)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterJettys(TestMiniSolrCloudClusterSSL.java:283)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterWithCollectionCreations(TestMiniSolrCloudClusterSSL.java:185)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterWithNodeReplacement(TestMiniSolrCloudClusterSSL.java:147)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslAndClientAuth(TestMiniSolrCloudClusterSSL.java:129)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:531)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
 

[jira] [Commented] (SOLR-8323) Add CollectionWatcher API to ZkStateReader

2016-04-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263038#comment-15263038
 ] 

ASF GitHub Bot commented on SOLR-8323:
--

Github user dragonsinth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/32#discussion_r61504824
  
--- Diff: 
solr/solrj/src/java/org/apache/solr/common/cloud/DocCollection.java ---
@@ -210,6 +213,38 @@ public Replica getReplica(String coreNodeName) {
 return null;
   }
 
+  /**
+   * Check that all replicas in a collection are live
+   *
+   * @see CollectionStatePredicate
+   */
+  public static boolean isFullyActive(Set<String> liveNodes, DocCollection 
collectionState) {
+Objects.requireNonNull(liveNodes);
+if (collectionState == null)
+  return false;
+for (Slice slice : collectionState) {
+  for (Replica replica : slice) {
+if (replica.isActive(liveNodes) == false)
+  return false;
+  }
+}
+return true;
+  }
+
+  /**
+   * Returns true if the passed in DocCollection is null
+   *
+   * @see CollectionStatePredicate
+   */
+  public static boolean isDeleted(Set<String> liveNodes, DocCollection 
collectionState) {
+return collectionState == null;
+  }
--- End diff --

maybe `exists`? isDeleted implies that it used to exist, but it may have 
never been created


> Add CollectionWatcher API to ZkStateReader
> --
>
> Key: SOLR-8323
> URL: https://issues.apache.org/jira/browse/SOLR-8323
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: master
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-8323.patch, SOLR-8323.patch, SOLR-8323.patch, 
> SOLR-8323.patch
>
>
> An API to watch for changes to collection state would be a generally useful 
> thing, both internally and for client use.







[GitHub] lucene-solr pull request: SOLR-8323

2016-04-28 Thread dragonsinth
Github user dragonsinth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/32#discussion_r61504824
  
--- Diff: 
solr/solrj/src/java/org/apache/solr/common/cloud/DocCollection.java ---
@@ -210,6 +213,38 @@ public Replica getReplica(String coreNodeName) {
 return null;
   }
 
+  /**
+   * Check that all replicas in a collection are live
+   *
+   * @see CollectionStatePredicate
+   */
+  public static boolean isFullyActive(Set<String> liveNodes, DocCollection 
collectionState) {
+Objects.requireNonNull(liveNodes);
+if (collectionState == null)
+  return false;
+for (Slice slice : collectionState) {
+  for (Replica replica : slice) {
+if (replica.isActive(liveNodes) == false)
+  return false;
+  }
+}
+return true;
+  }
+
+  /**
+   * Returns true if the passed in DocCollection is null
+   *
+   * @see CollectionStatePredicate
+   */
+  public static boolean isDeleted(Set<String> liveNodes, DocCollection 
collectionState) {
+return collectionState == null;
+  }
--- End diff --

maybe `exists`? isDeleted implies that it used to exist, but it may have 
never been created
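To illustrate the naming point with a toy sketch (stand-in types, not the patch itself): the two predicates carry the same information with inverted sense, and `exists` avoids implying the collection was ever created.

```java
import java.util.Set;

// Toy stand-ins mirroring the CollectionStatePredicate shape:
// (Set<String> liveNodes, DocCollection state) -> boolean.
class NamingSketch {
  static class DocCollection {}

  // as written in the patch: true when the collection state is null
  static boolean isDeleted(Set<String> liveNodes, DocCollection state) {
    return state == null;
  }

  // the suggested alternative: same test, inverted sense, and no
  // implication that the collection previously existed
  static boolean exists(Set<String> liveNodes, DocCollection state) {
    return state != null;
  }
}
```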






[jira] [Commented] (SOLR-8323) Add CollectionWatcher API to ZkStateReader

2016-04-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263034#comment-15263034
 ] 

ASF GitHub Bot commented on SOLR-8323:
--

Github user dragonsinth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/32#discussion_r61504670
  
--- Diff: 
solr/solrj/src/java/org/apache/solr/common/cloud/CollectionStateWatcher.java ---
@@ -0,0 +1,42 @@
+package org.apache.solr.common.cloud;
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+*/
+
+import java.util.Set;
+
+/**
+ * Callback registered with {@link 
ZkStateReader#registerCollectionStateWatcher(String, CollectionStateWatcher)}
+ * and called whenever the collection state changes.
+ */
+public interface CollectionStateWatcher {
+
+  /**
+   * Called when the collection we are registered against has a change of 
state
+   *
+   * Note that, due to the way Zookeeper watchers are implemented, a 
single call may be
+   * the result of several state changes
+   *
+   * A watcher is unregistered after it has been called once.  To make a 
watcher persistent,
+   * implementors should re-register during this call.
+   *
+   * @param liveNodes   the set of live nodes
+   * @param collectionState the new collection state
+   */
+  void onStateChanged(Set<String> liveNodes, DocCollection collectionState);
+
+}
--- End diff --

I just want to toss out an idea here after looking at this some more.  I 
notice that CollectionStateWatcher and CollectionStatePredicate are nearly 
identical.  What would you think about combining the two into a single 
interface?

The signature could be e.g.:

boolean stateChanged(Set<String> liveNodes, DocCollection collectionState)

In a watch context, the return value means "keep watching?".  So return 
true to reset the watcher and continue getting updates, or return false to stop 
watching.

In a predicate context, the return value means "keep waiting?".  So return 
true to keep waiting, or return false if you've finally seen what you were 
waiting for.

They'll both have the same semantic meaning either way.
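For what it's worth, the dual semantics can be exercised in a self-contained 
sketch; `StateObserver` and the `String` stand-in for `DocCollection` below 
are hypothetical, not the actual Solr types:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Hypothetical merged interface: the boolean answers "keep watching?" in a
// watch context and "keep waiting?" in a predicate context.
interface StateObserver {
    boolean stateChanged(Set<String> liveNodes, String collectionState);
}

public class CombinedInterfaceSketch {
    public static void main(String[] args) {
        // Predicate-style use: keep waiting until the collection is active.
        StateObserver waitForActive = (live, state) -> !"active".equals(state);

        List<String> seen = new ArrayList<>();
        for (String state : new String[] {"down", "recovering", "active"}) {
            seen.add(state);
            if (!waitForActive.stateChanged(Set.of("node1"), state)) {
                break;  // returned false: we have seen what we were waiting for
            }
        }
        System.out.println(seen);  // [down, recovering, active]
    }
}
```

The same lambda could be handed to a watch registration, where returning false 
would simply stop further callbacks instead of ending a wait loop.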


> Add CollectionWatcher API to ZkStateReader
> --
>
> Key: SOLR-8323
> URL: https://issues.apache.org/jira/browse/SOLR-8323
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: master
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-8323.patch, SOLR-8323.patch, SOLR-8323.patch, 
> SOLR-8323.patch
>
>
> An API to watch for changes to collection state would be a generally useful 
> thing, both internally and for client use.






[GitHub] lucene-solr pull request: SOLR-8323

2016-04-28 Thread dragonsinth
Github user dragonsinth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/32#discussion_r61504670
  
--- Diff: 
solr/solrj/src/java/org/apache/solr/common/cloud/CollectionStateWatcher.java ---
@@ -0,0 +1,42 @@
+package org.apache.solr.common.cloud;
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+*/
+
+import java.util.Set;
+
+/**
+ * Callback registered with {@link 
ZkStateReader#registerCollectionStateWatcher(String, CollectionStateWatcher)}
+ * and called whenever the collection state changes.
+ */
+public interface CollectionStateWatcher {
+
+  /**
+   * Called when the collection we are registered against has a change of 
state
+   *
+   * Note that, due to the way Zookeeper watchers are implemented, a 
single call may be
+   * the result of several state changes
+   *
+   * A watcher is unregistered after it has been called once.  To make a 
watcher persistent,
+   * implementors should re-register during this call.
+   *
+   * @param liveNodes   the set of live nodes
+   * @param collectionState the new collection state
+   */
+  void onStateChanged(Set<String> liveNodes, DocCollection collectionState);
+
+}
--- End diff --

I just want to toss out an idea here after looking at this some more.  I 
notice that CollectionStateWatcher and CollectionStatePredicate are nearly 
identical.  What would you think about combining the two into a single 
interface?

The signature could be e.g.:

boolean stateChanged(Set<String> liveNodes, DocCollection collectionState)

In a watch context, the return value means "keep watching?".  So return 
true to reset the watcher and continue getting updates, or return false to stop 
watching.

In a predicate context, the return value means "keep waiting?".  So return 
true to keep waiting, or return false if you've finally seen what you were 
waiting for.

They'll both have the same semantic meaning either way.





[jira] [Commented] (LUCENE-7262) Add back the "estimate match count" optimization

2016-04-28 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263029#comment-15263029
 ] 

Robert Muir commented on LUCENE-7262:
-

I think the problem at LUCENE-7051 time was that points didn't have any 
statistics. Now they do, so I think our job is easier.

> Add back the "estimate match count" optimization
> 
>
> Key: LUCENE-7262
> URL: https://issues.apache.org/jira/browse/LUCENE-7262
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7262.patch
>
>
> Follow-up to my last message on LUCENE-7051: I removed this optimization a 
> while ago because it made things a bit more complicated but did not seem to 
> help with point queries. However the reason why it did not seem to help was 
> that the benchmark only runs queries that match 25% of the dataset. This 
> makes the run time completely dominated by calls to FixedBitSet.set so the 
> call to FixedBitSet.cardinality() looks free. However with slightly sparser 
> queries like the geo benchmark generates (dense enough to trigger the 
> creation of a FixedBitSet but sparse enough so that FixedBitSet.set does not 
> dominate the run time), one can notice speed-ups when this call is skipped.
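The sparse/dense trade-off described above can be sketched outside Lucene; the 
uniform-distribution statistics model and the fixed doc count below are 
assumptions for illustration, not the actual patch:

```java
import java.util.BitSet;

public class EstimateMatchCountSketch {
    static final int MAX_DOC = 1000;

    // Stand-in for point statistics: assume doc values are uniform over
    // [min, max], so a range query's match count is proportional to the
    // overlap between the query range and the value range.
    static long estimateMatches(long min, long max, long qMin, long qMax) {
        long lo = Math.max(min, qMin), hi = Math.min(max, qMax);
        if (hi < lo) return 0;
        return (hi - lo + 1) * MAX_DOC / (max - min + 1);
    }

    public static void main(String[] args) {
        long estimate = estimateMatches(0, 999, 0, 249);  // ~25% of docs

        // Dense enough to justify a bitset; keep the estimate around so the
        // O(maxDoc/64) cardinality() pass after collection can be skipped.
        BitSet bits = new BitSet(MAX_DOC);
        for (int doc = 0; doc <= 249; doc++) {
            bits.set(doc);
        }
        System.out.println(estimate);            // 250
        System.out.println(bits.cardinality());  // 250, the pass we avoided
    }
}
```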






[GitHub] lucene-solr pull request: SOLR-8323

2016-04-28 Thread dragonsinth
Github user dragonsinth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/32#discussion_r61504017
  
--- Diff: 
solr/solrj/src/java/org/apache/solr/common/cloud/CollectionStatePredicate.java 
---
@@ -0,0 +1,41 @@
+package org.apache.solr.common.cloud;
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * Interface to determine if a collection state matches a required state
+ *
+ * @see ZkStateReader#waitForState(String, long, TimeUnit, 
CollectionStatePredicate)
+ */
+public interface CollectionStatePredicate {
+
+  /**
+   * Check the collection state matches a required state
+   *
+   * The collectionState parameter may be null if the collection does not 
exist
+   * or has been deleted
--- End diff --

This wants to be `@param collectionState the current collection state, or 
null if the collection doesn't exist or has been deleted`





[jira] [Commented] (SOLR-8323) Add CollectionWatcher API to ZkStateReader

2016-04-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263020#comment-15263020
 ] 

ASF GitHub Bot commented on SOLR-8323:
--

Github user dragonsinth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/32#discussion_r61504017
  
--- Diff: 
solr/solrj/src/java/org/apache/solr/common/cloud/CollectionStatePredicate.java 
---
@@ -0,0 +1,41 @@
+package org.apache.solr.common.cloud;
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * Interface to determine if a collection state matches a required state
+ *
+ * @see ZkStateReader#waitForState(String, long, TimeUnit, 
CollectionStatePredicate)
+ */
+public interface CollectionStatePredicate {
+
+  /**
+   * Check the collection state matches a required state
+   *
+   * The collectionState parameter may be null if the collection does not 
exist
+   * or has been deleted
--- End diff --

This wants to be `@param collectionState the current collection state, or 
null if the collection doesn't exist or has been deleted`


> Add CollectionWatcher API to ZkStateReader
> --
>
> Key: SOLR-8323
> URL: https://issues.apache.org/jira/browse/SOLR-8323
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: master
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-8323.patch, SOLR-8323.patch, SOLR-8323.patch, 
> SOLR-8323.patch
>
>
> An API to watch for changes to collection state would be a generally useful 
> thing, both internally and for client use.






[jira] [Commented] (SOLR-8208) DocTransformer executes sub-queries

2016-04-28 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263016#comment-15263016
 ] 

Mikhail Khludnev commented on SOLR-8208:


[~caomanhdat], your last patch is awesome!!! 
For reference, an attempt to request a cloud collection via EmbeddedSolrServer 
was too naive:
{noformat}
ERROR (qtp1310704163-67) [n:127.0.0.1:65356_ c:people s:shard1 r:core_node2 
x:people_shard1_replica1] o.a.s.s.HttpSolrCall 
null:org.apache.solr.common.SolrException: No such core: departments
at 
org.apache.solr.client.solrj.embedded.EmbeddedSolrServer.request(EmbeddedSolrServer.java:149)
at 
org.apache.solr.response.transform.LocalSubQueryAugmenter.transform(SubQueryAugmenterFactory.java:239)
at org.apache.solr.response.DocsStreamer.next(DocsStreamer.java:146)
at org.apache.solr.response.DocsStreamer.next(DocsStreamer.java:1)
{noformat}

> DocTransformer executes sub-queries
> ---
>
> Key: SOLR-8208
> URL: https://issues.apache.org/jira/browse/SOLR-8208
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>  Labels: features, newbie
> Attachments: SOLR-8208.diff, SOLR-8208.patch, SOLR-8208.patch, 
> SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, 
> SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, 
> SOLR-8208.patch, SOLR-8208.patch
>
>
> The initial idea was to return the "from" side of a query-time join via a 
> doctransformer. I suppose it isn't query-time join specific, so let's allow 
> specifying any query and parameters for it; let's call it a sub-query. But it 
> might be problematic to escape subquery parameters, including local ones, 
> e.g. what if the subquery needs to specify its own doctransformer in =\[..\] ?
> I suppose we can allow to specify subquery parameter prefix:
> {code}
> ..=id,[subquery paramPrefix=subq1. 
> fromIndex=othercore],score,..={!term f=child_id 
> v=$subq1.row.id}=3=price&..
> {code}   
> * {{paramPrefix=subq1.}} shifts parameters for subquery: {{subq1.q}} turns to 
> {{q}} for subquery, {{subq1.rows}} to {{rows}}
> * {{fromIndex=othercore}} optional param allows to run subquery on other 
> core, like it works on query time join
> * the itchiest one is referencing a document field from subquery 
> parameters; here I propose to use the local param {{v}} and param dereference 
> {{v=$param}}: every document field implicitly introduces a parameter for the 
> subquery $\{paramPrefix\}row.$\{fieldName\}, thus the above subquery is 
> q=child_id:, presumably we can drop "row." in the middle 
> (reducing to v=$subq1.id), until someone deals with {{rows}}, {{sort}} fields. 
> * \[subquery\], or \[query\], or ? 
> Caveat: it is likely to be slow; it handles only the search result page, not 
> the entire result set. 
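The paramPrefix shifting in the bullets above can be sketched as a plain map 
transformation; the class and method names here are illustrative, not the 
patch's actual code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of prefix shifting: parameters such as "subq1.q" and "subq1.rows"
// are stripped of the prefix and become the subquery's own "q" and "rows";
// parameters without the prefix do not reach the subquery at all.
public class SubqueryParamShift {
    static Map<String, String> shift(Map<String, String> params, String prefix) {
        Map<String, String> sub = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (e.getKey().startsWith(prefix)) {
                sub.put(e.getKey().substring(prefix.length()), e.getValue());
            }
        }
        return sub;
    }

    public static void main(String[] args) {
        Map<String, String> req = new LinkedHashMap<>();
        req.put("q", "name:john");
        req.put("subq1.q", "{!term f=child_id v=$subq1.row.id}");
        req.put("subq1.rows", "3");
        System.out.println(shift(req, "subq1."));
        // {q={!term f=child_id v=$subq1.row.id}, rows=3}
    }
}
```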






[jira] [Commented] (SOLR-8323) Add CollectionWatcher API to ZkStateReader

2016-04-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263015#comment-15263015
 ] 

ASF GitHub Bot commented on SOLR-8323:
--

Github user dragonsinth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/32#discussion_r61503724
  
--- Diff: 
solr/solrj/src/java/org/apache/solr/client/solrj/impl/CloudSolrClient.java ---
@@ -572,6 +574,40 @@ public void downloadConfig(String configName, Path 
downloadPath) throws IOExcept
 zkStateReader.getConfigManager().downloadConfigDir(configName, 
downloadPath);
   }
 
+  /**
+   * Block until a collection state matches a predicate, or a timeout
+   *
+   * Note that the predicate may be called again even after it has 
returned true, so
+   * implementors should avoid changing state within the predicate call 
itself.
+   *
+   * @param collection the collection to watch
+   * @param wait   how long to wait
+   * @param unit   the units of the wait parameter
+   * @param predicate  a {@link CollectionStatePredicate} to check the 
collection state
+   * @throws InterruptedException on interrupt
+   * @throws TimeoutException on timeout
+   */
+  public void waitForState(String collection, long wait, TimeUnit unit, CollectionStatePredicate predicate)
+      throws InterruptedException, TimeoutException {
+    connect();
+    zkStateReader.waitForState(collection, wait, unit, predicate);
+  }
+
+  /**
+   * Register a CollectionStateWatcher to be called when the cluster state for a collection changes
+   *
+   * Note that the watcher is unregistered after it has been called once.  To make a watcher persistent,
+   * it should re-register itself in its {@link CollectionStateWatcher#onStateChanged(Set, DocCollection)}
+   * call
+   *
+   * @param collection the collection to watch
+   * @param watcher    a watcher that will be called when the state changes
+   */
+  public void registerCollectionStateWatcher(String collection, CollectionStateWatcher watcher) {
+    connect();
+    zkStateReader.registerCollectionStateWatcher(collection, watcher);
+  }
+
--- End diff --

I would note that getZkStateReader() is a public method; is there value in 
adding these forwarding methods?


> Add CollectionWatcher API to ZkStateReader
> --
>
> Key: SOLR-8323
> URL: https://issues.apache.org/jira/browse/SOLR-8323
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: master
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-8323.patch, SOLR-8323.patch, SOLR-8323.patch, 
> SOLR-8323.patch
>
>
> An API to watch for changes to collection state would be a generally useful 
> thing, both internally and for client use.






[GitHub] lucene-solr pull request: SOLR-8323

2016-04-28 Thread dragonsinth
Github user dragonsinth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/32#discussion_r61503724
  
--- Diff: 
solr/solrj/src/java/org/apache/solr/client/solrj/impl/CloudSolrClient.java ---
@@ -572,6 +574,40 @@ public void downloadConfig(String configName, Path 
downloadPath) throws IOExcept
 zkStateReader.getConfigManager().downloadConfigDir(configName, 
downloadPath);
   }
 
+  /**
+   * Block until a collection state matches a predicate, or a timeout
+   *
+   * Note that the predicate may be called again even after it has 
returned true, so
+   * implementors should avoid changing state within the predicate call 
itself.
+   *
+   * @param collection the collection to watch
+   * @param wait   how long to wait
+   * @param unit   the units of the wait parameter
+   * @param predicate  a {@link CollectionStatePredicate} to check the 
collection state
+   * @throws InterruptedException on interrupt
+   * @throws TimeoutException on timeout
+   */
+  public void waitForState(String collection, long wait, TimeUnit unit, CollectionStatePredicate predicate)
+      throws InterruptedException, TimeoutException {
+    connect();
+    zkStateReader.waitForState(collection, wait, unit, predicate);
+  }
+
+  /**
+   * Register a CollectionStateWatcher to be called when the cluster state for a collection changes
+   *
+   * Note that the watcher is unregistered after it has been called once.  To make a watcher persistent,
+   * it should re-register itself in its {@link CollectionStateWatcher#onStateChanged(Set, DocCollection)}
+   * call
+   *
+   * @param collection the collection to watch
+   * @param watcher    a watcher that will be called when the state changes
+   */
+  public void registerCollectionStateWatcher(String collection, CollectionStateWatcher watcher) {
+    connect();
+    zkStateReader.registerCollectionStateWatcher(collection, watcher);
+  }
+
--- End diff --

I would note that getZkStateReader() is a public method; is there value in 
adding these forwarding methods?





[jira] [Commented] (SOLR-9038) Ability to create/delete/list snapshots for a solr collection

2016-04-28 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263000#comment-15263000
 ] 

Hrishikesh Gadre commented on SOLR-9038:


[~dsmiley] Sorry for the confusion. Yes, it makes sense to defer any "copying" 
behavior to SOLR-5750. We can customize the "backup functionality" to incorporate 
this (e.g. the ability to back up a previously created snapshot). But from the 
perspective of this JIRA, let's focus on indexed data only.

As you mentioned in your earlier comments, we can use the "commit" workflow to 
create a named snapshot. But we still need a way to list the previously created 
snapshots and an ability to delete the snapshots. Here the "delete snapshot" 
functionality can just remove the corresponding index commit metadata. This way 
during subsequent index merge, Lucene can perform the cleanup.

Does that make sense? If yes, I have the following questions:
- What would the "list snapshots" and "delete snapshot" APIs look like? Do we 
need to provide them just at the core level, or at the collection level as well?
- Would we allow "destructive" operations (e.g. delete replica/shard) when we 
have one or more snapshots?
- It seems to me that the "commit" request will be executed by all replicas for 
a given collection. What should happen when a "commit" request cannot be 
processed by a replica (since it may be down)? We may need to ensure that 
during replica "recovery" it also fetches the commit metadata.
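At the core level, the list/delete bookkeeping being discussed could reduce to 
a small registry mapping snapshot names to pinned commit generations; this is 
a hypothetical sketch of the idea, not the actual SOLR-9038 API:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Sketch: a named-snapshot registry over Lucene commit generations.
// "Delete" only removes the name->generation entry; the index files are
// reclaimed later, once no deletion policy still pins that commit.
public class SnapshotRegistrySketch {
    private final Map<String, Long> snapshots = new LinkedHashMap<>();

    void create(String name, long commitGen) {
        if (snapshots.putIfAbsent(name, commitGen) != null)
            throw new IllegalArgumentException("snapshot exists: " + name);
    }

    Set<String> list() { return snapshots.keySet(); }

    // Removing the metadata un-pins the commit; actual cleanup happens on
    // a later merge / deletion-policy pass, as described above.
    void delete(String name) { snapshots.remove(name); }

    public static void main(String[] args) {
        SnapshotRegistrySketch r = new SnapshotRegistrySketch();
        r.create("before-experiment", 42L);
        System.out.println(r.list()); // [before-experiment]
        r.delete("before-experiment");
        System.out.println(r.list()); // []
    }
}
```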

> Ability to create/delete/list snapshots for a solr collection
> -
>
> Key: SOLR-9038
> URL: https://issues.apache.org/jira/browse/SOLR-9038
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Hrishikesh Gadre
>
> Currently work is under-way to implement backup/restore API for Solr cloud 
> (SOLR-5750). SOLR-5750 is about providing an ability to "copy" index files 
> and collection metadata to a configurable location. 
> In addition to this, we should also provide a facility to create "named" 
> snapshots for Solr collection. Here by "snapshot" I mean configuring the 
> underlying Lucene IndexDeletionPolicy to not delete a specific commit point 
> (e.g. using PersistentSnapshotIndexDeletionPolicy). This should not be 
> confused with SOLR-5340 which implements core level "backup" functionality.
> The primary motivation of this feature is to decouple recording/preserving a 
> known consistent state of a collection from actually "copying" the relevant 
> files to a physically separate location. This decoupling has a number of 
> advantages:
> - We can use specialized data-copying tools for transferring Solr index 
> files. e.g. in Hadoop environment, typically 
> [distcp|https://hadoop.apache.org/docs/r1.2.1/distcp2.html] tool is used to 
> copy files from one location to other. This tool provides various options to 
> configure degree of parallelism, bandwidth usage as well as integration with 
> different types and versions of file systems (e.g. AWS S3, Azure Blob store 
> etc.)
> - This separation of concern would also help Solr to focus on the key 
> functionality (i.e. querying and indexing) while delegating the copy 
> operation to the tools built for that purpose.
> - Users can decide if/when to copy the data files as opposed to creating a 
> snapshot. e.g. a user may want to create a snapshot of a collection before 
> making an experimental change (e.g. updating/deleting docs, a schema change 
> etc.). If the experiment is successful, he can delete the snapshot (without 
> having to copy the files). If the experiment fails, then he can copy the 
> files associated with the snapshot and restore.
> Note that Apache Blur project is also providing a similar feature 
> [BLUR-132|https://issues.apache.org/jira/browse/BLUR-132]






[JENKINS] Lucene-Solr-Tests-5.5-Java8 - Build # 18 - Still Failing

2016-04-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.5-Java8/18/

1 tests failed.
FAILED:  org.apache.solr.update.AutoCommitTest.testMaxDocs

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([268012AECF9032D8:9F01C471E37A3652]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:754)
at 
org.apache.solr.update.AutoCommitTest.testMaxDocs(AutoCommitTest.java:198)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: 

00


request was:q=id:14=standard=0=20=2.2
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:747)
... 40 more




Build Log:
[...truncated 11524 lines...]
   [junit4] Suite: org.apache.solr.update.AutoCommitTest
   [junit4]   2> Creating dataDir: 

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 549 - Still Failing!

2016-04-28 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/549/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.update.AutoCommitTest.testMaxTime

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([9972DD1CA212E369:386A0FE3C887F55]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:780)
at 
org.apache.solr.update.AutoCommitTest.testMaxTime(AutoCommitTest.java:243)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: 

00


request was:q=id:529=standard=0=20=2.2
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:773)
... 40 more




Build Log:
[...truncated 11442 lines...]
   [junit4] Suite: org.apache.solr.update.AutoCommitTest
   

[jira] [Commented] (LUCENE-6968) LSH Filter

2016-04-28 Thread Andy Hind (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15262918#comment-15262918
 ] 

Andy Hind commented on LUCENE-6968:
---

[~yo...@apache.org] has murmurhash3_x64_128 here 
https://github.com/yonik/java_util/blob/master/src/util/hash/MurmurHash3.java


> LSH Filter
> --
>
> Key: LUCENE-6968
> URL: https://issues.apache.org/jira/browse/LUCENE-6968
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Cao Manh Dat
>Assignee: Tommaso Teofili
> Attachments: LUCENE-6968.4.patch, LUCENE-6968.patch, 
> LUCENE-6968.patch, LUCENE-6968.patch
>
>
> I'm planning to implement LSH, which supports queries like this:
> {quote}
> Find similar documents that have a 0.8 or higher similarity score with a 
> given document. The similarity measure can be cosine, Jaccard, Euclidean, etc.
> {quote}
> For example, given the following corpus:
> {quote}
> 1. Solr is an open source search engine based on Lucene
> 2. Solr is an open source enterprise search engine based on Lucene
> 3. Solr is a popular open source enterprise search engine based on Lucene
> 4. Apache Lucene is a high-performance, full-featured text search engine 
> library written entirely in Java
> {quote}
> We want to find documents that have a 0.6 or higher Jaccard score with this 
> doc:
> {quote}
> Solr is an open source search engine
> {quote}
> It will return only docs 1, 2, and 3 (MoreLikeThis would also return doc 4)
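The Jaccard measure described in the issue can be sketched directly over token sets. This is a minimal standalone illustration (a hypothetical helper, not code from the attached patches, and it ignores the hashing/banding that makes real LSH sub-linear):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class JaccardSketch {
    // Jaccard similarity: |A ∩ B| / |A ∪ B| over token sets.
    public static double jaccard(Set<String> a, Set<String> b) {
        Set<String> inter = new HashSet<>(a);
        inter.retainAll(b);
        Set<String> union = new HashSet<>(a);
        union.addAll(b);
        return union.isEmpty() ? 0.0 : (double) inter.size() / union.size();
    }

    // Naive whitespace tokenizer, standing in for a real analyzer chain.
    public static Set<String> tokens(String text) {
        return new HashSet<>(Arrays.asList(text.toLowerCase().split("\\s+")));
    }

    public static void main(String[] args) {
        String query = "Solr is an open source search engine";
        String doc1 = "Solr is an open source search engine based on Lucene";
        String doc4 = "Apache Lucene is a high-performance, full-featured text "
                + "search engine library written entirely in Java";
        System.out.println(jaccard(tokens(query), tokens(doc1))); // high overlap, above 0.6
        System.out.println(jaccard(tokens(query), tokens(doc4))); // low overlap, below 0.6
    }
}
```

With the corpus above, doc 1 shares 7 of 10 union tokens with the query (0.7), while doc 4 shares only a few, which is why a 0.6 cutoff keeps docs 1-3 and drops doc 4.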



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9028) fix bugs in (add sanity checks for) SSL clientAuth testing

2016-04-28 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15262910#comment-15262910
 ] 

Hoss Man commented on SOLR-9028:


Note: one small diff between the last patch and this commit is that in the 
patch I had cranked up the odds of randomizing the trySslClientAuth boolean 
in SolrTestCaseJ4 ... I dialed that back down to the existing odds before 
committing.

Working on backport now ... there are some significant changes in the HttpClient 
config stuff between 6x and master due to SOLR-4509, so this won't be trivial.

> fix bugs in (add sanity checks for) SSL clientAuth testing
> --
>
> Key: SOLR-9028
> URL: https://issues.apache.org/jira/browse/SOLR-9028
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: SOLR-9028.patch, SOLR-9028.patch, SOLR-9028.patch, 
> SOLR-9028.patch, SOLR-9028.patch, SOLR-9028.patch, os.x.failure.txt
>
>
> While looking into SOLR-8970 I realized there was a whole heap of problems 
> with how clientAuth was being handled in tests.  Notably: it wasn't actually 
> being used when the randomization selects it (apparently due to a copy/paste 
> mistake in SOLR-7166).  But there are a few other misc issues (improper usage 
> of sysprops overrides for tests, misuse of keystore/truststore in test 
> clients, etc.)
> I'm working up a patch to fix all of this, and add some much needed tests to 
> *explicitly* verify both SSL and clientAuth that will include some "false 
> positive" verifications, and some "test the test" checks.






[jira] [Comment Edited] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-04-28 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15262895#comment-15262895
 ] 

Kevin Risden edited comment on SOLR-8593 at 4/28/16 8:20 PM:
-

Ok made a bunch of progress on the jira/solr-8593 branch:
* cleaned up the tests so that there are only a few remaining items to address 
(outlined below)
* added support for float/double types
* fixed a CloudSolrClient resource leak

Left to do:
* Add support for aggregationMode (facets and map_reduce) and their parameters
* ensure the pushdown to facets/map_reduce works correctly
* figure out the CloudSolrClient cache (currently not caching and creating new 
per stream)
* Push down aggregates to Solr
* add tests to ensure the proper plan is being generated by Calcite
* figure out avg(int) problem in tests.
** avg(int) returns int by design. need to figure out if casting is right for 
the tests
* figure out sort asc by default in tests
** this currently doesn't sort properly even though I thought that the right 
approach was sort on _version_.
* handle added dependencies properly and upgrade to latest Calcite/Avatica?


was (Author: risdenk):
Ok made a bunch of progress on the jira/solr-8593 branch:
* cleaned up the tests so that there are only a few remaining items to address 
(outlined below)
* added support for float/double types
* fixed a CloudSolrClient resource leak

Left to do:
* Add support for aggregationMode (facets and map_reduce) and their parameters
* ensure the pushdown to facets/map_reduce works correctly
* figure out the CloudSolrClient cache (currently not caching and creating new 
per stream)
* Push down aggregates to Solr
* add tests to ensure the proper plan is being generated by Calcite
* figure out avg(int) problem in tests.
** avg(int) returns int by design. need to figure out if casting is right for 
the tests
* figure out sort asc by default in tests
** this currently doesn't sort properly even though I thought that the right 
approach was sort on _version_.
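The avg(int) item in the to-do lists above comes down to Java's integer division semantics: summing ints and dividing by the count truncates the fraction, while casting before the divide keeps it. A small sketch of the difference (hypothetical values, not code from the jira/solr-8593 branch):

```java
public class AvgIntSketch {
    // avg over ints with int arithmetic: the fractional part is truncated,
    // which is the "avg(int) returns int by design" behavior.
    public static int intAvg(int[] vals) {
        int sum = 0;
        for (int v : vals) sum += v;
        return sum / vals.length;          // e.g. 5 / 3 == 1
    }

    // Casting to double before the divide keeps the fraction.
    public static double doubleAvg(int[] vals) {
        int sum = 0;
        for (int v : vals) sum += v;
        return (double) sum / vals.length; // e.g. 5 / 3.0 == 1.66...
    }

    public static void main(String[] args) {
        int[] vals = {1, 2, 2};
        System.out.println(intAvg(vals) + " vs " + doubleAvg(vals));
    }
}
```

Whether the tests should cast (expect 1.66...) or accept truncation (expect 1) is exactly the open question in the comment.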

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
> Fix For: master
>
>
> The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle-tested cost-based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.






[jira] [Commented] (SOLR-9028) fix bugs in (add sanity checks for) SSL clientAuth testing

2016-04-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15262906#comment-15262906
 ] 

ASF subversion and git services commented on SOLR-9028:
---

Commit 791d1e73933a88ef78a06a529d5dcb2fd9e01807 in lucene-solr's branch 
refs/heads/master from [~hossman_luc...@fucit.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=791d1e7 ]

SOLR-9028: Fixed some test related bugs preventing SSL + ClientAuth from ever 
being tested


> fix bugs in (add sanity checks for) SSL clientAuth testing
> --
>
> Key: SOLR-9028
> URL: https://issues.apache.org/jira/browse/SOLR-9028
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: SOLR-9028.patch, SOLR-9028.patch, SOLR-9028.patch, 
> SOLR-9028.patch, SOLR-9028.patch, SOLR-9028.patch, os.x.failure.txt
>
>
> While looking into SOLR-8970 I realized there was a whole heap of problems 
> with how clientAuth was being handled in tests.  Notably: it wasn't actually 
> being used when the randomization selects it (apparently due to a copy/paste 
> mistake in SOLR-7166).  But there are a few other misc issues (improper usage 
> of sysprops overrides for tests, misuse of keystore/truststore in test 
> clients, etc.)
> I'm working up a patch to fix all of this, and add some much needed tests to 
> *explicitly* verify both SSL and clientAuth that will include some "false 
> positive" verifications, and some "test the test" checks.






[jira] [Issue Comment Deleted] (LUCENE-7260) StandardQueryParser is over 100 times slower in v5 compared to v3

2016-04-28 Thread Ivan Mamontov (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mamontov updated LUCENE-7260:
--
Comment: was deleted

(was: I do not recommend using YourKit anywhere, especially in microbenchmarks. 
According to JMC(-XX:+UnlockCommercialFeatures -XX:+UnlockDiagnosticVMOptions 
-XX:+DebugNonSafepoints -XX:+FlightRecorder 
-XX:StartFlightRecording=duration=60s,filename=myrecording.jfr) the hottest 
method is 
org.apache.lucene.queryparser.flexible.core.nodes.QueryNodeImpl.removeChildren(QueryNode)

See details here https://issues.apache.org/jira/browse/LUCENE-5099)

> StandardQueryParser is over 100 times slower in v5 compared to v3
> -
>
> Key: LUCENE-7260
> URL: https://issues.apache.org/jira/browse/LUCENE-7260
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/queryparser
>Affects Versions: 5.4.1
> Environment: Java 8u51
>Reporter: Trejkaz
>  Labels: performance
>
> The following test code times parsing a large query.
> {code}
> import org.apache.lucene.analysis.KeywordAnalyzer;
> //import org.apache.lucene.analysis.core.KeywordAnalyzer;
> import org.apache.lucene.queryParser.standard.StandardQueryParser;
> //import org.apache.lucene.queryparser.flexible.standard.StandardQueryParser;
> import org.apache.lucene.search.BooleanQuery;
> public class LargeQueryTest {
> public static void main(String[] args) throws Exception {
> BooleanQuery.setMaxClauseCount(50_000);
> StringBuilder builder = new StringBuilder(50_000*10);
> builder.append("id:( ");
> boolean first = true;
> for (int i = 0; i < 50_000; i++) {
> if (first) {
> first = false;
> } else {
> builder.append(" OR ");
> }
> builder.append(String.valueOf(i));
> }
> builder.append(" )");
> String queryString = builder.toString();
> StandardQueryParser parser2 = new StandardQueryParser(new 
> KeywordAnalyzer());
> for (int i = 0; i < 10; i++) {
> long t0 = System.currentTimeMillis();
> parser2.parse(queryString, "nope");
> long t1 = System.currentTimeMillis();
> System.out.println(t1-t0);
> }
> }
> }
> {code}
> For Lucene 3.6.2, the timings settle down to 200~300 with the fastest being 
> 207.
> For Lucene 5.4.1, the timings settle down to 22000~23000 with the fastest 
> being 22444.
> So at some point, some change made the query parser 100 times slower. I would 
> suspect that it has something to do with how the list of children is now 
> handled. Every time someone gets the children, it copies the list. Every time 
> someone sets the children, it walks through to detach parent references and 
> then reattaches them all again.
> If it were me, I would probably make these collections immutable so that I 
> didn't have to defensively copy them.
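The immutability suggestion at the end of the report could look like the following sketch: copy the children once at construction and hand out a shared unmodifiable view, instead of copying on every get and re-walking parent references on every set. This is a hypothetical rewrite for illustration, not the actual QueryNodeImpl API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class ImmutableChildrenSketch {
    // Stands in for the QueryNode children list; copied once, then shared.
    private final List<String> children;

    public ImmutableChildrenSketch(List<String> children) {
        // One defensive copy at construction time...
        this.children = Collections.unmodifiableList(new ArrayList<>(children));
    }

    // ...so every getChildren() call can return the same view instead of
    // copying the whole list on each access, as the v5 parser does today.
    public List<String> getChildren() {
        return children;
    }

    public static void main(String[] args) {
        ImmutableChildrenSketch node =
                new ImmutableChildrenSketch(Arrays.asList("a", "b"));
        // Same instance every time: no per-call copy.
        System.out.println(node.getChildren() == node.getChildren());
    }
}
```

For a 50,000-clause query, eliminating the per-access copy turns each getChildren() from O(n) into O(1), which is the kind of win the report is pointing at.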






[jira] [Comment Edited] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-04-28 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15262895#comment-15262895
 ] 

Kevin Risden edited comment on SOLR-8593 at 4/28/16 8:15 PM:
-

Ok made a bunch of progress on the jira/solr-8593 branch:
* cleaned up the tests so that there are only a few remaining items to address 
(outlined below)
* added support for float/double types
* fixed a CloudSolrClient resource leak

Left to do:
* Add support for aggregationMode (facets and map_reduce) and their parameters
* ensure the pushdown to facets/map_reduce works correctly
* figure out the CloudSolrClient cache (currently not caching and creating new 
per stream)
* Push down aggregates to Solr
* add tests to ensure the proper plan is being generated by Calcite
* figure out avg(int) problem in tests.
** avg(int) returns int by design. need to figure out if casting is right for 
the tests
* figure out sort asc by default in tests
** this currently doesn't sort properly even though I thought that the right 
approach was sort on _version_.


was (Author: risdenk):
Ok made a bunch of progress on the jira/solr-8593 branch:
* cleaned up the tests so that there are only a few remaining items to address 
(outlined below)
* added support for float/double types
* fixed a CloudSolrClient resource leak

Left to do:
* Add support for facets and map_reduce as parameters
* ensure the pushdown to facets/map_reduce works correctly
* figure out the CloudSolrClient cache (currently not caching and creating new 
per stream)
* Push down aggregates to Solr
* add tests to ensure the proper plan is being generated by Calcite

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
> Fix For: master
>
>
> The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle-tested cost-based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.






[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-04-28 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15262895#comment-15262895
 ] 

Kevin Risden commented on SOLR-8593:


Ok made a bunch of progress on the jira/solr-8593 branch:
* cleaned up the tests so that there are only a few remaining items to address 
(outlined below)
* added support for float/double types
* fixed a CloudSolrClient resource leak

Left to do:
* Add support for facets and map_reduce as parameters
* ensure the pushdown to facets/map_reduce works correctly
* figure out the CloudSolrClient cache (currently not caching and creating new 
per stream)
* Push down aggregates to Solr
* add tests to ensure the proper plan is being generated by Calcite

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
> Fix For: master
>
>
> The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle-tested cost-based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.






[jira] [Commented] (SOLR-9027) Add GraphTermsQuery to limit traversal on high frequency nodes

2016-04-28 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15262856#comment-15262856
 ] 

Joel Bernstein commented on SOLR-9027:
--

Good point. I may take one more pass at refactoring before backporting. 

> Add GraphTermsQuery to limit traversal on high frequency nodes
> --
>
> Key: SOLR-9027
> URL: https://issues.apache.org/jira/browse/SOLR-9027
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-9027.patch, SOLR-9027.patch, SOLR-9027.patch, 
> SOLR-9027.patch
>
>
> The gatherNodes() Streaming Expression is currently using a basic disjunction 
> query to perform the traversals. This ticket is to create a specific 
> GraphTermsQuery for performing the traversals. 
> The GraphTermsQuery will be based off of the TermsQuery, but will also 
> include an option for a docFreq cutoff. Terms that are above the docFreq 
> cutoff will not be included in the query. This will help users do a more 
> precise and efficient traversal.
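The docFreq cutoff described above amounts to filtering the traversal term set before the query is built. A standalone sketch of that filtering step (hypothetical names and values; not the GraphTermsQuery implementation, which works against the index's term statistics):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DocFreqCutoffSketch {
    // Keep only terms whose document frequency is at or below the cutoff,
    // so very common ("high frequency") graph nodes are skipped during
    // traversal instead of fanning out to huge posting lists.
    public static List<String> filterTerms(Map<String, Integer> docFreqs,
                                           int maxDocFreq) {
        List<String> kept = new ArrayList<>();
        for (Map.Entry<String, Integer> e : docFreqs.entrySet()) {
            if (e.getValue() <= maxDocFreq) {
                kept.add(e.getKey());
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        Map<String, Integer> df = new HashMap<>();
        df.put("rareNode", 3);          // kept: below the cutoff
        df.put("hubNode", 1_000_000);   // dropped: a high-frequency hub
        System.out.println(filterTerms(df, 1000));
    }
}
```

Dropping the hub terms is what makes the traversal "more precise and efficient": the query no longer matches the bulk of the collection through a handful of super-connected nodes.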






[jira] [Commented] (LUCENE-7264) Fewer conditionals in DocIdSetBuilder.add

2016-04-28 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15262857#comment-15262857
 ] 

Yonik Seeley commented on LUCENE-7264:
--

Ah, thanks for that reference... need to update my mental models ;-)

> Fewer conditionals in DocIdSetBuilder.add
> -
>
> Key: LUCENE-7264
> URL: https://issues.apache.org/jira/browse/LUCENE-7264
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7264.patch
>
>
> As reported in LUCENE-7254, DocIdSetBuilder.add has several conditionals that 
> slow down the construction of the DocIdSet.






[jira] [Commented] (LUCENE-7262) Add back the "estimate match count" optimization

2016-04-28 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15262848#comment-15262848
 ] 

Adrien Grand commented on LUCENE-7262:
--

Good ideas. I added it back as close to how it was before LUCENE-7051 as 
possible, but I will give these ideas a try.

> Add back the "estimate match count" optimization
> 
>
> Key: LUCENE-7262
> URL: https://issues.apache.org/jira/browse/LUCENE-7262
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7262.patch
>
>
> Follow-up to my last message on LUCENE-7051: I removed this optimization a 
> while ago because it made things a bit more complicated but did not seem to 
> help with point queries. However the reason why it did not seem to help was 
> that the benchmark only runs queries that match 25% of the dataset. This 
> makes the run time completely dominated by calls to FixedBitSet.set so the 
> call to FixedBitSet.cardinality() looks free. However with slightly sparser 
> queries like the geo benchmark generates (dense enough to trigger the 
> creation of a FixedBitSet but sparse enough so that FixedBitSet.set does not 
> dominate the run time), one can notice speed-ups when this call is skipped.






[jira] [Resolved] (SOLR-9034) Atomic updates not work with CopyField

2016-04-28 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-9034.

Resolution: Fixed

Committed.  Thanks!

> Atomic updates not work with CopyField
> --
>
> Key: SOLR-9034
> URL: https://issues.apache.org/jira/browse/SOLR-9034
> Project: Solr
>  Issue Type: Bug
>  Components: Server
>Affects Versions: 5.5
>Reporter: Karthik Ramachandran
>Assignee: Yonik Seeley
>  Labels: atomicupdate
> Attachments: SOLR-9034.patch, SOLR-9034.patch, SOLR-9034.patch
>
>
> Atomic updates do not work when CopyField has docValues enabled.  Below is 
> the sample schema
> {code:xml|title:schema.xml}
> indexed="true" stored="true" />
> indexed="true" stored="true" />
> indexed="true" stored="true" />
> docValues="true" indexed="true" stored="false" useDocValuesAsStored="false" />
> docValues="true" indexed="true" stored="false" useDocValuesAsStored="false" />
> docValues="true" indexed="true" stored="false" useDocValuesAsStored="false" />
> {code}
> Below is the exception
> {noformat}
> Caused by: java.lang.IllegalArgumentException: DocValuesField
>  "copy_single_i_dvn" appears more than once in this document 
> (only one value is allowed per field)
> {noformat}






[jira] [Commented] (LUCENE-7264) Fewer conditionals in DocIdSetBuilder.add

2016-04-28 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15262831#comment-15262831
 ] 

Adrien Grand commented on LUCENE-7264:
--

I benchmarked it using IndexAndSearchOpenStreetMaps by temporarily using 
DocIdSetBuilder instead of MatchingPoints (I did not use luceneutil since its 
numeric range queries match too many docs). The QPS went from 33.4 (old 
DocIdSetBuilder.add) to 35.0 with this patch.

In that case I think it works well since the base class is an abstract class 
and there are only two impls, [which the JVM can efficiently 
optimize|http://shipilev.net/blog/2015/black-magic-method-dispatch/#_two_types].
 (For the record, most queries of the benchmark upgrade to a bitset so both 
impls are used.)
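The "two impls of an abstract class" shape above is the classic bimorphic call site: the JIT sees at most two receiver types and can still inline both. A schematic sketch of that shape (invented names; not the actual DocIdSetBuilder code):

```java
public class BimorphicSketch {
    // Abstract base with exactly two concrete subclasses: a bimorphic
    // call site, which HotSpot can dispatch and inline cheaply
    // (per the linked "black magic method dispatch" post).
    abstract static class BulkAdder {
        abstract void add(int doc);
        abstract long total();
    }

    static final class BufferAdder extends BulkAdder {
        long count;
        @Override void add(int doc) { count++; }
        @Override long total() { return count; }
    }

    static final class BitSetAdder extends BulkAdder {
        long word; // a single 64-bit word standing in for a FixedBitSet
        @Override void add(int doc) { word |= 1L << (doc & 63); }
        @Override long total() { return Long.bitCount(word); }
    }

    // The hot loop's virtual call sees at most two receiver types.
    public static long drive(BulkAdder adder, int docs) {
        for (int doc = 0; doc < docs; doc++) {
            adder.add(doc);
        }
        return adder.total();
    }

    public static void main(String[] args) {
        // Both impls are exercised, mirroring the benchmark where most
        // queries upgrade from the buffer to the bitset.
        System.out.println(drive(new BufferAdder(), 100));
        System.out.println(drive(new BitSetAdder(), 100));
    }
}
```

With a third subclass the call site would become megamorphic and the inlining benefit largely disappears, which is why "only two impls" matters for the QPS numbers quoted.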

> Fewer conditionals in DocIdSetBuilder.add
> -
>
> Key: LUCENE-7264
> URL: https://issues.apache.org/jira/browse/LUCENE-7264
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7264.patch
>
>
> As reported in LUCENE-7254, DocIdSetBuilder.add has several conditionals that 
> slow down the construction of the DocIdSet.






[jira] [Commented] (SOLR-9034) Atomic updates not work with CopyField

2016-04-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15262830#comment-15262830
 ] 

ASF subversion and git services commented on SOLR-9034:
---

Commit 21aea6f606f81b1b4c45fa41501f33744f2b887a in lucene-solr's branch 
refs/heads/branch_6x from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=21aea6f ]

SOLR-9034: fix atomic updates for copyField w/ docValues


> Atomic updates not work with CopyField
> --
>
> Key: SOLR-9034
> URL: https://issues.apache.org/jira/browse/SOLR-9034
> Project: Solr
>  Issue Type: Bug
>  Components: Server
>Affects Versions: 5.5
>Reporter: Karthik Ramachandran
>Assignee: Yonik Seeley
>  Labels: atomicupdate
> Attachments: SOLR-9034.patch, SOLR-9034.patch, SOLR-9034.patch
>
>
> Atomic updates do not work when CopyField has docValues enabled.  Below is 
> the sample schema
> {code:xml|title:schema.xml}
> indexed="true" stored="true" />
> indexed="true" stored="true" />
> indexed="true" stored="true" />
> docValues="true" indexed="true" stored="false" useDocValuesAsStored="false" />
> docValues="true" indexed="true" stored="false" useDocValuesAsStored="false" />
> docValues="true" indexed="true" stored="false" useDocValuesAsStored="false" />
> {code}
> Below is the exception
> {noformat}
> Caused by: java.lang.IllegalArgumentException: DocValuesField
>  "copy_single_i_dvn" appears more than once in this document 
> (only one value is allowed per field)
> {noformat}






[jira] [Commented] (SOLR-9034) Atomic updates not work with CopyField

2016-04-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15262826#comment-15262826
 ] 

ASF subversion and git services commented on SOLR-9034:
---

Commit c897917c718eef75d66c5d0006f409d5c95260c7 in lucene-solr's branch 
refs/heads/master from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c897917 ]

SOLR-9034: fix atomic updates for copyField w/ docValues


> Atomic updates not work with CopyField
> --
>
> Key: SOLR-9034
> URL: https://issues.apache.org/jira/browse/SOLR-9034
> Project: Solr
>  Issue Type: Bug
>  Components: Server
>Affects Versions: 5.5
>Reporter: Karthik Ramachandran
>Assignee: Yonik Seeley
>  Labels: atomicupdate
> Attachments: SOLR-9034.patch, SOLR-9034.patch, SOLR-9034.patch
>
>
> Atomic updates do not work when CopyField has docValues enabled.  Below is 
> the sample schema
> {code:xml|title:schema.xml}
> indexed="true" stored="true" />
> indexed="true" stored="true" />
> indexed="true" stored="true" />
> docValues="true" indexed="true" stored="false" useDocValuesAsStored="false" />
> docValues="true" indexed="true" stored="false" useDocValuesAsStored="false" />
> docValues="true" indexed="true" stored="false" useDocValuesAsStored="false" />
> {code}
> Below is the exception
> {noformat}
> Caused by: java.lang.IllegalArgumentException: DocValuesField
>  "copy_single_i_dvn" appears more than once in this document 
> (only one value is allowed per field)
> {noformat}






[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+115) - Build # 521 - Failure!

2016-04-28 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/521/
Java: 32bit/jdk-9-ea+115 -server -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.cloud.TestAuthenticationFramework.testStopAllStartAll

Error Message:
Address already in use

Stack Trace:
java.net.BindException: Address already in use
at 
__randomizedtesting.SeedInfo.seed([D100ABB5D6280CC7:A73EB4C6971FA1E8]:0)
at sun.nio.ch.Net.bind0(java.base@9-ea/Native Method)
at sun.nio.ch.Net.bind(java.base@9-ea/Net.java:446)
at sun.nio.ch.Net.bind(java.base@9-ea/Net.java:438)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(java.base@9-ea/ServerSocketChannelImpl.java:225)
at 
sun.nio.ch.ServerSocketAdaptor.bind(java.base@9-ea/ServerSocketAdaptor.java:74)
at 
org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:326)
at 
org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
at 
org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:244)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.server.Server.doStart(Server.java:384)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:327)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.startJettySolrRunner(MiniSolrCloudCluster.java:352)
at 
org.apache.solr.cloud.TestMiniSolrCloudCluster.testStopAllStartAll(TestMiniSolrCloudCluster.java:443)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:531)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Commented] (SOLR-9027) Add GraphTermsQuery to limit traversal on high frequency nodes

2016-04-28 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15262823#comment-15262823
 ] 

David Smiley commented on SOLR-9027:


bq. I dug into how the TermContext is being used elsewhere in Lucene. What I 
found was that the TermQuery is holding onto the TermContext and seems to be 
relying on wrapper queries to manage it properly. This is marked as an expert 
usage. The CommonTermsQuery uses this constructor. So it does appear that 
holding onto the TermContext is OK, as long as it's handled properly.

Okay.  AFAICT, the only reason TermQuery.perReaderTermState exists is because 
_some_ callers just so happen to already have the TermContext, so this saves 
getting it later.  In the case of GraphTermsQuery the QParser does not and has 
no reason to get the TermContext in advance.

> Add GraphTermsQuery to limit traversal on high frequency nodes
> --
>
> Key: SOLR-9027
> URL: https://issues.apache.org/jira/browse/SOLR-9027
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-9027.patch, SOLR-9027.patch, SOLR-9027.patch, 
> SOLR-9027.patch
>
>
> The gatherNodes() Streaming Expression is currently using a basic disjunction 
> query to perform the traversals. This ticket is to create a specific 
> GraphTermsQuery for performing the traversals. 
> The GraphTermsQuery will be based off of the TermsQuery, but will also 
> include an option for a docFreq cutoff. Terms that are above the docFreq 
> cutoff will not be included in the query. This will help users do a more 
> precise and efficient traversal.
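The docFreq cutoff described above can be sketched as a simple pre-filter over candidate terms before the query is built (a toy illustration under assumed names, not the actual GraphTermsQuery implementation):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Toy sketch of the SOLR-9027 idea: drop high-frequency terms before building
// the traversal query, so hub nodes don't dominate the disjunction.
public class DocFreqCutoff {
    /** Returns only the terms whose document frequency is at or below the cutoff. */
    public static List<String> filterTerms(Map<String, Integer> termDocFreqs, int maxDocFreq) {
        List<String> kept = new ArrayList<>();
        for (Map.Entry<String, Integer> e : termDocFreqs.entrySet()) {
            if (e.getValue() <= maxDocFreq) {
                kept.add(e.getKey());
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        Map<String, Integer> freqs = Map.of("rare", 3, "common", 100000);
        System.out.println(filterTerms(freqs, 1000)); // keeps only "rare"
    }
}
```

The real query would additionally need per-segment docFreq lookups; this only shows the cutoff decision itself.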






[jira] [Commented] (LUCENE-7262) Add back the "estimate match count" optimization

2016-04-28 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15262817#comment-15262817
 ] 

Robert Muir commented on LUCENE-7262:
-

So this means postings still calls cardinality()? Why wouldn't it do the same? 
I'm a bit concerned with each query tracking its own estimate (and having the 
formula/stats pulling etc. duplicated everywhere). 

This is why, when looking at MatchingPoints, it pulls the stats it needs. But 
alternatively DocIDSetBuilder could take sumDocFreq, maxDoc, and docCount as 
parameters and do this itself. Points would pass size() for sumDocFreq; it's 
the equivalent there.

In other words, I see providing a good cost() as the responsibility of 
DocIDSetBuilder. The only thing impl-specific is how to get sumDocFreq and 
docCount (e.g. Terms.sumDocFreq/docCount vs PointValues.size/docCount).
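One way to read that suggestion: DocIdSetBuilder could derive its cost() from counts it already tracks, scaled by docCount/sumDocFreq to correct for documents that were added more than once. The formula below is an assumption sketched from this comment, not Lucene's actual code:

```java
// Toy sketch of the estimate described above: scale the number of values added
// by docCount/sumDocFreq to approximate the number of unique matching docs.
public class CostEstimate {
    public static long estimateMatchCount(long valuesAdded, long sumDocFreq, long docCount) {
        if (sumDocFreq <= 0) {
            return valuesAdded; // no stats available; fall back to the raw count
        }
        // On average each doc holds sumDocFreq/docCount values, so divide by that ratio.
        return (long) (valuesAdded * (double) docCount / sumDocFreq);
    }

    public static void main(String[] args) {
        // 1M values added, index-wide average of 4 values per doc -> ~250k unique docs.
        System.out.println(estimateMatchCount(1_000_000L, 40_000_000L, 10_000_000L));
    }
}
```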

> Add back the "estimate match count" optimization
> 
>
> Key: LUCENE-7262
> URL: https://issues.apache.org/jira/browse/LUCENE-7262
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7262.patch
>
>
> Follow-up to my last message on LUCENE-7051: I removed this optimization a 
> while ago because it made things a bit more complicated but did not seem to 
> help with point queries. However the reason why it did not seem to help was 
> that the benchmark only runs queries that match 25% of the dataset. This 
> makes the run time completely dominated by calls to FixedBitSet.set so the 
> call to FixedBitSet.cardinality() looks free. However with slightly sparser 
> queries like the geo benchmark generates (dense enough to trigger the 
> creation of a FixedBitSet but sparse enough so that FixedBitSet.set does not 
> dominate the run time), one can notice speed-ups when this call is skipped.
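The sparse-vs-dense distinction in the description can be illustrated with the usual upgrade heuristic (the threshold constant here is illustrative only, not Lucene's exact value):

```java
// Toy sketch of the sparse/dense decision: buffer doc IDs in an array while the
// set is sparse, and upgrade to a FixedBitSet-style representation once the
// bitset would be cheaper per collected docID.
public class SetChoice {
    /** Upgrade once the collected count exceeds maxDoc/128 (illustrative threshold). */
    public static boolean useBitSet(int docsCollected, int maxDoc) {
        return docsCollected > maxDoc / 128;
    }

    public static void main(String[] args) {
        System.out.println(useBitSet(100, 1_000_000));    // sparse -> false
        System.out.println(useBitSet(50_000, 1_000_000)); // dense  -> true
    }
}
```

The geo-benchmark case in the description sits just past this threshold: dense enough to trigger the bitset, but sparse enough that set() calls don't hide the cost of a cardinality() pass.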






[jira] [Commented] (LUCENE-7264) Fewer conditionals in DocIdSetBuilder.add

2016-04-28 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15262777#comment-15262777
 ] 

Yonik Seeley commented on LUCENE-7264:
--

One possible downside to this change is that it changes a predictable branch 
(that is handled at the CPU level) into a method call... which if it's not 
monomorphic can be decompiled at the point of the call and thus end up slower 
(method call vs predictable branch).  Will be interesting to see the benchmark 
results.

> Fewer conditionals in DocIdSetBuilder.add
> -
>
> Key: LUCENE-7264
> URL: https://issues.apache.org/jira/browse/LUCENE-7264
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7264.patch
>
>
> As reported in LUCENE-7254, DocIdSetBuilder.add has several conditionals that 
> slow down the construction of the DocIdSet.






[jira] [Comment Edited] (LUCENE-7264) Fewer conditionals in DocIdSetBuilder.add

2016-04-28 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15262777#comment-15262777
 ] 

Yonik Seeley edited comment on LUCENE-7264 at 4/28/16 7:17 PM:
---

One possible downside to this change is that it changes a predictable branch 
(that is handled at the CPU level) into a method call... which if it's not 
monomorphic can be un-inlined at the point of the call and thus end up slower 
(method call vs predictable branch).  Will be interesting to see the benchmark 
results.


was (Author: ysee...@gmail.com):
One possible downside to this change is that it changes a predictable branch 
(that is handled at the CPU level) into a method call... which if it's not 
monomorphic can be decompiled at the point of the call and thus end up slower 
(method call vs predictable branch).  Will be interesting to see the benchmark 
results.

> Fewer conditionals in DocIdSetBuilder.add
> -
>
> Key: LUCENE-7264
> URL: https://issues.apache.org/jira/browse/LUCENE-7264
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7264.patch
>
>
> As reported in LUCENE-7254, DocIdSetBuilder.add has several conditionals that 
> slow down the construction of the DocIdSet.






[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 51 - Still Failing

2016-04-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/51/

3 tests failed.
FAILED:  org.apache.solr.cloud.RollingRestartTest.test

Error Message:
Unable to restart (#6): CloudJettyRunner 
[url=http://127.0.0.1:52780/lfu/collection1]

Stack Trace:
java.lang.AssertionError: Unable to restart (#6): CloudJettyRunner 
[url=http://127.0.0.1:52780/lfu/collection1]
at 
__randomizedtesting.SeedInfo.seed([6C1927A853BBAADC:E44D1872FD47C724]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.RollingRestartTest.restartWithRolesTest(RollingRestartTest.java:103)
at 
org.apache.solr.cloud.RollingRestartTest.test(RollingRestartTest.java:52)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (LUCENE-7264) Fewer conditionals in DocIdSetBuilder.add

2016-04-28 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-7264:
-
Attachment: LUCENE-7264.patch

This patch changes the DocIdSetBuilder API. add() is gone. Instead, grow() 
returns a new BulkAdder object that can be used to add up to the number of 
documents that have been passed to the grow() method. This saves conditionals 
because the add method never needs to care about whether the buffer is large 
enough, or whether to upgrade to a bitset: everything is done up-front in the 
grow() call.
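The grow()-returns-an-adder pattern can be sketched as follows. This is a simplified toy of the API shape, not the patch itself; the real BulkAdder also has to handle the bitset representation:

```java
// Toy sketch: grow(n) does all the sizing once, and hands back an adder whose
// add() has no capacity or upgrade checks on the hot path.
public class TinyDocIdSetBuilder {
    private int[] buffer = new int[0];
    private int size = 0;

    /** Caller promises to add at most numDocs more docs; all resizing happens here. */
    public BulkAdder grow(int numDocs) {
        if (size + numDocs > buffer.length) {
            buffer = java.util.Arrays.copyOf(buffer, size + numDocs);
        }
        return new BulkAdder();
    }

    public class BulkAdder {
        public void add(int doc) {
            buffer[size++] = doc; // no conditionals here
        }
    }

    public int size() { return size; }

    public static void main(String[] args) {
        TinyDocIdSetBuilder b = new TinyDocIdSetBuilder();
        TinyDocIdSetBuilder.BulkAdder adder = b.grow(3);
        adder.add(1); adder.add(5); adder.add(9);
        System.out.println(b.size()); // 3
    }
}
```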

> Fewer conditionals in DocIdSetBuilder.add
> -
>
> Key: LUCENE-7264
> URL: https://issues.apache.org/jira/browse/LUCENE-7264
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7264.patch
>
>
> As reported in LUCENE-7254, DocIdSetBuilder.add has several conditionals that 
> slow down the construction of the DocIdSet.






[jira] [Created] (LUCENE-7264) Fewer conditionals in DocIdSetBuilder.add

2016-04-28 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-7264:


 Summary: Fewer conditionals in DocIdSetBuilder.add
 Key: LUCENE-7264
 URL: https://issues.apache.org/jira/browse/LUCENE-7264
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor


As reported in LUCENE-7254, DocIdSetBuilder.add has several conditionals that 
slow down the construction of the DocIdSet.






Re: Lucene/Solr 5.5.1

2016-04-28 Thread Anshum Gupta
Seems like LUCENE-6938 removed the merge logic that used the change id. Now
the merge doesn't happen, and there's no logic that replaces it.

I could certainly use some help on this one.

On Thu, Apr 28, 2016 at 11:24 AM, Anshum Gupta 
wrote:

> Just wanted to make sure I wasn't missing something here again. While
> trying to update the version on 5x, after having done that on 5.5, using
> the addVersion.py script and following the instructions, the command
> consistently fails. Here's what I've been trying to do:
>
> python3 -u dev-tools/scripts/addVersion.py --changeid 49ba147 5.5.1
>
>
> Seems like addVersion.py is broken for minor version releases so I'd need
> some help with someone who has a better understanding of python than I do.
> I observed that 5.5.1 Version gets added to Version.java but also gets
> marked as deprecated.
>
>
>
> On Thu, Apr 28, 2016 at 9:27 AM, Anshum Gupta 
> wrote:
>
>> Too much going on! Thanks Yonik.
>> I'll start working on the RC now.
>>
>> NOTE: Please don't back port any more issues right now. In case of
>> exceptions, please raise them here.
>>
>> On Thu, Apr 28, 2016 at 9:09 AM, Yonik Seeley  wrote:
>>
>>> On Thu, Apr 28, 2016 at 12:04 PM, Anshum Gupta 
>>> wrote:
>>> > Thanks. I'm waiting for the last back port of SOLR-8865.
>>>
>>> It should be already be there... I closed it yesterday.
>>> -Yonik
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>>>
>>
>>
>> --
>> Anshum Gupta
>>
>
>
>
> --
> Anshum Gupta
>



-- 
Anshum Gupta


Re: Lucene/Solr 5.5.1

2016-04-28 Thread Anshum Gupta
Just wanted to make sure I wasn't missing something here again. While
trying to update the version on 5x, after having done that on 5.5, using
the addVersion.py script and following the instructions, the command
consistently fails. Here's what I've been trying to do:

python3 -u dev-tools/scripts/addVersion.py --changeid 49ba147 5.5.1


Seems like addVersion.py is broken for minor version releases, so I'd need
some help from someone who has a better understanding of Python than I do.
I observed that the 5.5.1 Version constant gets added to Version.java but
also gets marked as deprecated.



On Thu, Apr 28, 2016 at 9:27 AM, Anshum Gupta 
wrote:

> Too much going on! Thanks Yonik.
> I'll start working on the RC now.
>
> NOTE: Please don't back port any more issues right now. In case of
> exceptions, please raise them here.
>
> On Thu, Apr 28, 2016 at 9:09 AM, Yonik Seeley  wrote:
>
>> On Thu, Apr 28, 2016 at 12:04 PM, Anshum Gupta 
>> wrote:
>> > Thanks. I'm waiting for the last back port of SOLR-8865.
>>
>> It should be already be there... I closed it yesterday.
>> -Yonik
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>>
>
>
> --
> Anshum Gupta
>



-- 
Anshum Gupta


[jira] [Commented] (LUCENE-7260) StandardQueryParser is over 100 times slower in v5 compared to v3

2016-04-28 Thread Trejkaz (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15262648#comment-15262648
 ] 

Trejkaz commented on LUCENE-7260:
-

Meanwhile I threw some hashCode() calls in on the query object, in this sort of 
fashion:
{code}
int temp = 0;
for (int i = 0; i < 10; i++)
{
long t0 = System.currentTimeMillis();
Query query = parser2.parse(queryString, "nope");
long t1 = System.currentTimeMillis();
temp ^= query.hashCode();
System.out.println("ignore: " + temp);
System.out.println("dt: " + (t1-t0));
}
{code}

I'll take the ignore lines out because they just add noise. Both tests run 
faster today, but it looks like someone updated the JVM we're running against, 
so it could be related to that. These timings are for JDK 8u92. Interesting how 
whatever they did in the JVM has made one of the tests 1/3 faster!

3.6:
{noformat}
dt: 996
dt: 659
dt: 286
dt: 393
dt: 240
dt: 257
dt: 187
dt: 529
dt: 263
dt: 183
{noformat}

5.4:
{noformat}
dt: 20213
dt: 16613
dt: 15311
dt: 14633
dt: 14925
dt: 14571
dt: 14008
dt: 16320
dt: 15211
dt: 14881
{noformat}


> StandardQueryParser is over 100 times slower in v5 compared to v3
> -
>
> Key: LUCENE-7260
> URL: https://issues.apache.org/jira/browse/LUCENE-7260
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/queryparser
>Affects Versions: 5.4.1
> Environment: Java 8u51
>Reporter: Trejkaz
>  Labels: performance
>
> The following test code times parsing a large query.
> {code}
> import org.apache.lucene.analysis.KeywordAnalyzer;
> //import org.apache.lucene.analysis.core.KeywordAnalyzer;
> import org.apache.lucene.queryParser.standard.StandardQueryParser;
> //import org.apache.lucene.queryparser.flexible.standard.StandardQueryParser;
> import org.apache.lucene.search.BooleanQuery;
> public class LargeQueryTest {
> public static void main(String[] args) throws Exception {
> BooleanQuery.setMaxClauseCount(50_000);
> StringBuilder builder = new StringBuilder(50_000*10);
> builder.append("id:( ");
> boolean first = true;
> for (int i = 0; i < 50_000; i++) {
> if (first) {
> first = false;
> } else {
> builder.append(" OR ");
> }
> builder.append(String.valueOf(i));
> }
> builder.append(" )");
> String queryString = builder.toString();
> StandardQueryParser parser2 = new StandardQueryParser(new 
> KeywordAnalyzer());
> for (int i = 0; i < 10; i++) {
> long t0 = System.currentTimeMillis();
> parser2.parse(queryString, "nope");
> long t1 = System.currentTimeMillis();
> System.out.println(t1-t0);
> }
> }
> }
> {code}
> For Lucene 3.6.2, the timings settle down to 200~300 with the fastest being 
> 207.
> For Lucene 5.4.1, the timings settle down to 2~3 with the fastest 
> being 22444.
> So at some point, some change made the query parser 100 times slower. I would 
> suspect that it has something to do with how the list of children is now 
> handled. Every time someone gets the children, it copies the list. Every time 
> someone sets the children, it walks through to detach parent references and 
> then reattaches them all again.
> If it were me, I would probably make these collections immutable so that I 
> didn't have to defensively copy them.






[jira] [Commented] (SOLR-9028) fix bugs in (add sanity checks for) SSL clientAuth testing

2016-04-28 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15262643#comment-15262643
 ] 

Steve Rowe commented on SOLR-9028:
--

On OS X from {{solr/}}, {{ant test}} passes for me with the latest patch on 
master.  I also ran the other new test {{TestSSLRandomization}} by itself, and 
it passed.

> fix bugs in (add sanity checks for) SSL clientAuth testing
> --
>
> Key: SOLR-9028
> URL: https://issues.apache.org/jira/browse/SOLR-9028
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: SOLR-9028.patch, SOLR-9028.patch, SOLR-9028.patch, 
> SOLR-9028.patch, SOLR-9028.patch, SOLR-9028.patch, os.x.failure.txt
>
>
> While looking into SOLR-8970 I realized there was a whole heap of problems 
> with how clientAuth was being handled in tests.  Notably: it wasn't actually 
> being used when the randomization selects it (apparently due to a copy/paste 
> mistake in SOLR-7166).  But there are a few other misc issues (improper use 
> of sysprops overrides for tests, misuse of keystore/truststore in test 
> clients, etc.)
> I'm working up a patch to fix all of this, and add some much needed tests to 
> *explicitly* verify both SSL and clientAuth that will include some "false 
> positive" verifications, and some "test the test" checks.






[jira] [Closed] (LUCENE-6865) BooleanQuery2ModifierNodeProcessor breaks the query node hierarchy

2016-04-28 Thread Trejkaz (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trejkaz closed LUCENE-6865.
---
   Resolution: Duplicate
Fix Version/s: 5.3

Verified as another duplicate of LUCENE-5805. Fixed by the same fix of course.


> BooleanQuery2ModifierNodeProcessor breaks the query node hierarchy
> --
>
> Key: LUCENE-6865
> URL: https://issues.apache.org/jira/browse/LUCENE-6865
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Trejkaz
> Fix For: 5.3
>
>
> We discovered that one of our own implementations of QueryNodeProcessor was 
> seeing node.getParent() returning null for nodes other than the root of the 
> query tree.
> I put a diagnostic processor around every processor which runs and found that 
> BooleanQuery2ModifierNodeProcessor (and possibly others, although it isn't 
> clear) are mysteriously setting some of the node references to null.
> Example query tree before:
> {noformat}
> GroupQueryNode, parent = null
>   WithinQueryNode, parent = GroupQueryNode
> QuotedFieldQueryNode, parent = WithinQueryNode
> GroupQueryNode, parent = WithinQueryNode
>   AndQueryNode, parent = GroupQueryNode
> GroupQueryNode, parent = AndQueryNode
>   OrQueryNode, parent = GroupQueryNode
> QuotedFieldQueryNode, parent = OrQueryNode
> QuotedFieldQueryNode, parent = OrQueryNode
> GroupQueryNode, parent = AndQueryNode
>   OrQueryNode, parent = GroupQueryNode
> QuotedFieldQueryNode, parent = OrQueryNode
> QuotedFieldQueryNode, parent = OrQueryNode
> {noformat}
> And after BooleanQuery2ModifierNodeProcessor.process():
> {noformat}
> GroupQueryNode, parent = null
>   WithinQueryNode, parent = GroupQueryNode
> QuotedFieldQueryNode, parent = WithinQueryNode
> GroupQueryNode, parent = WithinQueryNode
>   AndQueryNode, parent = GroupQueryNode
> BooleanModifierNode, parent = AndQueryNode
>   GroupQueryNode, parent = null
> OrQueryNode, parent = GroupQueryNode
>   QuotedFieldQueryNode, parent = OrQueryNode
>   QuotedFieldQueryNode, parent = OrQueryNode
> BooleanModifierNode, parent = AndQueryNode
>   GroupQueryNode, parent = null
> OrQueryNode, parent = GroupQueryNode
>   QuotedFieldQueryNode, parent = OrQueryNode
>   QuotedFieldQueryNode, parent = OrQueryNode
> {noformat}
> Looking at QueryNodeImpl, there is a lot of fiddly logic in there. Removing 
> children can trigger setting the parent to null, but setting the parent can 
> also trigger the child removing itself, so it's near impossible to figure out 
> why this could be happening, but I'm closing in on it at least. My initial 
> suspicion is that cloneTree() is responsible, because ironically the number 
> of failures of this sort _increases_ if I try to use cloneTree to defend 
> against mutability bugs.
> The fix I have come up with is to clone the whole API but making QueryNode 
> immutable. This removes the ability for processors to mess with nodes that 
> don't belong to them, but also obviates the need for a parent reference in 
> the first place, which I think is the entire source of the problem - keeping 
> the parent and child in sync correctly is obviously going to be hard, and 
> indeed we find that there is at least one bug of this sort lurking in there.
> But even if we rewrite it, I figured I would report the issue so that maybe 
> it can be fixed for others.
> Code to use for diagnostics:
> {code}
> import java.util.List;
> import org.apache.lucene.queryparser.flexible.core.QueryNodeException;
> import org.apache.lucene.queryparser.flexible.core.config.QueryConfigHandler;
> import org.apache.lucene.queryparser.flexible.core.nodes.QueryNode;
> import 
> org.apache.lucene.queryparser.flexible.core.processors.QueryNodeProcessor;
> public class DiagnosticQueryNodeProcessor implements QueryNodeProcessor
> {
> private final QueryNodeProcessor delegate;
> public DiagnosticQueryNodeProcessor(QueryNodeProcessor delegate)
> {
> this.delegate = delegate;
> }
> @Override
> public QueryConfigHandler getQueryConfigHandler()
> {
> return delegate.getQueryConfigHandler();
> }
> @Override
> public void setQueryConfigHandler(QueryConfigHandler queryConfigHandler)
> {
> delegate.setQueryConfigHandler(queryConfigHandler);
> }
> @Override
> public QueryNode process(QueryNode queryNode) throws QueryNodeException
> {
> System.out.println("Before " + delegate.getClass().getSimpleName() + 
> ".process():");
> dumpTree(queryNode);
> queryNode = delegate.process(queryNode);
> System.out.println("After " + 

[jira] [Commented] (LUCENE-7260) StandardQueryParser is over 100 times slower in v5 compared to v3

2016-04-28 Thread Ivan Mamontov (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15262614#comment-15262614
 ] 

Ivan Mamontov commented on LUCENE-7260:
---

I do not recommend using YourKit anywhere, especially in microbenchmarks. 
According to JMC(-XX:+UnlockCommercialFeatures -XX:+UnlockDiagnosticVMOptions 
-XX:+DebugNonSafepoints -XX:+FlightRecorder 
-XX:StartFlightRecording=duration=60s,filename=myrecording.jfr) the hottest 
method is 
org.apache.lucene.queryparser.flexible.core.nodes.QueryNodeImpl.removeChildren(QueryNode)

See details here https://issues.apache.org/jira/browse/LUCENE-5099

> StandardQueryParser is over 100 times slower in v5 compared to v3
> -
>
> Key: LUCENE-7260
> URL: https://issues.apache.org/jira/browse/LUCENE-7260
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/queryparser
>Affects Versions: 5.4.1
> Environment: Java 8u51
>Reporter: Trejkaz
>  Labels: performance
>
> The following test code times parsing a large query.
> {code}
> import org.apache.lucene.analysis.KeywordAnalyzer;
> //import org.apache.lucene.analysis.core.KeywordAnalyzer;
> import org.apache.lucene.queryParser.standard.StandardQueryParser;
> //import org.apache.lucene.queryparser.flexible.standard.StandardQueryParser;
> import org.apache.lucene.search.BooleanQuery;
> public class LargeQueryTest {
> public static void main(String[] args) throws Exception {
> BooleanQuery.setMaxClauseCount(50_000);
> StringBuilder builder = new StringBuilder(50_000*10);
> builder.append("id:( ");
> boolean first = true;
> for (int i = 0; i < 50_000; i++) {
> if (first) {
> first = false;
> } else {
> builder.append(" OR ");
> }
> builder.append(String.valueOf(i));
> }
> builder.append(" )");
> String queryString = builder.toString();
> StandardQueryParser parser2 = new StandardQueryParser(new 
> KeywordAnalyzer());
> for (int i = 0; i < 10; i++) {
> long t0 = System.currentTimeMillis();
> parser2.parse(queryString, "nope");
> long t1 = System.currentTimeMillis();
> System.out.println(t1-t0);
> }
> }
> }
> {code}
> For Lucene 3.6.2, the timings settle down to 200~300 with the fastest being 
> 207.
> For Lucene 5.4.1, the timings settle down to 2~3 with the fastest 
> being 22444.
> So at some point, some change made the query parser 100 times slower. I would 
> suspect that it has something to do with how the list of children is now 
> handled. Every time someone gets the children, it copies the list. Every time 
> someone sets the children, it walks through to detach parent references and 
> then reattaches them all again.
> If it were me, I would probably make these collections immutable so that I 
> didn't have to defensively copy them.
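The immutability suggestion above can be sketched in a few lines (hypothetical class names, not Lucene's actual QueryNodeImpl): copy the child list once at construction, then hand out an unmodifiable view, so getChildren() never copies.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch of the "immutable children" idea: one defensive copy
// at construction time, then an unmodifiable view forever after, so repeated
// getChildren() calls are O(1) instead of copying the list each time.
final class ImmutableNode {
    private final String label;
    private final List<ImmutableNode> children;

    ImmutableNode(String label, List<ImmutableNode> children) {
        this.label = label;
        // Single copy here; callers can never mutate the stored list.
        this.children = Collections.unmodifiableList(new ArrayList<>(children));
    }

    List<ImmutableNode> getChildren() {
        return children; // no per-call copy
    }

    public static void main(String[] args) {
        List<ImmutableNode> kids = new ArrayList<>();
        for (int i = 0; i < 3; i++) {
            kids.add(new ImmutableNode("child" + i, Collections.<ImmutableNode>emptyList()));
        }
        ImmutableNode root = new ImmutableNode("root", kids);
        // Same list instance on every call -- no defensive copying.
        System.out.println(root.getChildren() == root.getChildren());
        System.out.println(root.getChildren().size());
    }
}
```

Whether this is feasible for the flexible query parser depends on how much existing code mutates node children in place; the sketch only shows the cost model.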



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: post-branch_6x Jira version renaming(s) got overlooked?

2016-04-28 Thread Cassandra Targett
Is it possible there are 2100 of these?

I did the below JIRA query, only in the Solr project, looking for
Resolved or Closed issues with a fixVersion of "master" but not 6.0 or
6.1, resolved before 8 Apr 2016 (the release date of Lucene/Solr 6).

https://issues.apache.org/jira/browse/SOLR-7712?jql=project%20%3D%20SOLR%20AND%20status%20in%20(Resolved%2C%20Closed)%20AND%20fixVersion%20%3D%20master%20AND%20fixVersion%20!%3D%206.0%20AND%20fixVersion%20!%3D%206.1%20AND%20resolved%20%3C%20%222016%2F04%2F08%22

(Obviously this misses Lucene issues, but I assume a similar strategy
would apply. Also, we may want to shift the date back to the cutting
of the 6_x branch.)

It seems it would be easier to make some sort of blanket "rename master"
change and then go back and fix the ones that shouldn't have been changed
because they were finished post-6.0 release, but I'm not seeing a
good way to make a single query for those.

Additionally, and sadly, in JIRA any bulk update to a field overwrites
the existing value in the field. So if the fixVersion is "master" and
"5.3", then doing a bulk update to "master" only would remove "5.3".

So, yeah, what you guys said - it's not going to be super-easy.
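For scripted (non-bulk-UI) cleanup, JIRA's REST API does support additive edits via its "update" request syntax, which sidesteps the overwrite problem; a rough sketch of building such a request body (endpoint and exact field semantics should be verified against the JIRA REST docs for our instance before running anything):

```java
// Sketch only: builds a JIRA REST "update" payload that *adds* a fixVersion
// and removes "master", instead of replacing the whole field. Actually
// sending it (e.g. PUT /rest/api/2/issue/SOLR-1234) is deliberately omitted.
public class JiraAddFixVersion {
    static String addFixVersionBody(String versionToAdd, String versionToRemove) {
        return "{\"update\":{\"fixVersions\":["
             + "{\"add\":{\"name\":\"" + versionToAdd + "\"}},"
             + "{\"remove\":{\"name\":\"" + versionToRemove + "\"}}"
             + "]}}";
    }

    public static void main(String[] args) {
        // For each issue found by the JQL query, a script would send this body.
        System.out.println(addFixVersionBody("6.0", "master"));
    }
}
```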

On Wed, Apr 27, 2016 at 1:18 PM, Anshum Gupta  wrote:
> Should've replied to this thread! I've been seeing those as part of the
> 5.5.1 back ports and it confuses me every now and then.
>
> It should've been handled with the 6.0 release. I wasn't sure how to
> handle those, so I've been adding the 6.0 fix version to places where I've
> found them, but I should've also removed the 'master' tag.
> I'll help with the manual auditing and fixing of this once I have the 5.5.1
> RC1 later today.
>
> On Wed, Apr 27, 2016 at 9:59 AM, Chris Hostetter 
> wrote:
>>
>>
>> Wow ... ok ... so no responses / opinions other than Miller, eh?
>>
>> That's fine ... silence == compliance, i guess.
>>
>> I don't see much choice at this point other than a bunch of manual clean
>> up.  I'll try to find some time to take a stab at this at some point in
>> the future, i guess, not sure when.  I'll reply back to this thread if i
>> do; if anyone else beats me to it please reply here as well so we aren't
>> wasting each other's time.
>>
>>
>>
>> : Date: Thu, 14 Apr 2016 00:15:11 +
>> : From: Mark Miller 
>> : Reply-To: dev@lucene.apache.org
>> : To: Lucene Dev 
>> : Subject: Re: post-branch_6x Jira version renaming(s) got overlooked?
>> :
>> : Yeah, sorry, I saw this too. People kept making 6 and 6.0 releases in
>> JIRA
>> : during 5x. A couple times I removed them because trunk or master is
>> : supposed to be renamed when we release. But those versions kept getting
>> : created again. I figured the rollover was not done right, but with no
>> other
>> : complaints I did not really look. Some people with JIRA admin power had
>> : different ideas.
>> : On Wed, Apr 13, 2016 at 6:40 PM Chris Hostetter
>> 
>> : wrote:
>> :
>> : >
>> : > I just noticed that most of the (older) jira's listed in 6.0's
>> CHANGES.txt
>> : > files are still showing up in Jira as being fixed in "master"
>> : >
>> : > Examples...
>> : >
>> : > https://issues.apache.org/jira/browse/LUCENE-5950
>> : > https://issues.apache.org/jira/browse/LUCENE-6631
>> : > https://issues.apache.org/jira/browse/SOLR-3085
>> : > https://issues.apache.org/jira/browse/SOLR-7560
>> : >
>> : > Only some of the more recent issues, that were resolved after
>> branch_6x /
>> : > (and/or branch_6_0) was created, thus people deliberately backported
>> : > and deliberately marked them as fixed in 6.0 have the newer "6.0" fix
>> : > version...
>> : >
>> : > https://issues.apache.org/jira/browse/LUCENE-7056
>> : > https://issues.apache.org/jira/browse/SOLR-8831
>> : >
>> : > my recollection is that part of the release process for creating a new
>> X.0
>> : > release is to rename the "master" version in Jira to "X.0" and re-add
>> a
>> : > new "master" version -- but it looks like that never happened for 6.0
>> (is
>> : > it not documented as part of the release process?) and instead
>> entirely
>> : > new "6.0" jira versions were added.
>> : >
>> : > In any case: it seems like we now need to bulk edit *most* of the
>> : > issues currently labeled "Fixed: master" in both the LUCENE and SOLR
>> jira
>> : > projects, so they are "Fixed: 6.0" (i say *most* because obviously
>> we'll
>> : > need to audit the issues resolved & committed only to master after the
>> : > 6x branch was created and leave them alone) .. sound right?
>> : >
>> : > (we probably shouldn't remove/replace the existing "6.0" versions in
>> : > Jira, because we already have issues marked as "Affects: 6.0")
>> : >
>> : > Or am i completely misunderstanding the situation?
>> : >
>> : >
>> : > -Hoss
>> : > http://www.lucidworks.com/
>> : >

[jira] [Commented] (SOLR-9027) Add GraphTermsQuery to limit traversal on high frequency nodes

2016-04-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15262580#comment-15262580
 ] 

ASF subversion and git services commented on SOLR-9027:
---

Commit 2c66d4b04619e5ac3ddf6984aacd833a62c33a29 in lucene-solr's branch 
refs/heads/master from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2c66d4b ]

SOLR-9027: GraphTermsQuery optimizations and more explicit handling of 
non-caching behavior


> Add GraphTermsQuery to limit traversal on high frequency nodes
> --
>
> Key: SOLR-9027
> URL: https://issues.apache.org/jira/browse/SOLR-9027
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-9027.patch, SOLR-9027.patch, SOLR-9027.patch, 
> SOLR-9027.patch
>
>
> The gatherNodes() Streaming Expression is currently using a basic disjunction 
> query to perform the traversals. This ticket is to create a specific 
> GraphTermsQuery for performing the traversals. 
> The GraphTermsQuery will be based on the TermsQuery, but will also 
> include an option for a docFreq cutoff. Terms that are above the docFreq 
> cutoff will not be included in the query. This will help users do a more 
> precise and efficient traversal.
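Conceptually, the docFreq cutoff just drops high-frequency terms from the traversal's disjunction before the query runs; a plain-JDK illustration of that filtering step (not the actual Lucene query code):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative only: keep traversal terms whose document frequency is at or
// below the cutoff, so very common "hub" nodes don't blow up the graph walk.
public class DocFreqCutoffSketch {
    static List<String> filterTerms(Map<String, Integer> docFreqs, int maxDocFreq) {
        List<String> kept = new ArrayList<>();
        for (Map.Entry<String, Integer> e : docFreqs.entrySet()) {
            if (e.getValue() <= maxDocFreq) {
                kept.add(e.getKey());
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        Map<String, Integer> docFreqs = new LinkedHashMap<>();
        docFreqs.put("nodeA", 10);        // rare node: keep
        docFreqs.put("hubNode", 500_000); // high-frequency hub: drop
        docFreqs.put("nodeB", 42);        // rare node: keep
        System.out.println(filterTerms(docFreqs, 1000)); // [nodeA, nodeB]
    }
}
```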






[jira] [Commented] (SOLR-9038) Ability to create/delete/list snapshots for a solr collection

2016-04-28 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15262556#comment-15262556
 ] 

David Smiley commented on SOLR-9038:


bq. >>I presume by "snapshot", we're talking about named (or numbered) Lucene 
IndexCommit objects across all replicas of a Solr Collection? And then, in 
SOLR-5750 or future patch, the "backup" capability might optionally make 
reference to a named snapshot instead of just taking the last IndexCommit?
bq. Yes that is correct.

Yet I'm now unsure we're actually talking about the same thing, given 
everything else you've said.  If this issue you propose is anything more than 
adding commit metadata (copying segments to another place _is_ more than just 
adding commit metadata), then how is this issue different from SOLR-5750?  I 
understand we want to leverage storage-level efficiencies (e.g. distcp), but 
this issue doesn't seem to actually be about that.  It appears to be 
redundant with SOLR-5750.  Or perhaps you mean to extend/enhance the result of 
SOLR-5750 so that we have an API to list & delete the backups without requiring 
a client to go to the shared file system to observe what backups there are?  
Big +1 to that, and if so please clarify the title/description and add a 
"requires" link to SOLR-5750.  Again, if you mean that, then mentioning HDFS 
etc. is a distraction from this issue's purpose.

> Ability to create/delete/list snapshots for a solr collection
> -
>
> Key: SOLR-9038
> URL: https://issues.apache.org/jira/browse/SOLR-9038
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Hrishikesh Gadre
>
> Currently work is under-way to implement backup/restore API for Solr cloud 
> (SOLR-5750). SOLR-5750 is about providing an ability to "copy" index files 
> and collection metadata to a configurable location. 
> In addition to this, we should also provide a facility to create "named" 
> snapshots for Solr collection. Here by "snapshot" I mean configuring the 
> underlying Lucene IndexDeletionPolicy to not delete a specific commit point 
> (e.g. using PersistentSnapshotIndexDeletionPolicy). This should not be 
> confused with SOLR-5340 which implements core level "backup" functionality.
> The primary motivation of this feature is to decouple recording/preserving a 
> known consistent state of a collection from actually "copying" the relevant 
> files to a physically separate location. This decoupling has a number of 
> advantages:
> - We can use specialized data-copying tools for transferring Solr index 
> files. e.g. in Hadoop environment, typically 
> [distcp|https://hadoop.apache.org/docs/r1.2.1/distcp2.html] tool is used to 
> copy files from one location to another. This tool provides various options to 
> configure degree of parallelism, bandwidth usage as well as integration with 
> different types and versions of file systems (e.g. AWS S3, Azure Blob store 
> etc.)
> - This separation of concern would also help Solr to focus on the key 
> functionality (i.e. querying and indexing) while delegating the copy 
> operation to the tools built for that purpose.
> - Users can decide if/when to copy the data files as against creating a 
> snapshot. e.g. a user may want to create a snapshot of a collection before 
> making an experimental change (e.g. updating/deleting docs, schema change 
> etc.). If the experiment is successful, he can delete the snapshot (without 
> having to copy the files). If the experiment fails, then he can copy the 
> files associated with the snapshot and restore.
> Note that Apache Blur project is also providing a similar feature 
> [BLUR-132|https://issues.apache.org/jira/browse/BLUR-132]






[jira] [Comment Edited] (SOLR-9028) fix bugs in (add sanity checks for) SSL clientAuth testing

2016-04-28 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15262532#comment-15262532
 ] 

Steve Rowe edited comment on SOLR-9028 at 4/28/16 5:05 PM:
---

Rerunning on OS X, when I apply the latest patch on master and run {{ant 
-Dtestcase=TestMiniSolrCloudClusterSSL test}}, I get {{BUILD SUCCESSFUL}}.


was (Author: steve_rowe):
Rerunning on OS X, when I apply the latest patch on master and run {{ant 
-Dtestcase=TestMiniSolrCloudClusterSSL}}, I get {{BUILD SUCCESSFUL}}.

> fix bugs in (add sanity checks for) SSL clientAuth testing
> --
>
> Key: SOLR-9028
> URL: https://issues.apache.org/jira/browse/SOLR-9028
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: SOLR-9028.patch, SOLR-9028.patch, SOLR-9028.patch, 
> SOLR-9028.patch, SOLR-9028.patch, SOLR-9028.patch, os.x.failure.txt
>
>
> While looking into SOLR-8970 i realized there was a whole heap of problems 
> with how clientAuth was being handled in tests.  Notably: it wasn't actually 
> being used when the randomization selects it (apparently due to a copy/paste 
> mistake in SOLR-7166).  But there are a few other misc issues (improper usage 
> of sysprops overrides for tests, misuse of keystore/truststore in test 
> clients, etc..)
> I'm working up a patch to fix all of this, and add some much needed tests to 
> *explicitly* verify both SSL and clientAuth that will include some "false 
> positive" verifications, and some "test the test" checks.






[jira] [Commented] (SOLR-9028) fix bugs in (add sanity checks for) SSL clientAuth testing

2016-04-28 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15262532#comment-15262532
 ] 

Steve Rowe commented on SOLR-9028:
--

Rerunning on OS X, when I apply the latest patch on master and run {{ant 
-Dtestcase=TestMiniSolrCloudClusterSSL}}, I get {{BUILD SUCCESSFUL}}.

> fix bugs in (add sanity checks for) SSL clientAuth testing
> --
>
> Key: SOLR-9028
> URL: https://issues.apache.org/jira/browse/SOLR-9028
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: SOLR-9028.patch, SOLR-9028.patch, SOLR-9028.patch, 
> SOLR-9028.patch, SOLR-9028.patch, SOLR-9028.patch, os.x.failure.txt
>
>
> While looking into SOLR-8970 i realized there was a whole heap of problems 
> with how clientAuth was being handled in tests.  Notably: it wasn't actually 
> being used when the randomization selects it (apparently due to a copy/paste 
> mistake in SOLR-7166).  But there are a few other misc issues (improper usage 
> of sysprops overrides for tests, misuse of keystore/truststore in test 
> clients, etc..)
> I'm working up a patch to fix all of this, and add some much needed tests to 
> *explicitly* verify both SSL and clientAuth that will include some "false 
> positive" verifications, and some "test the test" checks.






[jira] [Created] (LUCENE-7263) xmlparser: Allow SpanQueryBuilder to be used by derived classes

2016-04-28 Thread Daniel Collins (JIRA)
Daniel Collins created LUCENE-7263:
--

 Summary: xmlparser: Allow SpanQueryBuilder to be used by derived 
classes
 Key: LUCENE-7263
 URL: https://issues.apache.org/jira/browse/LUCENE-7263
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/queryparser
Affects Versions: master
Reporter: Daniel Collins


Following on from LUCENE-7210 (and others), the xml queryparser has different 
factories, one for creating normal queries and one for creating span queries.

The former is a protected variable, so it can be used by derived classes; the 
latter isn't.

This change makes the spanFactory a protected variable as well, so derived 
classes can use it more easily.  No functional changes.






[jira] [Commented] (LUCENE-7260) StandardQueryParser is over 100 times slower in v5 compared to v3

2016-04-28 Thread Trejkaz (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15262522#comment-15262522
 ] 

Trejkaz commented on LUCENE-7260:
-

Is there a faster way to do it? Keep in mind that it has to start from a query 
string, since that's what the user who reported the issue to us originally 
entered.

> StandardQueryParser is over 100 times slower in v5 compared to v3
> -
>
> Key: LUCENE-7260
> URL: https://issues.apache.org/jira/browse/LUCENE-7260
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/queryparser
>Affects Versions: 5.4.1
> Environment: Java 8u51
>Reporter: Trejkaz
>  Labels: performance
>
> The following test code times parsing a large query.
> {code}
> import org.apache.lucene.analysis.KeywordAnalyzer;
> //import org.apache.lucene.analysis.core.KeywordAnalyzer;
> import org.apache.lucene.queryParser.standard.StandardQueryParser;
> //import org.apache.lucene.queryparser.flexible.standard.StandardQueryParser;
> import org.apache.lucene.search.BooleanQuery;
>
> public class LargeQueryTest {
>     public static void main(String[] args) throws Exception {
>         BooleanQuery.setMaxClauseCount(50_000);
>         StringBuilder builder = new StringBuilder(50_000 * 10);
>         builder.append("id:( ");
>         boolean first = true;
>         for (int i = 0; i < 50_000; i++) {
>             if (first) {
>                 first = false;
>             } else {
>                 builder.append(" OR ");
>             }
>             builder.append(String.valueOf(i));
>         }
>         builder.append(" )");
>         String queryString = builder.toString();
>         StandardQueryParser parser2 = new StandardQueryParser(new KeywordAnalyzer());
>         for (int i = 0; i < 10; i++) {
>             long t0 = System.currentTimeMillis();
>             parser2.parse(queryString, "nope");
>             long t1 = System.currentTimeMillis();
>             System.out.println(t1 - t0);
>         }
>     }
> }
> {code}
> For Lucene 3.6.2, the timings settle down to 200~300 ms, with the fastest 
> being 207.
> For Lucene 5.4.1, the timings settle down to roughly 22000~23000 ms, with the 
> fastest being 22444.
> So at some point, some change made the query parser 100 times slower. I would 
> suspect that it has something to do with how the list of children is now 
> handled. Every time someone gets the children, it copies the list. Every time 
> someone sets the children, it walks through to detach parent references and 
> then reattaches them all again.
> If it were me, I would probably make these collections immutable so that I 
> didn't have to defensively copy them.






[jira] [Commented] (LUCENE-6966) Contribution: Codec for index-level encryption

2016-04-28 Thread Renaud Delbru (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15262520#comment-15262520
 ] 

Renaud Delbru commented on LUCENE-6966:
---

Hi [~joel.bernstein],

{quote}
1) With the latest patch do you feel the major concerns have been addressed.
{quote}

Yes, the latest patch no longer reuses IVs; instead it uses a different IV for 
each data block. It also introduces an API so that one can control how IVs are 
generated and how the cipher is instantiated.

{quote}
2) From my initial reading of the patch it seemed like everything in the patch 
was pluggable. Does this need to be committed to be usable? Or can it be hosted 
on another project?

3) Because it's such a large patch and codecs change over time, does it present 
a burden to maintain with the core Lucene project? Along these lines is it more 
appropriate from a maintenance standpoint to be maintained by people who are 
really motivated to have this feature. Alfresco engineers would likely 
participate in an outside project if one existed.
{quote}

The patch follows the standard rules of Lucene codecs, so yes, it is fully 
pluggable. As with other codecs, the burden of maintaining it should be low. It 
is a set of Lucene *Format classes that are loosely coupled with other parts of 
the Lucene code. It will likely require maintenance only when the high-level 
Codec and Format APIs change.
 
The patch is large because we had to make a copy of some of the original lucene 
*Format classes, as those classes were final and not extensible. If one wants 
to update them with the latest improvements made in the original classes, this 
might require a bit more effort, but from my personal experience it was so far 
straightforward.
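The per-data-block IV scheme described above can be sketched with the JDK's own crypto API (a generic illustration of the idea, not the patch's actual cipher factory or formats): each block gets a fresh random IV, which is not secret and would be stored alongside the encrypted block.

```java
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;

// Sketch: AES/CBC with a fresh random IV per data block. The IV is generated
// at encryption time and kept next to the ciphertext so readers can decrypt.
public class PerBlockIvSketch {
    public static void main(String[] args) throws Exception {
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        SecureRandom random = new SecureRandom();
        byte[] block = "one compressed stored-fields block".getBytes("UTF-8");

        byte[] iv = new byte[16];  // new 16-byte IV for every block
        random.nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
        byte[] encrypted = cipher.doFinal(block);

        // Decryption uses the same key version and the stored per-block IV.
        cipher.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(iv));
        byte[] decrypted = cipher.doFinal(encrypted);

        System.out.println(Arrays.equals(block, decrypted) ? "roundtrip ok" : "roundtrip failed");
    }
}
```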

> Contribution: Codec for index-level encryption
> --
>
> Key: LUCENE-6966
> URL: https://issues.apache.org/jira/browse/LUCENE-6966
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/other
>Reporter: Renaud Delbru
>  Labels: codec, contrib
> Attachments: LUCENE-6966-1.patch, LUCENE-6966-2.patch
>
>
> We would like to contribute a codec that enables the encryption of sensitive 
> data in the index that has been developed as part of an engagement with a 
> customer. We think that this could be of interest for the community.
> Below is a description of the project.
> h1. Introduction
> In comparison with approaches where all data is encrypted (e.g., file system 
> encryption, index output / directory encryption), encryption at a codec level 
> enables more fine-grained control on which block of data is encrypted. This 
> is more efficient since less data has to be encrypted. This also gives more 
> flexibility such as the ability to select which field to encrypt.
> Some of the requirements for this project were:
> * The performance impact of the encryption should be reasonable.
> * The user can choose which field to encrypt.
> * Key management: During the life cycle of the index, the user can provide a 
> new version of his encryption key. Multiple key versions should co-exist in 
> one index.
> h1. What is supported?
> - Block tree terms index and dictionary
> - Compressed stored fields format
> - Compressed term vectors format
> - Doc values format (prototype based on an encrypted index output) - this 
> will be submitted as a separated patch
> - Index upgrader: command to upgrade all the index segments with the latest 
> key version available.
> h1. How is it implemented?
> h2. Key Management
> One index segment is encrypted with a single key version. An index can have 
> multiple segments, each one encrypted using a different key version. The key 
> version for a segment is stored in the segment info.
> The provided codec is abstract, and a subclass is responsible for providing an 
> implementation of the cipher factory. The cipher factory is responsible for 
> creating a cipher instance based on a given key version.
> h2. Encryption Model
> The encryption model is based on AES/CBC with padding. The initialisation 
> vector (IV) is reused for performance reasons, but only on a per-format and 
> per-segment basis.
> While IV reuse is usually considered bad practice, the CBC mode is somewhat 
> resilient to IV reuse. The only "leak" of information this could lead to is 
> being able to tell that two encrypted blocks of data start with the same 
> prefix. However, it is unlikely that two data blocks in an index segment will 
> start with the same data:
> - Stored Fields Format: Each encrypted data block is a compressed block 
> (~4kb) of one or more documents. It is unlikely that two compressed blocks 
> start with the same data prefix.
> - Term Vectors: Each encrypted data block is a compressed block (~4kb) of 
> terms and payloads 

[jira] [Commented] (SOLR-9038) Ability to create/delete/list snapshots for a solr collection

2016-04-28 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15262483#comment-15262483
 ] 

Hrishikesh Gadre commented on SOLR-9038:


Hi [~dsmiley] thanks for the comments :)

>>I presume by "snapshot", we're talking about named (or numbered) Lucene 
>>IndexCommit objects across all replicas of a Solr Collection? And then, in 
>>SOLR-5750 or future patch, the "backup" capability might optionally make 
>>reference to a named snapshot instead of just taking the last IndexCommit?

Yes that is correct.

 >>And in some separate issue, a rollback ability, I presume.

I am thinking of using the "restore" capability for this (SOLR-5750). The idea 
here is that if the "snapshot" needs to be restored, it should be exported to a 
separate location (an exported snapshot is equivalent to a backup). Since 
"rollback" would be less frequent than snapshot "creation", it should be 
acceptable, for simplicity and uniformity, to use the "restore" work-flow even 
if it is less efficient. But we can always revisit this if there are 
use-cases.

>>Perhaps another way to view this feature proposed here is to have a commit 
>>optionally include a persistent name (or variable name-value metadata for 
>>that matter) that will be included with the IndexCommit that is persisted. 
>>That would be a somewhat simple way to think of this feature, and needn't 
>>involve any SolrCloud related stuff. Of course this data would need to 
>>flow-through in all the places commit boolean does, which is a lot of places, 
>>but I don't think it would be hard/complicated.

I am thinking of defining new APIs at the collection and core level 
(CREATESNAPSHOT/DELETESNAPSHOT/LISTSNAPSHOTS). The collection level 
"CREATESNAPSHOT" operation would be implemented in the Overseer (just like 
BACKUP/RESTORE). The only difference is that it would invoke the core level 
"CREATESNAPSHOT" API for each shard's leader replica (instead of the BACKUP 
API). It would also copy the ZK configuration to the specified location.

Once the snapshot is created for an index commit, the corresponding files will 
be available for download. This download can be implemented without going 
through the Overseer. e.g.
-> If Solr is running on a Hadoop/HDFS cluster, we can use distcp tool to copy 
the files.
-> We can use replication handler functionality to copy the files (This can be 
wrapped as a Solr API or a command line tool).

I am not quite sure how we would capture the collection metadata if we were to 
utilize the "commit" workflow for snapshot creation.

> Ability to create/delete/list snapshots for a solr collection
> -
>
> Key: SOLR-9038
> URL: https://issues.apache.org/jira/browse/SOLR-9038
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Hrishikesh Gadre
>
> Currently work is under-way to implement backup/restore API for Solr cloud 
> (SOLR-5750). SOLR-5750 is about providing an ability to "copy" index files 
> and collection metadata to a configurable location. 
> In addition to this, we should also provide a facility to create "named" 
> snapshots for Solr collection. Here by "snapshot" I mean configuring the 
> underlying Lucene IndexDeletionPolicy to not delete a specific commit point 
> (e.g. using PersistentSnapshotIndexDeletionPolicy). This should not be 
> confused with SOLR-5340 which implements core level "backup" functionality.
> The primary motivation of this feature is to decouple recording/preserving a 
> known consistent state of a collection from actually "copying" the relevant 
> files to a physically separate location. This decoupling has a number of 
> advantages:
> - We can use specialized data-copying tools for transferring Solr index 
> files. e.g. in Hadoop environment, typically 
> [distcp|https://hadoop.apache.org/docs/r1.2.1/distcp2.html] tool is used to 
> copy files from one location to another. This tool provides various options to 
> configure degree of parallelism, bandwidth usage as well as integration with 
> different types and versions of file systems (e.g. AWS S3, Azure Blob store 
> etc.)
> - This separation of concern would also help Solr to focus on the key 
> functionality (i.e. querying and indexing) while delegating the copy 
> operation to the tools built for that purpose.
> - Users can decide if/when to copy the data files as against creating a 
> snapshot. e.g. a user may want to create a snapshot of a collection before 
> making an experimental change (e.g. updating/deleting docs, schema change 
> etc.). If the experiment is successful, he can delete the snapshot (without 
> having to copy the files). If the experiment fails, then he can copy the 
> files associated with the snapshot and restore.
> Note that Apache Blur project is also providing a similar feature 
> 

[jira] [Resolved] (LUCENE-7261) Speed up LSBRadixSorter

2016-04-28 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-7261.
--
   Resolution: Fixed
Fix Version/s: 6.1
   master

> Speed up LSBRadixSorter
> ---
>
> Key: LUCENE-7261
> URL: https://issues.apache.org/jira/browse/LUCENE-7261
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: master, 6.1
>
> Attachments: LUCENE-7261.patch
>
>
> Currently it always does 4 passes over the data (one per byte, since ints 
> have 4 bytes). However, most of the time, we know {{maxDoc}}, so we can use 
> this information to do fewer passes when they are not necessary. For 
> instance, if maxDoc is less than or equal to 2^24, we only need 3 passes, and 
> if maxDoc is less than or equal to 2^16, we only need two passes.
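The saving comes down to simple arithmetic on the number of significant bytes in the largest sortable value (illustrative only; the committed patch's internals may differ): an LSB radix sort with 8-bit digits needs one pass per significant byte.

```java
// Illustrative: with 8-bit digits, an LSB radix sort needs
// ceil(significantBits / 8) passes, so a known bound on doc ids
// (maxDoc) lets us skip the high-byte passes that would be no-ops.
public class RadixPassCount {
    static int passesNeeded(int maxValue) {
        if (maxValue == 0) {
            return 1;
        }
        int significantBits = 32 - Integer.numberOfLeadingZeros(maxValue);
        return (significantBits + 7) / 8; // ceil(bits / 8)
    }

    public static void main(String[] args) {
        System.out.println(passesNeeded((1 << 16) - 1));     // doc ids < 2^16: 2 passes
        System.out.println(passesNeeded((1 << 24) - 1));     // doc ids < 2^24: 3 passes
        System.out.println(passesNeeded(Integer.MAX_VALUE)); // worst case: 4 passes
    }
}
```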






[jira] [Commented] (LUCENE-7261) Speed up LSBRadixSorter

2016-04-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15262446#comment-15262446
 ] 

ASF subversion and git services commented on LUCENE-7261:
-

Commit 8ca6f6651ede19bfaee9051e9b87927685cb9be0 in lucene-solr's branch 
refs/heads/branch_6x from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8ca6f66 ]

LUCENE-7261: Speed up LSBRadixSorter.


> Speed up LSBRadixSorter
> ---
>
> Key: LUCENE-7261
> URL: https://issues.apache.org/jira/browse/LUCENE-7261
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7261.patch
>
>
> Currently it always does 4 passes over the data (one per byte, since ints 
> have 4 bytes). However, most of the time, we know {{maxDoc}}, so we can use 
> this information to do fewer passes when they are not necessary. For 
> instance, if maxDoc is less than or equal to 2^24, we only need 3 passes, and 
> if maxDoc is less than or equal to 2^16, we only need two passes.






[jira] [Commented] (LUCENE-7261) Speed up LSBRadixSorter

2016-04-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15262447#comment-15262447
 ] 

ASF subversion and git services commented on LUCENE-7261:
-

Commit ef45d4b2e1f9c967b62340acb027f50888a00ba2 in lucene-solr's branch 
refs/heads/master from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ef45d4b ]

LUCENE-7261: Speed up LSBRadixSorter.


> Speed up LSBRadixSorter
> ---
>
> Key: LUCENE-7261
> URL: https://issues.apache.org/jira/browse/LUCENE-7261
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7261.patch
>
>
> Currently it always does 4 passes over the data (one per byte, since ints 
> have 4 bytes). However, most of the time, we know {{maxDoc}}, so we can use 
> this information to do fewer passes when they are not necessary. For 
> instance, if maxDoc is less than or equal to 2^24, we only need 3 passes, and 
> if maxDoc is less than or equal to 2^16, we only need two passes.






Re: Lucene/Solr 5.5.1

2016-04-28 Thread Anshum Gupta
Too much going on! Thanks Yonik.
I'll start working on the RC now.

NOTE: Please don't backport any more issues right now. If there are
exceptions, please raise them here.

On Thu, Apr 28, 2016 at 9:09 AM, Yonik Seeley  wrote:

> On Thu, Apr 28, 2016 at 12:04 PM, Anshum Gupta 
> wrote:
> > Thanks. I'm waiting for the last backport of SOLR-8865.
>
> It should already be there... I closed it yesterday.
> -Yonik
>
>
>


-- 
Anshum Gupta


[jira] [Comment Edited] (SOLR-9027) Add GraphTermsQuery to limit traversal on high frequency nodes

2016-04-28 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15262381#comment-15262381
 ] 

Joel Bernstein edited comment on SOLR-9027 at 4/28/16 4:22 PM:
---

[~dsmiley], I've been working on the changes you proposed. I dug into how the 
TermContext is being used elsewhere in Lucene. What I found was that the 
TermQuery is holding onto the TermContext and seems to be relying on wrapper 
queries to manage it properly. This is marked as an expert usage. The 
CommonTermsQuery uses this constructor. So it does appear that holding onto the 
TermContext is OK, as long as it's handled properly. So I'll review just to 
make sure the TermContexts are always regenerated when the query is run, and 
continue to hold onto them within the query.
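A Lucene-free sketch of the pattern described in the comment (class and field names here are hypothetical stand-ins, not Lucene's actual API): the query holds onto a cached context object, but regenerates it whenever it is run against a different reader state.

```java
public class CachedContextQuery {
    // Stand-in for Lucene's TermContext: records which reader state
    // (here modelled as a version number) it was built against.
    static final class TermContext {
        final long readerVersion;
        TermContext(long readerVersion) { this.readerVersion = readerVersion; }
    }

    // Held inside the query, as the comment describes.
    private TermContext cached;

    // Regenerate the context if it is missing or was built against a
    // different reader; otherwise reuse the cached one.
    TermContext contextFor(long currentReaderVersion) {
        if (cached == null || cached.readerVersion != currentReaderVersion) {
            cached = new TermContext(currentReaderVersion);
        }
        return cached;
    }
}
```

The point of the check is that a cached context is only valid for the reader it was built from; reusing it across readers is the "expert usage" hazard the comment alludes to.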


was (Author: joel.bernstein):
[~dsmiley], I've been working on the changes you proposed. I dug into how to 
the TermContext is being used elsewhere in Lucene. What I found was that the 
TermQuery is holding onto the TermContext and seems to be relying on wrapper 
queries to manage it properly. This is marked as an expert usage. The 
CommonTermsQuery uses this constructor. So it does appear that holding onto the 
TermContext is OK, as long as it's handled properly. So I'll review just to 
make sure the TermContexts are always regenerated when the query is run and 
continue to hold onto it within the query.

> Add GraphTermsQuery to limit traversal on high frequency nodes
> --
>
> Key: SOLR-9027
> URL: https://issues.apache.org/jira/browse/SOLR-9027
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-9027.patch, SOLR-9027.patch, SOLR-9027.patch, 
> SOLR-9027.patch
>
>
> The gatherNodes() Streaming Expression is currently using a basic disjunction 
> query to perform the traversals. This ticket is to create a specific 
> GraphTermsQuery for performing the traversals. 
> The GraphTermsQuery will be based off of the TermsQuery, but will also 
> include an option for a docFreq cutoff. Terms that are above the docFreq 
> cutoff will not be included in the query. This will help users do a more 
> precise and efficient traversal.
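A minimal, Lucene-free sketch of the docFreq-cutoff idea (the name `filterTerms` and the map-based frequency lookup are illustrative assumptions; the real GraphTermsQuery would consult the index's term statistics):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class DocFreqCutoff {
    // Drop terms whose document frequency exceeds the cutoff, so the
    // traversal query skips very high-frequency nodes entirely.
    static List<String> filterTerms(Map<String, Integer> docFreqs, int maxDocFreq) {
        List<String> kept = new ArrayList<>();
        for (Map.Entry<String, Integer> e : docFreqs.entrySet()) {
            if (e.getValue() <= maxDocFreq) {
                kept.add(e.getKey());
            }
        }
        return kept;
    }
}
```

Excluding such terms keeps the disjunction small and avoids visiting hub nodes that would dominate the traversal.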





