Re: Re: Re: potential accuracy degradation due to approximation of document length in BM25 (and other similarities)

2016-07-08 Thread David Smiley
I agree that using one byte by default is questionable on modern machines,
and given common text field sizes as well. I think my earlier description of
how norms are encoded/accessed may have been wrong.
Lucene53NormsFormat supports Long, I see, and it's clever about observing
the max bytes-per-value needed.  No need for some new format.  It's the
Similarity impls (BM25 is one but others do this too) that choose to encode
a smaller value.  It would be nice to have this be toggle-able!  Maybe just
a boolean flag?
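
To make the concern concrete, here is a minimal, self-contained sketch of the
precision loss, assuming Lucene 6.x's org.apache.lucene.util.SmallFloat helper,
which (as far as I can tell) is what BM25Similarity uses to pack
1/sqrt(fieldLength) into that single byte:

import java.util.HashMap;
import java.util.Map;
import org.apache.lucene.util.SmallFloat;

public class NormPrecisionDemo {
  public static void main(String[] args) {
    // For every document length up to one million, compute the one-byte norm
    // roughly the way BM25Similarity does: pack 1/sqrt(length) via SmallFloat.
    Map<Byte, Integer> lengthsPerNorm = new HashMap<>();
    for (int length = 1; length <= 1_000_000; length++) {
      byte norm = SmallFloat.floatToByte315((float) (1.0 / Math.sqrt(length)));
      lengthsPerNorm.merge(norm, 1, Integer::sum);
    }
    // At most 256 distinct norm values are possible, so many different lengths
    // become indistinguishable to the scorer -- the approximation in question.
    System.out.println("distinct document lengths: 1000000");
    System.out.println("distinct encoded norms:    " + lengthsPerNorm.size());
  }
}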

On Thu, Jul 7, 2016 at 9:52 PM Leo Boytsov  wrote:

> Hi David,
>
> thank you for picking it up. Now we are having a more meaningful
> discussion regarding the "waste".
>
> Leo,
>> There may be confusion here as to where the space is wasted.  1 vs 8 bytes
>> per doc on disk is peanuts, sure, but in RAM it is not and that is the
>> concern.  AFAIK the norms are memory-mapped in, and we need to ensure it's
>> trivial to know which offset to go to on disk based on a document id,
>> which
>> precludes compression but maybe you have ideas to improve that.
>>
>
> First, my understanding is that all essential parts of the Lucene index
> are memory mapped, in particular, the inverted index (in the most common
> scenario at least). Otherwise, the search performance is miserable. That
> said, memory mapping a few extra bytes per document shouldn't make a
> noticeable difference.
>
> Also, judging by the code in the class Lucene53NormsProducer and a debug
> session, Lucene only maps a compressed segment containing the norm values.
> Norms are stored using 1, 2, 4, or 8 bytes and are decoded into an 8-byte
> long when read, probably on a per-slice basis.
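
As a side note, a rough sketch (assuming the Lucene 6.x reader API) of how
those norms surface to a caller -- whatever byte width the format chose on
disk, the value comes back as a long. The field name "body" is hypothetical.

import java.io.IOException;
import java.nio.file.Paths;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.NumericDocValues;
import org.apache.lucene.store.FSDirectory;

public class DumpNorms {
  public static void main(String[] args) throws IOException {
    try (DirectoryReader reader = DirectoryReader.open(FSDirectory.open(Paths.get(args[0])))) {
      for (LeafReaderContext leaf : reader.leaves()) {
        NumericDocValues norms = leaf.reader().getNormValues("body"); // hypothetical field name
        if (norms == null) continue; // no norms for this field in this segment
        for (int doc = 0; doc < leaf.reader().maxDoc(); doc++) {
          // Lucene 6.x: norms.get(doc) returns the decoded norm as a long
          System.out.println("doc " + (leaf.docBase + doc) + " norm=" + norms.get(doc));
        }
      }
    }
  }
}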
>
> Anyway, situations in which you will get more than 65536 words per
> document are quite rare. Situations with documents having 4 billion words
> (or more) are exotic. If you have such enormous documents, again, saving on
> document normalization factors won't be your first priority. You would
> probably think about ways to split such a huge document, which likely
> contains every possible keyword, into something more manageable.
>
> To sum up, for 99.999% of users, squeezing normalization factors into
> a single byte has absolutely no benefit. Memoization does seem to speed
> things up a bit, but I suspect this advantage may disappear with new
> generations of CPUs.
>
>
>> To use your own norms encoding, see Codec.normsFormat.  (disclaimer: I
>> haven't used this but I know where to look)
>>
>
> Ok, thanks.
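
For completeness, the Codec.normsFormat hook mentioned above would be wired up
roughly like this. This is only a sketch against the Lucene 6.x FilterCodec
API as I recall it; MyWideNormsFormat is a hypothetical format you would still
have to implement, and a custom codec also needs SPI registration before the
index can be read back.

import org.apache.lucene.codecs.Codec;
import org.apache.lucene.codecs.FilterCodec;
import org.apache.lucene.codecs.NormsFormat;

public class WideNormsCodec extends FilterCodec {
  // hypothetical norms format with more precision than the one-byte encoding
  private final NormsFormat norms = new MyWideNormsFormat();

  public WideNormsCodec() {
    // delegate every other format to the current default codec
    super("WideNormsCodec", Codec.getDefault());
  }

  @Override
  public NormsFormat normsFormat() {
    return norms;
  }
}

// usage sketch: indexWriterConfig.setCodec(new WideNormsCodec());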
>
>
>>
>> ~ David
>>
>> On Wed, Jul 6, 2016 at 5:31 PM Leo Boytsov  wrote:
>>
>> > Hi,
>> >
>> > for some reason I didn't get a reply from the mailing list directly, so I
>> > have to send a new message. I would appreciate it if this could be fixed,
>> > so that I get replies as well.
>> >
>> > First of all, I don't buy the claim about the issue being well-known. I
>> > would actually argue that nobody except a few Lucene devs knows about it.
>> > There is also a bug in Lucene's tutorial example. This needs to be fixed
>> > as well.
>> >
>> > Neither do I find your arguments convincing. In particular, I don't think
>> > that there is any serious waste of space. Please see my detailed comments
>> > below. Please note that I definitely don't know all the internals well, so
>> > I would appreciate it if you could explain them better.
>> >
>> > The downsides are documented and known. But I don't think you are
>> >> fully documenting the tradeoffs here, by encoding up to a 64-bit long,
>> >> you can use up to *8x more memory and disk space* than what lucene
>> >> does by default, and that is per-field.
>> >
>> >
>> > This is not true. First of all, the increase is only for the textual
>> > fields. Simple fields like Integers don't use normalization factors. So,
>> > there is no increase for them.
>> >
>>
>> > In the worst case, you will have 7 extra bytes for a *text* field.
>> > However, this is not an 8x increase.
>> >
>> > If you do *compress* the length of the text field, then its size will
>> > depend on the size of the text field. For example, one extra byte will be
>> > required for fields that contain more than 256 words, two extra bytes for
>> > fields having more than 65536 words, and so on and so forth. *Compared to
>> > the field sizes, a several-byte increase is simply laughable.*
>> >
>> > If Lucene saves the normalization factor *without compression*, it should
>> > already be using 8 bytes. So, storing the full document length won't make
>> > a difference.
>> >
>> >
>> >> So that is a real trap. Maybe
>> >> throw an exception there instead if the boost != 1F (just don't
>> >> support it), and add a guard for "supermassive" documents, so that at
>> >> most only 16 bits are ever used instead of 64. The document need not
>> >> really be massive, it can happen just from a strange analysis chain
>> >> (n-grams etc) that you get large values here.
>> >>
>> >
>> > As mentioned above, storing a few 

[JENKINS] Lucene-Solr-6.x-Windows (32bit/jdk1.8.0_92) - Build # 307 - Still Failing!

2016-07-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/307/
Java: 32bit/jdk1.8.0_92 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestLocalFSCloudBackupRestore.test

Error Message:
expected: but was:

Stack Trace:
java.lang.AssertionError: expected: but was:
at 
__randomizedtesting.SeedInfo.seed([E16D4D9060A2E501:6939724ACE5E88F9]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.cloud.AbstractCloudBackupRestoreTestCase.testBackupAndRestore(AbstractCloudBackupRestoreTestCase.java:209)
at 
org.apache.solr.cloud.AbstractCloudBackupRestoreTestCase.test(AbstractCloudBackupRestoreTestCase.java:127)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 10959 lines...]
   [junit4] 

[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_92) - Build # 5970 - Still Failing!

2016-07-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/5970/
Java: 32bit/jdk1.8.0_92 -server -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestLocalFSCloudBackupRestore.test

Error Message:
Error from server at http://127.0.0.1:57329/solr: The backup directory already 
exists: 
file:///C:/Users/jenkins/workspace/Lucene-Solr-master-Windows/solr/build/solr-core/test/J1/temp/solr.cloud.TestLocalFSCloudBackupRestore_CAD71CBCDA11D0A6-001/tempDir-002/mytestbackup/

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:57329/solr: The backup directory already 
exists: 
file:///C:/Users/jenkins/workspace/Lucene-Solr-master-Windows/solr/build/solr-core/test/J1/temp/solr.cloud.TestLocalFSCloudBackupRestore_CAD71CBCDA11D0A6-001/tempDir-002/mytestbackup/
at 
__randomizedtesting.SeedInfo.seed([CAD71CBCDA11D0A6:4283236674EDBD5E]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:606)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:259)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:366)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1270)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1040)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:976)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at 
org.apache.solr.cloud.AbstractCloudBackupRestoreTestCase.testBackupAndRestore(AbstractCloudBackupRestoreTestCase.java:207)
at 
org.apache.solr.cloud.AbstractCloudBackupRestoreTestCase.test(AbstractCloudBackupRestoreTestCase.java:127)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 

[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+125) - Build # 1097 - Failure!

2016-07-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1097/
Java: 64bit/jdk-9-ea+125 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:40246/c8n_1x3_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:40246/c8n_1x3_shard1_replica1
at 
__randomizedtesting.SeedInfo.seed([D19D3980432EB27F:59C9065AEDD2DF87]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:713)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1109)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:998)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:934)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:592)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:578)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf3(HttpPartitionTest.java:380)
at 
org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:114)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:533)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 254 - Still Failing!

2016-07-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/254/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

4 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicZkTest

Error Message:
SolrCore.getOpenCount()==2

Stack Trace:
java.lang.RuntimeException: SolrCore.getOpenCount()==2
at __randomizedtesting.SeedInfo.seed([816F4AFD5116F47F]:0)
at org.apache.solr.util.TestHarness.close(TestHarness.java:344)
at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:693)
at 
org.apache.solr.cloud.AbstractZkTestCase.azt_afterClass(AbstractZkTestCase.java:149)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicZkTest

Error Message:
SolrCore.getOpenCount()==2

Stack Trace:
java.lang.RuntimeException: SolrCore.getOpenCount()==2
at __randomizedtesting.SeedInfo.seed([816F4AFD5116F47F]:0)
at org.apache.solr.util.TestHarness.close(TestHarness.java:344)
at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:693)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:250)
at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 114 - Still Failing

2016-07-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/114/

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestSolrConfigHandlerCloud

Error Message:
10 threads leaked from SUITE scope at 
org.apache.solr.handler.TestSolrConfigHandlerCloud: 1) Thread[id=127953, 
name=SolrConfigHandler-refreshconf, state=TIMED_WAITING, 
group=TGRP-TestSolrConfigHandlerCloud] at java.lang.Thread.sleep(Native 
Method) at 
org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:101)
 at 
org.apache.solr.core.SolrResourceLoader.openSchema(SolrResourceLoader.java:333) 
at 
org.apache.solr.schema.IndexSchemaFactory.create(IndexSchemaFactory.java:48)
 at 
org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:75)
 at 
org.apache.solr.core.ConfigSetService.createIndexSchema(ConfigSetService.java:107)
 at 
org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:78)   
  at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:935) 
at org.apache.solr.core.SolrCore.lambda$getConfListener$6(SolrCore.java:2544)   
  at org.apache.solr.core.SolrCore$$Lambda$87/2045001724.run(Unknown 
Source) at 
org.apache.solr.handler.SolrConfigHandler$Command$1.run(SolrConfigHandler.java:218)
2) Thread[id=127848, name=Thread-92749, state=TIMED_WAITING, 
group=TGRP-TestSolrConfigHandlerCloud] at java.lang.Thread.sleep(Native 
Method) at 
org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:101)
 at 
org.apache.solr.core.SolrResourceLoader.openSchema(SolrResourceLoader.java:333) 
at 
org.apache.solr.schema.IndexSchemaFactory.create(IndexSchemaFactory.java:48)
 at 
org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:75)
 at 
org.apache.solr.core.ConfigSetService.createIndexSchema(ConfigSetService.java:107)
 at 
org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:78)   
  at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:935) 
at org.apache.solr.core.SolrCore.lambda$getConfListener$6(SolrCore.java:2544)   
  at org.apache.solr.core.SolrCore$$Lambda$87/2045001724.run(Unknown 
Source) at 
org.apache.solr.cloud.ZkController$4.run(ZkController.java:2445)3) 
Thread[id=127609, name=SolrConfigHandler-refreshconf, state=TIMED_WAITING, 
group=TGRP-TestSolrConfigHandlerCloud] at java.lang.Thread.sleep(Native 
Method) at 
org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:101)
 at 
org.apache.solr.core.SolrResourceLoader.openSchema(SolrResourceLoader.java:333) 
at 
org.apache.solr.schema.IndexSchemaFactory.create(IndexSchemaFactory.java:48)
 at 
org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:75)
 at 
org.apache.solr.core.ConfigSetService.createIndexSchema(ConfigSetService.java:107)
 at 
org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:78)   
  at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:935) 
at org.apache.solr.core.SolrCore.lambda$getConfListener$6(SolrCore.java:2544)   
  at org.apache.solr.core.SolrCore$$Lambda$87/2045001724.run(Unknown 
Source) at 
org.apache.solr.handler.SolrConfigHandler$Command$1.run(SolrConfigHandler.java:218)
4) Thread[id=127080, name=SolrConfigHandler-refreshconf, 
state=TIMED_WAITING, group=TGRP-TestSolrConfigHandlerCloud] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:101)
 at 
org.apache.solr.core.SolrResourceLoader.openSchema(SolrResourceLoader.java:333) 
at 
org.apache.solr.schema.IndexSchemaFactory.create(IndexSchemaFactory.java:48)
 at 
org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:75)
 at 
org.apache.solr.core.ConfigSetService.createIndexSchema(ConfigSetService.java:107)
 at 
org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:78)   
  at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:935) 
at org.apache.solr.core.SolrCore.lambda$getConfListener$6(SolrCore.java:2544)   
  at org.apache.solr.core.SolrCore$$Lambda$87/2045001724.run(Unknown 
Source) at 
org.apache.solr.handler.SolrConfigHandler$Command$1.run(SolrConfigHandler.java:218)
5) Thread[id=128650, name=SolrConfigHandler-refreshconf, 
state=TIMED_WAITING, group=TGRP-TestSolrConfigHandlerCloud] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:101)
 at 
org.apache.solr.core.SolrResourceLoader.openSchema(SolrResourceLoader.java:333) 
at 

[jira] [Commented] (SOLR-9163) Confusing solrconfig.xml in the downloaded solr*.zip

2016-07-08 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368672#comment-15368672
 ] 

Hoss Man commented on SOLR-9163:


bq. The schemas should be exactly the same now (except for the copyField).

Except that one of them (data_driven_schema) supports adding fields 
automatically, while the other (basic) does not -- so a bunch of commented out 
hunks of solrconfig.xml that give examples of how to do something with a 
"price" field are viable in a data_driven_schema config set, but nonsensical in 
the basic_configs set.

bq. Shouldn't schemaless just be about enabling that one feature?

yes, but:
# there is a lot of configuration involved in supporting a data_driven_schema 
collection (the various update processors and whatnot) that is now 
cluttering up the "basic" configs 
# that sounds like a reason to *delete* commented out sample cruft from 
data_driven, not add it to basic_configs...

bq. FWIW, I'd be +1 on removing a lot of the cruft from both of the configs ...

+1

> Confusing solrconfig.xml in the downloaded solr*.zip
> 
>
> Key: SOLR-9163
> URL: https://issues.apache.org/jira/browse/SOLR-9163
> Project: Solr
>  Issue Type: Bug
>Reporter: Sachin Goyal
> Fix For: 6.2
>
> Attachments: SOLR-9163.patch, SOLR-9163.patch
>
>
> Here are the solrconfig.xml when I download and unzip solr:
> {code}
> find . -name solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/db/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/mail/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/rss/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/solr/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/tika/conf/solrconfig.xml
> ./solr-5.5.1/example/files/conf/solrconfig.xml
> ./solr-5.5.1/server/solr/configsets/basic_configs/conf/solrconfig.xml
> ./solr-5.5.1/server/solr/configsets/data_driven_schema_configs/conf/solrconfig.xml
> ./solr-5.5.1/server/solr/configsets/sample_techproducts_configs/conf/solrconfig.xml
> {code}
> Most likely, the ones I want to use are in server/solr/configsets, I assume.
> But then which ones among those three?
> Searching online does not provide much detailed information.
> And diff-ing among them yields even more confusing results.
> Example: When I diff basic_configs/conf/solrconfig.xml with 
> data_driven_schema_configs/conf/solrconfig.xml, I am not sure why the latter 
> has these extra constructs?
> # solr.LimitTokenCountFilterFactory and all the comments around it.
> # deletionPolicy class="solr.SolrDeletionPolicy"
> # Commented out infoStream file="INFOSTREAM.txt"
> # Extra comments for "Update Related Event Listeners"
> # indexReaderFactory
> # And so for lots of other constructs and comments.
> The point is that it is difficult to find out exactly what extra features in 
> the latter are making it data-driven. Hence it is difficult to know what 
> features I am losing by not taking the data-driven-schema.
> It would be good to sync the above 3 files together (each file should have 
> same comments and differ only in the configuration which makes them 
> different). Also, some good documentation should be put online about them 
> otherwise it is very confusing for non-committers and vanilla-users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9279) Add greater than, less than, etc in Solr function queries

2016-07-08 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368659#comment-15368659
 ] 

Hoss Man commented on SOLR-9279:


I didn't look at the pull request in depth, but in general i applaud the idea 
and the general approach from skimming the issue description and 
CompareNumericFunction.java

3 things that did jump out at me when skimming the patch as a whole: 
* i see edits to ValueSourceParser.java but no edits to QueryEqualityTest.java 
... that gives me 99% confidence that this patch breaks QueryEqualityTest
* I see tests of using these new functions when wrapped in {{if(...)}} but no 
(obvious to me) tests of these new functions being used directly for their 
return value -- ex: {{fl=id,gte(price,0)}} -- and demonstrating what the 
expected result should be
** in this example, i would expect the result type to be a Boolean (ie: {{true}})
** Since i don't see {{FunctionValues.objectVal}} overridden anywhere in the 
patch, i'm assuming this doesn't work as I expect
* I don't see {{FunctionValues.exists}} overridden anywhere in this patch, 
which IIRC means these functions are always going to report "exists=true", which 
does not seem like a good hardcoded behavior for a ValueSource that wraps other 
ValueSources.
** some thought/javadocs should be given to how exactly these functions should 
behave if/when one or more of the ValueSources they wrap do not exist for a 
given document -- and some tests demonstrating the expected behavior in these 
situations seem crucial.
** see LUCENE-5961, and the core premise expressed in the first comment on that 
issue, which seems just as relevant to me for this issue.
* I would suggest implementing {{exists}} using 
{{MultiFunction.allExists(...)}}; that way callers can decide for themselves 
how it should behave, by wrapping the inner ValueSources in DefFunction, and/or 
by wrapping their CompareNumericFunction in a DefFunction as they see fit (a 
rough sketch of that approach follows below).
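
To make the {{exists}} suggestion concrete, here's a rough sketch (not the
attached pull request) of how a {{gte(a,b)}} style ValueSource could delegate
{{exists}} to {{MultiFunction.allExists}}. Class and package names follow
Lucene/Solr 6.x as best i recall them, so treat the exact signatures (in
particular the allExists overload) as assumptions.

{code}
import java.io.IOException;
import java.util.Map;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.queries.function.FunctionValues;
import org.apache.lucene.queries.function.ValueSource;
import org.apache.lucene.queries.function.docvalues.BoolDocValues;
import org.apache.lucene.queries.function.valuesource.MultiFunction;

// Illustrative only; the actual patch uses CompareNumericFunction.java.
public class GteFunction extends ValueSource {
  private final ValueSource lhs;
  private final ValueSource rhs;

  public GteFunction(ValueSource lhs, ValueSource rhs) {
    this.lhs = lhs;
    this.rhs = rhs;
  }

  @Override
  public FunctionValues getValues(Map context, LeafReaderContext readerContext) throws IOException {
    final FunctionValues a = lhs.getValues(context, readerContext);
    final FunctionValues b = rhs.getValues(context, readerContext);
    return new BoolDocValues(this) {
      @Override
      public boolean boolVal(int doc) {
        return a.doubleVal(doc) >= b.doubleVal(doc);
      }
      @Override
      public boolean exists(int doc) {
        // only claim a value when both wrapped sources have one for this doc
        return MultiFunction.allExists(doc, new FunctionValues[] { a, b });
      }
    };
  }

  @Override
  public String description() {
    return "gte(" + lhs.description() + "," + rhs.description() + ")";
  }

  @Override
  public boolean equals(Object o) {
    if (!(o instanceof GteFunction)) return false;
    GteFunction other = (GteFunction) o;
    return lhs.equals(other.lhs) && rhs.equals(other.rhs);
  }

  @Override
  public int hashCode() {
    return 31 * lhs.hashCode() + rhs.hashCode();
  }
}
{code}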

> Add greater than, less than, etc in Solr function queries
> -
>
> Key: SOLR-9279
> URL: https://issues.apache.org/jira/browse/SOLR-9279
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Reporter: Doug Turnbull
> Fix For: master (7.0)
>
>
> If you use the "if" function query, you'll often expect to be able to use 
> greater than/less than functions. For example, you might want to boost books 
> written in the past 7 years. Unfortunately, there's no "greater than" 
> function query that will return non-zero when the lhs > rhs. Instead to get 
> this, you need to create really awkward function queries like I do here 
> (http://opensourceconnections.com/blog/2014/11/26/stepwise-date-boosting-in-solr/):
> if(min(0,sub(ms(mydatefield),sub(ms(NOW),315569259747))),0.8,1)
> The pull request attached to this Jira adds the following function queries
> (https://github.com/apache/lucene-solr/pull/49)
> -gt(lhs, rhs) (returns 1 if lhs > rhs, 0 otherwise)
> -lt(lhs, rhs) (returns 1 if lhs < rhs, 0 otherwise)
> -gte
> -lte
> -eq
> So instead of 
> if(min(0,sub(ms(mydatefield),sub(ms(NOW),315569259747))),0.8,1)
> one could now write
> if(lt(ms(mydatefield),315569259747),0.8,1)
> (if mydatefield < 315569259747 then 0.8 else 1)
> A bit more readable and less puzzling



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_92) - Build # 17191 - Still Failing!

2016-07-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17191/
Java: 32bit/jdk1.8.0_92 -server -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestLocalFSCloudBackupRestore.test

Error Message:
Error from server at https://127.0.0.1:35925/solr: 'location' is not specified 
as a query parameter or as a default repository property or as a cluster 
property.

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:35925/solr: 'location' is not specified as a 
query parameter or as a default repository property or as a cluster property.
at 
__randomizedtesting.SeedInfo.seed([3CECFD6A4239A8E9:B4B8C2B0ECC5C511]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:606)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:259)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:366)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1270)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1040)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:976)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at 
org.apache.solr.cloud.AbstractCloudBackupRestoreTestCase.testInvalidPath(AbstractCloudBackupRestoreTestCase.java:149)
at 
org.apache.solr.cloud.AbstractCloudBackupRestoreTestCase.test(AbstractCloudBackupRestoreTestCase.java:128)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
 

[jira] [Commented] (SOLR-9163) Confusing solrconfig.xml in the downloaded solr*.zip

2016-07-08 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368633#comment-15368633
 ] 

Yonik Seeley commented on SOLR-9163:


FWIW, I'd be +1 on removing a lot of the cruft from *both* of the configs (and 
like I said, ideally just merging them and having a simple switch to turn 
on/off schemaless).

> Confusing solrconfig.xml in the downloaded solr*.zip
> 
>
> Key: SOLR-9163
> URL: https://issues.apache.org/jira/browse/SOLR-9163
> Project: Solr
>  Issue Type: Bug
>Reporter: Sachin Goyal
> Fix For: 6.2
>
> Attachments: SOLR-9163.patch, SOLR-9163.patch
>
>
> Here are the solrconfig.xml when I download and unzip solr:
> {code}
> find . -name solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/db/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/mail/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/rss/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/solr/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/tika/conf/solrconfig.xml
> ./solr-5.5.1/example/files/conf/solrconfig.xml
> ./solr-5.5.1/server/solr/configsets/basic_configs/conf/solrconfig.xml
> ./solr-5.5.1/server/solr/configsets/data_driven_schema_configs/conf/solrconfig.xml
> ./solr-5.5.1/server/solr/configsets/sample_techproducts_configs/conf/solrconfig.xml
> {code}
> Most likely, the ones I want to use are in server/solr/configsets, I assume.
> But then which ones among those three?
> Searching online does not provide much detailed information.
> And diff-ing among them yields even more confusing results.
> Example: When I diff basic_configs/conf/solrconfig.xml with 
> data_driven_schema_configs/conf/solrconfig.xml, I am not sure why the latter 
> has these extra constructs?
> # solr.LimitTokenCountFilterFactory and all the comments around it.
> # deletionPolicy class="solr.SolrDeletionPolicy"
> # Commented out infoStream file="INFOSTREAM.txt"
> # Extra comments for "Update Related Event Listeners"
> # indexReaderFactory
> # And so for lots of other constructs and comments.
> The point is that it is difficult to find out exactly what extra features in 
> the latter are making it data-driven. Hence it is difficult to know what 
> features I am losing by not taking the data-driven-schema.
> It would be good to sync the above 3 files together (each file should have 
> same comments and differ only in the configuration which makes them 
> different). Also, some good documentation should be put online about them 
> otherwise it is very confusing for non-committers and vanilla-users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9163) Confusing solrconfig.xml in the downloaded solr*.zip

2016-07-08 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368611#comment-15368611
 ] 

Yonik Seeley commented on SOLR-9163:


bq. commented out example stuff, most of which refers to fields that don't even 
exist in the basic_configs schema 

The schema's should be exactly the same now (except for the copyField).

bq. FWIW i think making "basic_configs" bigger [...]  is a bad idea.

I sort of had the same thought when syncing these up... but I modeled the basic 
after the schemaless (instead of vice-versa) because schemaless is what you get 
by default when you create a core, and I didn't want to go breaking examples in 
documentation.

bq. The intent behind basic_configs was to be just that: a very basic set of 
configs.

Shouldn't schemaless just be about enabling that one feature?



> Confusing solrconfig.xml in the downloaded solr*.zip
> 
>
> Key: SOLR-9163
> URL: https://issues.apache.org/jira/browse/SOLR-9163
> Project: Solr
>  Issue Type: Bug
>Reporter: Sachin Goyal
> Fix For: 6.2
>
> Attachments: SOLR-9163.patch, SOLR-9163.patch
>
>
> Here are the solrconfig.xml when I download and unzip solr:
> {code}
> find . -name solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/db/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/mail/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/rss/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/solr/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/tika/conf/solrconfig.xml
> ./solr-5.5.1/example/files/conf/solrconfig.xml
> ./solr-5.5.1/server/solr/configsets/basic_configs/conf/solrconfig.xml
> ./solr-5.5.1/server/solr/configsets/data_driven_schema_configs/conf/solrconfig.xml
> ./solr-5.5.1/server/solr/configsets/sample_techproducts_configs/conf/solrconfig.xml
> {code}
> Most likely, the ones I want to use are in server/solr/configsets, I assume.
> But then which ones among those three?
> Searching online does not provide much detailed information.
> And diff-ing among them yields even more confusing results.
> Example: When I diff basic_configs/conf/solrconfig.xml with 
> data_driven_schema_configs/conf/solrconfig.xml, I am not sure why the latter 
> has these extra constructs?
> # solr.LimitTokenCountFilterFactory and all the comments around it.
> # deletionPolicy class="solr.SolrDeletionPolicy"
> # Commented out infoStream file="INFOSTREAM.txt"
> # Extra comments for "Update Related Event Listeners"
> # indexReaderFactory
> # And so for lots of other constructs and comments.
> The point is that it is difficult to find out exactly what extra features in 
> the latter are making it data-driven. Hence it is difficult to know what 
> features I am losing by not taking the data-driven-schema.
> It would be good to sync the above 3 files together (each file should have 
> same comments and differ only in the configuration which makes them 
> different). Also, some good documentation should be put online about them 
> otherwise it is very confusing for non-committers and vanilla-users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9163) Confusing solrconfig.xml in the downloaded solr*.zip

2016-07-08 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368611#comment-15368611
 ] 

Yonik Seeley edited comment on SOLR-9163 at 7/8/16 10:38 PM:
-

bq. commented out example stuff, most of which refers to fields that don't even 
exist in the basic_configs schema 

The schemas should be exactly the same now (except for the copyField).

bq. FWIW i think making "basic_configs" bigger [...]  is a bad idea.

I sort of had the same thought when syncing these up... but I modeled the basic 
after the schemaless (instead of vice-versa) because schemaless is what you get 
by default when you create a core, and I didn't want to go breaking examples in 
documentation.

bq. The intent behind basic_configs was to be just that: a very basic set of 
configs.

Shouldn't schemaless just be about enabling that one feature?




was (Author: ysee...@gmail.com):
bq. commented out example stuff, most of which refers to fields that don't even 
exist in the basic_configs schema 

The schema's should be exactly the same now (except for the copyField).

bq. FWIW i think making "basic_configs" bigger [...]  is a bad idea.

I sort of had the same thought when syncing these up... but I modeled the basic 
after the schemaless (instead of vice-versa) because schemaless is what you get 
by default when you create a core, and I didn't want to go breaking examples in 
documentation.

bq. The intent behind basic_configs was to be just that: a very basic set of 
configs.

Shouldn't schemaless just be about enabling that one feature?



> Confusing solrconfig.xml in the downloaded solr*.zip
> 
>
> Key: SOLR-9163
> URL: https://issues.apache.org/jira/browse/SOLR-9163
> Project: Solr
>  Issue Type: Bug
>Reporter: Sachin Goyal
> Fix For: 6.2
>
> Attachments: SOLR-9163.patch, SOLR-9163.patch
>
>
> Here are the solrconfig.xml when I download and unzip solr:
> {code}
> find . -name solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/db/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/mail/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/rss/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/solr/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/tika/conf/solrconfig.xml
> ./solr-5.5.1/example/files/conf/solrconfig.xml
> ./solr-5.5.1/server/solr/configsets/basic_configs/conf/solrconfig.xml
> ./solr-5.5.1/server/solr/configsets/data_driven_schema_configs/conf/solrconfig.xml
> ./solr-5.5.1/server/solr/configsets/sample_techproducts_configs/conf/solrconfig.xml
> {code}
> Most likely, the ones I want to use are in server/solr/configsets, I assume.
> But then which ones among those three?
> Searching online does not provide much detailed information.
> And diff-ing among them yields even more confusing results.
> Example: When I diff basic_configs/conf/solrconfig.xml with 
> data_driven_schema_configs/conf/solrconfig.xml, I am not sure why the latter 
> has these extra constructs?
> # solr.LimitTokenCountFilterFactory and all the comments around it.
> # deletionPolicy class="solr.SolrDeletionPolicy"
> # Commented out infoStream file="INFOSTREAM.txt"
> # Extra comments for "Update Related Event Listeners"
> # indexReaderFactory
> # And so for lots of other constructs and comments.
> The point is that it is difficult to find out exactly what extra features in 
> the latter are making it data-driven. Hence it is difficult to know what 
> features I am losing by not taking the data-driven-schema.
> It would be good to sync the above 3 files together (each file should have 
> same comments and differ only in the configuration which makes them 
> different). Also, some good documentation should be put online about them 
> otherwise it is very confusing for non-committers and vanilla-users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9163) Confusing solrconfig.xml in the downloaded solr*.zip

2016-07-08 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368599#comment-15368599
 ] 

Hoss Man commented on SOLR-9163:


I didn't notice this Jira until after yonik's commits.

FWIW i think making "basic_configs" bigger -- particularly with so much 
commented out example stuff, most of which refers to fields that don't even 
exist in the basic_configs schema  -- is a bad idea.

The intent behind basic_configs was to be just that: a very basic set of 
configs. now instead of 2 large, kitchen-sink-esque, configsets 
(sample_techproducts and data_driven) we have 3 ... that doesn't feel like 
progress.

> Confusing solrconfig.xml in the downloaded solr*.zip
> 
>
> Key: SOLR-9163
> URL: https://issues.apache.org/jira/browse/SOLR-9163
> Project: Solr
>  Issue Type: Bug
>Reporter: Sachin Goyal
> Fix For: 6.2
>
> Attachments: SOLR-9163.patch, SOLR-9163.patch
>
>
> Here are the solrconfig.xml when I download and unzip solr:
> {code}
> find . -name solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/db/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/mail/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/rss/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/solr/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/tika/conf/solrconfig.xml
> ./solr-5.5.1/example/files/conf/solrconfig.xml
> ./solr-5.5.1/server/solr/configsets/basic_configs/conf/solrconfig.xml
> ./solr-5.5.1/server/solr/configsets/data_driven_schema_configs/conf/solrconfig.xml
> ./solr-5.5.1/server/solr/configsets/sample_techproducts_configs/conf/solrconfig.xml
> {code}
> Most likely, the ones I want to use are in server/solr/configsets, I assume.
> But then which ones among those three?
> Searching online does not provide much detailed information.
> And diff-ing among them yields even more confusing results.
> Example: When I diff basic_configs/conf/solrconfig.xml with 
> data_driven_schema_configs/conf/solrconfig.xml, I am not sure why the latter 
> has these extra constructs?
> # solr.LimitTokenCountFilterFactory and all the comments around it.
> # deletionPolicy class="solr.SolrDeletionPolicy"
> # Commented out infoStream file="INFOSTREAM.txt"
> # Extra comments for "Update Related Event Listeners"
> # indexReaderFactory
> # And so for lots of other constructs and comments.
> The point is that it is difficult to find out exactly what extra features in 
> the latter are making it data-driven. Hence it is difficult to know what 
> features I am losing by not taking the data-driven-schema.
> It would be good to sync the above 3 files together (each file should have 
> same comments and differ only in the configuration which makes them 
> different). Also, some good documentation should be put online about them 
> otherwise it is very confusing for non-committers and vanilla-users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [1/2] lucene-solr:master: Added tests.awaitsfix to properties passed to forked JVMs in tests. Added a little info about tests.filter to test-help.

2016-07-08 Thread Chris Hostetter

Thanks dawid! ... beat me to it.

: Date: Thu, 07 Jul 2016 08:16:44 -
: From: dwe...@apache.org
: Reply-To: dev@lucene.apache.org
: To: comm...@lucene.apache.org
: Subject: [1/2] lucene-solr:master: Added tests.awaitsfix to properties passed
: to forked JVMs in tests. Added a little info about tests.filter to
: test-help.
: 
: Repository: lucene-solr
: Updated Branches:
:   refs/heads/branch_6x 4921dcd80 -> 4bcda43fd
:   refs/heads/master f1528bf33 -> f61a5f27d
: 
: 
: Added tests.awaitsfix to properties passed to forked JVMs in tests. Added a 
little info about tests.filter to test-help.
: 
: 
: Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
: Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/f61a5f27
: Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/f61a5f27
: Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/f61a5f27
: 
: Branch: refs/heads/master
: Commit: f61a5f27d23301c6f3f943907f8dc8c22a863e4e
: Parents: f1528bf
: Author: Dawid Weiss 
: Authored: Thu Jul 7 10:14:58 2016 +0200
: Committer: Dawid Weiss 
: Committed: Thu Jul 7 10:15:40 2016 +0200
: 
: --
:  lucene/common-build.xml | 16 
:  1 file changed, 16 insertions(+)
: --
: 
: 
: 
http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/f61a5f27/lucene/common-build.xml
: --
: diff --git a/lucene/common-build.xml b/lucene/common-build.xml
: index 0e588c6..1820e00 100644
: --- a/lucene/common-build.xml
: +++ b/lucene/common-build.xml
: @@ -1068,6 +1068,7 @@
:  
:  
:  
: +
:  
:  
:  
: @@ -1293,6 +1294,21 @@ ant -Dtests.weekly=[false]- weekly tests (@Weekly)
:  ant -Dtests.awaitsfix=[false] - known issue (@AwaitsFix)
:  ant -Dtests.slow=[true]   - slow tests (@Slow)
:  
: +# An alternative way to select just one (or more) groups of tests
: +# is to use the -Dtests.filter property:
: +
: +-Dtests.filter="@slow"
: +
: +# would run only slow tests. 'tests.filter' supports Boolean operators
: +# 'and, or, not' and grouping, for example:
: +
: +ant -Dtests.filter="@nightly and not(@awaitsfix or @slow)"
: +
: +# would run nightly tests but not those also marked as awaiting a fix
: +# or slow. Note that tests.filter, if present, has a priority over any
: +# individual tests.* properties.
: +
: +
:  #
:  # Load balancing and caches. --
:  #
: 
: 

-Hoss
http://www.lucidworks.com/

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9241) Rebalance API for SolrCloud

2016-07-08 Thread Nitin Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368480#comment-15368480
 ] 

Nitin Sharma commented on SOLR-9241:


[~noblepaul] Let me know if you would want to split this feature-wise (a 
separate patch for every scaling strategy?). Kindly advise. 

> Rebalance API for SolrCloud
> ---
>
> Key: SOLR-9241
> URL: https://issues.apache.org/jira/browse/SOLR-9241
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Affects Versions: 4.6.1
> Environment: Ubuntu, Mac OsX
>Reporter: Nitin Sharma
>  Labels: Cluster, SolrCloud
> Fix For: 4.6.1
>
> Attachments: rebalance.diff
>
>   Original Estimate: 2,016h
>  Remaining Estimate: 2,016h
>
> This is the v1 of the patch for the SolrCloud Rebalance API (as described in 
> http://engineering.bloomreach.com/solrcloud-rebalance-api/), built at 
> Bloomreach by Nitin Sharma and Suruchi Shah. The goal of the API is to 
> provide a zero-downtime mechanism to perform data manipulation and efficient 
> core allocation in SolrCloud. This API was envisioned to be the base layer 
> that enables SolrCloud to be an auto-scaling platform (and to work in unison 
> with other complementing monitoring and scaling features).
> Patch Status:
> ===
> The patch is work in progress and incremental. We have done a few rounds of 
> code clean-up. We wanted to get the patch going first to get initial 
> feedback. We will continue to work on making it more open-source friendly and 
> easily testable.
>  Deployment Status:
> 
> The platform is deployed in production at Bloomreach and has been battle 
> tested under large-scale load (millions of documents and hundreds of 
> collections).
>  Internals:
> =
> The internals of the API and performance : 
> http://engineering.bloomreach.com/solrcloud-rebalance-api/
> It is built on top of the admin Collections API as an action (with various 
> flavors). At a high level, the rebalance API provides 2 constructs:
> Scaling Strategy: Decides how to move the data. Every flavor has multiple 
> options which can be reviewed in the API spec.
> Re-distribute - Move data around the cluster based on capacity/allocation.
> Auto Shard - Dynamically shard a collection to any size.
> Smart Merge - Distributed Mode - Helps merge data from a larger shard setup 
> into a smaller one (the source should be divisible by the destination).
> Scale Up - Add replicas on the fly.
> Scale Down - Remove replicas on the fly.
> Allocation Strategy: Decides where to put the data (nodes with the fewest 
> cores, nodes that do not have this collection, etc.). Custom implementations 
> can be built on top as well. One other example is availability-zone awareness: 
> distribute data such that every replica is placed in a different availability 
> zone to support HA.
>  Detailed API Spec:
> 
>   https://github.com/bloomreach/solrcloud-rebalance-api
>  Contributors:
> =
>   Nitin Sharma
>   Suruchi Shah
>  Questions/Comments:
> =
>   You can reach me at nitin.sha...@bloomreach.com
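
To make the shape of the proposed API concrete, here is a hypothetical SolrJ 
invocation. The REBALANCE action and the strategy parameter names below are 
illustrative only (loosely patterned on the spec linked above) and are not part 
of any released Solr Collections API:

{code}
// Hypothetical sketch: "REBALANCE" and the strategy parameter names are
// illustrative placeholders, not an existing Collections API action.
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.request.GenericSolrRequest;
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.common.util.NamedList;

public class RebalanceSketch {
  public static NamedList<Object> rebalance(SolrClient client, String collection) throws Exception {
    ModifiableSolrParams params = new ModifiableSolrParams();
    params.set("action", "REBALANCE");                 // hypothetical action name
    params.set("collection", collection);
    params.set("scaling_strategy", "AUTO_SHARD");      // hypothetical parameter name
    params.set("allocation_strategy", "UNUSED_NODES"); // hypothetical parameter name
    SolrRequest<?> request =
        new GenericSolrRequest(SolrRequest.METHOD.POST, "/admin/collections", params);
    return client.request(request);
  }
}
{code}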



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+125) - Build # 17190 - Still Failing!

2016-07-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17190/
Java: 32bit/jdk-9-ea+125 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.security.BasicAuthIntegrationTest.testBasics

Error Message:
IOException occured when talking to server at: 
http://127.0.0.1:41885/solr/testSolrCloudCollection_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: IOException 
occured when talking to server at: 
http://127.0.0.1:41885/solr/testSolrCloudCollection_shard1_replica1
at 
__randomizedtesting.SeedInfo.seed([A14BDF5BF7A5FBA6:9C937177CF4BA5D6]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:739)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1151)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1040)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:976)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.security.BasicAuthIntegrationTest.doExtraTests(BasicAuthIntegrationTest.java:193)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterBase.testCollectionCreateSearchDelete(TestMiniSolrCloudClusterBase.java:196)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterBase.testBasics(TestMiniSolrCloudClusterBase.java:79)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:533)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Assigned] (SOLR-9280) make nodeName a configurable parameter in solr.xml

2016-07-08 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar reassigned SOLR-9280:
---

Assignee: Shalin Shekhar Mangar

> make nodeName a configurable parameter in solr.xml
> --
>
> Key: SOLR-9280
> URL: https://issues.apache.org/jira/browse/SOLR-9280
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Keith Laban
>Assignee: Shalin Shekhar Mangar
>
> Originally, the node name is automatically generated based on 
> {{:_}}. Instead, it should be configurable in solr.xml.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9280) make nodeName a configurable parameter in solr.xml

2016-07-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368343#comment-15368343
 ] 

ASF GitHub Bot commented on SOLR-9280:
--

Github user kelaban commented on the issue:

https://github.com/apache/lucene-solr/pull/50
  
Randomly setting nodeName in tests which extend from 
`AbstractFullDistribZkTestBase` hits about 60 SolrCloud tests. But there are 
also about 30 other cloud tests which use `MiniSolrCloudCluster` and will 
not get this randomness applied. To increase coverage we can add randomness 
into this class as well.


> make nodeName a configurable parameter in solr.xml
> --
>
> Key: SOLR-9280
> URL: https://issues.apache.org/jira/browse/SOLR-9280
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Keith Laban
>
> Originally, the node name is automatically generated based on 
> {{:_}}. Instead, it should be configurable in solr.xml.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr issue #50: SOLR-9280 - make nodeName a configurable parameter in...

2016-07-08 Thread kelaban
Github user kelaban commented on the issue:

https://github.com/apache/lucene-solr/pull/50
  
Randomly setting nodeName in tests which extend from 
`AbstractFullDistribZkTestBase` hits about 60 SolrCloud tests. But there are 
also about 30 other cloud tests which use `MiniSolrCloudCluster` and will 
not get this randomness applied. To increase coverage we can add randomness 
into this class as well.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9163) Confusing solrconfig.xml in the downloaded solr*.zip

2016-07-08 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-9163:
---
Fix Version/s: 6.2

> Confusing solrconfig.xml in the downloaded solr*.zip
> 
>
> Key: SOLR-9163
> URL: https://issues.apache.org/jira/browse/SOLR-9163
> Project: Solr
>  Issue Type: Bug
>Reporter: Sachin Goyal
> Fix For: 6.2
>
> Attachments: SOLR-9163.patch, SOLR-9163.patch
>
>
> Here are the solrconfig.xml when I download and unzip solr:
> {code}
> find . -name solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/db/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/mail/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/rss/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/solr/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/tika/conf/solrconfig.xml
> ./solr-5.5.1/example/files/conf/solrconfig.xml
> ./solr-5.5.1/server/solr/configsets/basic_configs/conf/solrconfig.xml
> ./solr-5.5.1/server/solr/configsets/data_driven_schema_configs/conf/solrconfig.xml
> ./solr-5.5.1/server/solr/configsets/sample_techproducts_configs/conf/solrconfig.xml
> {code}
> Most likely, the ones I want to use are in server/solr/configsets, I assume.
> But then which ones among those three?
> Searching online does not provide much detailed information.
> And diff-ing among them yields even more confusing results.
> Example: When I diff basic_configs/conf/solrconfig.xml with 
> data_driven_schema_configs/conf/solrconfig.xml, I am not sure why the latter 
> has these extra constructs?
> # solr.LimitTokenCountFilterFactory and all the comments around it.
> # deletionPolicy class="solr.SolrDeletionPolicy"
> # Commented out infoStream file="INFOSTREAM.txt"
> # Extra comments for "Update Related Event Listeners"
> # indexReaderFactory
> # And so on for lots of other constructs and comments.
> The point is that it is difficult to find out exactly what extra features in 
> the latter are making it data-driven. Hence it is difficult to know what 
> features I am losing by not taking the data-driven-schema.
> It would be good to sync the above 3 files together (each file should have 
> same comments and differ only in the configuration which makes them 
> different). Also, some good documentation should be put online about them 
> otherwise it is very confusing for non-committers and vanilla-users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9163) Confusing solrconfig.xml in the downloaded solr*.zip

2016-07-08 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-9163.

Resolution: Fixed

Committed.

The duplication is a shame though...
Longer term it feels like we should further collapse the two config-sets into 
one and have some sort of simple runtime switch for "schemaless"

> Confusing solrconfig.xml in the downloaded solr*.zip
> 
>
> Key: SOLR-9163
> URL: https://issues.apache.org/jira/browse/SOLR-9163
> Project: Solr
>  Issue Type: Bug
>Reporter: Sachin Goyal
> Attachments: SOLR-9163.patch, SOLR-9163.patch
>
>
> Here are the solrconfig.xml when I download and unzip solr:
> {code}
> find . -name solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/db/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/mail/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/rss/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/solr/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/tika/conf/solrconfig.xml
> ./solr-5.5.1/example/files/conf/solrconfig.xml
> ./solr-5.5.1/server/solr/configsets/basic_configs/conf/solrconfig.xml
> ./solr-5.5.1/server/solr/configsets/data_driven_schema_configs/conf/solrconfig.xml
> ./solr-5.5.1/server/solr/configsets/sample_techproducts_configs/conf/solrconfig.xml
> {code}
> Most likely, the ones I want to use are in server/solr/configsets, I assume.
> But then which ones among those three?
> Searching online does not provide much detailed information.
> And diff-ing among them yields even more confusing results.
> Example: When I diff basic_configs/conf/solrconfig.xml with 
> data_driven_schema_configs/conf/solrconfig.xml, I am not sure why the latter 
> has these extra constructs?
> # solr.LimitTokenCountFilterFactory and all the comments around it.
> # deletionPolicy class="solr.SolrDeletionPolicy"
> # Commented out infoStream file="INFOSTREAM.txt"
> # Extra comments for "Update Related Event Listeners"
> # indexReaderFactory
> # And so on for lots of other constructs and comments.
> The point is that it is difficult to find out exactly what extra features in 
> the latter are making it data-driven. Hence it is difficult to know what 
> features I am losing by not taking the data-driven-schema.
> It would be good to sync the above 3 files together (each file should have 
> same comments and differ only in the configuration which makes them 
> different). Also, some good documentation should be put online about them 
> otherwise it is very confusing for non-committers and vanilla-users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9163) Confusing solrconfig.xml in the downloaded solr*.zip

2016-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368291#comment-15368291
 ] 

ASF subversion and git services commented on SOLR-9163:
---

Commit 1a53346c0e33956d0b568a78e8a3753bc58789c5 in lucene-solr's branch 
refs/heads/branch_6x from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1a53346 ]

SOLR-9163: sync basic_configs w/ data_driven_schema_configs


> Confusing solrconfig.xml in the downloaded solr*.zip
> 
>
> Key: SOLR-9163
> URL: https://issues.apache.org/jira/browse/SOLR-9163
> Project: Solr
>  Issue Type: Bug
>Reporter: Sachin Goyal
> Attachments: SOLR-9163.patch, SOLR-9163.patch
>
>
> Here are the solrconfig.xml when I download and unzip solr:
> {code}
> find . -name solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/db/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/mail/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/rss/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/solr/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/tika/conf/solrconfig.xml
> ./solr-5.5.1/example/files/conf/solrconfig.xml
> ./solr-5.5.1/server/solr/configsets/basic_configs/conf/solrconfig.xml
> ./solr-5.5.1/server/solr/configsets/data_driven_schema_configs/conf/solrconfig.xml
> ./solr-5.5.1/server/solr/configsets/sample_techproducts_configs/conf/solrconfig.xml
> {code}
> Most likely, the ones I want to use are in server/solr/configsets, I assume.
> But then which ones among those three?
> Searching online does not provide much detailed information.
> And diff-ing among them yields even more confusing results.
> Example: When I diff basic_configs/conf/solrconfig.xml with 
> data_driven_schema_configs/conf/solrconfig.xml, I am not sure why the latter 
> has these extra constructs?
> # solr.LimitTokenCountFilterFactory and all the comments around it.
> # deletionPolicy class="solr.SolrDeletionPolicy"
> # Commented out infoStream file="INFOSTREAM.txt"
> # Extra comments for "Update Related Event Listeners"
> # indexReaderFactory
> # And so on for lots of other constructs and comments.
> The point is that it is difficult to find out exactly what extra features in 
> the latter are making it data-driven. Hence it is difficult to know what 
> features I am losing by not taking the data-driven-schema.
> It would be good to sync the above 3 files together (each file should have 
> same comments and differ only in the configuration which makes them 
> different). Also, some good documentation should be put online about them 
> otherwise it is very confusing for non-committers and vanilla-users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9280) make nodeName a configurable parameter in solr.xml

2016-07-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368288#comment-15368288
 ] 

ASF GitHub Bot commented on SOLR-9280:
--

GitHub user kelaban opened a pull request:

https://github.com/apache/lucene-solr/pull/50

SOLR-9280 - make nodeName a configurable parameter in solr.xml

This patch makes the live nodeName configurable via solr.xml by setting
```xml
<solr>
  <solrcloud>
    <str name="nodeName">${solr.nodeName:}</str>
  </solrcloud>
</solr>
```
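
With a definition like the one above, the node name could presumably be pinned 
at startup via the `solr.nodeName` system property (e.g. `-Dsolr.nodeName=node1`), 
falling back to the currently generated name whenever the property is left unset.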

To test, I randomly set nodeName in `AbstractFullDistribZkTestBase` for 
complete coverage.

I've gotten all tests to pass both when ALWAYS using a random nodeName and 
when it is set randomly. However, during one particular run I got a test 
failure for the following:
`ant test  -Dtestcase=TestReqParamsAPI -Dtests.method=test 
-Dtests.seed=391BC4715DE8C2FE -Dtests.slow=true -Dtests.locale=pl-PL 
-Dtests.timezone=Asia/Chungking -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8` but I am unable to reproduce the issue in Eclipse 
and at this time am not sure if it's related.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kelaban/lucene-solr jira/master/SOLR-9280

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/50.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #50


commit b1de1f9ef97a5957747f7fcb9ac9e7ce4acfac84
Author: Keith Laban 
Date:   2016-06-30T15:12:16Z

SOLR-9280 - make nodeName a configurable parameter in solr.xml




> make nodeName a configurable parameter in solr.xml
> --
>
> Key: SOLR-9280
> URL: https://issues.apache.org/jira/browse/SOLR-9280
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Keith Laban
>
> Originally, the node name is automatically generated based on 
> {{:_}}. Instead, it should be configurable in solr.xml.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #50: SOLR-9280 - make nodeName a configurable param...

2016-07-08 Thread kelaban
GitHub user kelaban opened a pull request:

https://github.com/apache/lucene-solr/pull/50

SOLR-9280 - make nodeName a configurable parameter in solr.xml

This patch makes the live nodeName configurable via solr.xml by setting
```xml
<solr>
  <solrcloud>
    <str name="nodeName">${solr.nodeName:}</str>
  </solrcloud>
</solr>
```

To test, I randomly set nodeName in `AbstractFullDistribZkTestBase` for 
complete coverage.

I've gotten all tests to pass both when ALWAYS using a random nodeName and 
when it is set randomly. However, during one particular run I got a test 
failure for the following:
`ant test  -Dtestcase=TestReqParamsAPI -Dtests.method=test 
-Dtests.seed=391BC4715DE8C2FE -Dtests.slow=true -Dtests.locale=pl-PL 
-Dtests.timezone=Asia/Chungking -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8` but I am unable to reproduce the issue in Eclipse 
and at this time am not sure if it's related.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kelaban/lucene-solr jira/master/SOLR-9280

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/50.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #50


commit b1de1f9ef97a5957747f7fcb9ac9e7ce4acfac84
Author: Keith Laban 
Date:   2016-06-30T15:12:16Z

SOLR-9280 - make nodeName a configurable parameter in solr.xml




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9163) Confusing solrconfig.xml in the downloaded solr*.zip

2016-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368286#comment-15368286
 ] 

ASF subversion and git services commented on SOLR-9163:
---

Commit 67b638880d81fbb11abfbfc1ec93a5f3d86c3d3b in lucene-solr's branch 
refs/heads/master from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=67b6388 ]

SOLR-9163: sync basic_configs w/ data_driven_schema_configs


> Confusing solrconfig.xml in the downloaded solr*.zip
> 
>
> Key: SOLR-9163
> URL: https://issues.apache.org/jira/browse/SOLR-9163
> Project: Solr
>  Issue Type: Bug
>Reporter: Sachin Goyal
> Attachments: SOLR-9163.patch, SOLR-9163.patch
>
>
> Here are the solrconfig.xml when I download and unzip solr:
> {code}
> find . -name solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/db/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/mail/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/rss/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/solr/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/tika/conf/solrconfig.xml
> ./solr-5.5.1/example/files/conf/solrconfig.xml
> ./solr-5.5.1/server/solr/configsets/basic_configs/conf/solrconfig.xml
> ./solr-5.5.1/server/solr/configsets/data_driven_schema_configs/conf/solrconfig.xml
> ./solr-5.5.1/server/solr/configsets/sample_techproducts_configs/conf/solrconfig.xml
> {code}
> Most likely, the ones I want to use are in server/solr/configsets, I assume.
> But then which ones among those three?
> Searching online does not provide much detailed information.
> And diff-ing among them yields even more confusing results.
> Example: When I diff basic_configs/conf/solrconfig.xml with 
> data_driven_schema_configs/conf/solrconfig.xml, I am not sure why the latter 
> has these extra constructs?
> # solr.LimitTokenCountFilterFactory and all the comments around it.
> # deletionPolicy class="solr.SolrDeletionPolicy"
> # Commented out infoStream file="INFOSTREAM.txt"
> # Extra comments for "Update Related Event Listeners"
> # indexReaderFactory
> # And so on for lots of other constructs and comments.
> The point is that it is difficult to find out exactly what extra features in 
> the latter are making it data-driven. Hence it is difficult to know what 
> features I am losing by not taking the data-driven-schema.
> It would be good to sync the above 3 files together (each file should have 
> same comments and differ only in the configuration which makes them 
> different). Also, some good documentation should be put online about them 
> otherwise it is very confusing for non-committers and vanilla-users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-5944) Support updates of numeric DocValues

2016-07-08 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368262#comment-15368262
 ] 

Ishan Chattopadhyaya edited comment on SOLR-5944 at 7/8/16 7:27 PM:


{quote}
But there's a fundamental difference between params like commitWithin and 
overwrite and the new prevVersion param...

commitWithin and overwrite are client specified options specific to the 
xml/javabin update format(s). The fact that they can be specified as request 
params is an implementation detail of the xml/javabin formats that they happen 
to have in common, but are not exclusively specified as params – for example 
the XMLLoader only uses the params as defaults, they can be specified on a per 
 basis.

The new prevVersion param however is an implementation detail of DUP ... DUP is 
the only code that should have to know/care that prevVersion comes from a 
request param.
{quote}
Sure, it makes sense. I'll fix it.

bq. We should have a comment to this effect (literally we could just paste 
that text directly into a comment) when declaring the prevPointer variable in 
this method.
I had put this comment there:
{code}
@return If cmd is an in-place update, then returns the pointer (in the tlog) of 
the previous update that the given update depends on. Returns -1 if this is not 
an in-place update, or if we can't find a previous entry in the tlog.
{code} 
But now I have updated it to make it even more detailed:
{code}
@return If cmd is an in-place update, then returns the pointer (in the tlog) of
        the previous update that the given update depends on.
        Returns -1 if this is not an in-place update, or if we can't find a
        previous entry in the tlog. Upon receiving a -1, it should be clear why
        it was -1: if the command's flags|UpdateLog.UPDATE_INPLACE is set, then
        this command is an in-place update whose previous update is in the index
        and not in the tlog; if that flag is not set, it is not an in-place
        update at all, and don't bother about the prevPointer value at all
        (which is -1 as a dummy value).
{code}

{quote}
Hmm... that makes me wonder – we should make sure we have a test case of doing 
atomic updates on numeric dv fields which have copyFields to other numeric 
fields, i.e. let's make sure our "is this a candidate for in-place updates" takes 
into account that the value being updated might need to be copied to another field.

(in theory, if both the source & dest of the copy field are single-valued dv 
only, then we can still do the in-place update as long as the copyField 
happens, but even if we don't have that extra bit of logic we need a test that 
the updates are happening consistently)
{quote}
Sure, I'll add such a test. The latest patch incorporates the behaviour you 
suggested: if any of the copy field targets is not an in-place updateable field, 
then the entire operation is not an in-place update (but a traditional atomic 
update instead). But if a copy field target of an updated field is itself an 
updateable dv field, then it is updated as well.
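
For reference, the operation under discussion is still issued as an ordinary 
atomic update; whether it is applied in place or as a traditional atomic update 
is decided internally by the rules above. A minimal SolrJ sketch, with 
hypothetical collection and field names, assuming the field meets whatever 
eligibility rules the patch settles on (e.g. a single-valued, docValues-only 
numeric field):

{code}
import java.util.Collections;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.common.SolrInputDocument;

public class InPlaceUpdateSketch {
  // "popularity" and "mycollection" are hypothetical names; the map-based
  // atomic-update syntax itself is standard SolrJ.
  public static void incPopularity(SolrClient client, String id, int delta) throws Exception {
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", id);
    doc.addField("popularity", Collections.singletonMap("inc", delta)); // atomic "inc"
    client.add("mycollection", doc);
  }
}
{code}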

{quote}
Hmmm... is that really the relevant question though?

I'm not sure how the existing (non-inplace) atomic update code behaves if you 
try to "inc" a date, but why does it matter for the isSupportedForInPlaceUpdate 
method?

if date "inc" is supported in the existing atomic update code, then 
whatever that code path looks like (to compute the new value) it should be the 
same for the new inplace update code.
if date "inc" is not supported in the existing atomic update code, then 
whatever the error is should be the same in the new inplace update code

Either way, I don't see why isSupportedForInPlaceUpdate should care – or if it 
is going to care, then it should care about the details (ie: return false for 
(dv only) date field w/ "inc", but true for (dv only) date field with "set")
{quote}

For now I've taken date fields totally out of the scope of this patch. If an 
update to a date field is needed, it falls back to a traditional atomic update. 
I can try to deal with the trie date field, if you suggest.

bq. let's put those details in a comment where this Exception is thrown ... or 
better yet, try to incorporate it into the Exception msg?
I had put this exception in the patch: 
{{Unable to resolve the last full doc in tlog fully, and document not found in 
index even after opening new rt searcher.}} 
but now I'll change it to: 
{{Unable to resolve the last full doc in tlog fully, and document not found in 
index even after opening new rt searcher. If the doc was deleted, then there 
shouldn't have been an attempt to resolve to a previous document by that id.}}

bq. Ah, ok ... good point – can we go ahead and add some javadocs to that 
method as well making that clear?
Sure, I'll update the javadocs for that existing method as well.


was (Author: ichattopadhyaya):
{quote}
But there's 

[jira] [Commented] (SOLR-5944) Support updates of numeric DocValues

2016-07-08 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368262#comment-15368262
 ] 

Ishan Chattopadhyaya commented on SOLR-5944:


{quote}
But there's a fundamental difference between params like commitWithin and 
overwrite and the new prevVersion param...

commitWithin and overwrite are client specified options specific to the 
xml/javabin update format(s). The fact that they can be specified as request 
params is an implementation detail of the xml/javabin formats that they happen 
to have in common, but are not exclusively specified as params – for example 
the XMLLoader only uses the params as defaults, they can be specified on a per 
 basis.

The new prevVersion param however is an implementation detail of DUP ... DUP is 
the only code that should have to know/care that prevVersion comes from a 
request param.
{quote}
Sure, it makes sense. I'll fix it.

bq. We should have a comment to this effect (literally we could just paste 
that text directly into a comment) when declaring the prevPointer variable in 
this method.
I had put this comment there:
{code}
@return If cmd is an in-place update, then returns the pointer (in the tlog) of 
the previous update that the given update depends on. Returns -1 if this is not 
an in-place update, or if we can't find a previous entry in the tlog.
{code} 
But now I have updated it to make it even more detailed:
{code}
@return If cmd is an in-place update, then returns the pointer (in the tlog) of
        the previous update that the given update depends on.
        Returns -1 if this is not an in-place update, or if we can't find a
        previous entry in the tlog. Upon receiving a -1, it should be clear why
        it was -1: if the command's flags|UpdateLog.UPDATE_INPLACE is set, then
        this command is an in-place update whose previous update is in the index
        and not in the tlog; if that flag is not set, it is not an in-place
        update at all, and don't bother about the prevPointer value at all
        (which is -1 as a dummy value).
{code}

{quote}
Hmm... that makes me wonder – we should make sure we have a test case of doing 
atomic updates on numeric dv fields which have copyFields to other numeric 
fields, i.e. let's make sure our "is this a candidate for in-place updates" takes 
into account that the value being updated might need to be copied to another field.

(in theory, if both the source & dest of the copy field are single-valued dv 
only, then we can still do the in-place update as long as the copyField 
happens, but even if we don't have that extra bit of logic we need a test that 
the updates are happening consistently)
{quote}
Sure, I'll add such a test. The latest patch incorporates the behaviour you 
suggested: if any of the copy field targets is not an in-place updateable field, 
then the entire operation is not an in-place update (but a traditional atomic 
update instead). But if a copy field target of an updated field is itself an 
updateable dv field, then it is updated as well.

{quote}
Hmmm... is that really the relevant question though?

I'm not sure how the existing (non-inplace) atomic update code behaves if you 
try to "inc" a date, but why does it matter for the isSupportedForInPlaceUpdate 
method?

if date "inc" is supported in the existing atomic update code, then 
whatever that code path looks like (to compute the new value) it should be the 
same for the new inplace update code.
if date "inc" is not supported in the existing atomic update code, then 
whatever the error is should be the same in the new inplace update code

Either way, I don't see why isSupportedForInPlaceUpdate should care – or if it 
is going to care, then it should care about the details (ie: return false for 
(dv only) date field w/ "inc", but true for (dv only) date field with "set")
{quote}

For now I've taken date fields totally out of the scope of this patch. If an 
update to a date field is needed, it falls back to a traditional atomic update. 
I can try to deal with the trie date field, if you suggest.

bq. let's put those details in a comment where this Exception is thrown ... or 
better yet, try to incorporate it into the Exception msg?
I had put this exception in the patch: {{Unable to resolve the last full doc in 
tlog fully, and document not found in index even after opening new rt searcher. 
}} but now I'll change it to: {{Unable to resolve the last full doc in tlog 
fully, and document not found in index even after opening new rt searcher. If 
the doc was deleted, then there shouldn't have been an attempt to resolve to a 
previous document by that id.}}

bq. Ah, ok ... good point – can we go ahead and add some javadocs to that 
method as well making that clear?
Sure, I'll update the javadocs for that existing method as well.

> Support updates of numeric DocValues
> 
>
> 

SOLR-9181 FWIW, my beasting passed too

2016-07-08 Thread Erick Erickson


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7372) factor out a org.apache.lucene.search.FilterWeight class

2016-07-08 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated LUCENE-7372:

Description: 
* {{FilterWeight}} to delegate method implementations to the {{Weight}} that it 
wraps
* exception: no delegating for the {{bulkScorer}} method implementation since 
currently not all FilterWeights implement/override that default method


  was:
* {{DelegatingWeight}} to delegate method implementations to the {{Weight}} 
that it wraps
* exception: no delegating for the {{bulkScorer}} method implementation since 
currently not all delegating weights implement/override that default method



> factor out a org.apache.lucene.search.FilterWeight class
> 
>
> Key: LUCENE-7372
> URL: https://issues.apache.org/jira/browse/LUCENE-7372
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-7372.patch, LUCENE-7372.patch
>
>
> * {{FilterWeight}} to delegate method implementations to the {{Weight}} that 
> it wraps
> * exception: no delegating for the {{bulkScorer}} method implementation since 
> currently not all FilterWeights implement/override that default method



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7372) factor out a org.apache.lucene.search.FilterWeight class

2016-07-08 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated LUCENE-7372:

Summary: factor out a org.apache.lucene.search.FilterWeight class  (was: 
factor out a org.apache.lucene.search.DelegatingWeight class)

> factor out a org.apache.lucene.search.FilterWeight class
> 
>
> Key: LUCENE-7372
> URL: https://issues.apache.org/jira/browse/LUCENE-7372
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-7372.patch, LUCENE-7372.patch
>
>
> * {{DelegatingWeight}} to delegate method implementations to the {{Weight}} 
> that it wraps
> * exception: no delegating for the {{bulkScorer}} method implementation since 
> currently not all delegating weights implement/override that default method



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7372) factor out a org.apache.lucene.search.DelegatingWeight class

2016-07-08 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated LUCENE-7372:

Attachment: LUCENE-7372.patch

> factor out a org.apache.lucene.search.DelegatingWeight class
> 
>
> Key: LUCENE-7372
> URL: https://issues.apache.org/jira/browse/LUCENE-7372
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-7372.patch, LUCENE-7372.patch
>
>
> * {{DelegatingWeight}} to delegate method implementations to the {{Weight}} 
> that it wraps
> * exception: no delegating for the {{bulkScorer}} method implementation since 
> currently not all delegating weights implement/override that default method



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7372) factor out a org.apache.lucene.search.DelegatingWeight class

2016-07-08 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368202#comment-15368202
 ] 

Christine Poerschke commented on LUCENE-7372:
-

Thanks [~jpountz] for the quick review! Added extra constructor
{code}
protected FilterWeight(Weight weight) {
  super(weight.getQuery());
  this.in = weight;
}
{code}
as you suggested. It seems though that the {{FilterWeight(Query query, Weight 
weight)}} constructor variant is still needed to cater for 
[BlockJoinWeight|https://github.com/apache/lucene-solr/blob/master/lucene/join/src/java/org/apache/lucene/search/join/ToParentBlockJoinQuery.java#L119]
 and 
[ToChildBlockJoinWeight|https://github.com/apache/lucene-solr/blob/master/lucene/join/src/java/org/apache/lucene/search/join/ToChildBlockJoinQuery.java#L85]
 usage?
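
For context, a minimal sketch of the class being factored out might look like 
the following; the method set is assumed from the Lucene 6.x Weight API, and 
the committed patch is of course authoritative:

{code}
import java.io.IOException;
import java.util.Set;

import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.Explanation;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.Scorer;
import org.apache.lucene.search.Weight;

// Sketch only: delegate everything to the wrapped Weight except bulkScorer(),
// per the issue description.
abstract class FilterWeight extends Weight {
  protected final Weight in;

  protected FilterWeight(Weight weight) {
    this(weight.getQuery(), weight);
  }

  protected FilterWeight(Query query, Weight weight) {
    super(query);
    this.in = weight;
  }

  @Override
  public void extractTerms(Set<Term> terms) {
    in.extractTerms(terms);
  }

  @Override
  public Explanation explain(LeafReaderContext context, int doc) throws IOException {
    return in.explain(context, doc);
  }

  @Override
  public float getValueForNormalization() throws IOException {
    return in.getValueForNormalization();
  }

  @Override
  public void normalize(float norm, float boost) {
    in.normalize(norm, boost);
  }

  @Override
  public Scorer scorer(LeafReaderContext context) throws IOException {
    return in.scorer(context);
  }
}
{code}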

> factor out a org.apache.lucene.search.DelegatingWeight class
> 
>
> Key: LUCENE-7372
> URL: https://issues.apache.org/jira/browse/LUCENE-7372
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-7372.patch, LUCENE-7372.patch
>
>
> * {{DelegatingWeight}} to delegate method implementations to the {{Weight}} 
> that it wraps
> * exception: no delegating for the {{bulkScorer}} method implementation since 
> currently not all delegating weights implement/override that default method



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Windows (32bit/jdk1.8.0_92) - Build # 306 - Still Failing!

2016-07-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/306/
Java: 32bit/jdk1.8.0_92 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestLocalFSCloudBackupRestore.test

Error Message:
Error from server at http://127.0.0.1:49722/solr: The backup directory already 
exists: 
file:///C:/Users/jenkins/workspace/Lucene-Solr-6.x-Windows/solr/build/solr-core/test/J1/temp/solr.cloud.TestLocalFSCloudBackupRestore_61B511A6C664B3F5-001/tempDir-002/mytestbackup/

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:49722/solr: The backup directory already 
exists: 
file:///C:/Users/jenkins/workspace/Lucene-Solr-6.x-Windows/solr/build/solr-core/test/J1/temp/solr.cloud.TestLocalFSCloudBackupRestore_61B511A6C664B3F5-001/tempDir-002/mytestbackup/
at 
__randomizedtesting.SeedInfo.seed([61B511A6C664B3F5:E9E12E7C6898DE0D]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:590)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:259)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:403)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:356)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1228)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:998)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:934)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at 
org.apache.solr.cloud.AbstractCloudBackupRestoreTestCase.testBackupAndRestore(AbstractCloudBackupRestoreTestCase.java:207)
at 
org.apache.solr.cloud.AbstractCloudBackupRestoreTestCase.test(AbstractCloudBackupRestoreTestCase.java:127)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 

[jira] [Commented] (SOLR-5944) Support updates of numeric DocValues

2016-07-08 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368165#comment-15368165
 ] 

Hoss Man commented on SOLR-5944:



I've not had a chance to look at the latest patch, but here's some comment 
responses...

bq.  Since commitWithin and overwrite was being set here, I thought this is an 
appropriate place to set the prevVersion to the cmd

But there's a fundamental difference between params like {{commitWithin}} and 
{{overwrite}} and the new {{prevVersion}} param...

{{commitWithin}} and {{overwrite}} are _client_ specified options specific to 
the xml/javabin update format(s).  The fact that they can be specified as 
request params is an implementation detail of the xml/javabin formats that they 
happen to have in common, but are not exclusively specified as params -- for 
example the XMLLoader only uses the params as defaults, they can be specified 
on a per {{}} basis.

The new {{prevVersion}} param however is an implementation detail of DUP ... 
DUP is the _only_ code that should have to know/care that {{prevVersion}} comes 
from a request param.

bq. Yes, this was intentional, and I think it doesn't make any difference. If 
an "id" isn't found in any of these maps, it would mean that the previous 
update was committed and should be looked up in the index. 
bq. I think we don't need to worry. Upon receiving a prevPointer=-1 by whoever 
reads this LogPtr, it should be clear why it was -1: if the command's 
{{flags|UpdateLog.UPDATE_INPLACE}} is set, then this command is an in-place 
update whose previous update is in the index and not in the tlog; if that flag 
is not set, it is not an in-place update at all, and don't bother about the 
prevPointer value at all (which is -1 as a dummy value).

We should have a comment to this effect (literally we could just paste that 
text directly into a comment) when declaring the prevPointer variable in this 
method.

bq. ... This was needed because the lucene document that was originally being 
returned had copy fields targets of id field, default fields, multiple Field 
per field (due to FieldType.createFields()) etc., which are not needed for 
in-place updates.

Hmm... that makes me wonder -- we should make sure we have a test case of doing 
atomic updates on numeric dv fields which have copyFields to other numeric 
fields.  ie: let's make sure our "is this a candidate for in-place updates" takes 
into account that the value being updated might need to be copied to another field.

(in theory, if both the source & dest of the copy field are single-valued dv 
only, then we can still do the in-place update as long as the copyField 
happens, but even if we don't have that extra bit of logic we need a test that 
the updates are happening consistently)

bq.  I wasn't sure how to deal with inc for dates, so left dates out of this 
for simplicity for now

Hmmm... is that really the relevant question though?

I'm not sure how the existing (non-inplace) atomic update code behaves if you 
try to "inc" a date, but why does it matter for the 
{{isSupportedForInPlaceUpdate}} method?

* if date "inc" is supported in the existing atomic update code, then whatever 
that code path looks like (to compute the new value) it should be the same for 
the new inplace update code.
* if date "inc" is _not_ supported in the existing atomic update code, then 
whatever the error is should be the same in the new inplace update code

Either way, I don't see why {{isSupportedForInPlaceUpdate}} should care -- or 
if it is going to care, then it should care about the details (ie: return false 
for (dv only) date field w/ "inc", but true for (dv only) date field with "set")

bq. I think this is fatal, since if the doc was deleted, then there shouldn't 
have been an attempt to resolve to a previous document by that id. I think this 
should never be triggered.

let's put those details in a comment where this Exception is thrown ... or 
better yet, try to incorporate it into the Exception msg?

bq. I'm inclined to keep it to Long/null instead of long/-1, since 
versionInfo.getVersionFromIndex() is also Long/null

Ah, ok ... good point -- can we go ahead and add some javadocs to that method 
as well making that clear?


bq. ... I've changed this to now use the UpdateShardHandler's httpClient.

Ok, cool ... Yeah, that probably makes more sense in general.

bq. Not sure what needs to be done more here

Yeah, sorry -- that was a vague comment that even I don't know what I meant by; 
it was probably meant to be part of the note about the switch statement default.



> Support updates of numeric DocValues
> 
>
> Key: SOLR-5944
> URL: https://issues.apache.org/jira/browse/SOLR-5944
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Shalin Shekhar Mangar
> 

[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_92) - Build # 1094 - Failure!

2016-07-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1094/
Java: 64bit/jdk1.8.0_92 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.handler.TestReqParamsAPI.test

Error Message:
Could not get expected value  'CY val' for path 'response/params/y/c' full 
output: {   "responseHeader":{ "status":0, "QTime":0},   "response":{   
  "znodeVersion":0, "params":{"x":{ "a":"A val", "b":"B 
val", "":{"v":0},  from server:  http://127.0.0.1:37041/collection1

Stack Trace:
java.lang.AssertionError: Could not get expected value  'CY val' for path 
'response/params/y/c' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "response":{
"znodeVersion":0,
"params":{"x":{
"a":"A val",
"b":"B val",
"":{"v":0},  from server:  http://127.0.0.1:37041/collection1
at 
__randomizedtesting.SeedInfo.seed([AAC852C33EAC4FDB:229C6D1990502223]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:481)
at 
org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:159)
at 
org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:61)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[GitHub] lucene-solr issue #49: SOLR-9279 Adds comparison function queries

2016-07-08 Thread dsmiley
Github user dsmiley commented on the issue:

https://github.com/apache/lucene-solr/pull/49
  
Just one thing -- have compare() take the FunctionValue so that a compare 
impl can choose to call doubleVal vs longVal or whatever else.  And the impls 
you add to Solr can call doubleVal.  Someone truly might want to extend this to 
call something other than doubleVal; the set of values of doubleVal is disjoint 
from longVal.  Or maybe someone has got the data in objectVal for some reason.
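
For instance, roughly what I have in mind (untested sketch; `ComparisonFunction` is a 
made-up name, the real interface can live wherever fits the PR):

import org.apache.lucene.queries.function.FunctionValues;

public interface ComparisonFunction {
  // Pass the FunctionValues through so an impl can pick the accessor it wants.
  boolean compare(FunctionValues lhs, FunctionValues rhs, int doc);

  // The impls added to Solr in this PR would just go through doubleVal:
  ComparisonFunction GT = (lhs, rhs, doc) -> lhs.doubleVal(doc) > rhs.doubleVal(doc);

  // ...while an extension could choose longVal (or objectVal) instead:
  ComparisonFunction GT_LONG = (lhs, rhs, doc) -> lhs.longVal(doc) > rhs.longVal(doc);
}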

After that please post a .patch file to JIRA.  
https://wiki.apache.org/lucene-java/HowToContribute#Creating_a_patch   though 
those instructions should be modified to indicate how to generate a diff from 
the point the current branch diverged from master.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9279) Add greater than, less than, etc in Solr function queries

2016-07-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368155#comment-15368155
 ] 

ASF GitHub Bot commented on SOLR-9279:
--

Github user dsmiley commented on the issue:

https://github.com/apache/lucene-solr/pull/49
  
Just one thing -- have compare() take the FunctionValue so that a compare 
impl can choose to call doubleVal vs longVal or whatever else.  And the impls 
you add to Solr can call doubleVal.  Someone truly might want to extend this to 
call something other than doubleVal; the set of values of doubleVal is disjoint 
from longVal.  Or maybe someone has got the data in objectVal for some reason.

After that please post a .patch file to JIRA.  
https://wiki.apache.org/lucene-java/HowToContribute#Creating_a_patch   though 
those instructions should be modified to indicate how to generate a diff from 
the point the current branch diverged from master.


> Add greater than, less than, etc in Solr function queries
> -
>
> Key: SOLR-9279
> URL: https://issues.apache.org/jira/browse/SOLR-9279
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Reporter: Doug Turnbull
> Fix For: master (7.0)
>
>
> If you use the "if" function query, you'll often expect to be able to use 
> greater than/less than functions. For example, you might want to boost books 
> written in the past 7 years. Unfortunately, there's no "greater than" 
> function query that will return non-zero when the lhs > rhs. Instead to get 
> this, you need to create really awkward function queries like I do here 
> (http://opensourceconnections.com/blog/2014/11/26/stepwise-date-boosting-in-solr/):
> if(min(0,sub(ms(mydatefield),sub(ms(NOW),315569259747))),0.8,1)
> The pull request attached to this Jira adds the following function queries
> (https://github.com/apache/lucene-solr/pull/49)
> -gt(lhs, rhs) (returns 1 if lhs > rhs, 0 otherwise)
> -lt(lhs, rhs) (returns 1 if lhs < rhs, 0 otherwise)
> -gte
> -lte
> -eq
> So instead of 
> if(min(0,sub(ms(mydatefield),sub(ms(NOW),315569259747))),0.8,1)
> one could now write
> if(lt(ms(mydatefield),315569259747),0.8,1)
> (if mydatefield < 315569259747 then 0.8 else 1)
> A bit more readable and less puzzling



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7355) Leverage MultiTermAwareComponent in query parsers

2016-07-08 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368111#comment-15368111
 ] 

David Smiley commented on LUCENE-7355:
--

+1

> Leverage MultiTermAwareComponent in query parsers
> -
>
> Key: LUCENE-7355
> URL: https://issues.apache.org/jira/browse/LUCENE-7355
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7355.patch, LUCENE-7355.patch, LUCENE-7355.patch, 
> LUCENE-7355.patch, LUCENE-7355.patch, LUCENE-7355.patch, LUCENE-7355.patch, 
> LUCENE-7355.patch
>
>
> MultiTermAwareComponent is designed to make it possible to do the right thing 
> in query parsers when it comes to analysis of multi-term queries. However, 
> since query parsers just take an analyzer and since analyzers do not 
> propagate the information about what to do for multi-term analysis, query 
> parsers cannot do the right thing out of the box.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Using "escape"

2016-07-08 Thread Mila88
I want to use escape with the query parser, but I'm quite new and not sure how to
add it. Here is my example project, and here is the link to escape:
http://lucene.apache.org/core/3_0_3/api/all/org/apache/lucene/queryParser/QueryParser.html#escape%28java.lang.String%29



import java.io.File;
import java.io.IOException;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.CorruptIndexException;
import org.apache.lucene.queryParser.ParseException;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class Searcher {

    IndexSearcher indexSearcher;
    QueryParser queryParser;
    Query query;

    public Searcher(String indexDirectoryPath) throws IOException {
        Directory indexDirectory = FSDirectory.open(new File(indexDirectoryPath));
        indexSearcher = new IndexSearcher(indexDirectory);
        queryParser = new QueryParser(Version.LUCENE_36,
                LuceneConstants.CONTENTS,
                new StandardAnalyzer(Version.LUCENE_36));
    }

    // QueryParser parser = new QueryParser(Version.LUCENE_30, langCode, this.getAnalyzer());
    // Query query = parser.parse(queryString);
    // int maxSearchLength = 1000;
    // TopDocs topDocs = searcher.search(query, null, maxSearchLength);

    // Parse the query string and run the search.
    public TopDocs search(String searchQuery) throws IOException, ParseException {
        query = queryParser.parse(searchQuery);
        return indexSearcher.search(query, LuceneConstants.MAX_SEARCH);
    }

    // Get the stored document for a hit.
    public Document getDocument(ScoreDoc scoreDoc) throws CorruptIndexException, IOException {
        return indexSearcher.doc(scoreDoc.doc);
    }

    public void close() throws IOException {
        indexSearcher.close();
    }
}
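
Would something like this, added inside the Searcher class, be the right way
to use it? (just a sketch, not tested)

    public TopDocs searchEscaped(String rawUserInput) throws IOException, ParseException {
        // Escape the raw user input so characters like + - : ( ) * ? " are
        // treated literally instead of as query syntax.
        String escaped = QueryParser.escape(rawUserInput);
        query = queryParser.parse(escaped);
        return indexSearcher.search(query, LuceneConstants.MAX_SEARCH);
    }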



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Using-escape-tp4286398.html
Sent from the Lucene - Java Developer mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+125) - Build # 17189 - Failure!

2016-07-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17189/
Java: 32bit/jdk-9-ea+125 -client -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.security.BasicAuthIntegrationTest.testBasics

Error Message:
IOException occured when talking to server at: 
http://127.0.0.1:36262/solr/testSolrCloudCollection_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: IOException 
occured when talking to server at: 
http://127.0.0.1:36262/solr/testSolrCloudCollection_shard1_replica1
at 
__randomizedtesting.SeedInfo.seed([71E29F399B4A472E:4C3A3115A3A4195E]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:739)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1151)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1040)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:976)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.security.BasicAuthIntegrationTest.doExtraTests(BasicAuthIntegrationTest.java:193)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterBase.testCollectionCreateSearchDelete(TestMiniSolrCloudClusterBase.java:196)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterBase.testBasics(TestMiniSolrCloudClusterBase.java:79)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:533)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-Tests-master - Build # 1265 - Still Failing

2016-07-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1265/

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.overseer.ZkStateWriterTest

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.cloud.overseer.ZkStateWriterTest: 1) Thread[id=5830, 
name=watches-704-thread-1, state=TIMED_WAITING, group=TGRP-ZkStateWriterTest]   
  at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.overseer.ZkStateWriterTest: 
   1) Thread[id=5830, name=watches-704-thread-1, state=TIMED_WAITING, 
group=TGRP-ZkStateWriterTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([DEE4D3305641E362]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.overseer.ZkStateWriterTest

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=5830, name=watches-704-thread-1, state=TIMED_WAITING, 
group=TGRP-ZkStateWriterTest] at sun.misc.Unsafe.park(Native Method)
 at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)  
   at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=5830, name=watches-704-thread-1, state=TIMED_WAITING, 
group=TGRP-ZkStateWriterTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([DEE4D3305641E362]:0)




Build Log:
[...truncated 11105 lines...]
   [junit4] Suite: org.apache.solr.cloud.overseer.ZkStateWriterTest
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build/solr-core/test/J0/temp/solr.cloud.overseer.ZkStateWriterTest_DEE4D3305641E362-001/init-core-data-001
   [junit4]   2> 627825 INFO  
(SUITE-ZkStateWriterTest-seed#[DEE4D3305641E362]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN)
   [junit4]   2> 627832 INFO  
(TEST-ZkStateWriterTest.testSingleExternalCollection-seed#[DEE4D3305641E362]) [ 
   ] o.a.s.SolrTestCaseJ4 ###Starting testSingleExternalCollection
   [junit4]   2> 

[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_92) - Build # 5969 - Still Failing!

2016-07-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/5969/
Java: 32bit/jdk1.8.0_92 -client -XX:+UseParallelGC

4 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CdcrVersionReplicationTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [InternalHttpClient]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [InternalHttpClient]
at __randomizedtesting.SeedInfo.seed([7C97354B3DD259B2]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:257)
at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.security.BasicAuthIntegrationTest.testBasics

Error Message:
IOException occured when talking to server at: 
http://127.0.0.1:62219/solr/testSolrCloudCollection_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: IOException 
occured when talking to server at: 
http://127.0.0.1:62219/solr/testSolrCloudCollection_shard1_replica1
at 
__randomizedtesting.SeedInfo.seed([7C97354B3DD259B2:414F9B67053C07C2]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:739)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1151)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1040)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:976)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.security.BasicAuthIntegrationTest.doExtraTests(BasicAuthIntegrationTest.java:193)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterBase.testCollectionCreateSearchDelete(TestMiniSolrCloudClusterBase.java:196)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterBase.testBasics(TestMiniSolrCloudClusterBase.java:79)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 

[jira] [Resolved] (LUCENE-7276) Add an optional reason to the MatchNoDocsQuery

2016-07-08 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-7276.

   Resolution: Fixed
Fix Version/s: 6.2
   master (7.0)

Thanks [~jim.ferenczi]!

> Add an optional reason to the MatchNoDocsQuery
> --
>
> Key: LUCENE-7276
> URL: https://issues.apache.org/jira/browse/LUCENE-7276
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: Ferenczi Jim
>Priority: Minor
>  Labels: patch
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7276.patch, LUCENE-7276.patch, LUCENE-7276.patch, 
> LUCENE-7276.patch, LUCENE-7276.patch, LUCENE-7276.patch
>
>
> It's sometimes difficult to debug a query that results in a MatchNoDocsQuery. 
> The MatchNoDocsQuery is always rewritten into an empty boolean query.
> This patch adds an optional reason and implements a weight in order to keep 
> track of the reason why the query did not match any document. The reason is 
> printed by toString and when an explanation for a non-matching document is requested.
> For instance the query:
> new MatchNoDocsQuery("field 'title' not found").toString()
> => 'MatchNoDocsQuery["field 'title' not found"]'



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7276) Add an optional reason to the MatchNoDocsQuery

2016-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15367998#comment-15367998
 ] 

ASF subversion and git services commented on LUCENE-7276:
-

Commit df2207c5dcf379af25d12ef3b3cd7f44bad4fdff in lucene-solr's branch 
refs/heads/branch_6x from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=df2207c ]

LUCENE-7276: MatchNoDocsQuery now includes an optional reason for why it was used


> Add an optional reason to the MatchNoDocsQuery
> --
>
> Key: LUCENE-7276
> URL: https://issues.apache.org/jira/browse/LUCENE-7276
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: Ferenczi Jim
>Priority: Minor
>  Labels: patch
> Attachments: LUCENE-7276.patch, LUCENE-7276.patch, LUCENE-7276.patch, 
> LUCENE-7276.patch, LUCENE-7276.patch, LUCENE-7276.patch
>
>
> It's sometimes difficult to debug a query that results in a MatchNoDocsQuery. 
> The MatchNoDocsQuery is always rewritten into an empty boolean query.
> This patch adds an optional reason and implements a weight in order to keep 
> track of the reason why the query did not match any document. The reason is 
> printed by toString and when an explanation for a non-matching document is requested.
> For instance the query:
> new MatchNoDocsQuery("field 'title' not found").toString()
> => 'MatchNoDocsQuery["field 'title' not found"]'



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7276) Add an optional reason to the MatchNoDocsQuery

2016-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15367995#comment-15367995
 ] 

ASF subversion and git services commented on LUCENE-7276:
-

Commit cbbc505268e8fa994fa21383ed49a91b2e923f66 in lucene-solr's branch 
refs/heads/master from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=cbbc505 ]

LUCENE-7276: MatchNoDocsQuery now includes an optional reason for why it was used


> Add an optional reason to the MatchNoDocsQuery
> --
>
> Key: LUCENE-7276
> URL: https://issues.apache.org/jira/browse/LUCENE-7276
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: Ferenczi Jim
>Priority: Minor
>  Labels: patch
> Attachments: LUCENE-7276.patch, LUCENE-7276.patch, LUCENE-7276.patch, 
> LUCENE-7276.patch, LUCENE-7276.patch, LUCENE-7276.patch
>
>
> It's sometimes difficult to debug a query that results in a MatchNoDocsQuery. 
> The MatchNoDocsQuery is always rewritten into an empty boolean query.
> This patch adds an optional reason and implements a weight in order to keep 
> track of the reason why the query did not match any document. The reason is 
> printed by toString and when an explanation for a non-matching document is requested.
> For instance the query:
> new MatchNoDocsQuery("field 'title' not found").toString()
> => 'MatchNoDocsQuery["field 'title' not found"]'



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1065 - Still Failing

2016-07-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1065/

13 tests failed.
FAILED:  org.apache.solr.cloud.hdfs.HdfsCollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=2052, name=collection2, 
state=RUNNABLE, group=TGRP-HdfsCollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=2052, name=collection2, state=RUNNABLE, 
group=TGRP-HdfsCollectionsAPIDistributedZkTest]
Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:58766/um_/zt: collection already exists: 
awholynewstresscollection_collection2_0
at __randomizedtesting.SeedInfo.seed([703BB3FA0669690C]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:606)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:259)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:366)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1270)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1040)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:976)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1599)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1620)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:988)


FAILED:  org.apache.solr.core.OpenCloseCoreStressTest.test10Minutes

Error Message:
Captured an uncaught exception in thread: Thread[id=5153, name=Thread-2287, 
state=RUNNABLE, group=TGRP-OpenCloseCoreStressTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=5153, name=Thread-2287, state=RUNNABLE, 
group=TGRP-OpenCloseCoreStressTest]
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
at __randomizedtesting.SeedInfo.seed([703BB3FA0669690C]:0)
at java.util.Arrays.copyOfRange(Arrays.java:3664)
at java.lang.String.(String.java:207)
at java.lang.String.substring(String.java:1969)
at java.util.StringTokenizer.nextToken(StringTokenizer.java:352)
at javax.crypto.Cipher.tokenizeTransformation(Cipher.java:319)
at javax.crypto.Cipher.getTransforms(Cipher.java:429)
at javax.crypto.Cipher.getInstance(Cipher.java:503)
at sun.security.ssl.JsseJce.getCipher(JsseJce.java:229)
at sun.security.ssl.CipherBox.(CipherBox.java:179)
at sun.security.ssl.CipherBox.newCipherBox(CipherBox.java:263)
at 
sun.security.ssl.CipherSuite$BulkCipher.newCipher(CipherSuite.java:505)
at 
sun.security.ssl.CipherSuite$BulkCipher.isAvailable(CipherSuite.java:572)
at 
sun.security.ssl.CipherSuite$BulkCipher.isAvailable(CipherSuite.java:527)
at sun.security.ssl.CipherSuite.isAvailable(CipherSuite.java:194)
at 
sun.security.ssl.SSLContextImpl.getApplicableCipherSuiteList(SSLContextImpl.java:346)
at 
sun.security.ssl.SSLContextImpl.getDefaultCipherSuiteList(SSLContextImpl.java:304)
at sun.security.ssl.SSLSocketImpl.init(SSLSocketImpl.java:626)
at sun.security.ssl.SSLSocketImpl.(SSLSocketImpl.java:567)
at 
sun.security.ssl.SSLSocketFactoryImpl.createSocket(SSLSocketFactoryImpl.java:110)
at 
org.apache.http.conn.ssl.SSLConnectionSocketFactory.createLayeredSocket(SSLConnectionSocketFactory.java:363)
at 
org.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:353)
at 
org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:134)
at 
org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:353)
at 
org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:380)
at 
org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
at 
org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at 
org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at 

[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 253 - Still Failing!

2016-07-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/253/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestLocalFSCloudBackupRestore.test

Error Message:
Error from server at http://127.0.0.1:55521/solr: 'location' is not specified 
as a query parameter or as a default repository property or as a cluster 
property.

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:55521/solr: 'location' is not specified as a 
query parameter or as a default repository property or as a cluster property.
at 
__randomizedtesting.SeedInfo.seed([11B7FE823D685B6E:99E3C15893943696]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:590)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:259)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:403)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:356)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1228)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:998)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:934)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at 
org.apache.solr.cloud.AbstractCloudBackupRestoreTestCase.testInvalidPath(AbstractCloudBackupRestoreTestCase.java:149)
at 
org.apache.solr.cloud.AbstractCloudBackupRestoreTestCase.test(AbstractCloudBackupRestoreTestCase.java:128)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Commented] (LUCENE-7372) factor out a org.apache.lucene.search.DelegatingWeight class

2016-07-08 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15367824#comment-15367824
 ] 

Adrien Grand commented on LUCENE-7372:
--

This should probably be called {{FilterWeight}} to be consistent with the rest 
of the code base? Also, why does its constructor take a Query object too? I 
think it should only take a Weight object and use the query returned by 
Weight.getQuery() to pass to the parent constructor? Otherwise +1.
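
Something along these lines is what I have in mind (untested sketch; the exact 
method list should just mirror whatever Weight declares on master):

package org.apache.lucene.search;

import java.io.IOException;
import java.util.Set;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.Term;

public class FilterWeight extends Weight {

  protected final Weight in;

  protected FilterWeight(Weight in) {
    super(in.getQuery());   // no separate Query parameter needed
    this.in = in;
  }

  @Override
  public void extractTerms(Set<Term> terms) {
    in.extractTerms(terms);
  }

  @Override
  public Explanation explain(LeafReaderContext context, int doc) throws IOException {
    return in.explain(context, doc);
  }

  @Override
  public float getValueForNormalization() throws IOException {
    return in.getValueForNormalization();
  }

  @Override
  public void normalize(float norm, float boost) {
    in.normalize(norm, boost);
  }

  @Override
  public Scorer scorer(LeafReaderContext context) throws IOException {
    return in.scorer(context);
  }

  // deliberately no bulkScorer() override, per the issue description
}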

> factor out a org.apache.lucene.search.DelegatingWeight class
> 
>
> Key: LUCENE-7372
> URL: https://issues.apache.org/jira/browse/LUCENE-7372
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-7372.patch
>
>
> * {{DelegatingWeight}} to delegate method implementations to the {{Weight}} 
> that it wraps
> * exception: no delegating for the {{bulkScorer}} method implementation since 
> currently not all delegating weights implement/override that default method



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7355) Leverage MultiTermAwareComponent in query parsers

2016-07-08 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-7355:
-
Attachment: LUCENE-7355.patch

Fixing a typo.

> Leverage MultiTermAwareComponent in query parsers
> -
>
> Key: LUCENE-7355
> URL: https://issues.apache.org/jira/browse/LUCENE-7355
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7355.patch, LUCENE-7355.patch, LUCENE-7355.patch, 
> LUCENE-7355.patch, LUCENE-7355.patch, LUCENE-7355.patch, LUCENE-7355.patch, 
> LUCENE-7355.patch
>
>
> MultiTermAwareComponent is designed to make it possible to do the right thing 
> in query parsers when it comes to analysis of multi-term queries. However, 
> since query parsers just take an analyzer and since analyzers do not 
> propagate the information about what to do for multi-term analysis, query 
> parsers cannot do the right thing out of the box.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8931) SolrCloud RebalanceShards API

2016-07-08 Thread olivier soyez (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15367791#comment-15367791
 ] 

olivier soyez commented on SOLR-8931:
-

Great news that Bloomreach donated their Rebalance API. So I think this 
feature won't be needed anymore once SOLR-9241 is committed.

> SolrCloud RebalanceShards API
> -
>
> Key: SOLR-8931
> URL: https://issues.apache.org/jira/browse/SOLR-8931
> Project: Solr
>  Issue Type: Wish
>  Components: SolrCloud
>Reporter: olivier soyez
>Priority: Minor
>  Labels: patch
> Fix For: 6.0
>
> Attachments: SOLR-8931.patch
>
>
> It would be great to have a RebalanceShards action in SolrCloud, such as 
> described in this post by Suruchi Shah: 
> http://engineering.bloomreach.com/solrcloud-rebalance-api/
> By the way, in order to rebalance shards from a collection with 
> replicationFactor > 1, one idea could be to split some shards using the 
> rule-based replica placement.
> Since the https://issues.apache.org/jira/browse/SOLR-8728 jira issue, splitShard 
> uses rule-based replica placement (for the "replication" replicas).
> As part of a proof of concept, the attached patch introduces a new action in 
> the collections API, named "REBALANCESHARDS", to rebalance some or all shards 
> among SolrCloud nodes using splitShard.
> After each splitShard, a deleteshard of the inactive parent shard is done.
> One mandatory parameter:
> - collection: the name of the collection
> Two optional parameters:
> - deltaMaxFromAverage (default: 20): used to select n shards (<= half of all 
> shards) to be split, whose number of docs is greater than 
> deltaMaxFromAverage percent of the average number of docs per shard
> - force (default: false): if true, when no shards are selected with the given 
> deltaMaxFromAverage, all shards of the collection will be selected to 
> be split
> Use example:
> curl 
> 'http://ip:port/solr/admin/collections?action=REBALANCESHARDS=collection1=2=30'
> Drawbacks: replicationFactor must be more than one, and selecting shards based on 
> the average number of docs per shard is not suitable for all cases



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 703 - Still Failing!

2016-07-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/703/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  org.apache.solr.cloud.ZkSolrClientTest.testMultipleWatchesAsync

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([D5AA052C29EF59BA:BD1CD68EBC0B5824]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.cloud.ZkSolrClientTest.testMultipleWatchesAsync(ZkSolrClientTest.java:265)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
No live SolrServers available to handle this 
request:[http://127.0.0.1:62440/c8n_1x3_lf_shard1_replica2]

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: No live 

[jira] [Commented] (SOLR-8931) SolrCloud RebalanceShards API

2016-07-08 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15367761#comment-15367761
 ] 

Cassandra Targett commented on SOLR-8931:
-

Bloomreach has now donated their Rebalance API, in SOLR-9241. While there is 
still a way to go with that issue, [~soyouz], do you think this feature is 
still needed when it is committed? I'll confess I haven't looked at your patch, 
just happened to come across this while looking for something else and 
remembered SOLR-9241 was recently created. 

> SolrCloud RebalanceShards API
> -
>
> Key: SOLR-8931
> URL: https://issues.apache.org/jira/browse/SOLR-8931
> Project: Solr
>  Issue Type: Wish
>  Components: SolrCloud
>Reporter: olivier soyez
>Priority: Minor
>  Labels: patch
> Fix For: 6.0
>
> Attachments: SOLR-8931.patch
>
>
> It would be great to have a RebalanceShards action in SolrCloud, such as 
> described in this post by Suruchi Shah: 
> http://engineering.bloomreach.com/solrcloud-rebalance-api/
> By the way, in order to rebalance shards from a collection with 
> replicationFactor > 1, one idea could be to split some shards using the 
> rule-based replica placement.
> Since the https://issues.apache.org/jira/browse/SOLR-8728 jira issue, splitShard 
> uses rule-based replica placement (for the "replication" replicas).
> As part of a proof of concept, the attached patch introduces a new action in 
> the collections API, named "REBALANCESHARDS", to rebalance some or all shards 
> among SolrCloud nodes using splitShard.
> After each splitShard, a deleteshard of the inactive parent shard is done.
> One mandatory parameter:
> - collection: the name of the collection
> Two optional parameters:
> - deltaMaxFromAverage (default: 20): used to select n shards (<= half of all 
> shards) to be split, whose number of docs is greater than 
> deltaMaxFromAverage percent of the average number of docs per shard
> - force (default: false): if true, when no shards are selected with the given 
> deltaMaxFromAverage, all shards of the collection will be selected to 
> be split
> Use example:
> curl 
> 'http://ip:port/solr/admin/collections?action=REBALANCESHARDS=collection1=2=30'
> Drawbacks: replicationFactor must be more than one, and selecting shards based on 
> the average number of docs per shard is not suitable for all cases



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9212) Enable FastVectorHighlighter to Work on MultiPhraseQuery

2016-07-08 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-9212:

Issue Type: Improvement  (was: Bug)

> Enable FastVectorHighlighter to Work on MultiPhraseQuery
> 
>
> Key: SOLR-9212
> URL: https://issues.apache.org/jira/browse/SOLR-9212
> Project: Solr
>  Issue Type: Improvement
>  Components: highlighter
>Affects Versions: 5.3
> Environment: Linux, OSx, Windows
>Reporter: Esther Quansah
>
> FastVectorHighlighter will not highlight on MultiPhraseQuery - will instead 
> just skip and return results. 
> Example:
> I have synonyms.txt file and it contains
> break,breaks,broke,brake
> If I search for "brake vehicle", the query parses to MultiPhraseQuery with 
> brake vehicle, break vehicle, breaks vehicle, broke vehicle as possible 
> matches. Would like highlighting to occur on all of those results. Currently 
> there are no highlighting results at all.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Compaction logic

2016-07-08 Thread Konstantin
Hello, my name is Konstantin. I'm currently reading Lucene's sources and
wondering why particular technical decisions were made.

Full disclosure - I'm writing my own inverted index implementation as a pet
project, https://github.com/kk00ss/Rhinodog . It's about 4 kloc of Scala,
and there are tests comparing it with Lucene on a wiki dump (I actually run
it only on a small part, ~500MB).

What interests me most is why the compaction algorithm is implemented this way
- it's clear and simple, but wouldn't it be better to merge postings lists
on a per-term basis? The current Lucene implementation is probably better
for HDDs, and the proposed one would need an SSD to show adequate performance.
But it would mean more, smaller compactions, each much cheaper. Sometimes,
if a term has a small postings list, it would be inefficient, but I
think some threshold could be used.
This idea comes from the assumption that when half of the documents have been
removed from a segment, not all the terms might need compaction, assuming
a non-uniform distribution of terms among documents (which seems likely to
me, an amateur ;-) ).
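
To make the idea concrete, here is a rough sketch in Java (hypothetical types,
ignoring encoding, positions, and compression) of the per-term step I have in mind:

import java.util.ArrayList;
import java.util.BitSet;
import java.util.List;

class PerTermCompaction {
    static class Posting { int docId; /* freq, positions, payload, ... */ }

    // Rewrite a single term's postings list, dropping deleted docs, but only
    // when the fraction of deleted postings passes a threshold -- small or
    // mostly-live lists are left alone.
    static List<Posting> compactTerm(List<Posting> postings, BitSet liveDocs,
                                     double minDeletedRatio) {
        int deleted = 0;
        for (Posting p : postings) {
            if (!liveDocs.get(p.docId)) deleted++;
        }
        if (deleted < postings.size() * minDeletedRatio) {
            return postings;   // not worth rewriting this term
        }
        List<Posting> compacted = new ArrayList<>(postings.size() - deleted);
        for (Posting p : postings) {
            if (liveDocs.get(p.docId)) compacted.add(p);
        }
        return compacted;
    }
}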

Does it make any sense?
BTW, any input about Rhinodog and its benchmarks vs Lucene would be
appreciated.


[jira] [Updated] (LUCENE-7372) factor out a org.apache.lucene.search.DelegatingWeight class

2016-07-08 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated LUCENE-7372:

Attachment: LUCENE-7372.patch

> factor out a org.apache.lucene.search.DelegatingWeight class
> 
>
> Key: LUCENE-7372
> URL: https://issues.apache.org/jira/browse/LUCENE-7372
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-7372.patch
>
>
> * {{DelegatingWeight}} to delegate method implementations to the {{Weight}} 
> that it wraps
> * exception: no delegating for the {{bulkScorer}} method implementation since 
> currently not all delegating weights implement/override that default method



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7372) factor out a org.apache.lucene.search.DelegatingWeight class

2016-07-08 Thread Christine Poerschke (JIRA)
Christine Poerschke created LUCENE-7372:
---

 Summary: factor out a org.apache.lucene.search.DelegatingWeight 
class
 Key: LUCENE-7372
 URL: https://issues.apache.org/jira/browse/LUCENE-7372
 Project: Lucene - Core
  Issue Type: Task
Reporter: Christine Poerschke
Assignee: Christine Poerschke
Priority: Minor
 Attachments: LUCENE-7372.patch

* {{DelegatingWeight}} to delegate method implementations to the {{Weight}} 
that it wraps
* exception: no delegating for the {{bulkScorer}} method implementation since 
currently not all delegating weights implement/override that default method




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9181) ZkStateReaderTest failure

2016-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15367672#comment-15367672
 ] 

ASF subversion and git services commented on SOLR-9181:
---

Commit 60232cd028e41c427b686a6cab720ac3989ba289 in lucene-solr's branch 
refs/heads/branch_6x from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=60232cd ]

SOLR-9181: Add some logging to ZkStateReader to try and debug test failures


> ZkStateReaderTest failure
> -
>
> Key: SOLR-9181
> URL: https://issues.apache.org/jira/browse/SOLR-9181
> Project: Solr
>  Issue Type: Bug
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Fix For: 6.2, master (7.0)
>
> Attachments: SOLR-9181-2.patch, SOLR-9181-2.patch, SOLR-9181.patch, 
> SOLR-9181.patch, SOLR-9181.patch, SOLR-9181.patch, stderr, stdout
>
>
> https://builds.apache.org/job/Lucene-Solr-Tests-6.x/243/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9181) ZkStateReaderTest failure

2016-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15367675#comment-15367675
 ] 

ASF subversion and git services commented on SOLR-9181:
---

Commit fda3d8b7c2069d9cbd2445b397e9cceb38851be6 in lucene-solr's branch 
refs/heads/branch_6x from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fda3d8b ]

SOLR-9181: Fix race in constructState() and missing call in 
forceUpdateCollection()


> ZkStateReaderTest failure
> -
>
> Key: SOLR-9181
> URL: https://issues.apache.org/jira/browse/SOLR-9181
> Project: Solr
>  Issue Type: Bug
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Fix For: 6.2, master (7.0)
>
> Attachments: SOLR-9181-2.patch, SOLR-9181-2.patch, SOLR-9181.patch, 
> SOLR-9181.patch, SOLR-9181.patch, SOLR-9181.patch, stderr, stdout
>
>
> https://builds.apache.org/job/Lucene-Solr-Tests-6.x/243/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9181) ZkStateReaderTest failure

2016-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15367674#comment-15367674
 ] 

ASF subversion and git services commented on SOLR-9181:
---

Commit 86d8d3a937802f47add8408bdd05117ec0fc2137 in lucene-solr's branch 
refs/heads/branch_6x from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=86d8d3a ]

SOLR-9181: More logging


> ZkStateReaderTest failure
> -
>
> Key: SOLR-9181
> URL: https://issues.apache.org/jira/browse/SOLR-9181
> Project: Solr
>  Issue Type: Bug
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Fix For: 6.2, master (7.0)
>
> Attachments: SOLR-9181-2.patch, SOLR-9181-2.patch, SOLR-9181.patch, 
> SOLR-9181.patch, SOLR-9181.patch, SOLR-9181.patch, stderr, stdout
>
>
> https://builds.apache.org/job/Lucene-Solr-Tests-6.x/243/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_92) - Build # 17187 - Failure!

2016-07-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17187/
Java: 32bit/jdk1.8.0_92 -server -XX:+UseSerialGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
ObjectTracker found 4 object(s) that were not released!!! 
[MDCAwareThreadPoolExecutor, SolrCore, MockDirectoryWrapper, 
MockDirectoryWrapper]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 4 object(s) that were not 
released!!! [MDCAwareThreadPoolExecutor, SolrCore, MockDirectoryWrapper, 
MockDirectoryWrapper]
at __randomizedtesting.SeedInfo.seed([B2BB832CF5052F4A]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:257)
at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 1) 
Thread[id=2052, name=searcherExecutor-907-thread-1, state=WAITING, 
group=TGRP-TestLazyCores] at sun.misc.Unsafe.park(Native Method)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.core.TestLazyCores: 
   1) Thread[id=2052, name=searcherExecutor-907-thread-1, state=WAITING, 
group=TGRP-TestLazyCores]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at 

[jira] [Commented] (SOLR-9279) Add greater than, less than, etc in Solr function queries

2016-07-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15367649#comment-15367649
 ] 

ASF GitHub Bot commented on SOLR-9279:
--

Github user softwaredoug commented on the issue:

https://github.com/apache/lucene-solr/pull/49
  
Dumb question: what are the next steps? Do I need to do anything else here or 
on the JIRA ticket?


> Add greater than, less than, etc in Solr function queries
> -
>
> Key: SOLR-9279
> URL: https://issues.apache.org/jira/browse/SOLR-9279
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Reporter: Doug Turnbull
> Fix For: master (7.0)
>
>
> If you use the "if" function query, you'll often expect to be able to use 
> greater than/less than functions. For example, you might want to boost books 
> written in the past 7 years. Unfortunately, there's no "greater than" 
> function query that will return non-zero when the lhs > rhs. Instead, to get 
> this, you need to create really awkward function queries like I do here 
> (http://opensourceconnections.com/blog/2014/11/26/stepwise-date-boosting-in-solr/):
> if(min(0,sub(ms(mydatefield),sub(ms(NOW),315569259747))),0.8,1)
> The pull request attached to this Jira adds the following function queries
> (https://github.com/apache/lucene-solr/pull/49)
> -gt(lhs, rhs) (returns 1 if lhs > rhs, 0 otherwise)
> -lt(lhs, rhs) (returns 1 if lhs < rhs, 0 otherwise)
> -gte
> -lte
> -eq
> So instead of 
> if(min(0,sub(ms(mydatefield),sub(ms(NOW),315569259747))),0.8,1)
> one could now write
> if(lt(ms(mydatefield),315569259747),0.8,1)
> (if mydatefield < 315569259747 then 0.8 else 1)
> A bit more readable and less puzzling
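
As a rough illustration of how this would be used from client code, here is a minimal SolrJ 
sketch (assuming the {{gt()}} function from the attached pull request is available on the 
server, plus a hypothetical {{books}} collection with a {{mydatefield}} date field):

{code:java}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class ComparisonBoostExample {
  public static void main(String[] args) throws Exception {
    // Hypothetical core URL; adjust to your own setup.
    try (HttpSolrClient client = new HttpSolrClient("http://localhost:8983/solr/books")) {
      SolrQuery q = new SolrQuery("*:*");
      q.set("defType", "edismax");
      // Multiplicative boost: 1.25 for docs dated within the last ~7 years, 1 otherwise.
      // gt() comes from the pull request and is not in released Solr.
      q.set("boost", "if(gt(ms(mydatefield),ms(NOW-7YEARS)),1.25,1)");
      QueryResponse rsp = client.query(q);
      System.out.println("hits: " + rsp.getResults().getNumFound());
    }
  }
}
{code}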



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr issue #49: SOLR-9279 Adds comparison function queries

2016-07-08 Thread softwaredoug
Github user softwaredoug commented on the issue:

https://github.com/apache/lucene-solr/pull/49
  
Dumb question: what are the next steps? Do I need to do anything else here or 
on the JIRA ticket?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-5944) Support updates of numeric DocValues

2016-07-08 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15361862#comment-15361862
 ] 

Ishan Chattopadhyaya edited comment on SOLR-5944 at 7/8/16 11:44 AM:
-

New patch fixing all nocommits. Still a few additional tests, which Hoss 
mentioned, are TODO. Here's a stab at replying to Hoss' comments (Maybe I'll 
keep updating this comment itself as and when I fix some of the TODO items 
here):

 {panel:title=JettySolrRunner}
* javadocs, javadocs, javadocs {color:green}[FIXED]{color}
{panel}

{panel:title=XMLLoader + JavabinLoader}
* why is this param-check logic duplicated in these classes? {color:green}[Not 
sure what you mean here, I just set the prevVersion on the cmd here now]{color}
* why not put this in DUP (which already has access to the request params) when 
it's doing its "FROMLEADER" logic? {color:green}[Since commitWithin and 
overwrite were being set here, I thought this is an appropriate place to set the 
prevVersion on the cmd]{color}
{panel}

{panel:title=AddUpdateCommand}
* these variables (like all variables) should have javadocs explaining what 
they are and what they mean {color:green}[FIXED]{color}
** people skimming a class shouldn't have to grep the code for a variable name 
to understand its purpose
* having 2 variables here seems like it might be error prone?  what does it 
mean if {{prevVersion < 0 && isInPlaceUpdate == true}} ? or {{0 < prevVersion 
&& isInPlaceUpdate == false}} ? {color:green}[FIXED: Now just have one 
variable]{color}
** would it make more sense to use a single {{long prevVersion}} variable and 
have a {{public boolean isInPlaceUpdate()}} that simply does {{return (0 < 
prevVersion); }} ? {color:green}[FIXED]{color}
{panel}

{panel:title=TransactionLog}
* javadocs for both the new {{write}} method and the existing {{write}} method  
{color:green}[FIXED]{color}
** explain what "prevPointer" means and note in the 2 arg method what the 
effective default "prevPointer" is.
* we should really have some "int" constants for referring to the List indexes 
involved in these records, so instead of code like {{entry.get(3)}} sprinkled 
in various classes like UpdateLog and PeerSync it can be something more readable 
like {{entry.get(PREV_VERSION_IDX)}}  {color:red}[TODO]{color}
{panel}


{panel:title=UpdateLog}
* javadocs for both the new {{LogPtr}} constructor and the existing 
constructor {color:green}[FIXED]{color}
** explain what "prevPointer" means and note in the 2 arg constructor what the 
effective default "prevPointer" is.  {color:green}[FIXED]{color}
* {{add(AddUpdateCommand, boolean)}}
** this new code for doing lookups in {{map}}, {{prevMap}} and {{prevMap2}} 
seems weird to me (but admittedly I'm not really an expert on UpdateLog in 
general and how these maps are used)
** what primarily concerns me is what the expected behavior is if the "id" 
isn't found in any of these maps -- it looks like prevPointer defaults to "-1" 
regardless of whether this is an inplace update ... is that intentional? ... is 
it possible there are older records we will miss and need to flag that?  
{color:green}[Yes, this was intentional, and I think it doesn't make any 
difference. If an "id" isn't found in any of these maps, it would mean that the 
previous update was committed and should be looked up in the index. ]{color}
** ie: do we need to worry about distinguishing here between "not an in place 
update, therefore prevPointer=-1" vs "is an in place update, but we can't find 
the prevPointer" ?? {color:green}[I think we don't need to worry. When whoever 
reads this LogPtr receives a prevPointer=-1, it should be clear why it was -1: 
if the command's {{flags|UpdateLog.UPDATE_INPLACE}} is set, then this command is 
an in-place update whose previous update is in the index and not in the tlog; if 
that flag is not set, it is not an in-place update at all, and the prevPointer 
value (which is just -1 as a dummy value) can be ignored.]{color}
** assuming this code is correct, it might be a little easier to read if it 
were refactored into something like:{code}
// nocommit: jdocs
private synchronized long getPrevPointerForUpdate(AddUpdateCommand cmd) {
  // note: sync required to ensure maps aren't changed out from under us
  if (cmd.isInPlaceUpdate) {
    BytesRef indexedId = cmd.getIndexedId();
    for (Map<BytesRef, LogPtr> currentMap : Arrays.asList(map, prevMap, prevMap2)) {
      LogPtr prevEntry = currentMap.get(indexedId);
      if (null != prevEntry) {
        return prevEntry.pointer;
      }
    }
  }
  return -1; // default when not inplace, or if we can't find a previous entry
}
{code} {color:green}[FIXED: Refactored into something similar to above]{color}
* {{applyPartialUpdates}}
** it seems like this method would be a really good candidate for some direct 
unit testing? {color:red}[TODO]{color}
*** ie: construct a synthetic 

[jira] [Updated] (SOLR-5944) Support updates of numeric DocValues

2016-07-08 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-5944:
---
Attachment: SOLR-5944.patch

> Support updates of numeric DocValues
> 
>
> Key: SOLR-5944
> URL: https://issues.apache.org/jira/browse/SOLR-5944
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Shalin Shekhar Mangar
> Attachments: DUP.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> TestStressInPlaceUpdates.eb044ac71.beast-167-failure.stdout.txt, 
> TestStressInPlaceUpdates.eb044ac71.beast-587-failure.stdout.txt, 
> TestStressInPlaceUpdates.eb044ac71.failures.tar.gz, 
> hoss.62D328FA1DEA57FD.fail.txt, hoss.62D328FA1DEA57FD.fail2.txt, 
> hoss.62D328FA1DEA57FD.fail3.txt, hoss.D768DD9443A98DC.fail.txt, 
> hoss.D768DD9443A98DC.pass.txt
>
>
> LUCENE-5189 introduced support for updates to numeric docvalues. It would be 
> really nice to have Solr support this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-5944) Support updates of numeric DocValues

2016-07-08 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15361862#comment-15361862
 ] 

Ishan Chattopadhyaya edited comment on SOLR-5944 at 7/8/16 11:42 AM:
-

New patch fixing all nocommits. Still a few additional tests, which Hoss 
mentioned, are TODO. Here's a stab at replying to Hoss' comments (Maybe I'll 
keep updating this comment itself as and when I fix some of the TODO items 
here):

 {panel:title=JettySolrRunner}
* javadocs, javadocs, javadocs {color:green}[FIXED]{color}
{panel}

{panel:title=XMLLoader + JavabinLoader}
* why is this param-check logic duplicated in these classes? {color:red}[Not 
sure what you mean here, I just set the prevVersion on the cmd here now]{color}
* why not put this in DUP (which already has access to the request params) when 
it's doing its "FROMLEADER" logic? {color:red}[Since commitWithin and 
overwrite were being set here, I thought this is an appropriate place to set the 
prevVersion on the cmd]{color}
{panel}

{panel:title=AddUpdateCommand}
* these variables (like all variables) should have javadocs explaining what 
they are and what they mean {color:green}[FIXED]{color}
** people skimming a class shouldn't have to grep the code for a variable name 
to understand its purpose
* having 2 variables here seems like it might be error prone?  what does it 
mean if {{prevVersion < 0 && isInPlaceUpdate == true}} ? or {{0 < prevVersion 
&& isInPlaceUpdate == false}} ? {color:green}[FIXED: Now just have one 
variable]{color}
** would it make more sense to use a single {{long prevVersion}} variable and 
have a {{public boolean isInPlaceUpdate()}} that simply does {{return (0 < 
prevVersion); }} ? {color:green}[FIXED]{color}
{panel}

{panel:title=TransactionLog}
* javadocs for both the new {{write}} method and the existing {{write}} method  
{color:green}[FIXED]{color}
** explain what "prevPointer" means and note in the 2 arg method what the 
effective default "prevPointer" is.
* we should really have some "int" constants for referring to the List indexes 
involved in these records, so instead of code like {{entry.get(3)}} sprinkled 
in various classes like UpdateLog and PeerSync it can be something more readable 
like {{entry.get(PREV_VERSION_IDX)}}  {color:red}[TODO]{color}
{panel}


{panel:title=UpdateLog}
* javadocs for both the new {{LogPtr}} constructor and the existing 
constructor {color:green}[FIXED]{color}
** explain what "prevPointer" means and note in the 2 arg constructor what the 
effective default "prevPointer" is.  {color:green}[FIXED]{color}
* {{add(AddUpdateCommand, boolean)}}
** this new code for doing lookups in {{map}}, {{prevMap}} and {{prevMap2}} 
seems weird to me (but admittedly I'm not really an expert on UpdateLog in 
general and how these maps are used)
** what primarily concerns me is what the expected behavior is if the "id" 
isn't found in any of these maps -- it looks like prevPointer defaults to "-1" 
regardless of whether this is an inplace update ... is that intentional? ... is 
it possible there are older records we will miss and need to flag that?  
{color:green}[Yes, this was intentional, and I think it doesn't make any 
difference. If an "id" isn't found in any of these maps, it would mean that the 
previous update was committed and should be looked up in the index. ]{color}
** ie: do we need to worry about distinguishing here between "not an in place 
update, therefore prevPointer=-1" vs "is an in place update, but we can't find 
the prevPointer" ?? {color:green}[I think we don't need to worry. When whoever 
reads this LogPtr receives a prevPointer=-1, it should be clear why it was -1: 
if the command's {{flags|UpdateLog.UPDATE_INPLACE}} is set, then this command is 
an in-place update whose previous update is in the index and not in the tlog; if 
that flag is not set, it is not an in-place update at all, and the prevPointer 
value (which is just -1 as a dummy value) can be ignored.]{color}
** assuming this code is correct, it might be a little easier to read if it 
were refactored into something like:{code}
// nocommit: jdocs
private synchronized long getPrevPointerForUpdate(AddUpdateCommand cmd) {
  // note: sync required to ensure maps aren't changed out from under us
  if (cmd.isInPlaceUpdate) {
    BytesRef indexedId = cmd.getIndexedId();
    for (Map<BytesRef, LogPtr> currentMap : Arrays.asList(map, prevMap, prevMap2)) {
      LogPtr prevEntry = currentMap.get(indexedId);
      if (null != prevEntry) {
        return prevEntry.pointer;
      }
    }
  }
  return -1; // default when not inplace, or if we can't find a previous entry
}
{code} {color:green}[FIXED: Refactored into something similar to above]{color}
* {{applyPartialUpdates}}
** it seems like this method would be a really good candidate for some direct 
unit testing? {color:red}[TODO]{color}
*** ie: construct a synthetic 

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3396 - Still Failing!

2016-07-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3396/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestLocalFSCloudBackupRestore.test

Error Message:
Error from server at http://127.0.0.1:50997/solr: 'location' is not specified 
as a query parameter or as a default repository property or as a cluster 
property.

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:50997/solr: 'location' is not specified as a 
query parameter or as a default repository property or as a cluster property.
at 
__randomizedtesting.SeedInfo.seed([EC4796D09DE7CB9A:6413A90A331BA662]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:606)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:259)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:366)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1270)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1040)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:976)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at 
org.apache.solr.cloud.AbstractCloudBackupRestoreTestCase.testInvalidPath(AbstractCloudBackupRestoreTestCase.java:149)
at 
org.apache.solr.cloud.AbstractCloudBackupRestoreTestCase.test(AbstractCloudBackupRestoreTestCase.java:128)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Commented] (SOLR-9242) Collection level backup/restore should provide a param for specifying the repository implementation it should use

2016-07-08 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15367524#comment-15367524
 ] 

Varun Thacker commented on SOLR-9242:
-

We have a test failure on Windows:

Log excerpt:
{code}
   [junit4]   2> Caused by: java.nio.file.InvalidPathException: Illegal char 
<:> at index 2: 
/C:/Users/jenkins/workspace/Lucene-Solr-6.x-Windows/solr/build/solr-core/test/J1/temp/solr.cloud.TestLocalFSCloudBackupRestore_168D4B6DEE507089-001/tempDir-002/mytestbackup
   [junit4]   2>at 
sun.nio.fs.WindowsPathParser.normalize(WindowsPathParser.java:182)
   [junit4]   2>at 
sun.nio.fs.WindowsPathParser.parse(WindowsPathParser.java:153)
   [junit4]   2>at 
sun.nio.fs.WindowsPathParser.parse(WindowsPathParser.java:77)
   [junit4]   2>at sun.nio.fs.WindowsPath.parse(WindowsPath.java:94)
   [junit4]   2>at 
sun.nio.fs.WindowsFileSystem.getPath(WindowsFileSystem.java:255)
   [junit4]   2>at java.nio.file.Paths.get(Paths.java:84)
   [junit4]   2>at 
org.apache.solr.core.backup.repository.LocalFileSystemRepository.createURI(LocalFileSystemRepository.java:62)
   [junit4]   2>at 
org.apache.solr.handler.SnapShooter.initialize(SnapShooter.java:85)
   [junit4]   2>at 
org.apache.solr.handler.SnapShooter.<init>(SnapShooter.java:79)
   [junit4]   2>at 
org.apache.solr.handler.admin.CoreAdminOperation$19.call(CoreAdminOperation.java:873)
   [junit4]   2>... 30 more
{code}

Jenkins failure link : 
http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/305/
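
For reference, a minimal standalone sketch (not the Solr code itself) that reproduces the same 
{{Paths.get}} behaviour and shows a URI-based parse that Windows accepts; the path string is a 
shortened, hypothetical version of the one in the log:

{code:java}
import java.net.URI;
import java.nio.file.InvalidPathException;
import java.nio.file.Path;
import java.nio.file.Paths;

public class WindowsPathDemo {
  public static void main(String[] args) {
    // Note the leading "/" before the drive letter, as in the failing test.
    String raw = "/C:/Users/jenkins/workspace/mytestbackup";
    try {
      // On Windows this throws InvalidPathException: Illegal char <:> at index 2
      Path direct = Paths.get(raw);
      System.out.println("parsed directly: " + direct);
    } catch (InvalidPathException e) {
      System.out.println("Paths.get(String) rejected it: " + e.getMessage());
    }
    // Going through a file: URI lets the default filesystem handle the drive letter.
    Path viaUri = Paths.get(URI.create("file://" + raw));
    System.out.println("parsed via URI: " + viaUri);
  }
}
{code}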

> Collection level backup/restore should provide a param for specifying the 
> repository implementation it should use
> -
>
> Key: SOLR-9242
> URL: https://issues.apache.org/jira/browse/SOLR-9242
> Project: Solr
>  Issue Type: Improvement
>Reporter: Hrishikesh Gadre
>Assignee: Varun Thacker
> Fix For: 6.2
>
> Attachments: SOLR-9242.patch, SOLR-9242.patch, SOLR-9242.patch, 
> SOLR-9242.patch, SOLR-9242.patch
>
>
> SOLR-7374 provides BackupRepository interface to enable storing Solr index 
> data to a configured file-system (e.g. HDFS, local file-system etc.). This 
> JIRA is to track the work required to extend this functionality at the 
> collection level.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-6.x - Build # 323 - Still Failing

2016-07-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/323/

1 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Expected 2 of 3 replicas to be active but only found 1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica3","base_url":"http://127.0.0.1:52217/xd_/mw","node_name":"127.0.0.1:52217_xd_%2Fmw","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf//clusterstate.json/33)={   
"replicationFactor":"3",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node1":{   "state":"down",   
"base_url":"http://127.0.0.1:45978/xd_/mw;,   
"core":"c8n_1x3_lf_shard1_replica2",   
"node_name":"127.0.0.1:45978_xd_%2Fmw"}, "core_node2":{   
"core":"c8n_1x3_lf_shard1_replica1",   
"base_url":"http://127.0.0.1:34076/xd_/mw;,   
"node_name":"127.0.0.1:34076_xd_%2Fmw",   "state":"down"}, 
"core_node3":{   "core":"c8n_1x3_lf_shard1_replica3",   
"base_url":"http://127.0.0.1:52217/xd_/mw;,   
"node_name":"127.0.0.1:52217_xd_%2Fmw",   "state":"active",   
"leader":"true",   "router":{"name":"compositeId"},   
"maxShardsPerNode":"1",   "autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected 2 of 3 replicas to be active but only found 
1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica3","base_url":"http://127.0.0.1:52217/xd_/mw","node_name":"127.0.0.1:52217_xd_%2Fmw","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf//clusterstate.json/33)={
  "replicationFactor":"3",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node1":{
  "state":"down",
  "base_url":"http://127.0.0.1:45978/xd_/mw;,
  "core":"c8n_1x3_lf_shard1_replica2",
  "node_name":"127.0.0.1:45978_xd_%2Fmw"},
"core_node2":{
  "core":"c8n_1x3_lf_shard1_replica1",
  "base_url":"http://127.0.0.1:34076/xd_/mw;,
  "node_name":"127.0.0.1:34076_xd_%2Fmw",
  "state":"down"},
"core_node3":{
  "core":"c8n_1x3_lf_shard1_replica3",
  "base_url":"http://127.0.0.1:52217/xd_/mw;,
  "node_name":"127.0.0.1:52217_xd_%2Fmw",
  "state":"active",
  "leader":"true",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
at 
__randomizedtesting.SeedInfo.seed([FFC4AB25DE2370A9:779094FF70DF1D51]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:168)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:55)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 

[JENKINS] Lucene-Solr-6.x-Windows (64bit/jdk1.8.0_92) - Build # 305 - Still Failing!

2016-07-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/305/
Java: 64bit/jdk1.8.0_92 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestLocalFSCloudBackupRestore.test

Error Message:
Error from server at http://127.0.0.1:56172/solr: The backup directory already 
exists: 
file:///C:/Users/jenkins/workspace/Lucene-Solr-6.x-Windows/solr/build/solr-core/test/J1/temp/solr.cloud.TestLocalFSCloudBackupRestore_168D4B6DEE507089-001/tempDir-002/mytestbackup/

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:56172/solr: The backup directory already 
exists: 
file:///C:/Users/jenkins/workspace/Lucene-Solr-6.x-Windows/solr/build/solr-core/test/J1/temp/solr.cloud.TestLocalFSCloudBackupRestore_168D4B6DEE507089-001/tempDir-002/mytestbackup/
at 
__randomizedtesting.SeedInfo.seed([168D4B6DEE507089:9ED974B740AC1D71]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:590)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:259)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:403)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:356)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1228)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:998)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:934)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at 
org.apache.solr.cloud.AbstractCloudBackupRestoreTestCase.testBackupAndRestore(AbstractCloudBackupRestoreTestCase.java:207)
at 
org.apache.solr.cloud.AbstractCloudBackupRestoreTestCase.test(AbstractCloudBackupRestoreTestCase.java:127)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 

[jira] [Commented] (SOLR-9181) ZkStateReaderTest failure

2016-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15367420#comment-15367420
 ] 

ASF subversion and git services commented on SOLR-9181:
---

Commit be8d56ada69c885342bfae80d73f9f5b89c11504 in lucene-solr's branch 
refs/heads/master from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=be8d56a ]

SOLR-9181: Fix race in constructState() and missing call in 
forceUpdateCollection()


> ZkStateReaderTest failure
> -
>
> Key: SOLR-9181
> URL: https://issues.apache.org/jira/browse/SOLR-9181
> Project: Solr
>  Issue Type: Bug
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Fix For: 6.2, master (7.0)
>
> Attachments: SOLR-9181-2.patch, SOLR-9181-2.patch, SOLR-9181.patch, 
> SOLR-9181.patch, SOLR-9181.patch, SOLR-9181.patch, stderr, stdout
>
>
> https://builds.apache.org/job/Lucene-Solr-Tests-6.x/243/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_92) - Build # 5968 - Still Failing!

2016-07-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/5968/
Java: 64bit/jdk1.8.0_92 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestLocalFSCloudBackupRestore.test

Error Message:
expected: but was:

Stack Trace:
java.lang.AssertionError: expected: but was:
at 
__randomizedtesting.SeedInfo.seed([E23341F9FC062E83:6A677E2352FA437B]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.cloud.AbstractCloudBackupRestoreTestCase.testBackupAndRestore(AbstractCloudBackupRestoreTestCase.java:209)
at 
org.apache.solr.cloud.AbstractCloudBackupRestoreTestCase.test(AbstractCloudBackupRestoreTestCase.java:127)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11211 lines...]
  

[jira] [Commented] (SOLR-7280) Load cores in sorted order and tweak coreLoadThread counts to improve cluster stability on restarts

2016-07-08 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15367410#comment-15367410
 ] 

Noble Paul commented on SOLR-7280:
--

Had a chat with [~shalinmangar] and came up with the following design.

h4. Objectives

* Move away from the current design of an unbounded number of threads for core 
loads, which leads to OOMs and other issues
* Avoid the leaderVoteWait problem, which leaves shards with no leader for a 
long time (or even down shards)

Blindly sorting cores based on replica names is not foolproof. It can lead to 
deadlocks depending on how the replicas are distributed. The sorting logic 
could be as follows.

h5. Core Sorting logic
When a node comes up, it reads the list of live nodes and the state of each 
collection it hosts. Construct a list of the shards {{collectionName+shardName}} it 
hosts, sorted by (no. of replicas for that shard on other started nodes + no. of 
replicas for that shard on the current node). Break ties by sorting the 
{{collectionName+shardName}} names alphabetically. This ensures that no other node 
is waiting for some replica on this node to come up (a rough comparator sketch 
follows below).

h5. Thread count
The default no. of {{coreLoadThreads}} should be much higher for SolrCloud 
(maybe 50?). The user should be able to override the value by explicitly 
configuring it. 
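
A rough comparator sketch of that sorting logic, assuming the two replica counts have already 
been computed from the cluster state and the local core list; the class and method names are 
hypothetical, and sorting the highest score first is an assumption based on the description above:

{code:java}
import java.util.Comparator;
import java.util.Map;

/** Hypothetical sketch only, not the actual Solr implementation. */
class ShardStartupOrder {
  /** A shard key is collectionName + "_" + shardName. */
  static Comparator<String> startupOrder(final Map<String, Integer> replicasOnOtherLiveNodes,
                                         final Map<String, Integer> replicasOnThisNode) {
    // Score = replicas of this shard already up on other live nodes + replicas hosted locally.
    Comparator<String> byScore = Comparator.comparingInt(
        shard -> replicasOnOtherLiveNodes.getOrDefault(shard, 0)
               + replicasOnThisNode.getOrDefault(shard, 0));
    // Assumed direction: highest score first, so no other node waits on this one;
    // ties broken alphabetically by shard key.
    return byScore.reversed().thenComparing(Comparator.<String>naturalOrder());
  }
}
{code}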
 


> Load cores in sorted order and tweak coreLoadThread counts to improve cluster 
> stability on restarts
> ---
>
> Key: SOLR-7280
> URL: https://issues.apache.org/jira/browse/SOLR-7280
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
> Fix For: 5.2, 6.0
>
> Attachments: SOLR-7280.patch
>
>
> In SOLR-7191, Damien mentioned that by loading solr cores in a sorted order 
> and tweaking some of the coreLoadThread counts, he was able to improve the 
> stability of a cluster with thousands of collections. We should explore some 
> of these changes and fold them into Solr.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+125) - Build # 17185 - Failure!

2016-07-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17185/
Java: 32bit/jdk-9-ea+125 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdateWithExplicitRefresh

Error Message:
Could not find collection : c1

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : c1
at 
__randomizedtesting.SeedInfo.seed([6F0B43B6F6D42519:70B1324126B4E3DC]:0)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:192)
at 
org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdate(ZkStateReaderTest.java:129)
at 
org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdateWithExplicitRefresh(ZkStateReaderTest.java:43)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:533)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(java.base@9-ea/Thread.java:843)




Build Log:
[...truncated 

[jira] [Commented] (SOLR-9256) Solr 6.x DataImportHandler fails with postgreSQL dataSource with multiple joined entities

2016-07-08 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15367376#comment-15367376
 ] 

Mikhail Khludnev commented on SOLR-9256:


Can you declare the same JDBC data source twice and use the second one with the second entity?

> Solr 6.x DataImportHandler fails with postgreSQL dataSource with multiple 
> joined entities
> -
>
> Key: SOLR-9256
> URL: https://issues.apache.org/jira/browse/SOLR-9256
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 6.0, 6.0.1, 6.1
> Environment: Solr 6.0, 6.0.1, 6.1 Single Instance or SolrCloud with 
> postgreSQL 9.4 Server on Java Version 1.8.0_91 runtime
>Reporter: Benjamin Richter
>
> h1. solr-data-config.xml
> {code:xml}
> 
>url="jdbc:postgresql://host:5432/database" user="username" 
> password="password" readOnly="true" autoCommit="false" />
>   
> 
>   
>   
>   
>  cacheKey="a_id" cacheLookup="outer.id" 
> cacheImpl="SortedMapBackedCache">
>   
>   
>   
>  cacheKey="a_id" cacheLookup="outer.id" 
> join="zipper">
>   
>   
>   
>   
> 
> {code}
> This works up to SOLR 5.5.2 (latest 5.x) but fails in SOLR 6.x.
> Exception:
> org.apache.solr.handler.dataimport.DataImportHandlerException: 
> org.postgresql.util.PSQLException: Dieses ResultSet ist geschlossen. (i.e. "This ResultSet is closed.")
> at 
> org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:61)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.hasnext(JdbcDataSource.java:434)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator$1.hasNext(JdbcDataSource.java:350)
> at 
> com.google.common.collect.Iterators$PeekingImpl.hasNext(Iterators.java:1216)
> at 
> org.apache.solr.handler.dataimport.Zipper.supplyNextChild(Zipper.java:65)
> at 
> org.apache.solr.handler.dataimport.EntityProcessorBase.getNext(EntityProcessorBase.java:127)
> at 
> org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:75)
> at 
> org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:244)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:475)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:514)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:414)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:329)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:232)
> at 
> org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:416)
> at 
> org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:480)
> at 
> org.apache.solr.handler.dataimport.DataImportHandler.handleRequestBody(DataImportHandler.java:200)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2053)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> 

[jira] [Updated] (SOLR-9181) ZkStateReaderTest failure

2016-07-08 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated SOLR-9181:

Attachment: SOLR-9181-2.patch

Think I've got it now - there was one case in forceUpdateCollection() where 
constructState() wasn't being called.  I'm going to commit this to master and 
watch for the next few hours, and then backport.

> ZkStateReaderTest failure
> -
>
> Key: SOLR-9181
> URL: https://issues.apache.org/jira/browse/SOLR-9181
> Project: Solr
>  Issue Type: Bug
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Fix For: 6.2, master (7.0)
>
> Attachments: SOLR-9181-2.patch, SOLR-9181-2.patch, SOLR-9181.patch, 
> SOLR-9181.patch, SOLR-9181.patch, SOLR-9181.patch, stderr, stdout
>
>
> https://builds.apache.org/job/Lucene-Solr-Tests-6.x/243/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9290) TCP-connections in CLOSE_WAIT spikes during heavy indexing when SSL is enabled

2016-07-08 Thread Johannes Meyer (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15367313#comment-15367313
 ] 

Johannes Meyer commented on SOLR-9290:
--

We have the same issue on Solr 6.1.0

> TCP-connections in CLOSE_WAIT spikes during heavy indexing when SSL is enabled
> --
>
> Key: SOLR-9290
> URL: https://issues.apache.org/jira/browse/SOLR-9290
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5.1, 5.5.2
>Reporter: Anshum Gupta
>Priority: Critical
>
> Heavy indexing on Solr with SSL leads to a lot of connections in CLOSE_WAIT 
> state. 
> At my workplace, we have seen this issue only with 5.5.1 and could not 
> reproduce it with 5.4.1, but from my conversation with Shalin, he knows of 
> users with 5.3.1 running into this issue too. 
> Here's an excerpt from the email [~shaie] sent to the mailing list (about 
> what we see):
> {quote}
> 1) It consistently reproduces on 5.5.1, but *does not* reproduce on 5.4.1
> 2) It does not reproduce when SSL is disabled
> 3) Restarting the Solr process (sometimes both need to be restarted), the
> count drops to 0, but if indexing continues, they climb up again
> When it does happen, Solr seems stuck. The leader cannot talk to the
> replica, or vice versa, the replica is usually put in DOWN state and
> there's no way to fix it besides restarting the JVM.
> {quote}
> Here's the mail thread: 
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201607.mbox/%3c46cc66220a8143dc903fa34e79205...@vp-exc01.dips.local%3E
> Creating this issue so we could track this and have more people comment on 
> what they see. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7355) Leverage MultiTermAwareComponent in query parsers

2016-07-08 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-7355:
-
Attachment: LUCENE-7355.patch

Patch that updates the javadocs of #attributeFactory so that they are not specific 
to normalization (even though, in practice, it is currently only used for 
normalization).

> Leverage MultiTermAwareComponent in query parsers
> -
>
> Key: LUCENE-7355
> URL: https://issues.apache.org/jira/browse/LUCENE-7355
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7355.patch, LUCENE-7355.patch, LUCENE-7355.patch, 
> LUCENE-7355.patch, LUCENE-7355.patch, LUCENE-7355.patch, LUCENE-7355.patch
>
>
> MultiTermAwareComponent is designed to make it possible to do the right thing 
> in query parsers when it comes to analysis of multi-term queries. However, 
> since query parsers just take an analyzer and since analyzers do not 
> propagate the information about what to do for multi-term analysis, query 
> parsers cannot do the right thing out of the box.
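
For readers unfamiliar with the component, here is a minimal sketch of the concept, assuming the Lucene 6.x analysis-common API. MultiTermChainSketch and multiTermChain are hypothetical names used for illustration only and are not part of the attached patch:

{code:java}
// Sketch: when building the analysis chain applied to wildcard/prefix/fuzzy
// ("multi-term") query text, keep only the factories that declare themselves
// multi-term aware, and use the multi-term-safe variant each one provides.
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.analysis.util.AbstractAnalysisFactory;
import org.apache.lucene.analysis.util.MultiTermAwareComponent;
import org.apache.lucene.analysis.util.TokenFilterFactory;

public class MultiTermChainSketch {

  /** Derives the factories to use for multi-term analysis from the regular chain. */
  static List<AbstractAnalysisFactory> multiTermChain(List<TokenFilterFactory> filters) {
    List<AbstractAnalysisFactory> safe = new ArrayList<>();
    for (TokenFilterFactory factory : filters) {
      if (factory instanceof MultiTermAwareComponent) {
        // e.g. a lowercasing factory returns a variant that only lowercases,
        // which is safe to apply to a pattern such as "Foo*"
        safe.add(((MultiTermAwareComponent) factory).getMultiTermComponent());
      }
      // factories that are not multi-term aware (stemmers, shingles, ...) are
      // dropped, since applying them to a wildcard pattern could change what
      // the pattern matches
    }
    return safe;
  }
}
{code}

A query parser that only receives a plain Analyzer never sees these per-factory hints, which is exactly the gap this issue is about.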






[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3395 - Still Failing!

2016-07-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3395/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:49995/c8n_1x3_lf_shard1_replica1]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:49995/c8n_1x3_lf_shard1_replica1]
at 
__randomizedtesting.SeedInfo.seed([6241088B271E26AC:EA15375189E24B54]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:753)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1151)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1040)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:976)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.sendDocsWithRetry(AbstractFullDistribZkTestBase.java:753)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.sendDocsWithRetry(AbstractFullDistribZkTestBase.java:741)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:178)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:57)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at