[JENKINS] Lucene-Solr-5.x-Windows (64bit/jdk1.7.0_67) - Build # 4252 - Failure!

2014-10-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4252/
Java: 64bit/jdk1.7.0_67 -XX:+UseCompressedOops -XX:+UseG1GC

5 tests failed.
REGRESSION:  
org.apache.solr.client.solrj.SolrExampleBinaryTest.testExampleConfig

Error Message:
Expected mime type application/octet-stream but got text/html.

Error 404 Can not find: /solr/admin/info/system
HTTP ERROR: 404
Problem accessing /solr/admin/info/system. Reason:
Can not find: /solr/admin/info/system
Powered by Jetty://

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Expected mime type application/octet-stream but got text/html.

Error 404 Can not find: /solr/admin/info/system
HTTP ERROR: 404
Problem accessing /solr/admin/info/system. Reason:
Can not find: /solr/admin/info/system
Powered by Jetty://

at 
__randomizedtesting.SeedInfo.seed([26DD394C9EC27162:91F02DD459C597BB]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:530)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
at 
org.apache.solr.client.solrj.SolrExampleTests.testExampleConfig(SolrExampleTests.java:220)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.ra

[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b28) - Build # 11235 - Still Failing!

2014-10-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11235/
Java: 64bit/jdk1.9.0-ea-b28 -XX:-UseCompressedOops -XX:+UseSerialGC

5 tests failed.
FAILED:  org.apache.solr.client.solrj.SolrExampleBinaryTest.testExampleConfig

Error Message:
Expected mime type application/octet-stream but got text/html.

Error 404 Can not find: /solr/admin/info/system
HTTP ERROR: 404
Problem accessing /solr/admin/info/system. Reason:
Can not find: /solr/admin/info/system
Powered by Jetty://

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Expected mime type application/octet-stream but got text/html.

Error 404 Can not find: /solr/admin/info/system
HTTP ERROR: 404
Problem accessing /solr/admin/info/system. Reason:
Can not find: /solr/admin/info/system
Powered by Jetty://

at 
__randomizedtesting.SeedInfo.seed([8792CBE821B4E3D8:30BFDF70E6B30501]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:530)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
at 
org.apache.solr.client.solrj.SolrExampleTests.testExampleConfig(SolrExampleTests.java:220)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:484)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.

[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2156 - Failure

2014-10-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2156/

1 tests failed.
REGRESSION:  org.apache.solr.cloud.DeleteReplicaTest.testDistribSearch

Error Message:
No live SolrServers available to handle this 
request:[http://127.0.0.1:46327/rmxe, http://127.0.0.1:46371/rmxe, 
http://127.0.0.1:46322/rmxe, http://127.0.0.1:46344/rmxe, 
http://127.0.0.1:46362/rmxe]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:46327/rmxe, 
http://127.0.0.1:46371/rmxe, http://127.0.0.1:46322/rmxe, 
http://127.0.0.1:46344/rmxe, http://127.0.0.1:46362/rmxe]
at 
__randomizedtesting.SeedInfo.seed([EA208459C4A53A43:6BC60A41B3FA5A7F]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:322)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.sendRequest(CloudSolrServer.java:880)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:658)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:601)
at 
org.apache.solr.cloud.DeleteReplicaTest.removeAndWaitForReplicaGone(DeleteReplicaTest.java:172)
at 
org.apache.solr.cloud.DeleteReplicaTest.deleteLiveReplicaTest(DeleteReplicaTest.java:145)
at 
org.apache.solr.cloud.DeleteReplicaTest.doTest(DeleteReplicaTest.java:89)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4906 - Failure

2014-10-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4906/

5 tests failed.
REGRESSION:  
org.apache.solr.client.solrj.SolrExampleBinaryTest.testExampleConfig

Error Message:
Expected mime type application/octet-stream but got text/html.

Error 404 Can not find: /solr/admin/info/system
HTTP ERROR: 404
Problem accessing /solr/admin/info/system. Reason:
Can not find: /solr/admin/info/system
Powered by Jetty://

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Expected mime type application/octet-stream but got text/html.

Error 404 Can not find: /solr/admin/info/system
HTTP ERROR: 404
Problem accessing /solr/admin/info/system. Reason:
Can not find: /solr/admin/info/system
Powered by Jetty://

at 
__randomizedtesting.SeedInfo.seed([D95C30E2B2CC7763:6E71247A75CB91BA]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:530)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
at 
org.apache.solr.client.solrj.SolrExampleTests.testExampleConfig(SolrExampleTests.java:220)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(Stateme

Re: 5.0 release status?

2014-10-04 Thread Ryan Ernst
On Oct 4, 2014 9:35 PM, "Jack Krupansky" wrote:
>
> Maybe I just can’t fully make sense of LUCENE-5934 – does it corrupt all
> 4.x indexes, or some, or under some conditions? I mean, I had the
> impression that it was only non-GA 4.0 indexes. And was it only 4.10 that
> was doing this, or 4.0 GA through 4.9 as well?

The bug only affected people using the 4.10.0 release to read 4.0
beta/final segments (it thought they were 3x indexes).

>
> In any case, I’m still not clear on the direct benefits to users of, say,
> 4.9 upgrading to 5.0 indexes. Any performance improvement? Any disk space
> reduction? Any RAM reduction?

Again, read through all the stuff Robert has mentioned, read through
lucene/CHANGES.txt, and read the issues that are currently open. Your previous
comments have suggested that users would upgrade to 5.0 only so that they can
eventually upgrade to 6.0, implying they wouldn't upgrade their indexes for
minor releases. That is simply not good advice. Look back at 4.9 and
4.10 for recent improvements in heap usage for doc values and norms, for
example. Going back farther, someone still on 4.0 doesn't benefit from the
postings format improvements in 4.1. Users should upgrade their format
whenever possible because improvements are always happening.

>
> -- Jack Krupansky
>
> From: Ryan Ernst
> Sent: Sunday, October 5, 2014 12:24 AM
> To: dev@lucene.apache.org
> Subject: Re: 5.0 release status?
>
>
>
> On Oct 4, 2014 9:13 PM, "Jack Krupansky"  wrote:
> >
> > Thanks for the further clarification. In short, the legacy of 3.x
> > support was destabilizing 4.x itself (including testing), not just
> > interfering with 6.x moving forward beyond 3.x index compatibility. So, 5.x
> > will have less baggage holding it down than 4.x has today.
> >
> > I still need answers to:
> >
> > 1. Will users of 5.0 get any immediate benefit by reindexing or
> > otherwise "upgrading" their 4.x indexes to 5.0?
>
> Yes, for all the reasons Robert already mentioned.
>
> >
> > 2. What is the easiest, most efficient way for users of 5.0 to upgrade
> > their 4.x indexes to 5.0 so that they will not have to worry or do anything
> > when 6.0 comes out?
>
> Again, users should always upgrade if possible. There are improvements
> for memory and speed all the time. Currently they can use the IndexUpgrader
> (offline) or wrap their merge policy with UpgradeIndexMergePolicy (although
> both currently act like an optimize on the old segments; I'm hoping to
> change that soon).
>
> Ryan
>
> >
> > -- Jack Krupansky
> >
> > -Original Message- From: Robert Muir
> > Sent: Saturday, October 4, 2014 10:43 PM
> >
> > To: dev@lucene.apache.org
> > Subject: Re: 5.0 release status?
> >
> > On Sat, Oct 4, 2014 at 12:35 PM, Jack Krupansky wrote:
> >>
> >> I tried to follow all of the trunk 6/branch 5x discussion, but... AFAICT
> >> there was no explicit decision or even implication that a release 5.0 would
> >> be imminent or that there would not be a 4.11 release. AFAICT, the whole
> >> trunk 6/branch 5x decision was more related to wanting to have a trunk that
> >> eliminated the 4x deprecations and was no longer constrained by
> >> compatibility with the 4x index – let me know if I am wrong about that in
> >> any way! But I did see a comment on one Jira referring to “preparation for a
> >> 5.0 release”, so I wanted to inquire about intentions. So, is a 5.0 release
> >> “coming soon”, or are 4.11, 4.12, 4.13... equally likely?
> >
> >
> > I created a branch_5x because 3.x index support was responsible for
> > multiple recent corruption bugs, some of which started impacting 4.x
> > indexes.
> >
> > Especially bad were:
> > LUCENE-5907: 3.x back compat code corrupts (not just can't read) your index.
> > LUCENE-5934: 3.x back compat code corrupts (not just can't read) your 4.0 index.
> > LUCENE-5975: 3.x back compat code reports a false corruption (was
> > indeed a bug in those versions of lucene) for 3.0-3.3 indexes.
> >
> > Whenever I see patterns in corruptions then I see it as a systemic
> > problem and aggressively work to do something about it. I've seen
> > several lately, but these are the relevant ones:
> >
> > 3.x back compat: 3.x didn't have a codec API, so it's wedged in, and
> > pretty hard. It's not that we were lazy; it's that it's radically
> > different: it doesn't separate data by fields, sorts terms differently,
> > uses shared docstores, writes field numbers implicitly, ... We try to
> > emulate it the best we can for testing, but the emulation can't really
> > be perfect, so in such places: surprise, bugs. The only way to stop
> > these corruptions is to stop supporting it.
> >
> > test infrastructure: IMO lucene 4 wasn't really ready to support
> > multiple index formats from a test perspective, so we cheated and tried
> > to emulate old formats and rotated them across all tests. This works
> > OK, but it's horrible to debug (since these are essentially integration
> > tests), the false failure rate is extremely high, and the complexity of
> > the implementation is high.

Re: 5.0 release status?

2014-10-04 Thread Jack Krupansky
Maybe I just can’t fully make sense of LUCENE-5934 – does it corrupt all 4.x 
indexes, or some, or under some conditions? I mean, I had the impression that 
it was only non-GA 4.0 indexes. And was it only 4.10 that was doing this, or 
4.0 GA through 4.9 as well?

In any case, I’m still not clear on the direct benefits to users of, say, 4.9 
upgrading to 5.0 indexes. Any performance improvement? Any disk space 
reduction? Any RAM reduction?

-- Jack Krupansky

From: Ryan Ernst 
Sent: Sunday, October 5, 2014 12:24 AM
To: dev@lucene.apache.org 
Subject: Re: 5.0 release status?


On Oct 4, 2014 9:13 PM, "Jack Krupansky"  wrote:
>
> Thanks for the further clarification. In short, the legacy of 3.x support was 
> destabilizing 4.x itself (including testing), not just interfering with 6.x 
> moving forward beyond 3.x index compatibility. So, 5.x will have less baggage 
> holding it down than 4.x has today.
>
> I still need answers to:
>
> 1. Will users of 5.0 get any immediate benefit by reindexing or otherwise 
> "upgrading" their 4.x indexes to 5.0?

Yes, for all the reasons Robert already mentioned.

>
> 2. What is the easiest, most efficient way for users of 5.0 to upgrade their 
> 4.x indexes to 5.0 so that they will not have to worry or do anything when 
> 6.0 comes out?

Again, users should always upgrade if possible. There are improvements for 
memory and speed all the time. Currently they can use the IndexUpgrader 
(offline) or wrap their merge policy with UpgradeIndexMergePolicy (although 
both currently act like an optimize on the old segments; I'm hoping to change 
that soon).
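The two upgrade paths Ryan describes can be sketched roughly as below. This is an illustrative sketch against the 5.x-era API, not code from the thread: the index path is a placeholder, exact constructor signatures differ between 4.x and 5.x releases, and you would normally pick one path, not run both.

```java
import java.nio.file.Paths;

import org.apache.lucene.index.IndexUpgrader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.TieredMergePolicy;
import org.apache.lucene.index.UpgradeIndexMergePolicy;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class UpgradeSketch {
  public static void main(String[] args) throws Exception {
    try (Directory dir = FSDirectory.open(Paths.get("/path/to/index"))) {

      // Path 1: offline, one-shot upgrade. Rewrites every segment that is
      // not already in the current format.
      new IndexUpgrader(dir).upgrade();

      // Path 2: upgrade as a side effect of normal writing, by wrapping
      // the merge policy so old-format segments are rewritten when merged.
      IndexWriterConfig iwc = new IndexWriterConfig(null); // analyzer unused for merging
      iwc.setMergePolicy(new UpgradeIndexMergePolicy(new TieredMergePolicy()));
      try (IndexWriter writer = new IndexWriter(dir, iwc)) {
        writer.forceMerge(1); // as noted above, currently acts like an optimize
      }
    }
  }
}
```

The IndexUpgrader path is the offline option mentioned above; the wrapped merge policy is the online one, paying the rewrite cost during ordinary merges.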

Ryan

>
> -- Jack Krupansky
>
> -Original Message- From: Robert Muir
> Sent: Saturday, October 4, 2014 10:43 PM
>
> To: dev@lucene.apache.org
> Subject: Re: 5.0 release status?
>
> On Sat, Oct 4, 2014 at 12:35 PM, Jack Krupansky  
> wrote:
>>
>> I tried to follow all of the trunk 6/branch 5x discussion, but... AFAICT
>> there was no explicit decision or even implication that a release 5.0 would
>> be imminent or that there would not be a 4.11 release. AFAICT, the whole
>> trunk 6/branch 5x decision was more related to wanting to have a trunk that
>> eliminated the 4x deprecations and was no longer constrained by
>> compatibility with the 4x index – let me know if I am wrong about that in
>> any way! But I did see a comment on one Jira referring to “preparation for a
>> 5.0 release”, so I wanted to inquire about intentions. So, is a 5.0 release
>> “coming soon”, or are 4.11, 4.12, 4.13... equally likely?
>
>
> I created a branch_5x because 3.x index support was responsible for
> multiple recent corruption bugs, some of which started impacting 4.x
> indexes.
>
> Especially bad were:
> LUCENE-5907: 3.x back compat code corrupts (not just can't read) your index.
> LUCENE-5934: 3.x back compat code corrupts (not just can't read) your 4.0 
> index.
> LUCENE-5975: 3.x back compat code reports a false corruption (was
> indeed a bug in those versions of lucene) for 3.0-3.3 indexes.
>
> Whenever I see patterns in corruptions then I see it as a systemic
> problem and aggressively work to do something about it. I've seen
> several lately, but these are the relevant ones:
>
> 3.x back compat: 3.x didn't have a codec API, so it's wedged in, and
> pretty hard. It's not that we were lazy; it's that it's radically
> different: it doesn't separate data by fields, sorts terms differently,
> uses shared docstores, writes field numbers implicitly, ... We try to
> emulate it the best we can for testing, but the emulation can't really
> be perfect, so in such places: surprise, bugs. The only way to stop
> these corruptions is to stop supporting it.
>
> test infrastructure: IMO lucene 4 wasn't really ready to support
> multiple index formats from a test perspective, so we cheated and tried
> to emulate old formats and rotated them across all tests. This works
> OK, but it's horrible to debug (since these are essentially integration
> tests), the false failure rate is extremely high, and the complexity of
> the implementation is high. It's not just that it fails to find some
> bugs; it was actually directly
> responsible for corruption bugs like LUCENE-5377. But throughout 4.x,
> we have fixed the situation and added BaseXYZFormat tests for each
> part of an index format. Now we have reliable unit tests for each part
> of the abstract codec API: adding new tests here finds old bugs and
> prevents new ones in the future. For example I fixed several minor
> bugs in 4.x's CFS code just the last few days with this approach.
>
> there are also other patterns like deleting files, commit fallback
> logic, exception handling, addIndexes, etc that we have put
> substantial work into recently for 5.0. Whatever was safe to backport
> to bugfix releases, we tried, but some of these kinds of "fixes" are
> just too heavy for a bugfix branch, and many just cannot even be done
> as long as 3.x support exists. There is also some hardening in the 5.0
> index format itself 

[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_20) - Build # 11388 - Still Failing!

2014-10-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11388/
Java: 64bit/jdk1.8.0_20 -XX:+UseCompressedOops -XX:+UseParallelGC

9 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.TestLBHttpSolrServer

Error Message:
ERROR: SolrIndexSearcher opens=20 closes=19

Stack Trace:
java.lang.AssertionError: ERROR: SolrIndexSearcher opens=20 closes=19
at __randomizedtesting.SeedInfo.seed([FA3DEDEFD872D217]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.SolrTestCaseJ4.endTrackingSearchers(SolrTestCaseJ4.java:440)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:187)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:790)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.TestLBHttpSolrServer

Error Message:
20 threads leaked from SUITE scope at org.apache.solr.client.solrj.TestLBHttpSolrServer:
   1) Thread[id=139, name=qtp1917190768-139, state=TIMED_WAITING, group=TGRP-TestLBHttpSolrServer]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:342)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:526)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.access$600(QueuedThreadPool.java:44)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
        at java.lang.Thread.run(Thread.java:745)
   2) Thread[id=141, name=searcherExecutor-90-thread-1, state=WAITING, group=TGRP-TestLBHttpSolrServer]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
   3) Thread[id=149, name=qtp2126725885-149, state=TIMED_WAITING, group=TGRP-TestLBHttpSolrServer]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.lock

Re: 5.0 release status?

2014-10-04 Thread Ryan Ernst
On Oct 4, 2014 9:13 PM, "Jack Krupansky" wrote:
>
> Thanks for the further clarification. In short, the legacy of 3.x support
> was destabilizing 4.x itself (including testing), not just interfering with
> 6.x moving forward beyond 3.x index compatibility. So, 5.x will have less
> baggage holding it down than 4.x has today.
>
> I still need answers to:
>
> 1. Will users of 5.0 get any immediate benefit by reindexing or otherwise
> "upgrading" their 4.x indexes to 5.0?

Yes, for all the reasons Robert already mentioned.

>
> 2. What is the easiest, most efficient way for users of 5.0 to upgrade
> their 4.x indexes to 5.0 so that they will not have to worry or do anything
> when 6.0 comes out?

Again, users should always upgrade if possible. There are improvements for
memory and speed all the time. Currently they can use the IndexUpgrader
(offline) or wrap their merge policy with UpgradeIndexMergePolicy (although
both currently act like an optimize on the old segments; I'm hoping to
change that soon).

Ryan

>
> -- Jack Krupansky
>
> -Original Message- From: Robert Muir
> Sent: Saturday, October 4, 2014 10:43 PM
>
> To: dev@lucene.apache.org
> Subject: Re: 5.0 release status?
>
> On Sat, Oct 4, 2014 at 12:35 PM, Jack Krupansky wrote:
>>
>> I tried to follow all of the trunk 6/branch 5x discussion, but... AFAICT
>> there was no explicit decision or even implication that a release 5.0 would
>> be imminent or that there would not be a 4.11 release. AFAICT, the whole
>> trunk 6/branch 5x decision was more related to wanting to have a trunk that
>> eliminated the 4x deprecations and was no longer constrained by
>> compatibility with the 4x index – let me know if I am wrong about that in
>> any way! But I did see a comment on one Jira referring to “preparation for a
>> 5.0 release”, so I wanted to inquire about intentions. So, is a 5.0 release
>> “coming soon”, or are 4.11, 4.12, 4.13... equally likely?
>
>
> I created a branch_5x because 3.x index support was responsible for
> multiple recent corruption bugs, some of which started impacting 4.x
> indexes.
>
> Especially bad were:
> LUCENE-5907: 3.x back compat code corrupts (not just can't read) your index.
> LUCENE-5934: 3.x back compat code corrupts (not just can't read) your 4.0 index.
> LUCENE-5975: 3.x back compat code reports a false corruption (was
> indeed a bug in those versions of lucene) for 3.0-3.3 indexes.
>
> Whenever I see patterns in corruptions then I see it as a systemic
> problem and aggressively work to do something about it. I've seen
> several lately, but these are the relevant ones:
>
> 3.x back compat: 3.x didn't have a codec API, so it's wedged in, and
> pretty hard. It's not that we were lazy; it's that it's radically
> different: it doesn't separate data by fields, sorts terms differently,
> uses shared docstores, writes field numbers implicitly, ... We try to
> emulate it the best we can for testing, but the emulation can't really
> be perfect, so in such places: surprise, bugs. The only way to stop
> these corruptions is to stop supporting it.
>
> test infrastructure: IMO lucene 4 wasn't really ready to support
> multiple index formats from a test perspective, so we cheated and tried
> to emulate old formats and rotated them across all tests. This works
> OK, but it's horrible to debug (since these are essentially integration
> tests), the false failure rate is extremely high, and the complexity of
> the implementation is high. It's not just that it fails to find some
> bugs; it was actually directly
> responsible for corruption bugs like LUCENE-5377. But throughout 4.x,
> we have fixed the situation and added BaseXYZFormat tests for each
> part of an index format. Now we have reliable unit tests for each part
> of the abstract codec API: adding new tests here finds old bugs and
> prevents new ones in the future. For example I fixed several minor
> bugs in 4.x's CFS code just the last few days with this approach.
>
> There are also other patterns, like deleting files, commit fallback
> logic, exception handling, addIndexes, etc., that we have put
> substantial work into recently for 5.0. Whatever was safe to backport
> to bugfix releases, we tried, but some of these kinds of "fixes" are
> just too heavy for a bugfix branch, and many just cannot even be done
> as long as 3.x support exists. There is also some hardening in the 5.0
> index format itself that really could not happen correctly as long as
> we must support 3.x.
>
> So it's not just that 3.x causes corruption bugs; it also prevents us
> from moving forward and actually tackling these other issues. This is
> important to do or we will just continue to "tread water" and not
> actually get ahead of them. So I did something about it and created a
> 5.x branch. Worst case, nobody would follow along, but I guess I just
> assumed the situation was widely understood.
>
>>
>> Open questions: What is Heliosearch up to, and what are Elasticsearch’s
>> intentions?
>>
>
> I don't see how this is relevant. The straw that broke the camel's back
> for me was LUCENE-5934, and it doesn't impact elasticsearch.

Re: 5.0 release status?

2014-10-04 Thread Jack Krupansky
Thanks for the further clarification. In short, the legacy of 3.x support 
was destabilizing 4.x itself (including testing), not just interfering with 
6.x moving forward beyond 3.x index compatibility. So, 5.x will have less 
baggage holding it down than 4.x has today.


I still need answers to:

1. Will users of 5.0 get any immediate benefit by reindexing or otherwise 
"upgrading" their 4.x indexes to 5.0?


2. What is the easiest, most efficient way for users of 5.0 to upgrade their 
4.x indexes to 5.0 so that they will not have to worry or do anything when 
6.0 comes out?


-- Jack Krupansky

-----Original Message-----
From: Robert Muir

Sent: Saturday, October 4, 2014 10:43 PM
To: dev@lucene.apache.org
Subject: Re: 5.0 release status?

On Sat, Oct 4, 2014 at 12:35 PM, Jack Krupansky wrote:

I tried to follow all of the trunk 6/branch 5x discussion, but... AFAICT
there was no explicit decision or even implication that a release 5.0 would
be imminent or that there would not be a 4.11 release. AFAICT, the whole
trunk 6/branch 5x decision was more related to wanting to have a trunk that
eliminated the 4x deprecations and was no longer constrained by
compatibility with the 4x index – let me know if I am wrong about that in
any way! But I did see a comment on one Jira referring to “preparation for a
5.0 release”, so I wanted to inquire about intentions. So, is a 5.0 release
“coming soon”, or are 4.11, 4.12, 4.13... equally likely?


I created a branch_5x because 3.x index support was responsible for
multiple recent corruption bugs, some of which started impacting 4.x
indexes.

Especially bad were:
LUCENE-5907: 3.x back compat code corrupts (not just can't read) your index.
LUCENE-5934: 3.x back compat code corrupts (not just can't read) your 4.0 index.
LUCENE-5975: 3.x back compat code reports a false corruption (was
indeed a bug in those versions of lucene) for 3.0-3.3 indexes.

Whenever I see patterns in corruptions, I see it as a systemic
problem and aggressively work to do something about it. I've seen
several lately, but these are the relevant ones:

3.x back compat: 3.x didn't have a codec API, so it's wedged in, and
pretty hard. It's not that we were lazy; it's that it's radically
different: doesn't separate data by fields, sorts terms differently,
uses shared docstores, writes field numbers implicitly, ... We try to
emulate it the best we can for testing, but the emulation can't really
be perfect, so in such places: surprise, bugs. The only way to stop
these corruptions is to stop supporting it.

test infrastructure: IMO lucene 4 wasn't really ready to support
multiple index formats from a test perspective, so we cheated and tried
to emulate old formats and rotate them across all tests. This works
ok, but it's horrible to debug (since these are essentially integration
tests), the false failure rate is extremely high, and the complexity of
the implementation is high. It's not just that it fails to find some
bugs; it was actually directly responsible for corruption bugs like
LUCENE-5377. But throughout 4.x, we have fixed the situation and added
BaseXYZFormat tests for each part of an index format. Now we have
reliable unit tests for each part of the abstract codec API: adding new
tests here finds old bugs and prevents new ones in the future. For
example I fixed several minor bugs in 4.x's CFS code in just the last
few days with this approach.

There are also other patterns, like deleting files, commit fallback
logic, exception handling, addIndexes, etc., that we have put
substantial work into recently for 5.0. Whatever was safe to backport
to bugfix releases, we tried, but some of these kinds of "fixes" are
just too heavy for a bugfix branch, and many just cannot even be done
as long as 3.x support exists. There is also some hardening in the 5.0
index format itself that really could not happen correctly as long as
we must support 3.x.

So it's not just that 3.x causes corruption bugs; it also prevents us
from moving forward and actually tackling these other issues. This is
important to do or we will just continue to "tread water" and not
actually get ahead of them. So I did something about it and created a
5.x branch. Worst case, nobody would follow along, but I guess I just
assumed the situation was widely understood.



Open questions: What is Heliosearch up to, and what are Elasticsearch’s
intentions?



I don't see how this is relevant. The straw that broke the camel's back
for me was LUCENE-5934, and it doesn't impact elasticsearch.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org 






[JENKINS-MAVEN] Lucene-Solr-Maven-5.x #723: POMs out of sync

2014-10-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-5.x/723/

6 tests failed.
REGRESSION:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch

Error Message:
expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: expected:<0> but was:<1>
at 
__randomizedtesting.SeedInfo.seed([84FB566655EC1260:51DD87E22B3725C]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.doTest(ChaosMonkeySafeLeaderTest.java:153)


REGRESSION:  
org.apache.solr.client.solrj.SolrExampleBinaryTest.testExampleConfig

Error Message:
Expected mime type application/octet-stream but got text/html. 


Error 404 Can not find: /solr/admin/info/system


HTTP ERROR: 404
Problem accessing /solr/admin/info/system. Reason:
Can not find: /solr/admin/info/system
Powered by Jetty://

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Expected 
mime type application/octet-stream but got text/html. 


Error 404 Can not find: /solr/admin/info/system


HTTP ERROR: 404
Problem accessing /solr/admin/info/system. Reason:
Can not find: /solr/admin/info/system
Powered by Jetty://

at 
__randomizedtesting.SeedInfo.seed([6CCA804E81F6FD0:B1E1BC9C2F188909]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:530)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
at 
org.apache.solr.client.solrj.SolrExampleTests.testExampleConfig(SolrExampleTests.java:220)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 

[jira] [Created] (LUCENE-5988) Tighten up IW's CFS codepath

2014-10-04 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-5988:
---

 Summary: Tighten up IW's CFS codepath
 Key: LUCENE-5988
 URL: https://issues.apache.org/jira/browse/LUCENE-5988
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir


I wanted to tackle this really in LUCENE-5969, but I found dragons here.

* the handling of si.files() logic as it relates to compound files is 
inconsistent. For instance, flush passes trackingdirectorywrapper and has some 
logic, but merge/addindexes pass the raw directory and track things 
differently. Ideally we would just use trackingwrapper consistently, and remove 
CompoundFormat.files().
* merge exception handling is scary: it manually snipes CFS files with 
indexfiledeleter when exceptions happen, which scares me a lot. I can also "see 
things that look like bugs" in this code. Maybe we can clean this up 
(especially if si.files is no longer crazy) and just somehow do a 
ifd.refresh(newseg) in all cases? Somewhat related is LUCENE-5987 but this 
would be a simpler step.
* the timing around setting useCFS boolean is really awkward, e.g. the codec 
will see false when writing CFS files.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_40-ea-b04) - Build # 4354 - Failure!

2014-10-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4354/
Java: 32bit/jdk1.8.0_40-ea-b04 -server -XX:+UseParallelGC

1 tests failed.
REGRESSION:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch

Error Message:
Error CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create 
core [halfcollection_shard1_replica1] Caused by: Could not get shard id for 
core: halfcollection_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error 
CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create core 
[halfcollection_shard1_replica1] Caused by: Could not get shard id for core: 
halfcollection_shard1_replica1
at 
__randomizedtesting.SeedInfo.seed([35C691C0BDEDB058:B4201FD8CAB2D064]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:570)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:583)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:205)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.GeneratedMethodAccessor54.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.ja

[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_40-ea-b04) - Build # 11234 - Still Failing!

2014-10-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11234/
Java: 64bit/jdk1.8.0_40-ea-b04 -XX:+UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.HttpPartitionTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.cloud.HttpPartitionTest:
 1) Thread[id=8655, name=Thread-2764, state=RUNNABLE, 
group=TGRP-HttpPartitionTest] at 
java.net.SocketInputStream.socketRead0(Native Method) at 
java.net.SocketInputStream.socketRead(SocketInputStream.java:116) at 
java.net.SocketInputStream.read(SocketInputStream.java:170) at 
java.net.SocketInputStream.read(SocketInputStream.java:141) at 
org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
 at 
org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84) 
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
 at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
 at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
 at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:260)
 at 
org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
 at 
org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
 at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
 at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:271)
 at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:123)
 at 
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:682)
 at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:486)
 at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
 at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
 at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
 at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:466)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
 at 
org.apache.solr.cloud.ZkController.waitForLeaderToSeeDownState(ZkController.java:1623)
 at 
org.apache.solr.cloud.ZkController.registerAllCoresAsDown(ZkController.java:422)
 at org.apache.solr.cloud.ZkController.access$100(ZkController.java:93) 
at org.apache.solr.cloud.ZkController$1.command(ZkController.java:261)  
   at 
org.apache.solr.common.cloud.ConnectionManager$1$1.run(ConnectionManager.java:166)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.HttpPartitionTest: 
   1) Thread[id=8655, name=Thread-2764, state=RUNNABLE, 
group=TGRP-HttpPartitionTest]
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
at 
org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:260)
at 
org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
at 
org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:271)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:123)
at 
org.apache.http.impl.client.D

Re: 5.0 release status?

2014-10-04 Thread Robert Muir
On Sat, Oct 4, 2014 at 12:35 PM, Jack Krupansky  wrote:
> I tried to follow all of the trunk 6/branch 5x discussion, but... AFAICT
> there was no explicit decision or even implication that a release 5.0 would
> be imminent or that there would not be a 4.11 release. AFAICT, the whole
> trunk 6/branch 5x decision was more related to wanting to have a trunk that
> eliminated the 4x deprecations and was no longer constrained by
> compatibility with the 4x index – let me know if I am wrong about that in
> any way! But I did see a comment on one Jira referring to “preparation for a
> 5.0 release”, so I wanted to inquire about intentions. So, is a 5.0 release
> “coming soon”, or are 4.11, 4.12, 4.13... equally likely?

I created a branch_5x because 3.x index support was responsible for
multiple recent corruption bugs, some of which started impacting 4.x
indexes.

Especially bad were:
LUCENE-5907: 3.x back compat code corrupts (not just can't read) your index.
LUCENE-5934: 3.x back compat code corrupts (not just can't read) your 4.0 index.
LUCENE-5975: 3.x back compat code reports a false corruption (was
indeed a bug in those versions of lucene) for 3.0-3.3 indexes.

Whenever I see patterns in corruptions, I see it as a systemic
problem and aggressively work to do something about it. I've seen
several lately, but these are the relevant ones:

3.x back compat: 3.x didn't have a codec API, so it's wedged in, and
pretty hard. It's not that we were lazy; it's that it's radically
different: doesn't separate data by fields, sorts terms differently,
uses shared docstores, writes field numbers implicitly, ... We try to
emulate it the best we can for testing, but the emulation can't really
be perfect, so in such places: surprise, bugs. The only way to stop
these corruptions is to stop supporting it.

test infrastructure: IMO lucene 4 wasn't really ready to support
multiple index formats from a test perspective, so we cheated and tried
to emulate old formats and rotate them across all tests. This works
ok, but it's horrible to debug (since these are essentially integration
tests), the false failure rate is extremely high, and the complexity of
the implementation is high. It's not just that it fails to find some
bugs; it was actually directly responsible for corruption bugs like
LUCENE-5377. But throughout 4.x, we have fixed the situation and added
BaseXYZFormat tests for each part of an index format. Now we have
reliable unit tests for each part of the abstract codec API: adding new
tests here finds old bugs and prevents new ones in the future. For
example I fixed several minor bugs in 4.x's CFS code in just the last
few days with this approach.
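
[Editorial note: a pseudocode-level sketch of how a per-format BaseXYZFormat
test plugs in, for readers unfamiliar with the pattern. Class and codec names
follow the Lucene 4.x test framework, but exact signatures may differ, and
this is not a self-contained runnable example.]

```java
// Sketch: BasePostingsFormatTestCase randomly writes terms/postings through
// the codec under test and verifies they read back identically, so each part
// of the codec API gets its own focused unit test instead of relying on
// whole-index integration tests.
public class TestLucene410PostingsFormat
    extends org.apache.lucene.index.BasePostingsFormatTestCase {
  @Override
  protected org.apache.lucene.codecs.Codec getCodec() {
    // every test inherited from the base class runs against this codec
    return org.apache.lucene.codecs.Codec.forName("Lucene410");
  }
}
```

Analogous base classes exist for the other parts of an index format (stored
fields, doc values, term vectors, and so on), giving each the same coverage.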

There are also other patterns, like deleting files, commit fallback
logic, exception handling, addIndexes, etc., that we have put
substantial work into recently for 5.0. Whatever was safe to backport
to bugfix releases, we tried, but some of these kinds of "fixes" are
just too heavy for a bugfix branch, and many just cannot even be done
as long as 3.x support exists. There is also some hardening in the 5.0
index format itself that really could not happen correctly as long as
we must support 3.x.

So it's not just that 3.x causes corruption bugs; it also prevents us
from moving forward and actually tackling these other issues. This is
important to do or we will just continue to "tread water" and not
actually get ahead of them. So I did something about it and created a
5.x branch. Worst case, nobody would follow along, but I guess I just
assumed the situation was widely understood.

>
> Open questions: What is Heliosearch up to, and what are Elasticsearch’s
> intentions?
>

I don't see how this is relevant. The straw that broke the camel's back
for me was LUCENE-5934, and it doesn't impact elasticsearch.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: 5.0 release status?

2014-10-04 Thread Jack Krupansky
Thanks for the clarification! I do indeed recall now the portion of the 
discussion about renaming branch_4x to branch_5x, carrying over most of what had 
previously been trunk, with the most notable exception being the trunk war/server 
changes.

To make a long story short, the next non-patch release of Lucene and Solr 
will be 5.0, not 4.11. So, 5.0 should be out within the next couple of 
months.

In terms of compatibility impact, the only big thing is that 5.0 will not 
support 3.x indexes. It will fully support 4.x indexes though, correct? Will 
there be any benefit or reason for people to upgrade their 4.x indexes to 5.0? 
One reason I can think of is so that they will be able to jump from 5.x to 6.0; 
otherwise 6.0 would refuse to accept their 4.x indexes. Can a 4.x index be 
easily upgraded to a 5.x index, like using a utility or optimize?
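
[Editorial note: one answer to the question above, as a hedged sketch. Lucene
ships an IndexUpgrader command-line tool that rewrites all segments of an index
to the current format. The jar name and index path below are hypothetical
placeholders, not taken from this thread, and the index should be backed up
first.]

```shell
# Sketch only: LUCENE_JAR and INDEX_DIR are placeholder values.
LUCENE_JAR="lucene-core-4.10.0.jar"   # your actual lucene-core jar
INDEX_DIR="/var/solr/data/index"      # path to the existing 4.x index

# -delete-prior-commits drops older commit points, which an upgraded
# index could not serve anyway.
CMD="java -cp $LUCENE_JAR org.apache.lucene.index.IndexUpgrader -delete-prior-commits $INDEX_DIR"
echo "$CMD"
```

A forceMerge (the old "optimize") on a newer IndexWriter also rewrites old
segments, but IndexUpgrader is the purpose-built utility for this.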

Do I have everything straight now?

-- Jack Krupansky

From: Ryan Ernst 
Sent: Saturday, October 4, 2014 3:57 PM
To: dev@lucene.apache.org 
Subject: Re: 5.0 release status?

The branch_5x effort is to release what would have been 4.11 as 5.0. The most 
notable reason is backcompat for 3x indexes, which, as Robert has put it, is 
"unmaintainable".

  AFAICT, there isn’t anything super major in 5x that the world is 
super-urgently waiting for (WAR vs. server?)

The WAR removal was not backported to 5x.  It is still on trunk, to be dealt 
with at a later time.

   Otherwise, it seems like we can continue to look at an ongoing stream of 
significant improvements to the 4x branch and that a 5.0 is probably at least a 
year or so off

I don't believe this is correct.  The intent here is to have the next release 
of Lucene be 5.0.  Robert has put in a great deal of effort in making 
improvements in a new Lucene50 codec that were simply not possible on 4x.

  or simply waiting on some major change that actually warrants a 5.0.

There are already some major changes in 5.0: NIO.2, tons more index corruption 
protection, much improved debugging for memory allocation of index structures, 
a simpler tokenizer/analyzer interface without Reader, and RAM usage improvements 
with the 50 codec work so far. 

I know I have a list of things I'd like to do API-wise. IMO, a few months, 
maybe more. 

On Sat, Oct 4, 2014 at 9:35 AM, Jack Krupansky  wrote:

  I tried to follow all of the trunk 6/branch 5x discussion, but... AFAICT 
there was no explicit decision or even implication that a release 5.0 would be 
imminent or that there would not be a 4.11 release. AFAICT, the whole trunk 
6/branch 5x decision was more related to wanting to have a trunk that 
eliminated the 4x deprecations and was no longer constrained by compatibility 
with the 4x index – let me know if I am wrong about that in any way! But I did 
see a comment on one Jira referring to “preparation for a 5.0 release”, so I 
wanted to inquire about intentions. So, is a 5.0 release “coming soon”, or are 
4.11, 4.12, 4.13... equally likely?

  AFAICT, there isn’t anything super major in 5x that the world is 
super-urgently waiting for (WAR vs. server?), and people have been really good 
at making substantial enhancements in the 4x branch, so I would suggest that 
anybody strongly favoring an imminent 5.0 release (next six months) should make 
their case more explicitly. Otherwise, it seems like we can continue to look at 
an ongoing stream of significant improvements to the 4x branch and that a 5.0 
is probably at least a year or so off – or simply waiting on some major change 
that actually warrants a 5.0.

  Open questions: What is Heliosearch up to, and what are Elasticsearch’s 
intentions?

  Comments?

  -- Jack Krupansky


[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.7.0_67) - Build # 11387 - Still Failing!

2014-10-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11387/
Java: 64bit/jdk1.7.0_67 -XX:+UseCompressedOops -XX:+UseParallelGC

5 tests failed.
FAILED:  org.apache.solr.client.solrj.SolrExampleBinaryTest.testExampleConfig

Error Message:
Expected mime type application/octet-stream but got text/html.

Error 404 Can not find: /solr/admin/info/system

HTTP ERROR: 404
Problem accessing /solr/admin/info/system. Reason:
Can not find: /solr/admin/info/system
Powered by Jetty://

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Expected 
mime type application/octet-stream but got text/html. 


Error 404 Can not find: /solr/admin/info/system


HTTP ERROR: 404
Problem accessing /solr/admin/info/system. Reason:
Can not find: /solr/admin/info/system
Powered by Jetty://

at 
__randomizedtesting.SeedInfo.seed([BE29993E917A2371:9048DA6567DC5A8]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:530)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
at 
org.apache.solr.client.solrj.SolrExampleTests.testExampleConfig(SolrExampleTests.java:220)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.r

[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.8.0) - Build # 1827 - Failure!

2014-10-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1827/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

5 tests failed.
REGRESSION:  
org.apache.solr.client.solrj.SolrExampleBinaryTest.testExampleConfig

Error Message:
Expected mime type application/octet-stream but got text/html.

Error 404 Can not find: /solr/admin/info/system

HTTP ERROR: 404
Problem accessing /solr/admin/info/system. Reason:
Can not find: /solr/admin/info/system
Powered by Jetty://

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Expected 
mime type application/octet-stream but got text/html. 


Error 404 Can not find: /solr/admin/info/system


HTTP ERROR: 404
Problem accessing /solr/admin/info/system. Reason:
Can not find: /solr/admin/info/system
Powered by Jetty://
at 
__randomizedtesting.SeedInfo.seed([60456FADE3C0C372:D7687B3524C725AB]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:530)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
at 
org.apache.solr.client.solrj.SolrExampleTests.testExampleConfig(SolrExampleTests.java:220)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.

[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.9.0-ea-b28) - Build # 11233 - Still Failing!

2014-10-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11233/
Java: 32bit/jdk1.9.0-ea-b28 -server -XX:+UseG1GC

5 tests failed.
FAILED:  org.apache.solr.client.solrj.SolrExampleBinaryTest.testExampleConfig

Error Message:
Expected mime type application/octet-stream but got text/html.   
 
Error 404 Can not find: /solr/admin/info/system   
HTTP ERROR: 404 Problem accessing /solr/admin/info/system. Reason: 
Can not find: /solr/admin/info/system Powered by Jetty:// 

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Expected 
mime type application/octet-stream but got text/html. 


Error 404 Can not find: /solr/admin/info/system


HTTP ERROR: 404
Problem accessing /solr/admin/info/system. Reason:
Can not find: /solr/admin/info/system
Powered by Jetty://
at 
__randomizedtesting.SeedInfo.seed([1B12087177D4DD82:AC3F1CE9B0D33B5B]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:530)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
at 
org.apache.solr.client.solrj.SolrExampleTests.testExampleConfig(SolrExampleTests.java:220)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:484)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.r

[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_20) - Build # 11386 - Failure!

2014-10-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11386/
Java: 32bit/jdk1.8.0_20 -client -XX:+UseSerialGC

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.HttpPartitionTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.cloud.HttpPartitionTest:
 1) Thread[id=6998, name=Thread-2355, state=RUNNABLE, 
group=TGRP-HttpPartitionTest] at 
java.net.SocketInputStream.socketRead0(Native Method) at 
java.net.SocketInputStream.read(SocketInputStream.java:150) at 
java.net.SocketInputStream.read(SocketInputStream.java:121) at 
org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
 at 
org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84) 
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
 at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
 at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
 at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:260)
 at 
org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
 at 
org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
 at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
 at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:271)
 at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:123)
 at 
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:682)
 at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:486)
 at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
 at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
 at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
 at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:466)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
 at 
org.apache.solr.cloud.ZkController.waitForLeaderToSeeDownState(ZkController.java:1623)
 at 
org.apache.solr.cloud.ZkController.registerAllCoresAsDown(ZkController.java:422)
 at org.apache.solr.cloud.ZkController.access$100(ZkController.java:93) 
at org.apache.solr.cloud.ZkController$1.command(ZkController.java:261)  
   at 
org.apache.solr.common.cloud.ConnectionManager$1$1.run(ConnectionManager.java:166)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.HttpPartitionTest: 
   1) Thread[id=6998, name=Thread-2355, state=RUNNABLE, 
group=TGRP-HttpPartitionTest]
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:150)
at java.net.SocketInputStream.read(SocketInputStream.java:121)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
at 
org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:260)
at 
org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
at 
org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:271)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:123)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:682)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:486)
 

[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 1867 - Still Failing!

2014-10-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1867/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

3 tests failed.
REGRESSION:  org.apache.solr.cloud.ShardSplitTest.testDistribSearch

Error Message:
Timeout occured while waiting response from server at: http://127.0.0.1:49871/z_

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:49871/z_
at 
__randomizedtesting.SeedInfo.seed([DB97933C1D544E6:8C5FF72BB68A24DA]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:582)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.cloud.ShardSplitTest.splitShard(ShardSplitTest.java:532)
at 
org.apache.solr.cloud.ShardSplitTest.incompleteOrOverlappingCustomRangeTest(ShardSplitTest.java:151)
at org.apache.solr.cloud.ShardSplitTest.doTest(ShardSplitTest.java:103)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.uti

[jira] [Updated] (LUCENE-5987) Make indexwriter a mere mortal when exceptions strike

2014-10-04 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-5987:

Description: 
IndexWriter's exception handling is overly complicated. Every method in general 
reads like this:

{code}
try {
  try {
    try {
      ...
      // lock order: COMPLICATED
      synchronized(this or that) {
      }
      ...
    } finally {
      if (!success5) {
        deleter.deleteThisFileOrThat();
      }
      ...
    }
  }
}
{code}

Part of the problem is it acts like it's an invincible superhero, e.g. it can 
take a disk full on merge or flush to the face and just keep on trucking, and 
you can somehow fix the root cause and then just go about making commits on the 
same instance.

But we have a hard enough time ensuring exceptions don't do the wrong thing 
(e.g. cause corruption), and I don't think we really test this crazy behavior 
anywhere: e.g. making commits AFTER hitting disk full and so on.

It would probably be simpler if, when such things happen, IW just considered 
them "tragic" just like OOM and rolled itself back, instead of doing all kinds 
of really scary stuff to try to "keep itself healthy" (like the little dance it 
plays with IFD in mergeMiddle manually deleting CFS files).

Besides, without something like a WAL, IndexWriter isn't really fit to be a 
superhero anyway: it can't prevent you from losing data in such situations. It 
just doesn't have the right tools for the job.

edit: just to be clear I am referring to abort (low level exception during 
flush) and exceptions during merge. For simple non-aborting cases like analyzer 
errors, of course we can deal with this. We already made great progress on 
turning a lot of BS exceptions that would cause aborts into non-aborting ones 
recently.

  was:
IndexWriter's exception handling is overly complicated. Every method in general 
reads like this:

{code}
try {
  try {
    try {
      ...
      // lock order: COMPLICATED
      synchronized(this or that) {
      }
      ...
    } finally {
      if (!success5) {
        deleter.deleteThisFileOrThat();
      }
      ...
    }
  }
}
{code}

Part of the problem is it acts like it's an invincible superhero, e.g. it can 
take a disk full on merge or flush to the face and just keep on trucking, and 
you can somehow fix the root cause and then just go about making commits on the 
same instance.

But we have a hard enough time ensuring exceptions don't do the wrong thing 
(e.g. cause corruption), and I don't think we really test this crazy behavior 
anywhere: e.g. making commits AFTER hitting disk full and so on.

It would probably be simpler if, when such things happen, IW just considered 
them "tragic" just like OOM and rolled itself back, instead of doing all kinds 
of really scary stuff to try to "keep itself healthy" (like the little dance it 
plays with IFD in maybeMerge manually deleting CFS files).

Besides, without something like a WAL, IndexWriter isn't really fit to be a 
superhero anyway: it can't prevent you from losing data in such situations. It 
just doesn't have the right tools for the job.


> Make indexwriter a mere mortal when exceptions strike
> -
>
> Key: LUCENE-5987
> URL: https://issues.apache.org/jira/browse/LUCENE-5987
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Robert Muir
>
> IndexWriter's exception handling is overly complicated. Every method in 
> general reads like this:
> {code}
> try {
>   try {
>     try {
>       ...
>       // lock order: COMPLICATED
>       synchronized(this or that) {
>       }
>       ...
>     } finally {
>       if (!success5) {
>         deleter.deleteThisFileOrThat();
>       }
>       ...
>     }
>   }
> }
> {code}
> Part of the problem is it acts like it's an invincible superhero, e.g. it can 
> take a disk full on merge or flush to the face and just keep on trucking, and 
> you can somehow fix the root cause and then just go about making commits on 
> the same instance.
> But we have a hard enough time ensuring exceptions don't do the wrong thing 
> (e.g. cause corruption), and I don't think we really test this crazy behavior 
> anywhere: e.g. making commits AFTER hitting disk full and so on.
> It would probably be simpler if, when such things happen, IW just considered 
> them "tragic" just like OOM and rolled itself back, instead of doing all 
> kinds of really scary stuff to try to "keep itself healthy" (like the little 
> dance it plays with IFD in mergeMiddle manually deleting CFS files).
> Besides, without something like a WAL, IndexWriter isn't really fit to be a 
> superhero anyway: it can't prevent you from losing data in such situations. 
> It just doesn't have the right tools for the job.
> edit: just to be clear I am referring to abort (low level exception during 
> flush) and exceptions during merge. For simple non-aborting cases like 
> analyzer 

[jira] [Created] (LUCENE-5987) Make indexwriter a mere mortal when exceptions strike

2014-10-04 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-5987:
---

 Summary: Make indexwriter a mere mortal when exceptions strike
 Key: LUCENE-5987
 URL: https://issues.apache.org/jira/browse/LUCENE-5987
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir


IndexWriter's exception handling is overly complicated. Every method in general 
reads like this:

{code}
try {
  try {
    try {
      ...
      // lock order: COMPLICATED
      synchronized(this or that) {
      }
      ...
    } finally {
      if (!success5) {
        deleter.deleteThisFileOrThat();
      }
      ...
    }
  }
}
{code}

Part of the problem is it acts like it's an invincible superhero, e.g. it can 
take a disk full on merge or flush to the face and just keep on trucking, and 
you can somehow fix the root cause and then just go about making commits on the 
same instance.

But we have a hard enough time ensuring exceptions don't do the wrong thing 
(e.g. cause corruption), and I don't think we really test this crazy behavior 
anywhere: e.g. making commits AFTER hitting disk full and so on.

It would probably be simpler if, when such things happen, IW just considered 
them "tragic" just like OOM and rolled itself back, instead of doing all kinds 
of really scary stuff to try to "keep itself healthy" (like the little dance it 
plays with IFD in maybeMerge manually deleting CFS files).

Besides, without something like a WAL, IndexWriter isn't really fit to be a 
superhero anyway: it can't prevent you from losing data in such situations. It 
just doesn't have the right tools for the job.
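[Editor's note] To make the proposed direction concrete, here is a minimal, hypothetical plain-Java sketch (class and method names are mine, not Lucene's): on the first unrecoverable exception the writer records it as a "tragedy", rolls back, and rejects all further operations, instead of trying to stay healthy.

```java
import java.io.IOException;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical illustration of the "tragic event" pattern proposed in
// LUCENE-5987: one unrecoverable failure permanently closes the writer.
class MortalWriter {
    // Records the first fatal error; null means the writer is still usable.
    private final AtomicReference<Throwable> tragedy = new AtomicReference<>();

    void commit() throws IOException {
        ensureOpen();
        try {
            flushSegments(); // may hit disk-full, etc.
        } catch (IOException t) {
            // Record the tragedy and roll back instead of limping on.
            tragedy.compareAndSet(null, t);
            rollbackInternal();
            throw t;
        }
    }

    boolean isOpen() {
        return tragedy.get() == null;
    }

    private void ensureOpen() throws IOException {
        Throwable t = tragedy.get();
        if (t != null) {
            throw new IOException("this writer hit an unrecoverable error and is closed", t);
        }
    }

    // Stubs standing in for the real index machinery.
    protected void flushSegments() throws IOException {}
    private void rollbackInternal() {}
}
```

Once `tragedy` is set, every later call fails fast through `ensureOpen()`, which removes the need to keep the "fix disk full, then commit on the same instance" path correct.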



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6588) Combination of nested documents and incremental partial update on int field does not work

2014-10-04 Thread Ali Nzm (JIRA)
Ali Nzm created SOLR-6588:
-

 Summary: Combination of nested documents and incremental partial 
update on int field does not work
 Key: SOLR-6588
 URL: https://issues.apache.org/jira/browse/SOLR-6588
 Project: Solr
  Issue Type: Bug
  Components: SolrJ
Affects Versions: 4.10, 4.9
Reporter: Ali Nzm


When you combine nested documents with an incremental partial update (on an int 
field) for the same Solr document, the nested part does not work. This problem 
exists on both the 4.9 and 4.10 versions.
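[Editor's note] For context, a hedged sketch (field names hypothetical, not from the report) of the two operations being combined: a parent document carrying a nested child, followed by an atomic "inc" partial update on an int field of the same parent, shown as plain JSON update payloads rather than through SolrJ.

```java
// Hypothetical reproduction sketch for SOLR-6588: a parent with a nested
// child document, then an atomic "inc" update targeting the same parent.
class Solr6588Sketch {
    // Step 1: index the parent with a nested child document.
    static String addWithChild() {
        return "{\"add\":{\"doc\":{"
             + "\"id\":\"parent-1\","
             + "\"views_i\":0,"
             + "\"_childDocuments_\":[{\"id\":\"child-1\"}]"
             + "}}}";
    }

    // Step 2: atomic update incrementing the int field by 1.
    static String incrementViews() {
        return "{\"add\":{\"doc\":{"
             + "\"id\":\"parent-1\","
             + "\"views_i\":{\"inc\":1}"
             + "}}}";
    }

    public static void main(String[] args) {
        System.out.println(addWithChild());
        System.out.println(incrementViews());
    }
}
```

Per the report, after step 2 the child relationship is lost; the payloads above are only meant to pin down which combination is being described.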



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_40-ea-b04) - Build # 11232 - Failure!

2014-10-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11232/
Java: 64bit/jdk1.8.0_40-ea-b04 -XX:-UseCompressedOops -XX:+UseSerialGC

5 tests failed.
REGRESSION:  
org.apache.solr.client.solrj.SolrExampleBinaryTest.testExampleConfig

Error Message:
Expected mime type application/octet-stream but got text/html.   
 
Error 404 Can not find: /solr/admin/info/system   
HTTP ERROR: 404 Problem accessing /solr/admin/info/system. Reason: 
Can not find: /solr/admin/info/system Powered by Jetty:// 

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Expected 
mime type application/octet-stream but got text/html. 


Error 404 Can not find: /solr/admin/info/system


HTTP ERROR: 404
Problem accessing /solr/admin/info/system. Reason:
Can not find: /solr/admin/info/system
Powered by Jetty://
at 
__randomizedtesting.SeedInfo.seed([5AEE7A45069E21BD:EDC36EDDC199C764]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:530)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
at 
org.apache.solr.client.solrj.SolrExampleTests.testExampleConfig(SolrExampleTests.java:220)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carro

Re: 5.0 release status?

2014-10-04 Thread Ryan Ernst
The branch_5x effort is to release what would have been 4.11 as 5.0.  The
most notable reason is backcompat for 3x indexes, which, as Robert has
put it, is "unmaintainable".

> AFAICT, there isn’t anything super major in 5x that the world is
> super-urgently waiting for (WAR vs. server?)


The WAR removal was not backported to 5x.  It is still on trunk, to be
dealt with at a later time.

> Otherwise, it seems like we can continue to look at an ongoing stream of
> significant improvements to the 4x branch and that a 5.0 is probably at
> least a year or so off


I don't believe this is correct.  The intent here is to have the next
release of Lucene be 5.0.  Robert has put in a great deal of effort in
making improvements in a new Lucene50 codec that were simply not possible
on 4x.

> or simply waiting on some major change that actually warrants a 5.0.


There are already some major changes in 5.0: nio2, tons more index
corruption protection, super improved debugging for memory allocation of
index structures, simpler tokenizer/analyzer interface without Reader, ram
usage improvements with the 50 codec work so far.

I know I have a list of things I'd like to do API-wise. IMO, a few months,
maybe more.

On Sat, Oct 4, 2014 at 9:35 AM, Jack Krupansky 
wrote:

>   I tried to follow all of the trunk 6/branch 5x discussion, but...
> AFAICT there was no explicit decision or even implication that a release
> 5.0 would be imminent or that there would not be a 4.11 release. AFAICT,
> the whole trunk 6/branch 5x decision was more related to wanting to have a
> trunk that eliminated the 4x deprecations and was no longer constrained by
> compatibility with the 4x index – let me know if I am wrong about that in
> any way! But I did see a comment on one Jira referring to “preparation for
> a 5.0 release”, so I wanted to inquire about intentions. So, is a 5.0
> release “coming soon”, or are 4.11, 4.12, 4.13... equally likely?
>
> AFAICT, there isn’t anything super major in 5x that the world is
> super-urgently waiting for (WAR vs. server?), and people have been really
> good at making substantial enhancements in the 4x branch, so I would
> suggest that anybody strongly favoring an imminent 5.0 release (next six
> months) should make their case more explicitly. Otherwise, it seems like we
> can continue to look at an ongoing stream of significant improvements to
> the 4x branch and that a 5.0 is probably at least a year or so off – or
> simply waiting on some major change that actually warrants a 5.0.
>
> Open questions: What is Heliosearch up to, and what are Elasticsearch’s
> intentions?
>
> Comments?
>
> -- Jack Krupansky
>


Re: 5.0 release status?

2014-10-04 Thread Shawn Heisey
On 10/4/2014 10:35 AM, Jack Krupansky wrote:
> I tried to follow all of the trunk 6/branch 5x discussion, but... AFAICT
> there was no explicit decision or even implication that a release 5.0
> would be imminent or that there would not be a 4.11 release. AFAICT, the
> whole trunk 6/branch 5x decision was more related to wanting to have a
> trunk that eliminated the 4x deprecations and was no longer constrained
> by compatibility with the 4x index – let me know if I am wrong about
> that in any way! But I did see a comment on one Jira referring to
> “preparation for a 5.0 release”, so I wanted to inquire about
> intentions. So, is a 5.0 release “coming soon”, or are 4.11, 4.12,
> 4.13... equally likely?
>  
> AFAICT, there isn’t anything super major in 5x that the world is
> super-urgently waiting for (WAR vs. server?), and people have been
> really good at making substantial enhancements in the 4x branch, so I
> would suggest that anybody strongly favoring an imminent 5.0 release
> (next six months) should make their case more explicitly. Otherwise, it
> seems like we can continue to look at an ongoing stream of significant
> improvements to the 4x branch and that a 5.0 is probably at least a year
> or so off – or simply waiting on some major change that actually
> warrants a 5.0.
>  
> Open questions: What is Heliosearch up to, and what are Elasticsearch’s
> intentions?

I think you're right when you say that freeing trunk from compatibility
hell is a primary goal.

In SVN, branch_4x has been eliminated and branch_5x now exists.  We took
a roundabout path -- if I grok it correctly, branch_4x was renamed to
branch_5x and large-scale code changes were backported from trunk.  That
must have been quite a job, so many thanks to Robert for that effort.

I think that any further 4.x releases will only be point releases for
bugfixes on 4.10.  We currently don't have an easy way to build a new
4.x release, so the next feature release will be 5.0.

At this moment, branch_5x builds a war, not a server application.  I'm
still interested in changing that, and I believe that is the plan, but
as far as I know, no actual work has been done on the transition.  That
work is likely to take a while to become stable, so a timely 5.0 release
required restoring the war to 5x.

I am fairly sure the work for a standalone Solr server will happen on
trunk, and if the changes aren't extraordinarily drastic, we can port
the alternate build target to 5.x, and make it the default build target
in a later release.  Since 5.0 will still build a .war file, we probably
need to make a servlet version available for all 5.x releases.  Stay
tuned for info on how that gets managed, because I have no idea. :)
Perhaps breaking up the download into smaller bits can happen on the 5x
branch.

What I've seen from Heliosearch looks really awesome, though I haven't
actually tried it yet.  I'd like to see where that goes.  GC pauses can
be a big problem, so reducing the amount of memory that requires GC is a
great goal.  For elasticsearch, I have zero information.

We probably won't get 5.0 out the door before the end of the year, but
it would be awesome if we did.  Hopefully it won't take six months,
though that wouldn't surprise me.  I'm doing what I can for the cause,
by running a larger test suite than normal.  We've got some insane
resource requirements for some of our non-default tests!  The "@Monster"
designation is fitting.

Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6351) Let Stats Hang off of Pivots (via 'tag')

2014-10-04 Thread Steve Molloy (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Molloy updated SOLR-6351:
---
Attachment: SOLR-6351.patch

Addressing some comments. Removed the unused for-loop and CommonParams.STATS. 
Didn't touch the notSupprted test methods; I'll give Vitaliy a chance to speak 
for their usefulness. Also reverted the hasValues logic, replacing it with a 
check that the current pivot has a positive count, although that does produce 
some stats entries with Infinity minimum/maximum and NaN mean. This is what I 
was asking about before; I think I misunderstood the answer, but it still 
seems error-prone to have such entries...

Finally, I updated some of the outputs to use NamedList instead of maps so 
that the SolrJ binary format works better. I did have to sort fields in 
QueryResponse to get tests to pass; not sure that's the best way, but they 
would sometimes come back out of order otherwise.

> Let Stats Hang off of Pivots (via 'tag')
> 
>
> Key: SOLR-6351
> URL: https://issues.apache.org/jira/browse/SOLR-6351
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Hoss Man
> Attachments: SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
> SOLR-6351.patch, SOLR-6351.patch
>
>
> The goal here is basically to flip the notion of "stats.facet" on its head, 
> so that instead of asking the stats component to also do some faceting 
> (something that's never worked well with the variety of field types and has 
> never worked in distributed mode) we instead ask the PivotFacet code to 
> compute some stats X for each leaf in a pivot.  We'll do this with the 
> existing {{stats.field}} params, but we'll leverage the {{tag}} local param 
> of the {{stats.field}} instances to be able to associate which stats we want 
> hanging off of which {{facet.pivot}}.
> Example...
> {noformat}
> facet.pivot={!stats=s1}category,manufacturer
> stats.field={!key=avg_price tag=s1 mean=true}price
> stats.field={!tag=s1 min=true max=true}user_rating
> {noformat}
> ...with the request above, in addition to computing the min/max user_rating 
> and mean price (labeled "avg_price") over the entire result set, the 
> PivotFacet component will also include those stats for every node of the tree 
> it builds up when generating a pivot of the fields "category,manufacturer"
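For anyone wanting to try the request above from a client, here is a stdlib-only Java sketch (no SolrJ dependency) that assembles the query string. The `facet.pivot` and `stats.field` values are copied verbatim from the example; the class name and the `q`/`facet`/`stats` toggles are illustrative assumptions, not part of the patch.

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.util.LinkedHashMap;
import java.util.Map;

public class PivotStatsRequest {
    // Build the query string for the pivot+stats example from the issue
    // description; local-params syntax is passed through URL-encoded.
    static String buildQuery() throws UnsupportedEncodingException {
        Map<String, String[]> params = new LinkedHashMap<>();
        params.put("q", new String[]{"*:*"});
        params.put("facet", new String[]{"true"});
        params.put("facet.pivot",
                new String[]{"{!stats=s1}category,manufacturer"});
        params.put("stats", new String[]{"true"});
        params.put("stats.field", new String[]{
                "{!key=avg_price tag=s1 mean=true}price",
                "{!tag=s1 min=true max=true}user_rating"});
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String[]> e : params.entrySet()) {
            for (String v : e.getValue()) {          // repeated params allowed
                if (sb.length() > 0) sb.append('&');
                sb.append(e.getKey()).append('=')
                  .append(URLEncoder.encode(v, "UTF-8"));
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(buildQuery());
    }
}
```

Note that `stats.field` appears twice; both instances carry `tag=s1`, which is what ties them to the pivot.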



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Resolved] (SOLR-6587) Misleading exception when creating collections in SolrCloud with bad configuration

2014-10-04 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-6587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe resolved SOLR-6587.
-
   Resolution: Fixed
Fix Version/s: Trunk
   5.0

> Misleading exception when creating collections in SolrCloud with bad 
> configuration
> --
>
> Key: SOLR-6587
> URL: https://issues.apache.org/jira/browse/SOLR-6587
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10.1, 5.0, Trunk
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6587.patch
>
>
> I uploaded a configuration in bad shape to Zookeeper, then tried to create a 
> collection and I was getting: 
> {noformat}
> ERROR - 2014-10-03 16:48:25.712; org.apache.solr.core.CoreContainer; Error 
> creating core [tflobbe_collection1_shard2_replica2]: Could not load conf for 
> core tflobbe_collection1_shard2_replica2: ZkSolrResourceLoader does not 
> support getConfigDir() - likely, what you are trying to do is not supported 
> in ZooKeeper mode
> org.apache.solr.common.SolrException: Could not load conf for core 
> tflobbe_collection1_shard2_replica2: ZkSolrResourceLoader does not support 
> getConfigDir() - likely, what you are trying to do is not supported in 
> ZooKeeper mode
> at 
> org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:66)
> at org.apache.solr.core.CoreContainer.create(CoreContainer.java:489)
> at org.apache.solr.core.CoreContainer.create(CoreContainer.java:466)
> at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:575)
> at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:199)
> at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:188)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:729)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:258)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
> at org.eclipse.jetty.server.Server.handle(Server.java:368)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
> at 
> org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:953)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1014)
> at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:861)
> at 
> org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:240)
> at 
> org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
> at 
> org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.solr.common.cloud.ZooKeeperExc

[jira] [Commented] (SOLR-6585) Let a requestHandler handle sub paths as well

2014-10-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14159239#comment-14159239
 ] 

ASF subversion and git services commented on SOLR-6585:
---

Commit 1629437 from [~noble.paul] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1629437 ]

SOLR-6585

> Let a requestHandler handle sub paths as well
> -
>
> Key: SOLR-6585
> URL: https://issues.apache.org/jira/browse/SOLR-6585
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6585.patch, SOLR-6585.patch
>
>
> If a request handler is registered at /path, it should be able to handle 
> /path/a or /path/x/y if it chooses to, without explicitly registering those 
> paths. This will only work if those full paths are not explicitly registered.
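The lookup rule described above (an exact registration wins; otherwise fall back to the nearest registered parent path) can be sketched with a hypothetical registry. This is illustrative stdlib code, not Solr's actual dispatcher; the registry and handler names are made up.

```java
import java.util.HashMap;
import java.util.Map;

public class SubPathDispatch {
    // Hypothetical handler registry: path -> handler name.
    static final Map<String, String> handlers = new HashMap<>();
    static {
        handlers.put("/path", "genericHandler");
        handlers.put("/path/a", "specificHandler");
    }

    // Exact match wins; otherwise walk up parent paths:
    // /path/x/y -> /path/x -> /path.
    static String resolve(String path) {
        if (handlers.containsKey(path)) return handlers.get(path);
        for (int i = path.lastIndexOf('/'); i > 0; i = path.lastIndexOf('/')) {
            path = path.substring(0, i);
            if (handlers.containsKey(path)) return handlers.get(path);
        }
        return null; // no handler claims this path
    }

    public static void main(String[] args) {
        System.out.println(resolve("/path/a"));   // explicit registration wins
        System.out.println(resolve("/path/x/y")); // falls back to /path
    }
}
```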






[jira] [Resolved] (SOLR-6585) Let a requestHandler handle sub paths as well

2014-10-04 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-6585.
--
   Resolution: Fixed
Fix Version/s: Trunk
   5.0

> Let a requestHandler handle sub paths as well
> -
>
> Key: SOLR-6585
> URL: https://issues.apache.org/jira/browse/SOLR-6585
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6585.patch, SOLR-6585.patch
>
>
> If a request handler is registered at /path, it should be able to handle 
> /path/a or /path/x/y if it chooses to, without explicitly registering those 
> paths. This will only work if those full paths are not explicitly registered.






[jira] [Commented] (SOLR-6585) Let a requestHandler handle sub paths as well

2014-10-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14159234#comment-14159234
 ] 

ASF subversion and git services commented on SOLR-6585:
---

Commit 1629434 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1629434 ]

SOLR-6585

> Let a requestHandler handle sub paths as well
> -
>
> Key: SOLR-6585
> URL: https://issues.apache.org/jira/browse/SOLR-6585
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-6585.patch, SOLR-6585.patch
>
>
> If a request handler is registered at /path, it should be able to handle 
> /path/a or /path/x/y if it chooses to, without explicitly registering those 
> paths. This will only work if those full paths are not explicitly registered.






[jira] [Commented] (SOLR-6585) Let a requestHandler handle sub paths as well

2014-10-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14159231#comment-14159231
 ] 

ASF subversion and git services commented on SOLR-6585:
---

Commit 1629433 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1629433 ]

SOLR-6585

> Let a requestHandler handle sub paths as well
> -
>
> Key: SOLR-6585
> URL: https://issues.apache.org/jira/browse/SOLR-6585
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-6585.patch, SOLR-6585.patch
>
>
> If a request handler is registered at /path, it should be able to handle 
> /path/a or /path/x/y if it chooses to, without explicitly registering those 
> paths. This will only work if those full paths are not explicitly registered.






[jira] [Comment Edited] (SOLR-6585) Let a requestHandler handle sub paths as well

2014-10-04 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14159220#comment-14159220
 ] 

Noble Paul edited comment on SOLR-6585 at 10/4/14 5:27 PM:
---

bq.will it work always?

yes. can you think of a case where it would not work?


was (Author: noble.paul):
bq.will it work always?

yes

> Let a requestHandler handle sub paths as well
> -
>
> Key: SOLR-6585
> URL: https://issues.apache.org/jira/browse/SOLR-6585
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-6585.patch, SOLR-6585.patch
>
>
> If a request handler is registered at /path, it should be able to handle 
> /path/a or /path/x/y if it chooses to, without explicitly registering those 
> paths. This will only work if those full paths are not explicitly registered.






[jira] [Commented] (SOLR-6585) Let a requestHandler handle sub paths as well

2014-10-04 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14159220#comment-14159220
 ] 

Noble Paul commented on SOLR-6585:
--

bq.will it work always?

yes

> Let a requestHandler handle sub paths as well
> -
>
> Key: SOLR-6585
> URL: https://issues.apache.org/jira/browse/SOLR-6585
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-6585.patch, SOLR-6585.patch
>
>
> If a request handler is registered at /path, it should be able to handle 
> /path/a or /path/x/y if it chooses to, without explicitly registering those 
> paths. This will only work if those full paths are not explicitly registered.






[jira] [Commented] (SOLR-6585) Let a requestHandler handle sub paths as well

2014-10-04 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14159219#comment-14159219
 ] 

Mikhail Khludnev commented on SOLR-6585:


hm.. it seems like 
{code}
SolrRequestParsers.parse(SolrCore, String, HttpServletRequest)
   sreq.getContext().put( "path", path );
{code}

will it work always?



> Let a requestHandler handle sub paths as well
> -
>
> Key: SOLR-6585
> URL: https://issues.apache.org/jira/browse/SOLR-6585
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-6585.patch, SOLR-6585.patch
>
>
> If a request handler is registered at /path, it should be able to handle 
> /path/a or /path/x/y if it chooses to, without explicitly registering those 
> paths. This will only work if those full paths are not explicitly registered.






[jira] [Commented] (SOLR-6585) Let a requestHandler handle sub paths as well

2014-10-04 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14159217#comment-14159217
 ] 

Mikhail Khludnev commented on SOLR-6585:


Sounds great! How does one get the actual request path during handling? 


> Let a requestHandler handle sub paths as well
> -
>
> Key: SOLR-6585
> URL: https://issues.apache.org/jira/browse/SOLR-6585
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-6585.patch, SOLR-6585.patch
>
>
> If a request handler is registered at /path, it should be able to handle 
> /path/a or /path/x/y if it chooses to, without explicitly registering those 
> paths. This will only work if those full paths are not explicitly registered.






[jira] [Commented] (SOLR-5986) Don't allow runaway queries from harming Solr cluster health or search performance

2014-10-04 Thread Steve Davids (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14159212#comment-14159212
 ] 

Steve Davids commented on SOLR-5986:


Why wouldn't it return partial results? When sending a distributed request, if 
all shards but one return results and that one lags behind at query expansion, 
one would think you would get the appropriate partial-results message. Unless 
this is partially related to SOLR-6496, which would retry a different replica 
in the shard group and thus *could* cause a timeout at the Solr distributed 
aggregation layer.

> Don't allow runaway queries from harming Solr cluster health or search 
> performance
> --
>
> Key: SOLR-5986
> URL: https://issues.apache.org/jira/browse/SOLR-5986
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: Steve Davids
>Assignee: Anshum Gupta
>Priority: Critical
> Fix For: 5.0
>
> Attachments: SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, 
> SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, 
> SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, 
> SOLR-5986.patch
>
>
> The intent of this ticket is to have all distributed search requests stop 
> wasting CPU cycles on requests that have already timed out or are so 
> complicated that they won't be able to execute. We have come across a case 
> where a nasty wildcard query within a proximity clause was causing the 
> cluster to enumerate terms for hours even though the query timeout was set to 
> minutes. This caused a noticeable slowdown within the system, which made us 
> restart the replicas that happened to service that one request; in the worst 
> case, users with a relatively low ZooKeeper timeout value will have nodes 
> start dropping from the cluster due to long GC pauses.
> [~amccurry] Built a mechanism into Apache Blur to help with the issue in 
> BLUR-142 (see commit comment for code, though look at the latest code on the 
> trunk for newer bug fixes).
> Solr should be able to either prevent these problematic queries from running 
> by some heuristic (possibly estimated size of heap usage) or be able to 
> execute a thread interrupt on all query threads once the time threshold is 
> met. This issue mirrors what others have discussed on the mailing list: 
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/200903.mbox/%3c856ac15f0903272054q2dbdbd19kea3c5ba9e105b...@mail.gmail.com%3E
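The "execute a thread interrupt on all query threads once the time threshold is met" idea can be sketched with plain java.util.concurrent. This is only an illustration of the mechanism, not the patch attached to this issue; the class and method names are made up.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class BoundedQuery {
    // Run a query task, interrupting it if it exceeds timeoutMs.
    // Returns null on timeout; a real server would instead flag the
    // response as partial/timed-out rather than dropping it.
    static <T> T runWithTimeout(Callable<T> query, long timeoutMs)
            throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<T> f = pool.submit(query);
        try {
            return f.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            f.cancel(true); // delivers an interrupt to the query thread
            return null;
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        // a "runaway" task that honors interrupts via Thread.sleep
        Callable<String> slow = () -> { Thread.sleep(10_000); return "done"; };
        System.out.println(runWithTimeout(slow, 50));            // prints null
        System.out.println(runWithTimeout(() -> "fast", 1_000)); // prints fast
    }
}
```

The catch: this only helps if the query code actually checks the interrupt flag (or calls interruptible methods); a tight term-enumeration loop that never checks it will keep burning CPU, which is why a cooperative check inside the query path is the more robust approach.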






[jira] [Commented] (SOLR-6347) 'deletereplica' can throw a NullPointerException

2014-10-04 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14159211#comment-14159211
 ] 

Noble Paul commented on SOLR-6347:
--

makes sense [~anshumg] 



> 'deletereplica' can throw a NullPointerException
> 
>
> Key: SOLR-6347
> URL: https://issues.apache.org/jira/browse/SOLR-6347
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10
>Reporter: Ralph Tice
>Assignee: Noble Paul
>Priority: Minor
> Attachments: SOLR-6347.patch, SOLR-6347.patch
>
>
> Occasionally, but not always, when I invoke DELETEREPLICA I get an NPE.  I 
> suspect it is a race condition where the core finishes deleting while the 
> overseer is checking for it?
> Client response:
> curl 
> "http://localhost:8983/solr/admin/collections?action=DELETEREPLICA&collection=mycollection&shard=tmp_shard&replica=core_node1";
> 
> 
> 500 name="QTime">3712 name="responseHeader">0 name="QTime">27java.lang.NullPointerException:java.lang.NullPointerException  name="exception">-1 name="error">org.apache.solr.common.SolrException
> at 
> org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:364)
> at 
> org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:320)
> at 
> org.apache.solr.handler.admin.CollectionsHandler.handleRemoveReplica(CollectionsHandler.java:494)
> at 
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:184)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:729)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:267)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
> at org.eclipse.jetty.server.Server.handle(Server.java:368)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
> at 
> org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
> at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
> at 
> org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
> at 
> org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
> at 
> org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
> at java.lang.Thread.run(Thread.java:744)
> 500
> 
> Server log:
> 21:06:05.368 [OverseerThreadFactory-6-thread-5] WARN  
> o.a.s.c.OverseerCollectionProcessor - 
> OverseerCollectionProcessor.processMessage : deletereplica , {
>   "operation":"deletereplica",
>   "collection":"mycollection",
>   "shard":"tmp_shard",
>   "replica":"core_node1"}
> 21:06:05.602 [OverseerThreadFactory-6-thread-5] ERROR 
> o.a.s.c.OverseerCollectionProcessor - Collection deletereplica of 
> de

[jira] [Updated] (LUCENE-5969) Add Lucene50Codec

2014-10-04 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-5969:

Attachment: LUCENE-5969_part2.patch

I think the branch is currently in a good state to do an intermediate merge. 
Then we can tackle postings and docvalues.
This patch can be applied, but it's large because of lots of svn moves.

* All per-segment files are moved to write/checkSegmentHeader, and they also 
verify segment suffix/generation to fully detect mismatched files. I fixed all 
5.0 formats (except dv/postings, still TODO) and all of codecs/ to do this.
* All 5.0 init methods (except dv/postings, and a couple guys in codecs/: still 
TODO) use the new checkFooter(in, Throwable) to append suppressed checksum 
status if they hit corruption on open.
* CFS is moved to the codec API, with a write method that handles all files at 
once, and a read method that returns read-only directory view. Added a new 
simpler impl for 5.0, and a simpletext impl. Moved all CFS tests to 
BaseCompoundFormatTestCase which they all use. SegmentReader no longer opens 
the CFS file twice.
* Merging uses codec producer APIs instead of readers. This leads to more 
optimized merging: checksum computation is per-segment/per-producer, and norms 
and docvalues don't pile up unused fields into RAM during merge. If the fields 
are already loaded, merging uses them; otherwise it loads the field without 
caching it. This is important not just for "abuse" cases, but should really 
improve use cases like offline indexing. I fixed all codecs (5.0, codecs/, 
backwards/) to not waste RAM like this.
* 5.0 norms have a new indirect encoding for sparse fields. Currently this is 
very conservative at 1/31 to ensure it's more efficient in terms of both space 
(maximum possible packedints bloat) and time (v log v < maxdoc). 
* Backwards codecs are more contained: I tried to reduce visibility, make them 
as read-only as possible, ensure all files are deprecated, etc.
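For readers unfamiliar with the footers mentioned in the first two bullets: a file-level checksum footer lets a reader detect truncation or corruption up front. Here is a toy stdlib sketch of that idea; Lucene's real footer (via CodecUtil) also carries a magic number and algorithm id, while this sketch keeps only the CRC32.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.zip.CRC32;

public class FooterCheck {
    // Append a CRC32 of the body as an 8-byte big-endian footer.
    static byte[] withFooter(byte[] body) throws Exception {
        CRC32 crc = new CRC32();
        crc.update(body);
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.write(body);
        out.writeLong(crc.getValue());
        return bos.toByteArray();
    }

    // Recompute the checksum over the body and compare with the footer.
    static boolean check(byte[] file) {
        byte[] body = Arrays.copyOf(file, file.length - 8);
        long stored = ByteBuffer.wrap(file, file.length - 8, 8).getLong();
        CRC32 crc = new CRC32();
        crc.update(body);
        return crc.getValue() == stored;
    }

    public static void main(String[] args) throws Exception {
        byte[] f = withFooter("segment data".getBytes("UTF-8"));
        System.out.println(check(f));  // true
        f[0] ^= 1;                     // flip a bit in the body
        System.out.println(check(f));  // false
    }
}
```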


> Add Lucene50Codec
> -
>
> Key: LUCENE-5969
> URL: https://issues.apache.org/jira/browse/LUCENE-5969
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-5969.patch, LUCENE-5969.patch, 
> LUCENE-5969_part2.patch
>
>
> Spinoff from LUCENE-5952:
>   * Fix .si to write Version as 3 ints, not a String that requires parsing at 
> read time.
>   * Lucene42TermVectorsFormat should not use the same codecName as 
> Lucene41StoredFieldsFormat
> It would also be nice if we had a "bumpCodecVersion" script so rolling a new 
> codec is not so daunting.






[jira] [Comment Edited] (SOLR-6347) 'deletereplica' can throw a NullPointerException

2014-10-04 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14159201#comment-14159201
 ] 

Anshum Gupta edited comment on SOLR-6347 at 10/4/14 4:45 PM:
-

[~shalinmangar] I think the test didn't fail until the 4x -> 5x and trunk 
changes happened (or something else that was committed at around the same 
time). Something changed that made this and DeleteReplicaTest fail 
consistently. I'll try to have a look at it. Also, this is in the CHANGES list 
for 4.10; should we update that here?

Also, I think it'd be good to create another issue to track the failing 
Delete*ReplicaTest runs.


was (Author: anshumg):
[~shalinmangar] I think the test didn't fail until the 4x -> 5x and trunk 
changes happened (or something else that was committed at around the same time) 
. Something changed that made this and DeleteReplicaTest fail consistently. 
I'll try and have a look at it. Also, this is in the CHANGE list for 4.10, 
should we update that here?

> 'deletereplica' can throw a NullPointerException
> 
>
> Key: SOLR-6347
> URL: https://issues.apache.org/jira/browse/SOLR-6347
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10
>Reporter: Ralph Tice
>Assignee: Noble Paul
>Priority: Minor
> Attachments: SOLR-6347.patch, SOLR-6347.patch
>
>
> Occasionally, but not always, when I invoke DELETEREPLICA I get an NPE.  I 
> suspect it is a race condition where the core finishes deleting while the 
> overseer is checking for it?
> Client response:
> curl 
> "http://localhost:8983/solr/admin/collections?action=DELETEREPLICA&collection=mycollection&shard=tmp_shard&replica=core_node1";
> 
> 
> 500 name="QTime">3712 name="responseHeader">0 name="QTime">27java.lang.NullPointerException:java.lang.NullPointerException  name="exception">-1 name="error">org.apache.solr.common.SolrException
> at 
> org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:364)
> at 
> org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:320)
> at 
> org.apache.solr.handler.admin.CollectionsHandler.handleRemoveReplica(CollectionsHandler.java:494)
> at 
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:184)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:729)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:267)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
> at org.eclipse.jetty.server.Server.handle(Server.java:368)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
> at 
> org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
> at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
> at 
> org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
> at 
> org.eclipse.jetty.server.BlockingHttpConnecti

[jira] [Commented] (SOLR-6347) 'deletereplica' can throw a NullPointerException

2014-10-04 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14159201#comment-14159201
 ] 

Anshum Gupta commented on SOLR-6347:


[~shalinmangar] I think the test didn't fail until the 4x -> 5x and trunk 
changes happened (or something else that was committed at around the same 
time). Something changed that made this and DeleteReplicaTest fail 
consistently. I'll try to have a look at it. Also, this is in the CHANGES list 
for 4.10; should we update that here?

> 'deletereplica' can throw a NullPointerException
> 
>
> Key: SOLR-6347
> URL: https://issues.apache.org/jira/browse/SOLR-6347
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10
>Reporter: Ralph Tice
>Assignee: Noble Paul
>Priority: Minor
> Attachments: SOLR-6347.patch, SOLR-6347.patch
>
>
> Occasionally, but not always, when I invoke DELETEREPLICA I get an NPE.  I 
> suspect it is a race condition where the core finishes deleting while the 
> overseer is checking for it?
> Client response:
> curl 
> "http://localhost:8983/solr/admin/collections?action=DELETEREPLICA&collection=mycollection&shard=tmp_shard&replica=core_node1";
> 
> 
> <int name="status">500</int><int name="QTime">3712</int>
> <lst name="responseHeader"><int name="status">0</int><int name="QTime">27</int></lst>
> <str name="msg">java.lang.NullPointerException:java.lang.NullPointerException</str>
> <int name="exception">-1</int>
> <lst name="error"><str name="trace">org.apache.solr.common.SolrException
> at 
> org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:364)
> at 
> org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:320)
> at 
> org.apache.solr.handler.admin.CollectionsHandler.handleRemoveReplica(CollectionsHandler.java:494)
> at 
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:184)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:729)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:267)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
> at org.eclipse.jetty.server.Server.handle(Server.java:368)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
> at 
> org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
> at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
> at 
> org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
> at 
> org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
> at 
> org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
> at java.lang.Thread.run(Thread.java:744)
> <int name="code">500</int></lst></response>
> 
> Server log:
> 21:06:05.368 [OverseerThreadFactory-6-thread-5] WARN  
> o.a.s.c.OverseerCollectionProcessor - 
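The race suspected in the report could be guarded against roughly like this. This is a hypothetical sketch of the pattern, not the actual Overseer or CollectionsHandler code: if the replica vanishes between lookup and delete, treat it as already removed instead of dereferencing a null entry.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Illustrative sketch (not Solr's code) of the DELETEREPLICA race: the core
 * may finish deleting while the overseer is still checking for it, so a
 * lookup can return null. Guarding the lookup turns the race into a clean
 * "already gone" outcome instead of a NullPointerException.
 */
public class DeleteReplicaSketch {
    private final Map<String, String> replicas = new ConcurrentHashMap<>();

    public DeleteReplicaSketch() {
        replicas.put("core_node1", "http://localhost:8983/solr");
    }

    /** Returns a status string instead of NPE-ing when the replica vanished. */
    public String deleteReplica(String name) {
        String baseUrl = replicas.remove(name);
        if (baseUrl == null) {
            // The core already finished deleting concurrently; report success.
            return "already-removed:" + name;
        }
        return "removed:" + name + "@" + baseUrl;
    }
}
```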

5.0 release status?

2014-10-04 Thread Jack Krupansky
I tried to follow all of the trunk 6/branch 5x discussion, but... AFAICT there 
was no explicit decision or even implication that a release 5.0 would be 
imminent or that there would not be a 4.11 release. AFAICT, the whole trunk 
6/branch 5x decision was more related to wanting to have a trunk that 
eliminated the 4x deprecations and was no longer constrained by compatibility 
with the 4x index – let me know if I am wrong about that in any way! But I did 
see a comment on one Jira referring to “preparation for a 5.0 release”, so I 
wanted to inquire about intentions. So, is a 5.0 release “coming soon”, or are 
4.11, 4.12, 4.13... equally likely?

AFAICT, there isn’t anything super major in 5x that the world is super-urgently 
waiting for (WAR vs. server?), and people have been really good at making 
substantial enhancements in the 4x branch, so I would suggest that anybody 
strongly favoring an imminent 5.0 release (next six months) should make their 
case more explicitly. Otherwise, it seems like we can continue to look at an 
ongoing stream of significant improvements to the 4x branch and that a 5.0 is 
probably at least a year or so off – or simply waiting on some major change 
that actually warrants a 5.0.

Open questions: What is Heliosearch up to, and what are Elasticsearch’s 
intentions?

Comments?

-- Jack Krupansky

[jira] [Commented] (SOLR-5986) Don't allow runaway queries from harming Solr cluster health or search performance

2014-10-04 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14159197#comment-14159197
 ] 

Anshum Gupta commented on SOLR-5986:


Here's what I meant with that statement: this test should either be removed or 
modified to ensure that timeAllowed is never hit during query expansion.

Here's more of the context:
Until SOLR-5986 was committed, the timeAllowed parameter was only enforced 
during the collection stage. That stage also supported returning partial 
results if some shards responded and didn't time out.

After this commit, the timeAllowed parameter can terminate a request well 
before the search actually happens, i.e. during query expansion. At that 
stage, partial results aren't returned.

The current test sends a request assuming that the timeout will happen 
*only* during the collection stage, leading to partial results being 
returned. But if the request times out during query expansion, no partial 
results are returned, and the test fails.

I'll remove the partial-results test. I'll also think about adding something 
to replace it (I certainly don't want coverage to go down, but this test 
isn't really a valid case anymore). Maybe add something that uses caching to 
avoid query expansion but times out during doc collection.
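The two timeout behaviours described above can be sketched as follows. The names and numbers here are hypothetical illustrations, not the SOLR-5986 implementation: exceeding the budget during expansion aborts the whole request, while exceeding it during collection returns whatever was gathered so far, flagged as partial.

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative two-stage timeAllowed sketch (not Solr's implementation). */
public class TimeAllowedSketch {
    /** Thrown when the budget is exhausted during expansion: no partial results. */
    public static class TimeExceededException extends RuntimeException {}

    /** Result of the collection stage: collected docs plus a partial flag. */
    public static class Result {
        public final List<Integer> docs;
        public final boolean partial;
        Result(List<Integer> docs, boolean partial) { this.docs = docs; this.partial = partial; }
    }

    /** Expansion stage: exceeding the budget aborts the request outright. */
    public static List<Integer> expand(int terms, long budget, long costPerTerm) {
        List<Integer> expanded = new ArrayList<>();
        long spent = 0;
        for (int t = 0; t < terms; t++) {
            spent += costPerTerm;
            if (spent > budget) throw new TimeExceededException();
            expanded.add(t);
        }
        return expanded;
    }

    /** Collection stage: exceeding the budget returns partial results instead. */
    public static Result collect(List<Integer> docs, long budget, long costPerDoc) {
        List<Integer> collected = new ArrayList<>();
        long spent = 0;
        for (int d : docs) {
            spent += costPerDoc;
            if (spent > budget) return new Result(collected, true);
            collected.add(d);
        }
        return new Result(collected, false);
    }
}
```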

> Don't allow runaway queries from harming Solr cluster health or search 
> performance
> --
>
> Key: SOLR-5986
> URL: https://issues.apache.org/jira/browse/SOLR-5986
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: Steve Davids
>Assignee: Anshum Gupta
>Priority: Critical
> Fix For: 5.0
>
> Attachments: SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, 
> SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, 
> SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, 
> SOLR-5986.patch
>
>
> The intent of this ticket is to have all distributed search requests stop 
> wasting CPU cycles on requests that have already timed out or are so 
> complicated that they won't be able to execute. We have come across a case 
> where a nasty wildcard query within a proximity clause caused the cluster 
> to enumerate terms for hours even though the query timeout was set to 
> minutes. This caused a noticeable slowdown that forced us to restart the 
> replicas that happened to service that one request; in the worst case, 
> users with a relatively low ZooKeeper timeout will have nodes start 
> dropping from the cluster due to long GC pauses.
> [~amccurry] built a mechanism into Apache Blur to help with this issue in 
> BLUR-142 (see the commit comment for code, though look at the latest code 
> on trunk for newer bug fixes).
> Solr should be able to either prevent these problematic queries from 
> running via some heuristic (possibly estimated heap usage) or execute a 
> thread interrupt on all query threads once the time threshold is met. This 
> issue mirrors what others have discussed on the mailing list: 
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/200903.mbox/%3c856ac15f0903272054q2dbdbd19kea3c5ba9e105b...@mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5969) Add Lucene50Codec

2014-10-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14159191#comment-14159191
 ] 

ASF subversion and git services commented on LUCENE-5969:
-

Commit 1629408 from [~rcmuir] in branch 'dev/branches/lucene5969'
[ https://svn.apache.org/r1629408 ]

LUCENE-5969: add changes and test

> Add Lucene50Codec
> -
>
> Key: LUCENE-5969
> URL: https://issues.apache.org/jira/browse/LUCENE-5969
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-5969.patch, LUCENE-5969.patch
>
>
> Spinoff from LUCENE-5952:
>   * Fix .si to write Version as 3 ints, not a String that requires parsing at 
> read time.
>   * Lucene42TermVectorsFormat should not use the same codecName as 
> Lucene41StoredFieldsFormat
> It would also be nice if we had a "bumpCodecVersion" script so rolling a new 
> codec is not so daunting.






[jira] [Commented] (LUCENE-5969) Add Lucene50Codec

2014-10-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14159179#comment-14159179
 ] 

ASF subversion and git services commented on LUCENE-5969:
-

Commit 1629406 from [~rcmuir] in branch 'dev/branches/lucene5969'
[ https://svn.apache.org/r1629406 ]

LUCENE-5969: javadocs

> Add Lucene50Codec
> -
>
> Key: LUCENE-5969
> URL: https://issues.apache.org/jira/browse/LUCENE-5969
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-5969.patch, LUCENE-5969.patch
>
>
> Spinoff from LUCENE-5952:
>   * Fix .si to write Version as 3 ints, not a String that requires parsing at 
> read time.
>   * Lucene42TermVectorsFormat should not use the same codecName as 
> Lucene41StoredFieldsFormat
> It would also be nice if we had a "bumpCodecVersion" script so rolling a new 
> codec is not so daunting.






[jira] [Commented] (LUCENE-5969) Add Lucene50Codec

2014-10-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14159175#comment-14159175
 ] 

ASF subversion and git services commented on LUCENE-5969:
-

Commit 1629405 from [~rcmuir] in branch 'dev/branches/lucene5969'
[ https://svn.apache.org/r1629405 ]

LUCENE-5969: improve memory pf

> Add Lucene50Codec
> -
>
> Key: LUCENE-5969
> URL: https://issues.apache.org/jira/browse/LUCENE-5969
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-5969.patch, LUCENE-5969.patch
>
>
> Spinoff from LUCENE-5952:
>   * Fix .si to write Version as 3 ints, not a String that requires parsing at 
> read time.
>   * Lucene42TermVectorsFormat should not use the same codecName as 
> Lucene41StoredFieldsFormat
> It would also be nice if we had a "bumpCodecVersion" script so rolling a new 
> codec is not so daunting.






[jira] [Commented] (LUCENE-5969) Add Lucene50Codec

2014-10-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14159172#comment-14159172
 ] 

ASF subversion and git services commented on LUCENE-5969:
-

Commit 1629404 from [~rcmuir] in branch 'dev/branches/lucene5969'
[ https://svn.apache.org/r1629404 ]

LUCENE-5969: improved exceptions for ancient codec

> Add Lucene50Codec
> -
>
> Key: LUCENE-5969
> URL: https://issues.apache.org/jira/browse/LUCENE-5969
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-5969.patch, LUCENE-5969.patch
>
>
> Spinoff from LUCENE-5952:
>   * Fix .si to write Version as 3 ints, not a String that requires parsing at 
> read time.
>   * Lucene42TermVectorsFormat should not use the same codecName as 
> Lucene41StoredFieldsFormat
> It would also be nice if we had a "bumpCodecVersion" script so rolling a new 
> codec is not so daunting.






[jira] [Commented] (LUCENE-5969) Add Lucene50Codec

2014-10-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14159150#comment-14159150
 ] 

ASF subversion and git services commented on LUCENE-5969:
-

Commit 1629401 from [~rcmuir] in branch 'dev/branches/lucene5969'
[ https://svn.apache.org/r1629401 ]

LUCENE-5969: fix test to not rely upon filename count

> Add Lucene50Codec
> -
>
> Key: LUCENE-5969
> URL: https://issues.apache.org/jira/browse/LUCENE-5969
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-5969.patch, LUCENE-5969.patch
>
>
> Spinoff from LUCENE-5952:
>   * Fix .si to write Version as 3 ints, not a String that requires parsing at 
> read time.
>   * Lucene42TermVectorsFormat should not use the same codecName as 
> Lucene41StoredFieldsFormat
> It would also be nice if we had a "bumpCodecVersion" script so rolling a new 
> codec is not so daunting.






[jira] [Commented] (LUCENE-5969) Add Lucene50Codec

2014-10-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14159148#comment-14159148
 ] 

ASF subversion and git services commented on LUCENE-5969:
-

Commit 1629400 from [~rcmuir] in branch 'dev/branches/lucene5969'
[ https://svn.apache.org/r1629400 ]

LUCENE-5969: fix false fails from tests that look for exact file names

> Add Lucene50Codec
> -
>
> Key: LUCENE-5969
> URL: https://issues.apache.org/jira/browse/LUCENE-5969
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-5969.patch, LUCENE-5969.patch
>
>
> Spinoff from LUCENE-5952:
>   * Fix .si to write Version as 3 ints, not a String that requires parsing at 
> read time.
>   * Lucene42TermVectorsFormat should not use the same codecName as 
> Lucene41StoredFieldsFormat
> It would also be nice if we had a "bumpCodecVersion" script so rolling a new 
> codec is not so daunting.






[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.9.0-ea-b28) - Build # 11384 - Failure!

2014-10-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11384/
Java: 32bit/jdk1.9.0-ea-b28 -client -XX:+UseG1GC

1 tests failed.
REGRESSION:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch

Error Message:
expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: expected:<0> but was:<1>
at 
__randomizedtesting.SeedInfo.seed([2C5381EB0B9B5280:ADB50FF37CC432BC]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.doTest(ChaosMonkeySafeLeaderTest.java:153)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:484)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carr

[jira] [Commented] (LUCENE-5969) Add Lucene50Codec

2014-10-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14159128#comment-14159128
 ] 

ASF subversion and git services commented on LUCENE-5969:
-

Commit 1629397 from [~rcmuir] in branch 'dev/branches/lucene5969'
[ https://svn.apache.org/r1629397 ]

LUCENE-5969: add SimpleText cfs

> Add Lucene50Codec
> -
>
> Key: LUCENE-5969
> URL: https://issues.apache.org/jira/browse/LUCENE-5969
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-5969.patch, LUCENE-5969.patch
>
>
> Spinoff from LUCENE-5952:
>   * Fix .si to write Version as 3 ints, not a String that requires parsing at 
> read time.
>   * Lucene42TermVectorsFormat should not use the same codecName as 
> Lucene41StoredFieldsFormat
> It would also be nice if we had a "bumpCodecVersion" script so rolling a new 
> codec is not so daunting.






[jira] [Commented] (LUCENE-5969) Add Lucene50Codec

2014-10-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14159104#comment-14159104
 ] 

ASF subversion and git services commented on LUCENE-5969:
-

Commit 1629380 from [~rcmuir] in branch 'dev/branches/lucene5969'
[ https://svn.apache.org/r1629380 ]

LUCENE-5969: don't open the cfs file twice

> Add Lucene50Codec
> -
>
> Key: LUCENE-5969
> URL: https://issues.apache.org/jira/browse/LUCENE-5969
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-5969.patch, LUCENE-5969.patch
>
>
> Spinoff from LUCENE-5952:
>   * Fix .si to write Version as 3 ints, not a String that requires parsing at 
> read time.
>   * Lucene42TermVectorsFormat should not use the same codecName as 
> Lucene41StoredFieldsFormat
> It would also be nice if we had a "bumpCodecVersion" script so rolling a new 
> codec is not so daunting.






[jira] [Comment Edited] (SOLR-6585) Let a requestHandler handle sub paths as well

2014-10-04 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14159069#comment-14159069
 ] 

Noble Paul edited comment on SOLR-6585 at 10/4/14 11:00 AM:


With a testcase. I'm committing this soon.


was (Author: noble.paul):
With a testcase

> Let a requestHandler handle sub paths as well
> -
>
> Key: SOLR-6585
> URL: https://issues.apache.org/jira/browse/SOLR-6585
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-6585.patch, SOLR-6585.patch
>
>
> If a request handler is registered at /path, it should be able to handle 
> /path/a or /path/x/y if it chooses to, without explicitly registering those 
> paths. This will only work if those full paths are not explicitly registered.
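The dispatch rule described above can be sketched as a longest-prefix lookup. This is a hypothetical illustration, not the actual Solr request-dispatch code: an exact registration always wins, and otherwise the resolver walks up parent paths so a handler at /path can claim /path/a or /path/x/y.

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative sub-path dispatch sketch (not Solr's registry code). */
public class SubPathResolver {
    private final Map<String, String> handlers = new HashMap<>();

    public void register(String path, String handlerName) {
        handlers.put(path, handlerName);
    }

    /** Resolve a request path to the most specific registered handler, or null. */
    public String resolve(String path) {
        String p = path;
        while (!p.isEmpty()) {
            String h = handlers.get(p);
            if (h != null) return h;     // exact match or nearest ancestor
            int slash = p.lastIndexOf('/');
            if (slash <= 0) break;       // stop before the bare root
            p = p.substring(0, slash);   // try the parent path next
        }
        return handlers.get("/");
    }
}
```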






[jira] [Updated] (SOLR-6585) Let a requestHandler handle sub paths as well

2014-10-04 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6585:
-
Attachment: SOLR-6585.patch

With a testcase

> Let a requestHandler handle sub paths as well
> -
>
> Key: SOLR-6585
> URL: https://issues.apache.org/jira/browse/SOLR-6585
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-6585.patch, SOLR-6585.patch
>
>
> If a request handler is registered at /path, it should be able to handle 
> /path/a or /path/x/y if it chooses to, without explicitly registering those 
> paths. This will only work if those full paths are not explicitly registered.






Re: tests.monster failing on branch_5x

2014-10-04 Thread Michael McCandless
OK I committed fixes for the monster tests.  They were all trying to
add Integer.MAX_VALUE docs, but should use IndexWriter.MAX_DOCS
instead.

I verified the change at least compiles, but didn't run all the monster tests...

Mike McCandless

http://blog.mikemccandless.com
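For reference, the cap the monster tests now have to respect: the 2147483519 limit in the failure message quoted below is Integer.MAX_VALUE minus a small safety margin of 128, which is what IndexWriter.MAX_DOCS holds in this era. A tiny sketch of the arithmetic (my own illustration, not Lucene code):

```java
/**
 * Sketch of the "number of documents in the index cannot exceed 2147483519"
 * guard added in LUCENE-5843. The margin of 128 below Integer.MAX_VALUE is
 * taken from the error message in this thread; tests that try to add exactly
 * Integer.MAX_VALUE docs now trip the check and should use the constant
 * instead.
 */
public class MaxDocsSketch {
    // Mirrors the limit reported in the failure: 2147483519.
    public static final int MAX_DOCS = Integer.MAX_VALUE - 128;

    public static boolean wouldExceed(long docsToAdd) {
        return docsToAdd > MAX_DOCS;
    }
}
```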


On Sat, Oct 4, 2014 at 4:56 AM, Michael McCandless
 wrote:
> Hi Shawn,
>
> Thank you for running these!  We really ought to have a Jenkins job
> somewhere that does these weekly...
>
> I committed a fix for the Test2BTerms failure, just the annotation
> that exempts it from the "too much stuff printed to stdout" rule.
>
> The other failure is interesting: that "too many docs indexed" check
> is a recent check we added to IW (LUCENE-5843)... it's spooky that
> these tests are in fact doing so.  We need to fix them not to!  I'll
> dig ...
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
>
> On Sat, Oct 4, 2014 at 1:56 AM, Shawn Heisey  wrote:
>> I've been running an inclusive set of tests on branch_5x to do what I
>> can for the release effort.  It kept failing with OOME, so I kept
>> increasing the heap size. After trying 2GB and 3GB, I finally bumped it
>> all the way to 8GB and dropped the JVM count to 1, but that resulted in
>> different problems.  Here's the commandline that I used, followed by the
>> list of failures:
>>
>> ant -Dtests.jvms=1 -Dtests.heapsize=8g -Dtests.nightly=true
>> -Dtests.weekly=true -Dtests.monster=true clean test | tee ~/b5x-testlog.txt
>>
>>[junit4] Tests with failures:
>>[junit4]   - org.apache.lucene.index.Test2BTerms (suite)
>>[junit4]   - org.apache.lucene.index.Test2BNumericDocValues.testNumerics
>>[junit4]   - org.apache.lucene.index.Test2BNumericDocValues (suite)
>>[junit4]   -
>> org.apache.lucene.index.Test2BSortedDocValues.testFixedSorted
>>[junit4]   - org.apache.lucene.index.Test2BSortedDocValues.test2BOrds
>>[junit4]   - org.apache.lucene.index.Test2BSortedDocValues (suite)
>>[junit4]
>>[junit4]
>>[junit4] JVM J0: 0.90 .. 76575.00 = 76574.10s
>>[junit4] Execution time total: 21 hours 16 minutes 15 seconds
>>
>> All of them except for Test2BTerms failed because of this problem:
>>
>>[junit4]> Throwable #1: java.lang.IllegalStateException: number
>> of documents in the index cannot exceed 2147483519
>>
>> Test2BTerms failed for an entirely different reason:
>>
>>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=Test2BTerms
>> -Dtests.seed=9F2773FB226B1E02 -Dtests.nightly=true -Dtests.weekly=true
>> -Dtests.slow=true -Dtests.locale=es_PE
>> -Dtests.timezone=America/Los_Angeles -Dtests.file.encoding=UTF-8
>>[junit4] ERROR   0.00s | Test2BTerms (suite) <<<
>>[junit4]> Throwable #1: java.lang.AssertionError: The test or
>> suite printed 3012118 bytes to stdout and stderr, even though the limit
>> was set to 8192 bytes. Increase the limit with @Limit, ignore it
>> completely with @SuppressSysoutChecks or run with -Dtests.verbose=true
>>[junit4]>at
>> __randomizedtesting.SeedInfo.seed([9F2773FB226B1E02]:0)
>>[junit4]>at java.lang.Thread.run(Thread.java:745)
>>
>> I'm clueless about how to fix the number of documents going too high.  I
>> could probably fix the other one, if someone can tell me what the
>> preferred fix is.
>>
>> I haven't tried this on the 4_10 branch, because it takes so long to
>> run.  I've started a similar commandline in branch_5x/solr to see what
>> happens.
>>
>> Thanks,
>> Shawn
>>
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>




[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b28) - Build # 11229 - Failure!

2014-10-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11229/
Java: 64bit/jdk1.9.0-ea-b28 -XX:-UseCompressedOops -XX:+UseParallelGC

2 tests failed.
REGRESSION:  
org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.testDistribSearch

Error Message:
No live SolrServers available to handle this request:[http://127.0.0.1:58499/v, 
http://127.0.0.1:47523/v, http://127.0.0.1:42513/v]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:58499/v, http://127.0.0.1:47523/v, 
http://127.0.0.1:42513/v]
at 
__randomizedtesting.SeedInfo.seed([BCF9444568222608:3D1FCA5D1F7D4634]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:322)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.sendRequest(CloudSolrServer.java:880)
at 
at org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:658)
at org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:601)
at org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.removeAndWaitForLastReplicaGone(DeleteLastCustomShardedReplicaTest.java:117)
at org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.doTest(DeleteLastCustomShardedReplicaTest.java:107)
at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:484)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)

Re: tests.monster failing on branch_5x

2014-10-04 Thread Michael McCandless
Hi Shawn,

Thank you for running these!  We really ought to have a Jenkins job
somewhere that does these weekly...

I committed a fix for the Test2BTerms failure: just the annotation
that exempts it from the "too much stuff printed to stdout" rule.

The other failure is interesting: that "too many docs indexed" check
is a recent addition to IW (LUCENE-5843)... it's spooky that these
tests are in fact indexing that many documents.  We need to fix them
not to!  I'll dig ...
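[For reference, the annotation fix Mike describes looks roughly like the fragment below. This is a sketch, not the actual commit: it assumes Lucene's test framework (lucene-test-framework) is on the classpath, and the bugUrl text is illustrative.]

```java
import org.apache.lucene.util.LuceneTestCase;
import org.apache.lucene.util.LuceneTestCase.SuppressSysoutChecks;

// Exempts the suite from the stdout/stderr byte limit that the test
// runner normally enforces (8192 bytes by default).
@SuppressSysoutChecks(bugUrl = "2B-scale tests legitimately print large progress output")
public class Test2BTerms extends LuceneTestCase {
  // ... existing test methods unchanged ...
}
```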

Mike McCandless

http://blog.mikemccandless.com


On Sat, Oct 4, 2014 at 1:56 AM, Shawn Heisey  wrote:
> I've been running an inclusive set of tests on branch_5x to do what I
> can for the release effort.  It kept failing with OOME, so I kept
> increasing the heap size. After trying 2GB and 3GB, I finally bumped it
> all the way to 8GB and dropped the JVM count to 1, but that resulted in
> different problems.  Here's the commandline that I used, followed by the
> list of failures:
>
> ant -Dtests.jvms=1 -Dtests.heapsize=8g -Dtests.nightly=true
> -Dtests.weekly=true -Dtests.monster=true clean test | tee ~/b5x-testlog.txt
>
>[junit4] Tests with failures:
>[junit4]   - org.apache.lucene.index.Test2BTerms (suite)
>[junit4]   - org.apache.lucene.index.Test2BNumericDocValues.testNumerics
>[junit4]   - org.apache.lucene.index.Test2BNumericDocValues (suite)
>[junit4]   -
> org.apache.lucene.index.Test2BSortedDocValues.testFixedSorted
>[junit4]   - org.apache.lucene.index.Test2BSortedDocValues.test2BOrds
>[junit4]   - org.apache.lucene.index.Test2BSortedDocValues (suite)
>[junit4]
>[junit4]
>[junit4] JVM J0: 0.90 .. 76575.00 = 76574.10s
>[junit4] Execution time total: 21 hours 16 minutes 15 seconds
>
> All of them except for Test2BTerms failed because of this problem:
>
>[junit4]> Throwable #1: java.lang.IllegalStateException: number
> of documents in the index cannot exceed 2147483519
>
> Test2BTerms failed for an entirely different reason:
>
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=Test2BTerms
> -Dtests.seed=9F2773FB226B1E02 -Dtests.nightly=true -Dtests.weekly=true
> -Dtests.slow=true -Dtests.locale=es_PE
> -Dtests.timezone=America/Los_Angeles -Dtests.file.encoding=UTF-8
>[junit4] ERROR   0.00s | Test2BTerms (suite) <<<
>[junit4]> Throwable #1: java.lang.AssertionError: The test or
> suite printed 3012118 bytes to stdout and stderr, even though the limit
> was set to 8192 bytes. Increase the limit with @Limit, ignore it
> completely with @SuppressSysoutChecks or run with -Dtests.verbose=true
>[junit4]>at
> __randomizedtesting.SeedInfo.seed([9F2773FB226B1E02]:0)
>[junit4]>at java.lang.Thread.run(Thread.java:745)
>
> I'm clueless about how to fix the number of documents going too high.  I
> could probably fix the other one, if someone can tell me what the
> preferred fix is.
>
> I haven't tried this on the 4_10 branch, because it takes so long to
> run.  I've started a similar commandline in branch_5x/solr to see what
> happens.
>
> Thanks,
> Shawn
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
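[The 2147483519 cap in the error Shawn quotes is the per-index document limit added in LUCENE-5843. The class below is a standalone illustration, not Lucene's actual code; the constant name and guard method are hypothetical, but the value matches the error message and equals Integer.MAX_VALUE - 128.]

```java
// Standalone sketch of the LUCENE-5843 style document-count guard that
// the Test2B* runs tripped. MAX_DOCS sits slightly below Integer.MAX_VALUE
// to leave internal headroom.
public class MaxDocsCheck {
    static final int MAX_DOCS = Integer.MAX_VALUE - 128; // 2147483519

    // Rejects an addDocument-style operation that would push the index
    // past the cap; the long cast avoids int overflow in the sum.
    static void ensureCapacity(int currentDocCount, int toAdd) {
        if ((long) currentDocCount + toAdd > MAX_DOCS) {
            throw new IllegalStateException(
                "number of documents in the index cannot exceed " + MAX_DOCS);
        }
    }

    public static void main(String[] args) {
        System.out.println(MAX_DOCS); // prints 2147483519
        ensureCapacity(MAX_DOCS - 1, 1); // fits exactly at the cap
        try {
            ensureCapacity(MAX_DOCS, 1); // one past the cap
        } catch (IllegalStateException expected) {
            System.out.println("rejected: " + expected.getMessage());
        }
    }
}
```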




[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 647 - Still Failing

2014-10-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/647/

1 tests failed.
FAILED:  org.apache.solr.client.solrj.TestLBHttpSolrServer.testReliability

Error Message:
No live SolrServers available to handle this request

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request
at __randomizedtesting.SeedInfo.seed([8FE0961F3C8F3FF8:4E284B599DE9EE51]:0)
at org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:528)
at org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
at org.apache.solr.client.solrj.TestLBHttpSolrServer.testReliability(TestLBHttpSolrServer.java:223)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)

[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 1866 - Failure!

2014-10-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1866/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.HttpPartitionTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.cloud.HttpPartitionTest:
   1) Thread[id=13455, name=Thread-5085, state=RUNNABLE, group=TGRP-HttpPartitionTest]
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
at org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
at org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:260)
at org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
at org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
at org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:271)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:123)
at org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:682)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:486)
at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:466)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at org.apache.solr.cloud.ZkController.waitForLeaderToSeeDownState(ZkController.java:1623)
at org.apache.solr.cloud.ZkController.registerAllCoresAsDown(ZkController.java:422)
at org.apache.solr.cloud.ZkController.access$100(ZkController.java:93)
at org.apache.solr.cloud.ZkController$1.command(ZkController.java:261)
at org.apache.solr.common.cloud.ConnectionManager$1$1.run(ConnectionManager.java:166)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.cloud.HttpPartitionTest:
   1) Thread[id=13455, name=Thread-5085, state=RUNNABLE, group=TGRP-HttpPartitionTest]
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
at org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
at org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:260)
at org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
at org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
at org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:271)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:123)
at org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:682)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDire