Re: Trunk tests fail (on Mac OSX)

2014-02-11 Thread Steve Davids
Thanks for the info. I was able to reproduce the problem, and it seems to be 
isolated to using two-way SSL on OSX. Check out my comment on SOLR-3854 for 
additional information. @Mark Miller has been very responsive about applying 
patches for this issue and will hopefully get this resolved pretty soon.

-Steve

On Feb 11, 2014, at 4:02 PM, Per Steffensen  wrote:

> I ran BasicDistributedZk2Test
> * 5 times with SolrTestCaseJ4.ALLOW_SSL = false: 5 green runs
> * 5 times with SolrTestCaseJ4.ALLOW_SSL = true: 2 green and 3 red runs
> Do not know if that is enough statistics to conclude anything, but it smells 
> a little like an SSL issue, if you ask me.
> 
> Mac OSX Maverick (10.9.1)
> java -version 
> java version "1.7.0_51"
> Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
> Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)
> 
> Regards, Per Steffensen
> 
> On 11/02/14 17:14, Steve Davids wrote:
>> To isolate whether this is an SSL issue, you can turn off SSL for that 
>> specific test by adding:
>> static {
>> ALLOW_SSL = false;
>> }
>> to the test class. That said, I have seen that test error out every so often 
>> in non-SSL mode too. I'll take a look later tonight.
>> 
>> -Steve
>> 
>> 
>> On Tue, Feb 11, 2014 at 11:06 AM, Uwe Schindler  wrote:
>> Hi,
>> 
>> this looks like a bug in the OSX libc. Sometimes Java 7 also crashes on OSX 
>> in this code path (it segfaults while producing the error message). There is 
>> already a bug open at Oracle, but they have no idea how to fix it. Currently 
>> it is impossible to run something like Tomcat or Jetty on an OSX server in 
>> production...
>> 
>> https://bugs.openjdk.java.net/browse/JDK-8024045
>> 
>> Uwe
>> 
>> -
>> Uwe Schindler
>> H.-H.-Meier-Allee 63, D-28213 Bremen
>> http://www.thetaphi.de
>> eMail: u...@thetaphi.de
>> 
>> 
>> > -Original Message-
>> > From: Per Steffensen [mailto:st...@designware.dk]
>> > Sent: Tuesday, February 11, 2014 4:46 PM
>> > To: dev@lucene.apache.org
>> > Subject: Trunk tests fail (on Mac OSX)
>> >
>> > I am sure you have noticed it from Jenkins, but tests fail (most of the
>> > time) on trunk. E.g. I have been running BasicDistributedZk2Test numerous
>> > times from Eclipse on my Mac, at revision 1567049 on trunk.
>> > Sometimes the test is green, but most of the time it is not. Exactly where
>> > it fails is random, but there is an example below. I believe that when it
>> > fails it is always with the kind of exception
>> > (java.net.SocketException: Invalid argument) shown at the bottom. My
>> > guess is that it has to do with SOLR-3854.
>> >
>> > Anyone working on fixing this? Any current knowledge about what the
>> > problem is? An estimate of when things will be consistently green again on
>> > trunk?
>> >
>> > Regards, Per Steffensen
>> >
>> > - An example of where the test fails -
>> > org.apache.solr.client.solrj.SolrServerException: No live SolrServers available
>> > to handle this request:[https://127.0.0.1:58496/qls/z/collection1,
>> > https://127.0.0.1:58500/qls/z/collection1,
>> > https://127.0.0.1:58493/qls/z/collection1,
>> > https://127.0.0.1:58490/qls/z/collection1]
>> >  at __randomizedtesting.SeedInfo.seed([F0E9F84409868187:710F765C7ED9E1BB]:0)
>> >  at org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:352)
>> >  at org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:635)
>> >  at org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:90)
>> >  at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
>> >  at org.apache.solr.cloud.AbstractFullDistribZkTestBase.queryServer(AbstractFullDistribZkTestBase.java:1356)
>> >  at org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:561)
>> >  at org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:543)
>> >  at org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:522)
>> >  at org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkQueries(AbstractFullDistribZkTestBase.java:754)
>> >  at org.apache.solr.cloud.BasicDistributedZk2Test.doTest(BasicDistributedZk2Test.java:107)
>> >  at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
>> >  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>> >  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> >  at java.lang.reflect.Method.invoke(Method.java:606)
>> >  at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(Randomize
>> 

[jira] [Updated] (SOLR-3854) SolrCloud does not work with https

2014-02-11 Thread Steve Davids (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Davids updated SOLR-3854:
---

Attachment: SOLR-3854v3.patch

I was able to reproduce this problem on OSX in both BasicDistributedZk2Test and 
BasicDistributedZkTest when using two-way SSL, i.e. with "useSsl = true" and 
"useClientAuth = true" configured. Once useClientAuth was turned off, the test 
runs were consistently clean.

Here is the full test suite results:

{noformat:title=trySsl = true, trySslClientAuth = true}
   [junit4] Tests with failures:
   [junit4]   - org.apache.solr.cloud.BasicDistributedZk2Test.testDistribSearch
   [junit4]   - org.apache.solr.TestDistributedSearch.testDistribSearch
   [junit4]   - org.apache.solr.cloud.DistribCursorPagingTest.testDistribSearch
   [junit4]   - org.apache.solr.cloud.BasicDistributedZkTest.testDistribSearch
   [junit4]   - org.apache.solr.TestDistributedGrouping.testDistribSearch
   [junit4]   - 
org.apache.solr.update.SoftAutoCommitTest.testSoftAndHardCommitMaxTimeMixedAdds
   [junit4] 
   [junit4] 
   [junit4] JVM J0: 2.58 ..   899.26 =   896.67s
   [junit4] JVM J1: 2.81 ..   899.14 =   896.33s
   [junit4] JVM J2: 2.58 ..   899.03 =   896.45s
   [junit4] JVM J3: 2.59 ..   899.27 =   896.69s
   [junit4] JVM J4: 2.58 ..   899.27 =   896.69s
   [junit4] JVM J5: 2.84 ..   899.09 =   896.25s
   [junit4] JVM J6: 2.80 ..   899.13 =   896.33s
   [junit4] JVM J7: 2.58 ..   899.07 =   896.49s
   [junit4] Execution time total: 14 minutes 59 seconds
   [junit4] Tests summary: 372 suites, 1603 tests, 5 errors, 1 failure, 26 
ignored (13 assumptions)
{noformat}

{noformat:title=trySsl = true, trySslClientAuth = false}
   [junit4] Tests with failures:
   [junit4]   - 
org.apache.solr.update.SoftAutoCommitTest.testSoftAndHardCommitMaxTimeMixedAdds
   [junit4] 
   [junit4] 
   [junit4] JVM J0: 2.27 ..   716.97 =   714.69s
   [junit4] JVM J1: 2.28 ..   716.79 =   714.51s
   [junit4] JVM J2: 2.27 ..   716.99 =   714.72s
   [junit4] JVM J3: 2.52 ..   716.78 =   714.26s
   [junit4] JVM J4: 2.28 ..   716.82 =   714.54s
   [junit4] JVM J5: 2.52 ..   717.00 =   714.48s
   [junit4] JVM J6: 2.28 ..   716.80 =   714.52s
   [junit4] JVM J7: 2.27 ..   716.95 =   714.67s
   [junit4] Execution time total: 11 minutes 57 seconds
   [junit4] Tests summary: 372 suites, 1603 tests, 1 failure, 26 ignored (13 
assumptions)
{noformat}

I attached a patch which cleans up a lot of the tests by using a common 
function to build a consistently schemed URL (this fixes SSL for 
SolrCmdDistributorTest) and disables the "useClientAuth" property for OSX 
clients. [~elyograg] was kind enough to perform a few test runs 
(BasicDistributedZkTest & BasicDistributedZk2Test) on both Windows and Linux 
with both SSL params set to true, all of them clean.

> SolrCloud does not work with https
> --
>
> Key: SOLR-3854
> URL: https://issues.apache.org/jira/browse/SOLR-3854
> Project: Solr
>  Issue Type: Bug
>Reporter: Sami Siren
>Assignee: Mark Miller
> Fix For: 5.0, 4.7
>
> Attachments: SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, 
> SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, 
> SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, 
> SOLR-3854v2.patch, SOLR-3854v3.patch
>
>
> There are a few places in the current codebase that assume http is used. 
> This prevents using https when running Solr in cloud mode.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5716) Un-ignore FieldFacetTest

2014-02-11 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-5716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898788#comment-13898788
 ] 

Tomás Fernández Löbbe commented on SOLR-5716:
-

Yes. Also, I tried running the test without my changes but forcing 
LogByteSizeMergePolicy (my understanding is that it maintains docid order), 
and the test passes.
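The fix described in the issue, comparing stats elements without regard to order, can be sketched like this (a hypothetical helper, not the actual test code from the patch; names are made up):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of an order-insensitive check: treat the stats entries
// from two responses as multisets, so a docid-order change caused by segment
// merges no longer fails the comparison. Not the actual SOLR-5716 test code.
public class UnorderedCompare {
    // True when both lists hold the same entries, ignoring order but
    // respecting duplicates (each entry may match only once).
    static boolean sameEntries(List<String> expected, List<String> actual) {
        if (expected.size() != actual.size()) return false;
        List<String> remaining = new ArrayList<>(actual);
        for (String e : expected) {
            if (!remaining.remove(e)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        List<String> a = List.of("price: mean=1.5", "qty: mean=2.0");
        List<String> b = List.of("qty: mean=2.0", "price: mean=1.5");
        System.out.println(sameEntries(a, b)); // true: same entries, different order
    }
}
```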

> Un-ignore FieldFacetTest
> 
>
> Key: SOLR-5716
> URL: https://issues.apache.org/jira/browse/SOLR-5716
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.0
>Reporter: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-5716.patch
>
>
> The test started failing after SOLR-5685. I think the problem was that the 
> test assumed docid order to be maintained, which is not true when merges 
> occur. 
> I changed the test to compare the stats elements in the response without 
> considering their order. 






[jira] [Commented] (SOLR-5716) Un-ignore FieldFacetTest

2014-02-11 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898746#comment-13898746
 ] 

Yonik Seeley commented on SOLR-5716:


Ah, that makes sense.  When you fixed the commit issue, the test started 
creating multiple segments and exposed that assumption in the test.







[jira] [Created] (SOLR-5716) Un-ignore FieldFacetTest

2014-02-11 Thread JIRA
Tomás Fernández Löbbe created SOLR-5716:
---

 Summary: Un-ignore FieldFacetTest
 Key: SOLR-5716
 URL: https://issues.apache.org/jira/browse/SOLR-5716
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0
Reporter: Tomás Fernández Löbbe
Priority: Minor
 Attachments: SOLR-5716.patch

The test started failing after SOLR-5685. I think the problem was that the test 
assumed docid order to be maintained, which is not true when merges occur. 
I changed the test to compare the stats elements in the response without 
considering their order. 






[jira] [Updated] (SOLR-5716) Un-ignore FieldFacetTest

2014-02-11 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-5716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-5716:


Attachment: SOLR-5716.patch







[jira] [Updated] (SOLR-5715) CloudSolrServer should choose URLs that match _route_

2014-02-11 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-5715:
--

Fix Version/s: 4.7
   5.0

> CloudSolrServer should choose URLs that match _route_
> -
>
> Key: SOLR-5715
> URL: https://issues.apache.org/jira/browse/SOLR-5715
> Project: Solr
>  Issue Type: Improvement
>  Components: clients - java
>Affects Versions: 4.6.1
>Reporter: Chase Bradford
>Priority: Minor
> Fix For: 5.0, 4.7
>
>
> When using CloudSolrServer to issue a request with a _route_ param, the URLs 
> passed to LBHttpSolrServer should be filtered to include only hosts serving a 
> slice.  If there's a single shard listed, then the query can be served 
> directly.  Otherwise, the cluster services 3 /select requests for the query.  
> As the host-to-replica ratio increases, the probability of needing an extra 
> hop goes to one, putting unnecessary strain on the cluster's network.






[jira] [Commented] (SOLR-5714) You should be able to use one pool of memory for multiple collection's HDFS block caches.

2014-02-11 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898590#comment-13898590
 ] 

Mark Miller commented on SOLR-5714:
---

Yeah, will do.

> You should be able to use one pool of memory for multiple collection's HDFS 
> block caches.
> -
>
> Key: SOLR-5714
> URL: https://issues.apache.org/jira/browse/SOLR-5714
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 5.0, 4.7
>
> Attachments: SOLR-5714.patch
>
>
> Currently, you have to specify how much direct memory to allocate per 
> SolrCore. This can be inefficient, and has some negative consequences - for 
> instance, when replicating, many times two HDFS directories will exist for 
> the same index briefly, which will double the RAM used for that SolrCore.






[jira] [Created] (SOLR-5715) CloudSolrServer should choose URLs that match _route_

2014-02-11 Thread Chase Bradford (JIRA)
Chase Bradford created SOLR-5715:


 Summary: CloudSolrServer should choose URLs that match _route_
 Key: SOLR-5715
 URL: https://issues.apache.org/jira/browse/SOLR-5715
 Project: Solr
  Issue Type: Improvement
  Components: clients - java
Affects Versions: 4.6.1
Reporter: Chase Bradford
Priority: Minor


When using CloudSolrServer to issue a request with a _route_ param, the URLs 
passed to LBHttpSolrServer should be filtered to include only hosts serving a 
slice.  If there's a single shard listed, then the query can be served 
directly.  Otherwise, the cluster services 3 /select requests for the query.  
As the host-to-replica ratio increases, the probability of needing an extra hop 
goes to one, putting unnecessary strain on the cluster's network.






[jira] [Commented] (SOLR-5714) You should be able to use one pool of memory for multiple collection's HDFS block caches.

2014-02-11 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898519#comment-13898519
 ] 

Hoss Man commented on SOLR-5714:


bq. It defaults to false if not present in solrconfig.xml for back 
compatibility. The latest solrconfig.xml will default it to true.

Why not just make the implicit default (when not present in the config) 
contingent on the luceneMatchVersion? If < 4.7, default=false; else 
default=true?
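Hoss's suggestion amounts to the following (a hypothetical illustration of a version-contingent default, not the actual Solr code; the method and parameter names are invented):

```java
// Hypothetical sketch of a luceneMatchVersion-contingent default: an explicit
// setting in solrconfig.xml always wins; when the flag is absent, configs
// declaring a match version < 4.7 keep the old behavior (false) and newer
// configs get the new behavior (true). Names here are illustrative only.
public class DefaultByMatchVersion {
    static boolean effectiveGlobalCacheFlag(Boolean explicit, int major, int minor) {
        if (explicit != null) {
            return explicit; // value from solrconfig.xml takes precedence
        }
        // implicit default: false before 4.7, true from 4.7 on
        return major > 4 || (major == 4 && minor >= 7);
    }

    public static void main(String[] args) {
        System.out.println(effectiveGlobalCacheFlag(null, 4, 6));  // false: old default
        System.out.println(effectiveGlobalCacheFlag(null, 4, 7));  // true: new default
        System.out.println(effectiveGlobalCacheFlag(false, 5, 0)); // false: explicit wins
    }
}
```

The appeal of this pattern is that old configs keep old behavior without edits, while anyone regenerating a config (and its luceneMatchVersion) opts in automatically.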









[jira] [Commented] (SOLR-5714) You should be able to use one pool of memory for multiple collection's HDFS block caches.

2014-02-11 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898517#comment-13898517
 ] 

Mark Miller commented on SOLR-5714:
---

This patch adds a new init param to HdfsDirectoryFactory that allows it to use 
a global block cache rather than a block cache per SolrCore instance. It 
defaults to false when not present in solrconfig.xml, for back compatibility. 
The latest solrconfig.xml will default it to true.

A future improvement I'd like to make is the ability to configure some of the 
HDFS settings at the solr.xml level, but I don't want to tie that into this 
issue.

First cut attached.
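The global-versus-per-core choice described above can be sketched as a toy model (invented names, not the actual HdfsDirectoryFactory code):

```java
// Toy sketch of the init-param behavior described above: with the flag off,
// every SolrCore gets its own cache; with it on, all cores share one lazily
// created instance (so the first core's size configuration wins). The class
// and method names are invented for illustration.
public class SharedBlockCacheSketch {
    static final class BlockCache {
        final long sizeBytes;
        BlockCache(long sizeBytes) { this.sizeBytes = sizeBytes; }
    }

    private static volatile BlockCache globalCache;
    private final boolean useGlobalCache;

    SharedBlockCacheSketch(boolean useGlobalCache) {
        this.useGlobalCache = useGlobalCache;
    }

    BlockCache cacheForCore(long sizeBytes) {
        if (!useGlobalCache) {
            return new BlockCache(sizeBytes); // per-core: fresh cache each time
        }
        if (globalCache == null) {
            synchronized (SharedBlockCacheSketch.class) {
                if (globalCache == null) {
                    globalCache = new BlockCache(sizeBytes);
                }
            }
        }
        return globalCache;
    }

    public static void main(String[] args) {
        SharedBlockCacheSketch shared = new SharedBlockCacheSketch(true);
        // both "cores" get the same instance, sized by the first request
        System.out.println(shared.cacheForCore(512) == shared.cacheForCore(1024)); // true
        SharedBlockCacheSketch perCore = new SharedBlockCacheSketch(false);
        System.out.println(perCore.cacheForCore(512) == perCore.cacheForCore(512)); // false
    }
}
```

Sharing one pool avoids the replication-time doubling described in the issue: two HDFS directories for the same index draw from the same cache instead of each allocating their own.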







[jira] [Updated] (SOLR-5714) You should be able to use one pool of memory for multiple collection's HDFS block caches.

2014-02-11 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-5714:
--

Attachment: SOLR-5714.patch







[jira] [Updated] (SOLR-5714) You should be able to use one pool of memory for multiple collection's HDFS block caches.

2014-02-11 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-5714:
--

Description: Currently, you have to specify how much direct memory to 
allocate per SolrCore. This can be inefficient, and has some negative 
consequences - for instance, when replicating, many times two HDFS directories 
will exist for the same index briefly, which will double the RAM used for that 
SolrCore.  (was: Currently, you have to specify how much direct memory to 
allocate per SolrCore
. This can be inefficient, and has some negative consequences - for instance, 
when replicating, many times two HDFS directories will exist for the same index 
briefly, which will double the RAM used for that SolrCore.)







[jira] [Updated] (SOLR-5714) You should be able to use one pool of memory for multiple collection's HDFS block caches.

2014-02-11 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-5714:
--

Description: 
Currently, you have to specify how much direct memory to allocate per SolrCore
. This can be inefficient, and has some negative consequences - for instance, 
when replicating, many times two HDFS directories will exist for the same index 
briefly, which will double the RAM used for that SolrCore.

  was:Currently, you have to specify how much direct memory to allocate per 
collection. This can be inefficient, and has some negative consequences - for 
instance, when replicating, many times two HDFS directories will exist for the 
same index briefly, which will double the RAM used for that SolrCore.








[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.7.0_60-ea-b04) - Build # 3770 - Still Failing!

2014-02-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/3770/
Java: 64bit/jdk1.7.0_60-ea-b04 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.client.solrj.impl.BasicHttpSolrServerTest.testConnectionRefused

Error Message:


Stack Trace:
java.lang.AssertionError
at __randomizedtesting.SeedInfo.seed([685F68CBD91E7C59:47F6CCDA0E91335D]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at org.apache.solr.client.solrj.impl.BasicHttpSolrServerTest.testConnectionRefused(BasicHttpSolrServerTest.java:159)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:744)




Build Log:
[...truncated 11418 lines...]
   [junit4] Suite: org.apache.solr.client.solrj.i

[jira] [Updated] (SOLR-5257) confusing warning logged when unexpected xml attributes are found

2014-02-11 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-5257:
---

Attachment: SOLR-5257.patch

bq. Fixed warning messages

Thanks Vitaliy, but what I had in mind was to be more complete and audit _all_ 
of the warnings produced by this class to make sure they are unambiguous for 
end users.

Attaching an updated patch ... anyone see any concerns with these message 
changes?

> confusing warning logged when unexpected xml attributes are found
> -
>
> Key: SOLR-5257
> URL: https://issues.apache.org/jira/browse/SOLR-5257
> Project: Solr
>  Issue Type: Improvement
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Minor
> Attachments: SOLR-5257.patch, SOLR-5257.patch
>
>
> Brian Robinson on the solr-user list got really confused by this warning 
> message...
> {{Unknown attribute id in add:allowDups}}
> ...the mention of "id" in that warning was a big red herring that led him to 
> assume something was wrong with the "id" in his documents, because it's not 
> at all clear that it refers to the "xml node id" of an unexpected "xml 
> attribute" (which in this case is "allowDups").
> Filing this issue so I remember to fix this warning to be more helpful, and 
> to review the rest of the file while I'm at it for other confusing warnings.






Re: [VOTE] Release PyLucene 4.6.1-1

2014-02-11 Thread Chris Hostetter

: A release candidate is available from:
: http://people.apache.org/~vajda/staging_area/

+1 to the artifacts with these sigs...

hossman@frisbee:~/tmp/pylucene_4_6_1$ cat pylucene-4.6.1-1-src.tar.gz.md5
bac17fb194f16273afe1d0428e579d5b  pylucene-4.6.1-1-src.tar.gz
hossman@frisbee:~/tmp/pylucene_4_6_1$ cat pylucene-4.6.1-1-src.tar.gz.asc
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.19 (Darwin)

iQIcBAABCgAGBQJS9UlUAAoJEIjifKIO12M/MkMQAM43U9o6PIF5QPqJoDxjMCRO
WHLttQqk1kIiEdZUjC8tjV+rSa7OjD6L6cQ80RDtf72N+VFFwApx66DiEXR5I7rn
7uKityeqR+QUZWh16FpwnarVShGVu7YeCGPhOez1kMvh18sPSdMhO1VACN5edxWs
JrIovQBsJGBggh97giI+l5qrzeCgwDS4n/PzXwTvM5QdqZ/SvCnSC1k02f9n4rzo
qPFs9FLpTrtpN+7NAZ3arp/HFIYNNrCW4DPFZIEKv1qm21PtMn/k/nVPmAgt1YCj
9TaPpD9k5eUNRB7Y3L3/DkSUOXoPMJHvw3vu6IAwQEzBnqo3ukloH6qUO/823+cR
uJyl3hjIQDzIDPztKQLR8l/RvLmeiIbtcPdSTcDq8iBz9XLQevty/lJaLdCQoVrt
uOQJN06Ia4bmOO3u41oaEVL8MLiez1NBxKVEzryk9o3pF9ctIE9CwNEX5ee2uhZI
GzZGT4njlu5X77Ur/GAhEnApyPOfccsHBMnpjvNT+mRD/xhwf87oiZPJNofA4AvY
PLJqAHt3EJBDjstyDivtsiC8kxX/Wtb4Vm/paCGgWqrXHii58HY9r/WNtWof12Tv
AtJpLAfszEMU4m1b1LHeuJad8fYGqFYzBM1awi3hQzHzj6uyDYIZR2YTHarYpX82
nvSgfYg/NapIbEmENj71
=NA6J
-END PGP SIGNATURE-


Tested on 64bit Ubuntu 12.04 using Java7...

* JCC installed w/o error...
  python setup.py build
  sudo python setup.py install
* pylucene tests ran w/o failure...
  make && make test
* pylucene installed w/o error...
  sudo make install





-Hoss
http://www.lucidworks.com/


Re: Stats vs Analytics

2014-02-11 Thread Steve Molloy



Thanks, will look further into it, but at first glance this looks like it may 
make things a lot simpler for me.

This said, I'm still curious about the stats vs. analytics approach for future 
releases.

Thanks,
Steve

On Feb 11, 2014, at 5:03 PM, "Trey Grainger"  wrote:




Just to add more discussion to the mix, we're also building/using this at CareerBuilder: 
    "Percentiles for facets, pivot facets, and distributed pivot facets"
    https://issues.apache.org/jira/browse/SOLR-3583



It is an extension to (distributed pivot) faceting that allows stats to be collected within the faceting component. We built it with the following needs:
1) Supports pivot faceting (stats at each level)
2) Supports distributed statistical operations


If you look at slide 41 of this presentation, you'll get a really good feel for what this patch does:
http://www.slideshare.net/treygrainger/building-a-real-time-big-data-analytics-platform-with-solr



The primary focus initially was on calculating percentiles of numerical values in a distributed way (using bucketing similar to range faceting), but we are also in the process of adding distributed sum. Other distributable calculations are possible, we just haven't needed them yet so we haven't added them.


-Trey




On Tue, Feb 11, 2014 at 2:24 PM, Steve Molloy wrote:

Trying to make sense of all the issues around this and not sure which way to go. Both the Stats and Analytics components are missing some features I would need. Stats cannot limit or order facets, for instance, and I'd like to see pivot support. On the other hand, Analytics doesn't support distribution at all, which is a must in my case.

So, I guess what I'm trying to ask is whether I should look at extending Stats or Analytics? Which way is the community going for future releases? (I would share any extension, but that would be useless if done on the wrong component.)

Thanks,
Steve














[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.7.0_60-ea-b04) - Build # 9338 - Failure!

2014-02-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/9338/
Java: 32bit/jdk1.7.0_60-ea-b04 -client -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 1258 lines...]
   [junit4] JVM J1: stdout was not empty, see: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/core/test/temp/junit4-J1-20140211_220135_876.sysout
   [junit4] >>> JVM J1: stdout (verbatim) 
   [junit4] #
   [junit4] # A fatal error has been detected by the Java Runtime Environment:
   [junit4] #
   [junit4] #  SIGSEGV (0xb) at pc=0xf6feca95, pid=17370, tid=4089445184
   [junit4] #
   [junit4] # JRE version: Java(TM) SE Runtime Environment (7.0_60-b04) (build 
1.7.0_60-ea-b04)
   [junit4] # Java VM: Java HotSpot(TM) Client VM (24.60-b07 mixed mode 
linux-x86 )
   [junit4] # Problematic frame:
   [junit4] # V  [libjvm.so+0x3e8a95]  
objArrayKlass::oop_oop_iterate_nv(oopDesc*, ParScanWithoutBarrierClosure*)+0x75
   [junit4] #
   [junit4] # Failed to write core dump. Core dumps have been disabled. To 
enable core dumping, try "ulimit -c unlimited" before starting Java again
   [junit4] #
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/core/test/J1/hs_err_pid17370.log
   [junit4] #
   [junit4] # If you would like to submit a bug report, please visit:
   [junit4] #   http://bugreport.sun.com/bugreport/crash.jsp
   [junit4] #
   [junit4] <<< JVM J1: EOF 

[...truncated 447 lines...]
   [junit4] ERROR: JVM J1 ended with an exception, command line: 
/var/lib/jenkins/tools/java/32bit/jdk1.7.0_60-ea-b04/jre/bin/java -client 
-XX:+UseConcMarkSweepGC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/heapdumps 
-Dtests.prefix=tests -Dtests.seed=8623065AAEC4170D -Xmx512M -Dtests.iters= 
-Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random 
-Dtests.postingsformat=random -Dtests.docvaluesformat=random 
-Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random 
-Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=4.7 
-Dtests.cleanthreads=perMethod 
-Djava.util.logging.config.file=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.slow=true 
-Dtests.asserts.gracious=false -Dtests.multiplier=3 -DtempDir=. 
-Djava.io.tmpdir=. 
-Djunit4.tempDir=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/core/test/temp
 
-Dclover.db.dir=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/clover/db
 -Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Djava.security.policy=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/tools/junit4/tests.policy
 -Dlucene.version=4.7-SNAPSHOT -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Dtests.disableHdfs=true -Dfile.encoding=UTF-8 -classpath 
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/test-framework/classes/java:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/codecs/classes/java:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/test-framework/lib/junit-4.10.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/test-framework/lib/randomizedtesting-runner-2.0.13.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/core/classes/java:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/core/classes/test:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-launcher.jar:/var/lib/jenkins/.ant/lib/ivy-2.3.0.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jai.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-swing.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-oro.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jmf.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-xalan2.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-javamail.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-resolver.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-testutil.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-commons-logging.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-log4j.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-junit.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jsch.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-commons-net.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-bsf.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/AN

[jira] [Updated] (SOLR-5076) Make it possible to get list of collections with CollectionsHandler

2014-02-11 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-5076:
---

Component/s: SolrCloud

> Make it possible to get list of collections with CollectionsHandler
> ---
>
> Key: SOLR-5076
> URL: https://issues.apache.org/jira/browse/SOLR-5076
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Shawn Heisey
>Priority: Minor
>
> It would be very useful to have /admin/collections (CollectionsHandler) send 
> a response similar to /admin/cores.  This should probably be the default 
> action, but requiring ?action=STATUS wouldn't be the end of the world.
> It would be very useful if CloudSolrServer were to implement a getCollections 
> method, but that probably should be a separate issue.






Re: Stats vs Analytics

2014-02-11 Thread Trey Grainger
Just to add more discussion to the mix, we're also building/using this at
CareerBuilder:
"Percentiles for facets, pivot facets, and distributed pivot facets"
https://issues.apache.org/jira/browse/SOLR-3583

It is an extension to (distributed pivot) faceting that allows stats to be
collected within the faceting component. We built it with the following
needs:
1) Supports pivot faceting (stats at each level)
2) Supports distributed statistical operations

If you look at slide 41 of this presentation, you'll get a really good feel
for what this patch does:
http://www.slideshare.net/treygrainger/building-a-real-time-big-data-analytics-platform-with-solr

The primary focus initially was on calculating percentiles of numerical
values in a distributed way (using bucketing similar to range faceting),
but we are also in the process of adding distributed sum. Other
distributable calculations are possible, we just haven't needed them yet so
we haven't added them.
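As a rough illustration of the bucketing idea (a sketch with assumed names and fixed bucket lower bounds, not the SOLR-3583 code): each shard builds a histogram over shared bucket boundaries, the aggregator sums the per-bucket counts, and the percentile is read off the merged cumulative distribution.

```python
def merge_buckets(*shard_hists):
    """Sum per-shard histograms ({bucket_lower_bound: count}) into one."""
    merged = {}
    for hist in shard_hists:
        for lower, count in hist.items():
            merged[lower] = merged.get(lower, 0) + count
    return merged

def approx_percentile(hist, p):
    """Walk buckets in order until the cumulative count covers fraction p."""
    total = sum(hist.values())
    target = p * total
    seen = 0
    for lower in sorted(hist):
        seen += hist[lower]
        if seen >= target:
            return lower  # answer is only as precise as the bucket width
    return max(hist)
```

Accuracy is bounded by bucket granularity, which is exactly the trade-off that range-faceting-style bucketing makes in exchange for cheap distributed merging.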

-Trey


On Tue, Feb 11, 2014 at 2:24 PM, Steve Molloy  wrote:

> Trying to make sense of all issues around this and not sure which way to
> go. Both Stats and Analytics component are missing some features I would
> need. Stats cannot limit or order facets for instance, and I'd like to see
> pivot support. On the other end Analytics doesn't support distribution at
> all, which is a must in my case.
>
> So, I guess what I'm trying to ask is whether I should look at extending
> Stats or Analytics? Which way is the community going for future releases?
> (Would share any extension, but that would be useless if done on the wrong
> component).
>
> Thanks,
> Steve
>
>
>


[jira] [Created] (SOLR-5714) You should be able to use one pool of memory for multiple collection's HDFS block caches.

2014-02-11 Thread Mark Miller (JIRA)
Mark Miller created SOLR-5714:
-

 Summary: You should be able to use one pool of memory for multiple 
collection's HDFS block caches.
 Key: SOLR-5714
 URL: https://issues.apache.org/jira/browse/SOLR-5714
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.7


Currently, you have to specify how much direct memory to allocate per 
collection. This can be inefficient, and has some negative consequences - for 
instance, when replicating, many times two HDFS directories will exist for the 
same index briefly, which will double the RAM used for that SolrCore.
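A minimal sketch of the shared-pool idea (hypothetical names, not the actual HDFS block cache API): collections draw slots from one global budget instead of each reserving its own, so a briefly duplicated directory during replication cannot push total usage past the global cap.

```python
class SharedBlockCachePool:
    """Sketch: one fixed pool of cache slots shared by every collection,
    rather than a per-collection direct-memory reservation.
    (Hypothetical names; not the actual HDFS block cache API.)"""
    def __init__(self, total_slots):
        self.total_slots = total_slots
        self.used = {}  # collection name -> slots currently held

    def acquire(self, collection, slots):
        # Grant at most what is left in the global budget.
        free = self.total_slots - sum(self.used.values())
        granted = min(slots, free)
        self.used[collection] = self.used.get(collection, 0) + granted
        return granted

    def release(self, collection):
        # e.g. when the transient replication directory goes away
        return self.used.pop(collection, 0)
```

The second directory created during replication simply gets whatever headroom remains, instead of silently doubling the RAM footprint.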






[jira] [Commented] (SOLR-5655) Create a stopword filter factory that is (re)configurable, and capable of reporting its configuration, via REST API

2014-02-11 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898360#comment-13898360
 ] 

Timothy Potter commented on SOLR-5655:
--

Should have provided some details about the API ...

To activate, you would need to declare a filter in schema.xml as:

<analyzer>
  <tokenizer class="solr.StandardTokenizerFactory"/>
  <filter class="solr.ManagedStopFilterFactory" managed="english"/>
</analyzer>


To see the list of managed stopwords for the "english" handle:

curl -i -v "http://localhost:8984/solr//schema/analysis/stopwords/english"

This would return a JSON object/map that looks like:

{
  "initArgs":{"ignoreCase":"true"},
  "initializedOn":"2014-02-10T16:23:55.247Z",
  "managedList":[
"a",
"an",
"and",
"are",
"as", … ] }

To add some stop words to the set, you'd do:

curl -v -X PUT \
  -H 'Content-type:application/json' \
  --data-binary '["foo"]' \
  'http://localhost:8984/solr//schema/analysis/stopwords/english'

You can also just get a single word, which will raise a 404 if it is not in the 
set:

curl -i -v "http://localhost:8984/solr//schema/analysis/stopwords/english/the"

Lastly, just to be clear, none of the changes made by the API will be "applied" 
to the underlying analysis components (in this case the StopFilter) until the 
core is reloaded.
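The calls above can be wrapped in small helpers; this is a sketch that assumes the same host/port, uses a placeholder collection name (the example URLs elide it), and adds the standard CoreAdmin RELOAD call needed before changes take effect.

```python
import json

SOLR = "http://localhost:8984/solr"   # host/port from the examples above
COLLECTION = "collection1"            # placeholder; the example URLs elide the name

def stopwords_url(handle, word=None):
    """URL for the managed stopwords list, or for one word (404 if absent)."""
    url = f"{SOLR}/{COLLECTION}/schema/analysis/stopwords/{handle}"
    return url if word is None else f"{url}/{word}"

def add_stopwords_body(words):
    """PUT body for adding words, matching the curl --data-binary example."""
    return json.dumps(list(words))

def reload_core_url(core):
    """Changes only apply after a core reload (CoreAdmin RELOAD)."""
    return f"{SOLR}/admin/cores?action=RELOAD&core={core}"
```

These only build the URLs and payloads; any HTTP client can then issue the GET/PUT requests shown in the curl examples.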



> Create a stopword filter factory that is (re)configurable, and capable of 
> reporting its configuration, via REST API
> ---
>
> Key: SOLR-5655
> URL: https://issues.apache.org/jira/browse/SOLR-5655
> Project: Solr
>  Issue Type: Sub-task
>  Components: Schema and Analysis
>Reporter: Steve Rowe
> Attachments: SOLR-5655.patch
>
>
> A stopword filter factory could be (re)configurable via REST API by 
> registering with the RESTManager described in SOLR-5653, and then responding 
> to REST API calls to modify its init params and its stopwords resource file.
> Read-only (GET) REST API calls should also be provided, both for init params 
> and the stopwords resource file.
> It should be possible to add/remove one or more entries in the stopwords 
> resource file.
> We should probably use JSON for the REST request body, as is done in the 
> Schema REST API methods.






[jira] [Resolved] (SOLR-5649) Small ConnectionManager improvements

2014-02-11 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-5649.
---

Resolution: Fixed

Thanks Greg!

> Small ConnectionManager improvements
> 
>
> Key: SOLR-5649
> URL: https://issues.apache.org/jira/browse/SOLR-5649
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.6.1
>Reporter: Gregory Chanan
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 5.0, 4.7
>
> Attachments: SOLR-5649.patch
>
>
> I was just looking through the ConnectionManager and want to jot these down 
> before I forget them.  I'm happy to make a patch if someone thinks it's 
> valuable as well.
> - "clientConnected" doesn't seem to be read, can be eliminated
> - "state" is a private volatile variable, but only used in one function -- 
> seems unlikely private volatile is what is wanted
> - A comment explaining why disconnected() is not called in the case of 
> Expired would be helpful (Expired means we have already waited the timeout 
> period so we want to reject updates right away)






[jira] [Commented] (SOLR-5649) Small ConnectionManager improvements

2014-02-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898324#comment-13898324
 ] 

ASF subversion and git services commented on SOLR-5649:
---

Commit 1567400 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1567400 ]

SOLR-5649: Clean up some minor ConnectionManager issues.

> Small ConnectionManager improvements
> 
>
> Key: SOLR-5649
> URL: https://issues.apache.org/jira/browse/SOLR-5649
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.6.1
>Reporter: Gregory Chanan
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 5.0, 4.7
>
> Attachments: SOLR-5649.patch
>
>
> I was just looking through the ConnectionManager and want to jot these down 
> before I forget them.  I'm happy to make a patch if someone thinks it's 
> valuable as well.
> - "clientConnected" doesn't seem to be read, can be eliminated
> - "state" is a private volatile variable, but only used in one function -- 
> seems unlikely private volatile is what is wanted
> - A comment explaining why disconnected() is not called in the case of 
> Expired would be helpful (Expired means we have already waited the timeout 
> period so we want to reject updates right away)






[jira] [Commented] (SOLR-5649) Small ConnectionManager improvements

2014-02-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898321#comment-13898321
 ] 

ASF subversion and git services commented on SOLR-5649:
---

Commit 1567399 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1567399 ]

SOLR-5649: Clean up some minor ConnectionManager issues.

> Small ConnectionManager improvements
> 
>
> Key: SOLR-5649
> URL: https://issues.apache.org/jira/browse/SOLR-5649
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.6.1
>Reporter: Gregory Chanan
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 5.0, 4.7
>
> Attachments: SOLR-5649.patch
>
>
> I was just looking through the ConnectionManager and want to jot these down 
> before I forget them.  I'm happy to make a patch if someone thinks it's 
> valuable as well.
> - "clientConnected" doesn't seem to be read, can be eliminated
> - "state" is a private volatile variable, but only used in one function -- 
> seems unlikely private volatile is what is wanted
> - A comment explaining why disconnected() is not called in the case of 
> Expired would be helpful (Expired means we have already waited the timeout 
> period so we want to reject updates right away)






Re: Trunk tests fail (on Mac OSX)

2014-02-11 Thread Per Steffensen

I ran BasicDistributedZk2Test
* 5 times with SolrTestCaseJ4.ALLOW_SSL = false: 5 green runs
* 5 times with SolrTestCaseJ4.ALLOW_SSL = true: 2 green and 3 red runs
Do not know if that is enough statistics to conclude anything, but it smells a little like an SSL issue, if you ask me.


Mac OSX Maverick (10.9.1)
java -version
java version "1.7.0_51"
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)

Regards, Per Steffensen

On 11/02/14 17:14, Steve Davids wrote:
A way to isolate whether this is an SSL issue: you can turn off SSL for that specific test by adding:

static {
    ALLOW_SSL = false;
}

to the test class. Although I have seen that test error out every so often in non-SSL mode. Will take a look later on tonight though.


-Steve


On Tue, Feb 11, 2014 at 11:06 AM, Uwe Schindler wrote:


Hi,

this looks like some bug in the MacOSX libc. Sometimes, Java 7
also crashes on OSX in this code part (because it segfaults when
producing the error message). There is already a bug open at
Oracle, but they have no idea how to fix. Currently it is
impossible to run something like Tomcat or Jetty on an OSX server
in production...

https://bugs.openjdk.java.net/browse/JDK-8024045

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de 


> -Original Message-
> From: Per Steffensen [mailto:st...@designware.dk]
> Sent: Tuesday, February 11, 2014 4:46 PM
> To: dev@lucene.apache.org 
> Subject: Trunk tests fail (on Mac OSX)
>
> I am sure you have noticed it from Jenkins, but tests fail (most of the time) on trunk. E.g. I have been running BasicDistributedZk2Test numerous times from Eclipse on my Mac. Revision 1567049 on trunk.
> Sometimes the test is green, but most of the time it is not. It is random exactly where it fails, but there is an example below. I believe that when it fails, it is always with the kind of exception (java.net.SocketException: Invalid argument) shown at the bottom. My guess is that it has to do with SOLR-3854.
>
> Anyone working on fixing this? Any current knowledge about what the problem is? An estimate on when things will be consistently green again on trunk?
>
> Regards, Per Steffensen
>
> - An example of where the test fails -
> org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[https://127.0.0.1:58496/qls/z/collection1, https://127.0.0.1:58500/qls/z/collection1, https://127.0.0.1:58493/qls/z/collection1, https://127.0.0.1:58490/qls/z/collection1]
>  at __randomizedtesting.SeedInfo.seed([F0E9F84409868187:710F765C7ED9E1BB]:0)
>  at org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:352)
>  at org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:635)
>  at org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:90)
>  at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
>  at org.apache.solr.cloud.AbstractFullDistribZkTestBase.queryServer(AbstractFullDistribZkTestBase.java:1356)
>  at org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:561)
>  at org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:543)
>  at org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:522)
>  at org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkQueries(AbstractFullDistribZkTestBase.java:754)
>  at org.apache.solr.cloud.BasicDistributedZk2Test.doTest(BasicDistributedZk2Test.java:107)
>  at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:606)
>  at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
>  at com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
>  at com.carrotsear

[jira] [Commented] (SOLR-1301) Add a Solr contrib that allows for building Solr indexes via Hadoop's Map-Reduce.

2014-02-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898238#comment-13898238
 ] 

ASF subversion and git services commented on SOLR-1301:
---

Commit 1567340 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1567340 ]

SOLR-1301: Implement the set-map-reduce-classpath.sh script.

> Add a Solr contrib that allows for building Solr indexes via Hadoop's 
> Map-Reduce.
> -
>
> Key: SOLR-1301
> URL: https://issues.apache.org/jira/browse/SOLR-1301
> Project: Solr
>  Issue Type: New Feature
>Reporter: Andrzej Bialecki 
>Assignee: Mark Miller
> Fix For: 5.0, 4.7
>
> Attachments: README.txt, SOLR-1301-hadoop-0-20.patch, 
> SOLR-1301-hadoop-0-20.patch, SOLR-1301-maven-intellij.patch, SOLR-1301.patch, 
> SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, 
> SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, 
> SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, 
> SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, 
> SOLR-1301.patch, SolrRecordWriter.java, commons-logging-1.0.4.jar, 
> commons-logging-api-1.0.4.jar, hadoop-0.19.1-core.jar, 
> hadoop-0.20.1-core.jar, hadoop-core-0.20.2-cdh3u3.jar, hadoop.patch, 
> log4j-1.2.15.jar
>
>
> This patch contains  a contrib module that provides distributed indexing 
> (using Hadoop) to Solr EmbeddedSolrServer. The idea behind this module is 
> twofold:
> * provide an API that is familiar to Hadoop developers, i.e. that of 
> OutputFormat
> * avoid unnecessary export and (de)serialization of data maintained on HDFS. 
> SolrOutputFormat consumes data produced by reduce tasks directly, without 
> storing it in intermediate files. Furthermore, by using an 
> EmbeddedSolrServer, the indexing task is split into as many parts as there 
> are reducers, and the data to be indexed is not sent over the network.
> Design
> --
> Key/value pairs produced by reduce tasks are passed to SolrOutputFormat, 
> which in turn uses SolrRecordWriter to write this data. SolrRecordWriter 
> instantiates an EmbeddedSolrServer, and it also instantiates an 
> implementation of SolrDocumentConverter, which is responsible for turning 
> Hadoop (key, value) into a SolrInputDocument. This data is then added to a 
> batch, which is periodically submitted to EmbeddedSolrServer. When a reduce 
> task completes and the OutputFormat is closed, SolrRecordWriter calls 
> commit() and optimize() on the EmbeddedSolrServer.
> The API provides facilities to specify an arbitrary existing solr.home 
> directory, from which the conf/ and lib/ files will be taken.
> This process results in the creation of as many partial Solr home directories 
> as there were reduce tasks. The output shards are placed in the output 
> directory on the default filesystem (e.g. HDFS). Such part-N directories 
> can be used to run N shard servers. Additionally, users can specify the 
> number of reduce tasks, in particular 1 reduce task, in which case the output 
> will consist of a single shard.
> An example application is provided that processes large CSV files and uses 
> this API. It uses a custom CSV processing to avoid (de)serialization overhead.
> This patch relies on hadoop-core-0.19.1.jar - I attached the jar to this 
> issue, you should put it in contrib/hadoop/lib.
> Note: the development of this patch was sponsored by an anonymous contributor 
> and approved for release under Apache License.
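The write path described above (convert each reduce-side (key, value) pair into a document, buffer it, submit batches periodically, commit on close) can be sketched as follows; class and method names are illustrative, not the patch's actual API.

```python
class BatchingRecordWriter:
    """Sketch of the batching flow: convert turns a Hadoop (key, value)
    pair into a document dict; server only needs add(docs) and commit().
    (Illustrative names; not the SOLR-1301 API.)"""
    def __init__(self, server, convert, batch_size=100):
        self.server = server
        self.convert = convert
        self.batch_size = batch_size
        self.batch = []

    def write(self, key, value):
        self.batch.append(self.convert(key, value))
        if len(self.batch) >= self.batch_size:
            self.flush()

    def flush(self):
        # Submit the buffered batch directly, with no intermediate
        # files written to HDFS.
        if self.batch:
            self.server.add(self.batch)
            self.batch = []

    def close(self):
        # On close, flush the tail batch and commit, mirroring the
        # commit()/optimize() calls described for SolrRecordWriter.
        self.flush()
        self.server.commit()
```

Because each reducer owns one such writer over its own embedded server, the index is split into as many shards as there are reduce tasks, and no indexed data crosses the network.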






[jira] [Commented] (SOLR-1301) Add a Solr contrib that allows for building Solr indexes via Hadoop's Map-Reduce.

2014-02-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898232#comment-13898232
 ] 

ASF subversion and git services commented on SOLR-1301:
---

Commit 1567337 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1567337 ]

SOLR-1301: Implement the set-map-reduce-classpath.sh script.

> Add a Solr contrib that allows for building Solr indexes via Hadoop's 
> Map-Reduce.
> -
>
> Key: SOLR-1301
> URL: https://issues.apache.org/jira/browse/SOLR-1301
> Project: Solr
>  Issue Type: New Feature
>Reporter: Andrzej Bialecki 
>Assignee: Mark Miller
> Fix For: 5.0, 4.7
>
> Attachments: README.txt, SOLR-1301-hadoop-0-20.patch, 
> SOLR-1301-hadoop-0-20.patch, SOLR-1301-maven-intellij.patch, SOLR-1301.patch, 
> SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, 
> SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, 
> SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, 
> SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, 
> SOLR-1301.patch, SolrRecordWriter.java, commons-logging-1.0.4.jar, 
> commons-logging-api-1.0.4.jar, hadoop-0.19.1-core.jar, 
> hadoop-0.20.1-core.jar, hadoop-core-0.20.2-cdh3u3.jar, hadoop.patch, 
> log4j-1.2.15.jar
>
>
> This patch contains  a contrib module that provides distributed indexing 
> (using Hadoop) to Solr EmbeddedSolrServer. The idea behind this module is 
> twofold:
> * provide an API that is familiar to Hadoop developers, i.e. that of 
> OutputFormat
> * avoid unnecessary export and (de)serialization of data maintained on HDFS. 
> SolrOutputFormat consumes data produced by reduce tasks directly, without 
> storing it in intermediate files. Furthermore, by using an 
> EmbeddedSolrServer, the indexing task is split into as many parts as there 
> are reducers, and the data to be indexed is not sent over the network.
> Design
> --
> Key/value pairs produced by reduce tasks are passed to SolrOutputFormat, 
> which in turn uses SolrRecordWriter to write this data. SolrRecordWriter 
> instantiates an EmbeddedSolrServer, and it also instantiates an 
> implementation of SolrDocumentConverter, which is responsible for turning 
> Hadoop (key, value) into a SolrInputDocument. This data is then added to a 
> batch, which is periodically submitted to EmbeddedSolrServer. When a reduce 
> task completes and the OutputFormat is closed, SolrRecordWriter calls 
> commit() and optimize() on the EmbeddedSolrServer.
> The API provides facilities to specify an arbitrary existing solr.home 
> directory, from which the conf/ and lib/ files will be taken.
> This process results in the creation of as many partial Solr home directories 
> as there were reduce tasks. The output shards are placed in the output 
> directory on the default filesystem (e.g. HDFS). Such part-N directories 
> can be used to run N shard servers. Additionally, users can specify the 
> number of reduce tasks, in particular 1 reduce task, in which case the output 
> will consist of a single shard.
> An example application is provided that processes large CSV files and uses 
> this API. It uses a custom CSV processing to avoid (de)serialization overhead.
> This patch relies on hadoop-core-0.19.1.jar - I attached the jar to this 
> issue, you should put it in contrib/hadoop/lib.
> Note: the development of this patch was sponsored by an anonymous contributor 
> and approved for release under Apache License.






Stats vs Analytics

2014-02-11 Thread Steve Molloy
Trying to make sense of all the issues around this and not sure which way to go. 
Both the Stats and Analytics components are missing some features I would need. 
Stats cannot limit or order facets, for instance, and I'd like to see pivot 
support. On the other hand, Analytics doesn't support distribution at all, which 
is a must in my case.

So, I guess what I'm trying to ask is whether I should look at extending Stats 
or Analytics? Which way is the community going for future releases? (I would 
share any extension, but that would be useless if done on the wrong component.)

Thanks,
Steve




[jira] [Created] (SOLR-5713) zkCli.sh should extract the webapp if it has not yet been extracted.

2014-02-11 Thread Mark Miller (JIRA)
Mark Miller created SOLR-5713:
-

 Summary: zkCli.sh should extract the webapp if it has not yet been 
extracted.
 Key: SOLR-5713
 URL: https://issues.apache.org/jira/browse/SOLR-5713
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.7









[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 1320 - Still Failing!

2014-02-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1320/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 10382 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test/temp/junit4-J0-20140211_184154_646.syserr
   [junit4] >>> JVM J0: stderr (verbatim) 
   [junit4] java(434,0x14d907000) malloc: *** error for object 0x14d8f5d34: 
pointer being freed was not allocated
   [junit4] *** set a breakpoint in malloc_error_break to debug
   [junit4] <<< JVM J0: EOF 

[...truncated 1 lines...]
   [junit4] ERROR: JVM J0 ended with an exception, command line: 
/Library/Java/JavaVirtualMachines/jdk1.7.0_51.jdk/Contents/Home/jre/bin/java 
-XX:-UseCompressedOops -XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/heapdumps 
-Dtests.prefix=tests -Dtests.seed=4A3F28A79351DD27 -Xmx512M -Dtests.iters= 
-Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random 
-Dtests.postingsformat=random -Dtests.docvaluesformat=random 
-Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random 
-Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=5.0 
-Dtests.cleanthreads=perClass 
-Djava.util.logging.config.file=/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.slow=true 
-Dtests.asserts.gracious=false -Dtests.multiplier=1 -DtempDir=. 
-Djava.io.tmpdir=. 
-Djunit4.tempDir=/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test/temp
 
-Dclover.db.dir=/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/clover/db
 -Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Djava.security.policy=/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/tools/junit4/tests.policy
 -Dlucene.version=5.0-SNAPSHOT -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Dtests.disableHdfs=true -classpath 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/classes/test:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-test-framework/classes/java:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/test-framework/lib/junit4-ant-2.0.13.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test-files:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/test-framework/classes/java:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/codecs/classes/java:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-solrj/classes/java:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/classes/java:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/analysis/common/lucene-analyzers-common-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/analysis/kuromoji/lucene-analyzers-kuromoji-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/analysis/phonetic/lucene-analyzers-phonetic-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/codecs/lucene-codecs-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/highlighter/lucene-highlighter-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/memory/lucene-memory-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/misc/lucene-misc-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/spatial/lucene-spatial-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/expressions/lucene-expressions-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/suggest/lucene-suggest-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/grouping/lucene-grouping-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/queries/lucene-queries-5.0-SNAPSHOT.ja
r:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/queryparser/lucene-queryparser-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/join/lucene-join-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/core/lib/antlr-runtime-3.5.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/core/lib/asm-4.1.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/core/lib/asm-commons-4.1.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/core/lib/commons-cli-1.2.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/core/lib/commons-codec-1.7.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/core/lib/commons-configuration-1.6.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/core/lib/commons-fileupload-1.2.1.jar:

[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0-fcs-b128) - Build # 9440 - Failure!

2014-02-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/9440/
Java: 64bit/jdk1.8.0-fcs-b128 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
REGRESSION:  
org.apache.solr.hadoop.MapReduceIndexerToolArgumentParserTest.testArgsParserHelp

Error Message:
Conversion = '१'

Stack Trace:
java.util.UnknownFormatConversionException: Conversion = '१'
at 
__randomizedtesting.SeedInfo.seed([2A3DFA852D15816E:B6EF50F26A7BED23]:0)
at java.util.Formatter.checkText(Formatter.java:2579)
at java.util.Formatter.parse(Formatter.java:2555)
at java.util.Formatter.format(Formatter.java:2501)
at java.io.PrintWriter.format(PrintWriter.java:905)
at 
net.sourceforge.argparse4j.helper.TextHelper.printHelp(TextHelper.java:206)
at 
net.sourceforge.argparse4j.internal.ArgumentImpl.printHelp(ArgumentImpl.java:247)
at 
net.sourceforge.argparse4j.internal.ArgumentParserImpl.printArgumentHelp(ArgumentParserImpl.java:253)
at 
net.sourceforge.argparse4j.internal.ArgumentParserImpl.printHelp(ArgumentParserImpl.java:279)
at 
org.apache.solr.hadoop.MapReduceIndexerTool$MyArgumentParser$1.run(MapReduceIndexerTool.java:187)
at 
net.sourceforge.argparse4j.internal.ArgumentImpl.run(ArgumentImpl.java:425)
at 
net.sourceforge.argparse4j.internal.ArgumentParserImpl.processArg(ArgumentParserImpl.java:913)
at 
net.sourceforge.argparse4j.internal.ArgumentParserImpl.parseArgs(ArgumentParserImpl.java:810)
at 
net.sourceforge.argparse4j.internal.ArgumentParserImpl.parseArgs(ArgumentParserImpl.java:683)
at 
net.sourceforge.argparse4j.internal.ArgumentParserImpl.parseArgs(ArgumentParserImpl.java:580)
at 
net.sourceforge.argparse4j.internal.ArgumentParserImpl.parseArgs(ArgumentParserImpl.java:573)
at 
org.apache.solr.hadoop.MapReduceIndexerTool$MyArgumentParser.parseArgs(MapReduceIndexerTool.java:505)
at 
org.apache.solr.hadoop.MapReduceIndexerToolArgumentParserTest.testArgsParserHelp(MapReduceIndexerToolArgumentParserTest.java:194)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
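The root cause is easy to reproduce in isolation: with a randomized test locale, help text can end up containing a '%' followed by a non-ASCII digit such as '१' (U+0967, DEVANAGARI DIGIT ONE), which java.util.Formatter rejects. A minimal sketch (not using argparse4j; the sample string is invented here):

```java
import java.util.UnknownFormatConversionException;

public class FormatterRepro {
    /**
     * Feeds the given text to java.util.Formatter and returns the resulting
     * exception message, or null if formatting succeeded.
     */
    static String formatFailure(String text) {
        try {
            String.format(text);
            return null;
        } catch (UnknownFormatConversionException e) {
            return e.getMessage();
        }
    }

    public static void main(String[] args) {
        // Formatter's specifier syntax accepts only ASCII digits and
        // conversion letters after '%', so the character following '%'
        // is reported as an unknown conversion, as in the trace above.
        System.out.println(formatFailure("scales to 100%१ of the input"));
    }
}
```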
 

[jira] [Commented] (SOLR-5234) Allow SolrResourceLoader to load resources from URLs

2014-02-11 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898051#comment-13898051
 ] 

Yonik Seeley commented on SOLR-5234:


Let's take a more specific example and see if that represents a security risk:
A field type in schema.xml containing a SynonymFilter loading a large synonym 
file from a shared NFS mount.
If that use case does not have any security issues, then we should figure out 
how to support it (and things like it).
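For concreteness, that example might look roughly like this in schema.xml. The field type name, the file path, and the use of a URL in {{synonyms}} are all illustrative here; the last is exactly the capability this issue proposes:

```xml
<!-- Illustrative sketch only: a field type whose SynonymFilter would load a
     large synonym file from a shared NFS mount, assuming URL-based resource
     loading were supported. -->
<fieldType name="text_syn" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.SynonymFilterFactory"
            synonyms="file:///mnt/nfs/shared/synonyms.txt"
            ignoreCase="true" expand="true"/>
  </analyzer>
</fieldType>
```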

> Allow SolrResourceLoader to load resources from URLs
> 
>
> Key: SOLR-5234
> URL: https://issues.apache.org/jira/browse/SOLR-5234
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Attachments: SOLR-5234.patch, SOLR-5234.patch
>
>
> This would allow multiple solr instances to share large configuration files.  
> It would also help resolve problems caused by attempting to store >1Mb files 
> in zookeeper.






[jira] [Updated] (SOLR-5423) CSV output doesn't include function field

2014-02-11 Thread Arun Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Kumar updated SOLR-5423:
-

Attachment: SOLR-5423.patch

Attached the updated patch with a supporting unit test that verifies the fix.

> CSV output doesn't include function field
> -
>
> Key: SOLR-5423
> URL: https://issues.apache.org/jira/browse/SOLR-5423
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.4
>Reporter: James Wilson
> Attachments: SOLR-5423.patch
>
>
> Given a schema with 
>
>
>   
> the following query returns no rows:
> http://localhost:8983/solr/collection1/select?q=*%3A*&rows=30&fl=div(price%2Cnumpages)&wt=csv&indent=true
> However, with wt=json or wt=xml it works.






[jira] [Updated] (SOLR-5423) CSV output doesn't include function field

2014-02-11 Thread Arun Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Kumar updated SOLR-5423:
-

Attachment: (was: SOLR-5423.patch)

> CSV output doesn't include function field
> -
>
> Key: SOLR-5423
> URL: https://issues.apache.org/jira/browse/SOLR-5423
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.4
>Reporter: James Wilson
> Attachments: SOLR-5423.patch
>
>
> Given a schema with 
>
>
>   
> the following query returns no rows:
> http://localhost:8983/solr/collection1/select?q=*%3A*&rows=30&fl=div(price%2Cnumpages)&wt=csv&indent=true
> However, with wt=json or wt=xml it works.






[jira] [Commented] (SOLR-5234) Allow SolrResourceLoader to load resources from URLs

2014-02-11 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898027#comment-13898027
 ] 

Alan Woodward commented on SOLR-5234:
-

The problem comes when it's loading something like XML.  The CVE issue Uwe 
linked above is very clear (and slightly terrifying) about how you can break 
into a system running Solr if you can upload some XML somewhere onto the system 
- I had no idea that you could run arbitrary Java code within an XML parser, 
but it turns out you can!

> Allow SolrResourceLoader to load resources from URLs
> 
>
> Key: SOLR-5234
> URL: https://issues.apache.org/jira/browse/SOLR-5234
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Attachments: SOLR-5234.patch, SOLR-5234.patch
>
>
> This would allow multiple solr instances to share large configuration files.  
> It would also help resolve problems caused by attempting to store >1Mb files 
> in zookeeper.






[jira] [Commented] (SOLR-5234) Allow SolrResourceLoader to load resources from URLs

2014-02-11 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898019#comment-13898019
 ] 

Yonik Seeley commented on SOLR-5234:


It doesn't seem like this would be a security issue since it's at a lower level 
(i.e. if an attacker can add something to ZK that points to /etc/passwd, then 
they can already do any number of bad things to the cluster).  It's like saying 
"vi" is a security risk because it can read your files.

> Allow SolrResourceLoader to load resources from URLs
> 
>
> Key: SOLR-5234
> URL: https://issues.apache.org/jira/browse/SOLR-5234
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Attachments: SOLR-5234.patch, SOLR-5234.patch
>
>
> This would allow multiple solr instances to share large configuration files.  
> It would also help resolve problems caused by attempting to store >1Mb files 
> in zookeeper.






[jira] [Commented] (SOLR-5234) Allow SolrResourceLoader to load resources from URLs

2014-02-11 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898000#comment-13898000
 ] 

Uwe Schindler commented on SOLR-5234:
-

This might be a possibility, but {{/etc/passwd}} is also a text file :-) ! We 
have to differentiate here between two factors: "who" calls openResource 
for "what". If "who" is *not* coming from the network (e.g. it's triggered by a 
filename received via REST API), it is fine to load text files. But not if a 
velocity template tries to load {{/etc/passwd}} and send it to the client, or if 
an input file xincludes some content.
The "what" should also be restricted: don't load XML files (unless you disable 
xinclude and external entities in XML) or other active content from anywhere.
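As a sketch of what "disable xinclude and external entities" looks like against the standard JAXP API (the feature URIs are the Xerces ones honored by the JDK's built-in parser; this is illustrative hardening, not code from Solr):

```java
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;

public class SafeXml {
    /** Returns a DocumentBuilder that refuses DOCTYPEs, external entities,
     *  and XInclude, so a parsed file cannot pull in /etc/passwd and friends. */
    static DocumentBuilder hardenedBuilder() throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        // Reject any DOCTYPE outright; this alone blocks entity-based attacks.
        dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        // Belt and braces: also switch off external entity resolution.
        dbf.setFeature("http://xml.org/sax/features/external-general-entities", false);
        dbf.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
        dbf.setXIncludeAware(false);
        dbf.setExpandEntityReferences(false);
        return dbf.newDocumentBuilder();
    }
}
```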

> Allow SolrResourceLoader to load resources from URLs
> 
>
> Key: SOLR-5234
> URL: https://issues.apache.org/jira/browse/SOLR-5234
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Attachments: SOLR-5234.patch, SOLR-5234.patch
>
>
> This would allow multiple solr instances to share large configuration files.  
> It would also help resolve problems caused by attempting to store >1Mb files 
> in zookeeper.






[jira] [Commented] (SOLR-5234) Allow SolrResourceLoader to load resources from URLs

2014-02-11 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13897993#comment-13897993
 ] 

Alan Woodward commented on SOLR-5234:
-

Yeah, the reason I'd not committed it was due to SOLR-4882.

Maybe there's a workaround with the allow.unsafe.resourceloading environment 
variable?  Or we could add a parameter to SolrResourceLoader.openResource() 
that says we allow unsafe loading for this call?  That way the various stemmers 
and other analysis components that are just loading text files can load from 
anywhere, but XSLT or anything else that loads and then runs executable code is 
suitably sandboxed.
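A hypothetical sketch of that second suggestion (SolrResourceLoader has no such overload today; the names and the crude "is it external?" check are invented purely to illustrate the per-call opt-in):

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;

/** Hypothetical illustration only: callers that merely read text data
 *  (stemmers, synonym filters) opt in to unsafe locations, while callers
 *  that execute loaded content (XSLT etc.) stay sandboxed by default. */
public class ResourceLoaderSketch {
    public InputStream openResource(String name, boolean allowUnsafe) throws IOException {
        // Crude stand-in for "outside the instance dir": URL or absolute path.
        boolean external = name.contains("://") || name.startsWith("/");
        if (external && !allowUnsafe) {
            throw new IOException("Refusing unsafe resource: " + name);
        }
        return name.contains("://")
                ? new URL(name).openStream()              // remote/shared config
                : Files.newInputStream(Paths.get(name));  // instance-dir file
    }
}
```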

> Allow SolrResourceLoader to load resources from URLs
> 
>
> Key: SOLR-5234
> URL: https://issues.apache.org/jira/browse/SOLR-5234
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Attachments: SOLR-5234.patch, SOLR-5234.patch
>
>
> This would allow multiple solr instances to share large configuration files.  
> It would also help resolve problems caused by attempting to store >1Mb files 
> in zookeeper.






[jira] [Comment Edited] (SOLR-5234) Allow SolrResourceLoader to load resources from URLs

2014-02-11 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13897988#comment-13897988
 ] 

Uwe Schindler edited comment on SOLR-5234 at 2/11/14 4:31 PM:
--

Sorry, if we allowed loading arbitrary files from outside the instance 
directory via SolrResourceLoader, we would re-add the CVE security issues closed 
recently - so this is a no-go, see SOLR-4882.

The only possibility to add this would be to restrict the feature strictly to a 
URL prefix via config.


was (Author: thetaphi):
Sorry, if we allowed loading arbitrary files from outside the instance 
directory via SolrResourceLoader, we would re-add the CVE security issues closed 
recently - so this is a no-go, see SOLR-4882.

> Allow SolrResourceLoader to load resources from URLs
> 
>
> Key: SOLR-5234
> URL: https://issues.apache.org/jira/browse/SOLR-5234
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Attachments: SOLR-5234.patch, SOLR-5234.patch
>
>
> This would allow multiple solr instances to share large configuration files.  
> It would also help resolve problems caused by attempting to store >1Mb files 
> in zookeeper.






[jira] [Commented] (SOLR-5234) Allow SolrResourceLoader to load resources from URLs

2014-02-11 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13897988#comment-13897988
 ] 

Uwe Schindler commented on SOLR-5234:
-

Sorry, if we allowed loading arbitrary files from outside the instance 
directory via SolrResourceLoader, we would re-add the CVE security issues closed 
recently - so this is a no-go, see SOLR-4882.

> Allow SolrResourceLoader to load resources from URLs
> 
>
> Key: SOLR-5234
> URL: https://issues.apache.org/jira/browse/SOLR-5234
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Attachments: SOLR-5234.patch, SOLR-5234.patch
>
>
> This would allow multiple solr instances to share large configuration files.  
> It would also help resolve problems caused by attempting to store >1Mb files 
> in zookeeper.






[jira] [Commented] (SOLR-3854) SolrCloud does not work with https

2014-02-11 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13897983#comment-13897983
 ] 

Mark Miller commented on SOLR-3854:
---

As reported on the email list, using SSL with "BasicDistributedZk2Test" on OSX 
causes the following exception:

{noformat}
Caused by: java.net.SocketException: Invalid argument
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at sun.security.ssl.InputRecord.readFully(InputRecord.java:442)
at sun.security.ssl.InputRecord.read(InputRecord.java:480)
at 
sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:927)
at 
sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:884)
at sun.security.ssl.AppInputStream.read(AppInputStream.java:102)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
at 
org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:260)
at 
org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
at 
org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:271)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:123)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:682)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:486)
at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:395)
... 11 more
{noformat}

> SolrCloud does not work with https
> --
>
> Key: SOLR-3854
> URL: https://issues.apache.org/jira/browse/SOLR-3854
> Project: Solr
>  Issue Type: Bug
>Reporter: Sami Siren
>Assignee: Mark Miller
> Fix For: 5.0, 4.7
>
> Attachments: SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, 
> SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, 
> SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, 
> SOLR-3854v2.patch
>
>
> There are a few places in the current codebase that assume http is used. This 
> prevents using https when running Solr in cloud mode.






[jira] [Commented] (SOLR-3854) SolrCloud does not work with https

2014-02-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13897977#comment-13897977
 ] 

ASF subversion and git services commented on SOLR-3854:
---

Commit 1567203 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1567203 ]

SOLR-3854 : Disable SSL on OSX for this test for now.

> SolrCloud does not work with https
> --
>
> Key: SOLR-3854
> URL: https://issues.apache.org/jira/browse/SOLR-3854
> Project: Solr
>  Issue Type: Bug
>Reporter: Sami Siren
>Assignee: Mark Miller
> Fix For: 5.0, 4.7
>
> Attachments: SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, 
> SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, 
> SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, 
> SOLR-3854v2.patch
>
>
> There are a few places in the current codebase that assume http is used. This 
> prevents using https when running Solr in cloud mode.






Re: Trunk tests fail (on Mac OSX)

2014-02-11 Thread Steve Davids
A way to isolate if this is an SSL issue, you can turn off SSL for that
specific test by adding:

static {
ALLOW_SSL = false;
}

to the test class. Although, I have seen that test error out every so often
in non-SSL mode. Will take a look later on tonight though.

-Steve


On Tue, Feb 11, 2014 at 11:06 AM, Uwe Schindler  wrote:

> Hi,
>
> this looks like some bug in the MacOSX libc. Sometimes, Java 7 also
> crashes on OSX in this code part (because it segfaults when producing the
> error message). There is already a bug open at Oracle, but they have no
> idea how to fix. Currently it is impossible to run something like Tomcat or
> Jetty on an OSX server in production...
>
> https://bugs.openjdk.java.net/browse/JDK-8024045
>
> Uwe
>
> -
> Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: u...@thetaphi.de
>
>
> > -Original Message-
> > From: Per Steffensen [mailto:st...@designware.dk]
> > Sent: Tuesday, February 11, 2014 4:46 PM
> > To: dev@lucene.apache.org
> > Subject: Trunk tests fail (on Mac OSX)
> >
> > I am sure you have noticed it from Jenkins, but tests fail (most of the
> > time) on trunk. E.g. I have been running BasicDistributedZk2Test numerous
> > times from Eclipse on my Mac. Revision 1567049 on trunk.
> > Sometimes the test is green, but most of the time it is not. It is random
> > exactly where it fails, but there is an example below. I believe that when
> > it fails it is always with the kind of exception
> > (java.net.SocketException: Invalid argument) shown at the bottom. My
> > guess is that it has to do with SOLR-3854.
> >
> > Anyone working on fixing this? Any current knowledge about what the
> > problem is? An estimate on when things are consistently green again on
> > trunk?
> >
> > Regards, Per Steffensen
> >
> > - An example of where the test fails
> -
> > org.apache.solr.client.solrj.SolrServerException: No live SolrServers
> available
> > to handle this request:[https://127.0.0.1:58496/qls/z/collection1,
> > https://127.0.0.1:58500/qls/z/collection1,
> > https://127.0.0.1:58493/qls/z/collection1,
> > https://127.0.0.1:58490/qls/z/collection1]
> >  at
> > __randomizedtesting.SeedInfo.seed([F0E9F84409868187:710F765C7ED9E1BB
> > ]:0)
> >  at
> >
> org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.j
> > ava:352)
> >  at
> >
> org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.jav
> > a:635)
> >  at
> > org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.ja
> > va:90)
> >  at
> org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
> >  at
> >
> org.apache.solr.cloud.AbstractFullDistribZkTestBase.queryServer(AbstractFul
> > lDistribZkTestBase.java:1356)
> >  at
> > org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearc
> > hTestCase.java:561)
> >  at
> > org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearc
> > hTestCase.java:543)
> >  at
> > org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearc
> > hTestCase.java:522)
> >  at
> >
> org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkQueries(AbstractF
> > ullDistribZkTestBase.java:754)
> >  at
> >
> org.apache.solr.cloud.BasicDistributedZk2Test.doTest(BasicDistributedZk2Te
> > st.java:107)
> >  at
> >
> org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistri
> > butedSearchTestCase.java:868)
> >  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >  at
> > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.j
> > ava:57)
> >  at
> > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces
> > sorImpl.java:43)
> >  at java.lang.reflect.Method.invoke(Method.java:606)
> >  at
> > com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(Randomize
> > dRunner.java:1559)
> >  at
> > com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(Rando
> > mizedRunner.java:79)
> >  at
> > com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(Rando
> > mizedRunner.java:737)
> >  at
> > com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(Rando
> > mizedRunner.java:773)
> >  at
> > com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(Rando
> > mizedRunner.java:787)
> >  at
> > com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.
> > evaluate(SystemPropertiesRestoreRule.java:53)
> >  at
> > org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRule
> > SetupTeardownChained.java:50)
> >  at
> >
> org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCa
> > cheSanity.java:51)
> >  at
> > org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeA
> > fterRule.java:46)
> >  at
> > com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1
> > .evalu

[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.7.0_60-ea-b04) - Build # 3769 - Still Failing!

2014-02-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/3769/
Java: 32bit/jdk1.7.0_60-ea-b04 -server -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.client.solrj.impl.BasicHttpSolrServerTest.testConnectionRefused

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([3D2C1AEEDF2D88D4:1285BEFF08A2C7D0]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.client.solrj.impl.BasicHttpSolrServerTest.testConnectionRefused(BasicHttpSolrServerTest.java:159)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:744)




Build Log:
[...truncated 11412 lines...]
   [junit4] Suite: org.apache.solr.client.solrj.impl.BasicHttpSo

[jira] [Commented] (SOLR-5146) Figure out what it would take for lazily-loaded cores to play nice with SolrCloud

2014-02-11 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13897965#comment-13897965
 ] 

Erick Erickson commented on SOLR-5146:
--

The tantalizing bit is that it could "just work". Anything that uses "getCore" (as I 
remember) will autoload the core and carry out the request. That includes updates as 
well as queries. The trap here would be the time involved: say some replicas had to 
be loaded, would a request time out? And in any lightly-loaded system, the chance 
that some of the replicas are down increases, of course.

I haven't really thought it out, but the approach most true to the notion of a 
single machine with lots of cores seems to be a cluster-wide sense of what should 
be loaded. In fact, I might think of it as the collection, rather than the shard, 
being transient. I doubt one could use ZK for this, as it would require that every 
request to every node get some info from ZK.

Hmmm, and my ignorance of ZK is showing, but is it possible for ZK to raise 
"load/unload yourself" events to the cluster? Mostly spinning half-baked ideas here; 
you know the ZK code far, far better than I do...

What fun!
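The "cluster-wide sense of what should be loaded" idea above can be sketched in plain Java. To be clear, everything here is hypothetical (these are not Solr or ZooKeeper APIs): a shared registry that raises load/unload events to subscribed nodes, as a stand-in for whatever coordination mechanism (ZK or otherwise) would actually carry the events.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical: how a node might react to cluster-wide load/unload decisions.
interface CoreLifecycleListener {
    void onLoad(String coreName);
    void onUnload(String coreName);
}

// Hypothetical cluster-wide registry: whoever decides what should be loaded
// publishes the decision; every subscribed node receives the event.
final class TransientCoreRegistry {
    private final Map<String, Boolean> loaded = new ConcurrentHashMap<>();
    private final List<CoreLifecycleListener> listeners = new CopyOnWriteArrayList<>();

    void subscribe(CoreLifecycleListener l) { listeners.add(l); }

    void markLoaded(String coreName) {
        // Only fire an event on the first load; duplicates are ignored.
        if (loaded.putIfAbsent(coreName, Boolean.TRUE) == null) {
            for (CoreLifecycleListener l : listeners) l.onLoad(coreName);
        }
    }

    void markUnloaded(String coreName) {
        if (loaded.remove(coreName) != null) {
            for (CoreLifecycleListener l : listeners) l.onUnload(coreName);
        }
    }
}
```

In a real deployment the registry itself would have to be distributed (which is exactly the open question about ZK above); this only illustrates the event shape.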

> Figure out what it would take for lazily-loaded cores to play nice with 
> SolrCloud
> -
>
> Key: SOLR-5146
> URL: https://issues.apache.org/jira/browse/SOLR-5146
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.5, 5.0
>Reporter: Erick Erickson
>Assignee: Shalin Shekhar Mangar
> Fix For: 5.0
>
>
> The whole lazy-load core thing was implemented with non-SolrCloud use-cases 
> in mind. There are several user-list threads that ask about using lazy cores 
> with SolrCloud, especially in multi-tenant use-cases.
> This is a marker JIRA to investigate what it would take to make lazy-load 
> cores play nice with SolrCloud. It's especially interesting how this all 
> works with shards, replicas, leader election, recovery, etc.
> NOTE: This is pretty much totally unexplored territory. It may be that a few 
> trivial modifications are all that's needed. OTOH, It may be that we'd have 
> to rip apart SolrCloud to handle this case. Until someone dives into the 
> code, we don't know.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: Trunk tests fail (on Mac OSX)

2014-02-11 Thread Uwe Schindler
Hi,

this looks like some bug in the Mac OS X libc. Sometimes, Java 7 also crashes on 
OS X in this code path (because it segfaults while producing the error message). 
There is already a bug open at Oracle, but they have no idea how to fix it. 
Currently it is impossible to run something like Tomcat or Jetty on an OS X 
server in production...

https://bugs.openjdk.java.net/browse/JDK-8024045

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


> -Original Message-
> From: Per Steffensen [mailto:st...@designware.dk]
> Sent: Tuesday, February 11, 2014 4:46 PM
> To: dev@lucene.apache.org
> Subject: Trunk tests fail (on Mac OSX)
> 
> I am sure you have noticed it from Jenkins, but tests fail (most of the
> time) on trunk. E.g. I have been running BasicDistributedZk2Test numerous
> times from Eclipse on my Mac. Revision 1567049 on trunk.
> Sometimes the test is green, but most of the time it is not. It is random
> exactly where it fails, but there is an example below. I believe that when it
> fails, it is always with the kind of exception
> (java.net.SocketException: Invalid argument) shown at the bottom. My
> guess is that it has to do with SOLR-3854.
> 
> Anyone working on fixing this? Any current knowledge about what the
> problem is? An estimate on when things are consistently green again on
> trunk?
> 
> Regards, Per Steffensen
> 
> - An example of where the test fails -
> org.apache.solr.client.solrj.SolrServerException: No live SolrServers 
> available
> to handle this request:[https://127.0.0.1:58496/qls/z/collection1,
> https://127.0.0.1:58500/qls/z/collection1,
> https://127.0.0.1:58493/qls/z/collection1,
> https://127.0.0.1:58490/qls/z/collection1]
>  at
> __randomizedtesting.SeedInfo.seed([F0E9F84409868187:710F765C7ED9E1BB
> ]:0)
>  at
> org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.j
> ava:352)
>  at
> org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.jav
> a:635)
>  at
> org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.ja
> va:90)
>  at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
>  at
> org.apache.solr.cloud.AbstractFullDistribZkTestBase.queryServer(AbstractFul
> lDistribZkTestBase.java:1356)
>  at
> org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearc
> hTestCase.java:561)
>  at
> org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearc
> hTestCase.java:543)
>  at
> org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearc
> hTestCase.java:522)
>  at
> org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkQueries(AbstractF
> ullDistribZkTestBase.java:754)
>  at
> org.apache.solr.cloud.BasicDistributedZk2Test.doTest(BasicDistributedZk2Te
> st.java:107)
>  at
> org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistri
> butedSearchTestCase.java:868)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.j
> ava:57)
>  at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces
> sorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:606)
>  at
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(Randomize
> dRunner.java:1559)
>  at
> com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(Rando
> mizedRunner.java:79)
>  at
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(Rando
> mizedRunner.java:737)
>  at
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(Rando
> mizedRunner.java:773)
>  at
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(Rando
> mizedRunner.java:787)
>  at
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.
> evaluate(SystemPropertiesRestoreRule.java:53)
>  at
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRule
> SetupTeardownChained.java:50)
>  at
> org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCa
> cheSanity.java:51)
>  at
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeA
> fterRule.java:46)
>  at
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1
> .evaluate(SystemPropertiesInvariantRule.java:55)
>  at
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleTh
> readAndTestName.java:49)
>  at
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRule
> IgnoreAfterMaxFailures.java:70)
>  at
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure
> .java:48)
>  at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(Stat
> ementAdapter.java:36)
>  at
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.
> run(ThreadLeakC

[jira] [Updated] (LUCENE-5440) Add LongFixedBitSet and replace usage of OpenBitSet

2014-02-11 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera updated LUCENE-5440:
---

Attachment: LUCENE-5440-solr.patch

Patch resolves an error I had w/ grouping (under Solr) and improves FBS 
assertion errors. I also returned the internal annotation to both FBS and 
LongBitSet, until we resolve that matter on LUCENE-5441. Another thing -- I 
added an assert to FBS.or/xor for when the given set.length() is bigger than the 
current one -- previously we silently discarded bits!

All Solr tests pass now (except testDistribSearch, which seems to fail consistently 
of late). I'd appreciate it if someone could give it a second look.
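The or() assert described above can be illustrated with a minimal self-contained sketch. This is a hypothetical class, not the actual patch or the real FixedBitSet: it just shows the failure mode being guarded against, where a longer incoming set would otherwise have its extra bits silently dropped.

```java
// Hypothetical sketch of an or() that fails loudly instead of silently
// discarding bits beyond this set's capacity (run with -ea to enable asserts).
final class BitSetSketch {
    private final long[] words; // 64 bits per word
    private final int numBits;

    BitSetSketch(int numBits) {
        this.numBits = numBits;
        this.words = new long[(numBits + 63) >>> 6];
    }

    int length() { return numBits; }

    void set(int index) {
        assert index >= 0 && index < numBits : "index=" + index;
        words[index >>> 6] |= 1L << (index & 63);
    }

    boolean get(int index) {
        assert index >= 0 && index < numBits : "index=" + index;
        return (words[index >>> 6] & (1L << (index & 63))) != 0;
    }

    void or(BitSetSketch other) {
        // The check described in the comment above: a longer incoming set
        // means some of its bits cannot be represented here -- fail loudly.
        assert other.length() <= length()
            : "other.length()=" + other.length() + " > length()=" + length();
        for (int i = 0; i < other.words.length; i++) {
            words[i] |= other.words[i];
        }
    }
}
```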

> Add LongFixedBitSet and replace usage of OpenBitSet
> ---
>
> Key: LUCENE-5440
> URL: https://issues.apache.org/jira/browse/LUCENE-5440
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-5440-solr.patch, LUCENE-5440-solr.patch, 
> LUCENE-5440.patch, LUCENE-5440.patch, LUCENE-5440.patch, LUCENE-5440.patch, 
> LUCENE-5440.patch
>
>
> Spinoff from here: http://lucene.markmail.org/thread/35gw3amo53dsqsqj. I 
> wrote a LongFixedBitSet which behaves like FixedBitSet, but allows managing 
> more than 2.1B bits. It overcomes some issues I've encountered with 
> OpenBitSet, such as the use of set/fastSet as well as the implementation of 
> DocIdSet. I'll post a patch shortly and describe it in more detail.
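For illustration, the core idea behind a long-indexed bit set can be sketched as follows. This is a hypothetical class, not the attached patch: it uses the same word/bit arithmetic as an int-indexed set, but takes a long index, so more than Integer.MAX_VALUE (~2.1B) bits are addressable (subject to heap, since the backing array still holds at most ~2^31 words).

```java
// Hypothetical sketch of a long-indexed fixed-size bit set.
final class LongIndexedBitSet {
    private final long[] words;
    private final long numBits;

    LongIndexedBitSet(long numBits) {
        this.numBits = numBits;
        this.words = new long[(int) ((numBits + 63) >>> 6)];
    }

    long length() { return numBits; }

    // The word index is computed in long arithmetic before narrowing to int,
    // which is exactly what an int-indexed set cannot do past 2.1B bits.
    void set(long index) {
        words[(int) (index >>> 6)] |= 1L << (index & 63);
    }

    boolean get(long index) {
        return (words[(int) (index >>> 6)] & (1L << (index & 63))) != 0;
    }
}
```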






[jira] [Commented] (LUCENE-5440) Add LongFixedBitSet and replace usage of OpenBitSet

2014-02-11 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13897949#comment-13897949
 ] 

Shai Erera commented on LUCENE-5440:


bq. I don't think FixedBitSet should be external.

+1. I mistakenly removed the \@lucene.internal annotation; I will add it back in 
the new patch. Our API isn't FixedBitSet, it's Filter/DocIdSet. And we offer 
DocIdBitSet (external) for use w/ Java's BitSet. It's not true that users cannot 
write their own Filters - they can write them using DocIdBitSet, or take the risk 
and use the internal FixedBitSet. I wouldn't want to see FBS stay w/ that name 
just because there was once OpenBitSet - renaming (just like removing 'extends 
DocIdSet') is a trivial change to your code when you migrate...

> Add LongFixedBitSet and replace usage of OpenBitSet
> ---
>
> Key: LUCENE-5440
> URL: https://issues.apache.org/jira/browse/LUCENE-5440
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-5440-solr.patch, LUCENE-5440.patch, 
> LUCENE-5440.patch, LUCENE-5440.patch, LUCENE-5440.patch, LUCENE-5440.patch
>
>
> Spinoff from here: http://lucene.markmail.org/thread/35gw3amo53dsqsqj. I 
> wrote a LongFixedBitSet which behaves like FixedBitSet, but allows managing 
> more than 2.1B bits. It overcomes some issues I've encountered with 
> OpenBitSet, such as the use of set/fastSet as well as the implementation of 
> DocIdSet. I'll post a patch shortly and describe it in more detail.






[jira] [Commented] (LUCENE-5441) Decouple DocIdSet from OpenBitSet and FixedBitSet

2014-02-11 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13897947#comment-13897947
 ] 

Michael McCandless commented on LUCENE-5441:


I think we should put the @lucene.internal back onto FixedBitSet; I don't think 
it should have been removed in LUCENE-5440 (see my comment there: 
https://issues.apache.org/jira/browse/LUCENE-5440?focusedCommentId=13897826&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13897826
 )

+1 to rename FBS to IntBitSet.

> Decouple DocIdSet from OpenBitSet and FixedBitSet
> -
>
> Key: LUCENE-5441
> URL: https://issues.apache.org/jira/browse/LUCENE-5441
> Project: Lucene - Core
>  Issue Type: Task
>  Components: core/other
>Affects Versions: 4.6.1
>Reporter: Uwe Schindler
> Fix For: 5.0
>
> Attachments: LUCENE-5441.patch, LUCENE-5441.patch, LUCENE-5441.patch
>
>
> Back from the times of Lucene 2.4, when DocIdSet was introduced, we somehow 
> kept the stupid "filters can return a BitSet directly" behavior in the code. So 
> lots of Filters return just a FixedBitSet, because DocIdSet is the superclass 
> (ideally an interface) of FixedBitSet.
> We should decouple that and *not* implement that abstract interface directly 
> with FixedBitSet. This leads to bugs, e.g. in BlockJoin, because it used Filters 
> in a wrong way, assuming they always return bit sets. But some filters actually 
> don't do this.
> I propose to let FixedBitSet (only in trunk, because that is a major backwards 
> break) just have a method {{asDocIdSet()}} that returns an anonymous 
> instance of DocIdSet: bits() returns the FixedBitSet itself, iterator() 
> returns a new Iterator (like it always did), and the cost/cacheable methods 
> return static values.
> Filters in trunk would need to be changed like that:
> {code:java}
> FixedBitSet bits = 
> ...
> return bits;
> {code}
> becomes:
> {code:java}
> FixedBitSet bits = 
> ...
> return bits.asDocIdSet();
> {code}
> As this method returns an anonymous DocIdSet, calling code can no longer 
> rely on, or check, whether the implementation behind it is a FixedBitSet.






[jira] [Commented] (LUCENE-5440) Add LongFixedBitSet and replace usage of OpenBitSet

2014-02-11 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13897946#comment-13897946
 ] 

Uwe Schindler commented on LUCENE-5440:
---

bq. I don't think FixedBitSet should be external. Our purpose here is to 
provide search APIs, not bitset utility APIs, and we should not have to commit 
to API back compatibility for this class or other such utility classes.

I disagree: if that is the case, we would have to make more APIs internal, and also 
hide stuff like AtomicReader, because it's not useful to the end user. FixedBitSet 
is currently the only way for users to write their own filters, unless they write 
their own DocIdSets. So to support filtering results, users have to implement the 
DocIdSet, Bits and DISI interfaces (which are public), so at least one 
implementation (the recommended one) should be public and stable.

> Add LongFixedBitSet and replace usage of OpenBitSet
> ---
>
> Key: LUCENE-5440
> URL: https://issues.apache.org/jira/browse/LUCENE-5440
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-5440-solr.patch, LUCENE-5440.patch, 
> LUCENE-5440.patch, LUCENE-5440.patch, LUCENE-5440.patch, LUCENE-5440.patch
>
>
> Spinoff from here: http://lucene.markmail.org/thread/35gw3amo53dsqsqj. I 
> wrote a LongFixedBitSet which behaves like FixedBitSet, but allows managing 
> more than 2.1B bits. It overcomes some issues I've encountered with 
> OpenBitSet, such as the use of set/fastSet as well as the implementation of 
> DocIdSet. I'll post a patch shortly and describe it in more detail.






[jira] [Commented] (LUCENE-5440) Add LongFixedBitSet and replace usage of OpenBitSet

2014-02-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13897925#comment-13897925
 ] 

ASF subversion and git services commented on LUCENE-5440:
-

Commit 1567185 from [~shaie] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1567185 ]

LUCENE-5440: add back elasticity assumptions

> Add LongFixedBitSet and replace usage of OpenBitSet
> ---
>
> Key: LUCENE-5440
> URL: https://issues.apache.org/jira/browse/LUCENE-5440
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-5440-solr.patch, LUCENE-5440.patch, 
> LUCENE-5440.patch, LUCENE-5440.patch, LUCENE-5440.patch, LUCENE-5440.patch
>
>
> Spinoff from here: http://lucene.markmail.org/thread/35gw3amo53dsqsqj. I 
> wrote a LongFixedBitSet which behaves like FixedBitSet, but allows managing 
> more than 2.1B bits. It overcomes some issues I've encountered with 
> OpenBitSet, such as the use of set/fastSet as well as the implementation of 
> DocIdSet. I'll post a patch shortly and describe it in more detail.






Trunk tests fail (on Mac OSX)

2014-02-11 Thread Per Steffensen
I am sure you have noticed it from Jenkins, but tests fail (most of the 
time) on trunk. E.g. I have been running BasicDistributedZk2Test 
numerous times from Eclipse on my Mac. Revision 1567049 on trunk. 
Sometimes the test is green, but most of the time it is not. It is 
random exactly where it fails, but there is an example below. I believe 
that when it fails, it is always with the kind of exception 
(java.net.SocketException: Invalid argument) shown at the bottom. My 
guess is that it has to do with SOLR-3854.


Anyone working on fixing this? Any current knowledge about what the 
problem is? An estimate on when things are consistently green again on 
trunk?


Regards, Per Steffensen

- An example of where the test fails -
org.apache.solr.client.solrj.SolrServerException: No live SolrServers 
available to handle this 
request:[https://127.0.0.1:58496/qls/z/collection1, 
https://127.0.0.1:58500/qls/z/collection1, 
https://127.0.0.1:58493/qls/z/collection1, 
https://127.0.0.1:58490/qls/z/collection1]
at 
__randomizedtesting.SeedInfo.seed([F0E9F84409868187:710F765C7ED9E1BB]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:352)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:635)
at 
org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:90)

at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.queryServer(AbstractFullDistribZkTestBase.java:1356)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:561)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:543)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:522)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkQueries(AbstractFullDistribZkTestBase.java:754)
at 
org.apache.solr.cloud.BasicDistributedZk2Test.doTest(BasicDistributedZk2Test.java:107)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAf

Re: [JENKINS] Lucene-4x-Linux-Java6-64-test-only - Build # 12501 - Failure!

2014-02-11 Thread Shai Erera
I committed a fix.

Shai


On Tue, Feb 11, 2014 at 4:57 PM,  wrote:

> Build: builds.flonkings.com/job/Lucene-4x-Linux-Java6-64-test-only/12501/
>
> 1 tests failed.
> REGRESSION:
>  
> org.apache.lucene.search.TestSloppyPhraseQuery2.testRandomIncreasingSloppiness
>
> Error Message:
> index=2
>
> Stack Trace:
> java.lang.AssertionError: index=2
> at
> __randomizedtesting.SeedInfo.seed([8E6186F1E77310CD:8E5950100B61AF76]:0)
> at org.apache.lucene.util.FixedBitSet.get(FixedBitSet.java:193)
> at
> org.apache.lucene.search.SloppyPhraseScorer.advanceRpts(SloppyPhraseScorer.java:176)
> at
> org.apache.lucene.search.SloppyPhraseScorer.phraseFreq(SloppyPhraseScorer.java:110)
> at
> org.apache.lucene.search.SloppyPhraseScorer.advance(SloppyPhraseScorer.java:588)
> at
> org.apache.lucene.search.SloppyPhraseScorer.nextDoc(SloppyPhraseScorer.java:567)
> at
> org.apache.lucene.index.FilterAtomicReader$FilterDocsEnum.nextDoc(FilterAtomicReader.java:245)
> at
> org.apache.lucene.index.AssertingAtomicReader$AssertingDocsEnum.nextDoc(AssertingAtomicReader.java:252)
> at
> org.apache.lucene.search.AssertingScorer.nextDoc(AssertingScorer.java:175)
> at org.apache.lucene.search.Scorer.score(Scorer.java:64)
> at
> org.apache.lucene.search.AssertingScorer.score(AssertingScorer.java:136)
> at
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:621)
> at
> org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:93)
> at
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:491)
> at
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:448)
> at
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:281)
> at
> org.apache.lucene.search.SearchEquivalenceTestBase.assertSubsetOf(SearchEquivalenceTestBase.java:183)
> at
> org.apache.lucene.search.SearchEquivalenceTestBase.assertSubsetOf(SearchEquivalenceTestBase.java:161)
> at
> org.apache.lucene.search.TestSloppyPhraseQuery2.testRandomIncreasingSloppiness(TestSloppyPhraseQuery2.java:183)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
> at
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
> at
> org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
> at
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> at
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
> at
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
> at
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
> at
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
> at
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
> at
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
> at
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> at
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
> at
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(Sys

[jira] [Commented] (LUCENE-5440) Add LongFixedBitSet and replace usage of OpenBitSet

2014-02-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13897923#comment-13897923
 ] 

ASF subversion and git services commented on LUCENE-5440:
-

Commit 1567183 from [~shaie] in branch 'dev/trunk'
[ https://svn.apache.org/r1567183 ]

LUCENE-5440: add back elasticity assumptions

> Add LongFixedBitSet and replace usage of OpenBitSet
> ---
>
> Key: LUCENE-5440
> URL: https://issues.apache.org/jira/browse/LUCENE-5440
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-5440-solr.patch, LUCENE-5440.patch, 
> LUCENE-5440.patch, LUCENE-5440.patch, LUCENE-5440.patch, LUCENE-5440.patch
>
>
> Spinoff from here: http://lucene.markmail.org/thread/35gw3amo53dsqsqj. I 
> wrote a LongFixedBitSet which behaves like FixedBitSet, but allows managing 
> more than 2.1B bits. It overcomes some issues I've encountered with 
> OpenBitSet, such as the use of set/fastSet as well as the implementation of 
> DocIdSet. I'll post a patch shortly and describe it in more detail.






[JENKINS] Lucene-4x-Linux-Java6-64-test-only - Build # 12501 - Failure!

2014-02-11 Thread builder
Build: builds.flonkings.com/job/Lucene-4x-Linux-Java6-64-test-only/12501/

1 tests failed.
REGRESSION:  
org.apache.lucene.search.TestSloppyPhraseQuery2.testRandomIncreasingSloppiness

Error Message:
index=2

Stack Trace:
java.lang.AssertionError: index=2
at 
__randomizedtesting.SeedInfo.seed([8E6186F1E77310CD:8E5950100B61AF76]:0)
at org.apache.lucene.util.FixedBitSet.get(FixedBitSet.java:193)
at 
org.apache.lucene.search.SloppyPhraseScorer.advanceRpts(SloppyPhraseScorer.java:176)
at 
org.apache.lucene.search.SloppyPhraseScorer.phraseFreq(SloppyPhraseScorer.java:110)
at 
org.apache.lucene.search.SloppyPhraseScorer.advance(SloppyPhraseScorer.java:588)
at 
org.apache.lucene.search.SloppyPhraseScorer.nextDoc(SloppyPhraseScorer.java:567)
at 
org.apache.lucene.index.FilterAtomicReader$FilterDocsEnum.nextDoc(FilterAtomicReader.java:245)
at 
org.apache.lucene.index.AssertingAtomicReader$AssertingDocsEnum.nextDoc(AssertingAtomicReader.java:252)
at 
org.apache.lucene.search.AssertingScorer.nextDoc(AssertingScorer.java:175)
at org.apache.lucene.search.Scorer.score(Scorer.java:64)
at 
org.apache.lucene.search.AssertingScorer.score(AssertingScorer.java:136)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:621)
at 
org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:93)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:491)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:448)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:281)
at 
org.apache.lucene.search.SearchEquivalenceTestBase.assertSubsetOf(SearchEquivalenceTestBase.java:183)
at 
org.apache.lucene.search.SearchEquivalenceTestBase.assertSubsetOf(SearchEquivalenceTestBase.java:161)
at 
org.apache.lucene.search.TestSloppyPhraseQuery2.testRandomIncreasingSloppiness(TestSloppyPhraseQuery2.java:183)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.

[jira] [Commented] (SOLR-5653) Create a RESTManager to provide REST API endpoints for reconfigurable plugins

2014-02-11 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13897896#comment-13897896
 ] 

Timothy Potter commented on SOLR-5653:
--

U ... why didn't I think of that Alan! Thanks ... will do that instead.

> Create a RESTManager to provide REST API endpoints for reconfigurable plugins
> -
>
> Key: SOLR-5653
> URL: https://issues.apache.org/jira/browse/SOLR-5653
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Steve Rowe
> Attachments: SOLR-5653.patch
>
>
> It should be possible to reconfigure Solr plugins' resources and init params 
> without directly editing the serialized schema or {{solrconfig.xml}} (see 
> Hoss's arguments about this in the context of the schema, which also apply to 
> {{solrconfig.xml}}, in the description of SOLR-4658).
> The RESTManager should allow plugins declared in either the schema or in 
> {{solrconfig.xml}} to register one or more REST endpoints, one endpoint per 
> reconfigurable resource, including init params.  To allow for multiple plugin 
> instances, registering plugins will need to provide a handle of some form to 
> distinguish the instances.
> This RESTManager should also be able to create new instances of plugins that 
> it has been configured to allow.  The RESTManager will need its own 
> serialized configuration to remember these plugin declarations.
> Example endpoints:
> * SynonymFilterFactory
> ** init params: {{/solr/collection1/config/syns/myinstance/options}}
> ** synonyms resource: 
> {{/solr/collection1/config/syns/myinstance/synonyms-list}}
> * "/select" request handler
> ** init params: {{/solr/collection1/config/requestHandlers/select/options}}
> We should aim for full CRUD over init params and structured resources.  The 
> plugins will bear responsibility for handling resource modification requests, 
> though we should provide utility methods to make this easy.
> However, since we won't be directly modifying the serialized schema and 
> {{solrconfig.xml}}, anything configured in those two places can't be 
> invalidated by configuration serialized elsewhere.  As a result, it won't be 
> possible to remove plugins declared in the serialized schema or 
> {{solrconfig.xml}}.  Similarly, any init params declared in either place 
> won't be modifiable.  Instead, there should be some form of init param that 
> declares that the plugin is reconfigurable, maybe using something like 
> "managed" - note that request handlers already provide a "handle" - the 
> request handler name - and so don't need that to be separately specified:
> {code:xml}
> 
>
> 
> {code}
> and in the serialized schema - a handle needs to be specified here:
> {code:xml}
>  positionIncrementGap="100">
> ...
>   
> 
> 
> ...
> {code}
> All of the above examples use the existing plugin factory class names, but 
> we'll have to create new RESTManager-aware classes to handle registration 
> with RESTManager.
> Core/collection reloading should not be performed automatically when a REST 
> API call is made to one of these RESTManager-mediated REST endpoints, since 
> for batched config modifications, that could take way too long.  But maybe 
> reloading could be a query parameter to these REST API calls. 
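The registration scheme described above (one REST endpoint per reconfigurable resource, distinguished by a per-instance handle) can be modeled roughly as follows. This is purely an illustrative sketch of the idea; none of these class or method names come from the actual SOLR-5653 patch:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical model of the RESTManager registration idea:
// plugins register one endpoint per reconfigurable resource under a handle.
public class RestManagerSketch {
  // endpoint path -> handler responsible for CRUD on that resource
  private final Map<String, Object> endpoints = new LinkedHashMap<>();

  // e.g. register("syns/myinstance", "synonyms-list", handler)
  // maps to the path /config/syns/myinstance/synonyms-list
  public void register(String handle, String resource, Object handler) {
    endpoints.put("/config/" + handle + "/" + resource, handler);
  }

  public Object resolve(String path) {
    return endpoints.get(path);
  }

  public static void main(String[] args) {
    RestManagerSketch mgr = new RestManagerSketch();
    Object handler = new Object();
    mgr.register("syns/myinstance", "synonyms-list", handler);
    System.out.println(
        mgr.resolve("/config/syns/myinstance/synonyms-list") == handler); // true
  }
}
```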






[jira] [Updated] (LUCENE-5437) ASCIIFoldingFilter that emits both unfolded and folded tokens

2014-02-11 Thread Nik Everett (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nik Everett updated LUCENE-5437:


Attachment: (was: LUCENE-5437.patch)

> ASCIIFoldingFilter that emits both unfolded and folded tokens
> -
>
> Key: LUCENE-5437
> URL: https://issues.apache.org/jira/browse/LUCENE-5437
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Nik Everett
>Assignee: Simon Willnauer
>Priority: Minor
> Attachments: LUCENE-5437.patch
>
>
> I've found myself wanting an ASCIIFoldingFilter that emits both the folded 
> tokens and the original, unfolded tokens.






[jira] [Updated] (LUCENE-5437) ASCIIFoldingFilter that emits both unfolded and folded tokens

2014-02-11 Thread Nik Everett (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nik Everett updated LUCENE-5437:


Attachment: LUCENE-5437.patch

Uploading new diff with changes Simon asked for.

> ASCIIFoldingFilter that emits both unfolded and folded tokens
> -
>
> Key: LUCENE-5437
> URL: https://issues.apache.org/jira/browse/LUCENE-5437
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Nik Everett
>Assignee: Simon Willnauer
>Priority: Minor
> Attachments: LUCENE-5437.patch
>
>
> I've found myself wanting an ASCIIFoldingFilter that emits both the folded 
> tokens and the original, unfolded tokens.






[jira] [Commented] (LUCENE-5440) Add LongFixedBitSet and replace usage of OpenBitSet

2014-02-11 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13897826#comment-13897826
 ] 

Michael McCandless commented on LUCENE-5440:


bq. I think the @lucene.internal on FixedBitSet is a bug, the javadoc tag 
should be removed.

I don't think FixedBitSet should be external.

Our purpose here is to provide search APIs, not bitset utility APIs, and we 
should not have to commit to API back compatibility for this class or other 
such utility classes.


> Add LongFixedBitSet and replace usage of OpenBitSet
> ---
>
> Key: LUCENE-5440
> URL: https://issues.apache.org/jira/browse/LUCENE-5440
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-5440-solr.patch, LUCENE-5440.patch, 
> LUCENE-5440.patch, LUCENE-5440.patch, LUCENE-5440.patch, LUCENE-5440.patch
>
>
> Spinoff from here: http://lucene.markmail.org/thread/35gw3amo53dsqsqj. I 
> wrote a LongFixedBitSet which behaves like FixedBitSet but allows managing 
> more than 2.1B bits. It overcomes some issues I've encountered with 
> OpenBitSet, such as the use of set/fastSet as well as the implementation of 
> DocIdSet. I'll post a patch shortly and describe it in more detail.






[jira] [Commented] (SOLR-5234) Allow SolrResourceLoader to load resources from URLs

2014-02-11 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13897813#comment-13897813
 ] 

Markus Jelsma commented on SOLR-5234:
-

Anything new on this one? We'd like to have http:// and file:// scheme support 
in SolrResourceLoader.

> Allow SolrResourceLoader to load resources from URLs
> 
>
> Key: SOLR-5234
> URL: https://issues.apache.org/jira/browse/SOLR-5234
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Attachments: SOLR-5234.patch, SOLR-5234.patch
>
>
> This would allow multiple solr instance to share large configuration files.  
> It would also help resolve problems caused by attempting to store >1Mb files 
> in zookeeper.






[jira] [Updated] (SOLR-5149) Query facet to respect mincount

2014-02-11 Thread Markus Jelsma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Markus Jelsma updated SOLR-5149:


Attachment: SOLR-5149-trunk.patch

Eeeh, correct patch!

> Query facet to respect mincount
> ---
>
> Key: SOLR-5149
> URL: https://issues.apache.org/jira/browse/SOLR-5149
> Project: Solr
>  Issue Type: Bug
>  Components: SearchComponents - other
>Affects Versions: 4.4
>Reporter: Markus Jelsma
>Priority: Minor
> Fix For: 4.7
>
> Attachments: SOLR-5149-trunk.patch, SOLR-5149-trunk.patch, 
> SOLR-5149-trunk.patch, SOLR-5149-trunk.patch, SOLR-5149-trunk.patch
>
>







[jira] [Updated] (SOLR-5149) Query facet to respect mincount

2014-02-11 Thread Markus Jelsma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Markus Jelsma updated SOLR-5149:


Attachment: (was: SOLR-5149-trunk.patch)

> Query facet to respect mincount
> ---
>
> Key: SOLR-5149
> URL: https://issues.apache.org/jira/browse/SOLR-5149
> Project: Solr
>  Issue Type: Bug
>  Components: SearchComponents - other
>Affects Versions: 4.4
>Reporter: Markus Jelsma
>Priority: Minor
> Fix For: 4.7
>
> Attachments: SOLR-5149-trunk.patch, SOLR-5149-trunk.patch, 
> SOLR-5149-trunk.patch, SOLR-5149-trunk.patch, SOLR-5149-trunk.patch
>
>







[jira] [Updated] (SOLR-5149) Query facet to respect mincount

2014-02-11 Thread Markus Jelsma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Markus Jelsma updated SOLR-5149:


Attachment: SOLR-5149-trunk.patch

Apparently the diff for FacetComponent was missing. Updated patch!

> Query facet to respect mincount
> ---
>
> Key: SOLR-5149
> URL: https://issues.apache.org/jira/browse/SOLR-5149
> Project: Solr
>  Issue Type: Bug
>  Components: SearchComponents - other
>Affects Versions: 4.4
>Reporter: Markus Jelsma
>Priority: Minor
> Fix For: 4.7
>
> Attachments: SOLR-5149-trunk.patch, SOLR-5149-trunk.patch, 
> SOLR-5149-trunk.patch, SOLR-5149-trunk.patch, SOLR-5149-trunk.patch
>
>







[jira] [Commented] (LUCENE-5440) Add LongFixedBitSet and replace usage of OpenBitSet

2014-02-11 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13897786#comment-13897786
 ] 

Shai Erera commented on LUCENE-5440:


I don't mind if we do it only in trunk. However, this affects only the Java 
API, which looks pretty low-level and expert to me? Given that and that 
migrating from OpenBitSet to FixedBitSet is trivial, wouldn't it be OK to port 
it to 4x as well?

I'm thinking about e.g. merging changes from trunk to 4x, which will be much 
easier if the two are in sync. Of course this alone doesn't justify an API 
break, but if it's such a low-level and expert API, I wonder if we shouldn't do 
this in 4x as well. 

Having said all that, you obviously understand Solr API better than me and know 
how it's used by users, so if you think we absolutely shouldn't do this in 4x, 
we'll do it only in trunk.

> Add LongFixedBitSet and replace usage of OpenBitSet
> ---
>
> Key: LUCENE-5440
> URL: https://issues.apache.org/jira/browse/LUCENE-5440
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-5440-solr.patch, LUCENE-5440.patch, 
> LUCENE-5440.patch, LUCENE-5440.patch, LUCENE-5440.patch, LUCENE-5440.patch
>
>
> Spinoff from here: http://lucene.markmail.org/thread/35gw3amo53dsqsqj. I 
> wrote a LongFixedBitSet which behaves like FixedBitSet but allows managing 
> more than 2.1B bits. It overcomes some issues I've encountered with 
> OpenBitSet, such as the use of set/fastSet as well as the implementation of 
> DocIdSet. I'll post a patch shortly and describe it in more detail.






[jira] [Commented] (SOLR-5146) Figure out what it would take for lazily-loaded cores to play nice with SolrCloud

2014-02-11 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13897763#comment-13897763
 ] 

Shalin Shekhar Mangar commented on SOLR-5146:
-

Sure Erick. I've been studying the code for transient cores to understand the 
potential issues and bottlenecks with using this feature with SolrCloud.

I think we can break it down to four major features:
# Make loadOnStartup=true work with SolrCloud shards - slices are marked with 
loadOnStartup=false. All nodes are woken up on a request.
# Make transient replicas work with SolrCloud replication - Leader is always 
active, replicas are down. Leader can send 'requestrecovery' to replicas based 
on maxDocs/maxTime parameters to make them sync. Maybe we can make peersync and 
buffer doc counts configurable.
# Make transient leaders work with SolrCloud - Down leaders are okay. Replicas 
may still be up but we won't force leader election. Leaders are woken up only 
on a write request.
# Optimize leader election for transient shards - We probably don't want to 
force leader election each time a shard wakes up. Instead clusterstate can 
remain the truth and leaders can go down. If a shard is woken up again, it can 
use the same leader until it goes down. This is far away. We shall focus on 
optimization later.

I think an easy win here would be to translate loadOnStartup=false to complete 
shards (leader+replicas). I'm going to start building a prototype and see how 
easy it turns out to be :)

> Figure out what it would take for lazily-loaded cores to play nice with 
> SolrCloud
> -
>
> Key: SOLR-5146
> URL: https://issues.apache.org/jira/browse/SOLR-5146
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.5, 5.0
>Reporter: Erick Erickson
>Assignee: Shalin Shekhar Mangar
> Fix For: 5.0
>
>
> The whole lazy-load core thing was implemented with non-SolrCloud use-cases 
> in mind. There are several user-list threads that ask about using lazy cores 
> with SolrCloud, especially in multi-tenant use-cases.
> This is a marker JIRA to investigate what it would take to make lazy-load 
> cores play nice with SolrCloud. It's especially interesting how this all 
> works with shards, replicas, leader election, recovery, etc.
> NOTE: This is pretty much totally unexplored territory. It may be that a few 
> trivial modifications are all that's needed. OTOH, It may be that we'd have 
> to rip apart SolrCloud to handle this case. Until someone dives into the 
> code, we don't know.






[jira] [Resolved] (SOLR-5689) On reconnect, ZkController cancels election on first context rather than latest

2014-02-11 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-5689.
-

   Resolution: Fixed
Fix Version/s: 4.7
   5.0
 Assignee: Shalin Shekhar Mangar

Thanks Gregory!

> On reconnect, ZkController cancels election on first context rather than 
> latest
> ---
>
> Key: SOLR-5689
> URL: https://issues.apache.org/jira/browse/SOLR-5689
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.6.1, 5.0, 4.7
>Reporter: Gregory Chanan
>Assignee: Shalin Shekhar Mangar
> Fix For: 5.0, 4.7
>
> Attachments: SOLR-5689.patch
>
>
> I haven't tested this yet, so I could be wrong, but this is my reading of the 
> code:
> During init:
> {code}
> ElectionContext context = new OverseerElectionContext(zkClient, overseer, 
> getNodeName());
> overseerElector.setup(context);
> overseerElector.joinElection(context, false);
> {code}
> On reconnect:
> {code}
> ElectionContext context = new OverseerElectionContext(zkClient,overseer, 
> getNodeName());
>   
> ElectionContext prevContext = overseerElector.getContext();
> if (prevContext != null) {
>   prevContext.cancelElection();
> }
>   
> overseerElector.joinElection(context, true);
> {code}
> setup doesn't appear to be called on reconnect, so the new context is never 
> set and the first context gets cancelled over and over.
> A call to overseerElector.setup(context); before joinElection in the 
> reconnect case would address this.
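The suggested fix — calling setup() on the new context before joinElection() on reconnect — can be modeled with stand-in classes that show why the call matters. These are illustrative stubs, not Solr's real ElectionContext or LeaderElector:

```java
// Minimal model of the SOLR-5689 bug: the elector tracks a current context,
// and reconnect must install the new context via setup(), otherwise every
// cancelElection() keeps targeting the original context.
// All class names here are hypothetical stand-ins, not actual Solr classes.
class ElectionContextModel {
  final int id;
  boolean cancelled;
  ElectionContextModel(int id) { this.id = id; }
  void cancelElection() { cancelled = true; }
}

class LeaderElectorModel {
  private ElectionContextModel context;
  void setup(ElectionContextModel c) { context = c; } // installs the context
  void joinElection(ElectionContextModel c, boolean rejoin) { /* joins with c */ }
  ElectionContextModel getContext() { return context; }
}

public class ReconnectSketch {
  public static void main(String[] args) {
    LeaderElectorModel elector = new LeaderElectorModel();
    ElectionContextModel first = new ElectionContextModel(0);
    elector.setup(first);
    elector.joinElection(first, false);

    // Reconnect twice; with the fix, each cancel targets the latest context.
    for (int i = 1; i <= 2; i++) {
      ElectionContextModel next = new ElectionContextModel(i);
      ElectionContextModel prev = elector.getContext();
      if (prev != null) {
        prev.cancelElection();
      }
      elector.setup(next); // the call the bug report says is missing
      elector.joinElection(next, true);
    }
    System.out.println(elector.getContext().id); // 2
  }
}
```

Without the setup() call, getContext() would keep returning the initial context, so the same context gets cancelled over and over, as the description says.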






[jira] [Commented] (SOLR-5689) On reconnect, ZkController cancels election on first context rather than latest

2014-02-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13897737#comment-13897737
 ] 

ASF subversion and git services commented on SOLR-5689:
---

Commit 1567050 from sha...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1567050 ]

SOLR-5689: On reconnect, ZkController cancels election on first context rather 
than latest

> On reconnect, ZkController cancels election on first context rather than 
> latest
> ---
>
> Key: SOLR-5689
> URL: https://issues.apache.org/jira/browse/SOLR-5689
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.6.1, 5.0, 4.7
>Reporter: Gregory Chanan
> Attachments: SOLR-5689.patch
>
>
> I haven't tested this yet, so I could be wrong, but this is my reading of the 
> code:
> During init:
> {code}
> ElectionContext context = new OverseerElectionContext(zkClient, overseer, 
> getNodeName());
> overseerElector.setup(context);
> overseerElector.joinElection(context, false);
> {code}
> On reconnect:
> {code}
> ElectionContext context = new OverseerElectionContext(zkClient,overseer, 
> getNodeName());
>   
> ElectionContext prevContext = overseerElector.getContext();
> if (prevContext != null) {
>   prevContext.cancelElection();
> }
>   
> overseerElector.joinElection(context, true);
> {code}
> setup doesn't appear to be called on reconnect, so the new context is never 
> set and the first context gets cancelled over and over.
> A call to overseerElector.setup(context); before joinElection in the 
> reconnect case would address this.






[jira] [Commented] (SOLR-5689) On reconnect, ZkController cancels election on first context rather than latest

2014-02-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13897736#comment-13897736
 ] 

ASF subversion and git services commented on SOLR-5689:
---

Commit 1567049 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1567049 ]

SOLR-5689: On reconnect, ZkController cancels election on first context rather 
than latest

> On reconnect, ZkController cancels election on first context rather than 
> latest
> ---
>
> Key: SOLR-5689
> URL: https://issues.apache.org/jira/browse/SOLR-5689
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.6.1, 5.0, 4.7
>Reporter: Gregory Chanan
> Attachments: SOLR-5689.patch
>
>
> I haven't tested this yet, so I could be wrong, but this is my reading of the 
> code:
> During init:
> {code}
> ElectionContext context = new OverseerElectionContext(zkClient, overseer, 
> getNodeName());
> overseerElector.setup(context);
> overseerElector.joinElection(context, false);
> {code}
> On reconnect:
> {code}
> ElectionContext context = new OverseerElectionContext(zkClient,overseer, 
> getNodeName());
>   
> ElectionContext prevContext = overseerElector.getContext();
> if (prevContext != null) {
>   prevContext.cancelElection();
> }
>   
> overseerElector.joinElection(context, true);
> {code}
> setup doesn't appear to be called on reconnect, so the new context is never 
> set and the first context gets cancelled over and over.
> A call to overseerElector.setup(context); before joinElection in the 
> reconnect case would address this.






[jira] [Updated] (SOLR-5689) On reconnect, ZkController cancels election on first context rather than latest

2014-02-11 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-5689:


Attachment: SOLR-5689.patch

Trivial fix attached. I'll commit once the test suite succeeds.

> On reconnect, ZkController cancels election on first context rather than 
> latest
> ---
>
> Key: SOLR-5689
> URL: https://issues.apache.org/jira/browse/SOLR-5689
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.6.1, 5.0, 4.7
>Reporter: Gregory Chanan
> Attachments: SOLR-5689.patch
>
>
> I haven't tested this yet, so I could be wrong, but this is my reading of the 
> code:
> During init:
> {code}
> ElectionContext context = new OverseerElectionContext(zkClient, overseer, 
> getNodeName());
> overseerElector.setup(context);
> overseerElector.joinElection(context, false);
> {code}
> On reconnect:
> {code}
> ElectionContext context = new OverseerElectionContext(zkClient,overseer, 
> getNodeName());
>   
> ElectionContext prevContext = overseerElector.getContext();
> if (prevContext != null) {
>   prevContext.cancelElection();
> }
>   
> overseerElector.joinElection(context, true);
> {code}
> setup doesn't appear to be called on reconnect, so the new context is never 
> set and the first context gets cancelled over and over.
> A call to overseerElector.setup(context); before joinElection in the 
> reconnect case would address this.






[jira] [Comment Edited] (SOLR-4260) Inconsistent numDocs between leader and replica

2014-02-11 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13897716#comment-13897716
 ] 

Markus Jelsma edited comment on SOLR-4260 at 2/11/14 10:52 AM:
---

-ignore- apparently one node did not receive the update. forgive my stupidity


was (Author: markus17):
[~markrmil...@gmail.com] I just checked out the shards again, on one cluster, 
one replica has 1 document more (or less). They are out of sync again. I can 
open a new issue but it's really the same discussion as here. What do you 
think, reopen or new?

edit: the state of the document is deleted, it seems the delete did not happen 
on one replica.

> Inconsistent numDocs between leader and replica
> ---
>
> Key: SOLR-4260
> URL: https://issues.apache.org/jira/browse/SOLR-4260
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
> Environment: 5.0.0.2013.01.04.15.31.51
>Reporter: Markus Jelsma
>Assignee: Mark Miller
>Priority: Critical
> Fix For: 4.6.1, 5.0
>
> Attachments: 192.168.20.102-replica1.png, 
> 192.168.20.104-replica2.png, SOLR-4260.patch, clusterstate.png, 
> demo_shard1_replicas_out_of_sync.tgz
>
>
> After wiping all cores and reindexing some 3.3 million docs from Nutch using 
> CloudSolrServer we see inconsistencies between the leader and replica for 
> some shards.
> Each core holds about 3.3k documents. For some reason 5 out of 10 shards have 
> a small deviation in the number of documents. The leader and slave deviate 
> by roughly 10-20 documents, not more.
> Results hopping ranks in the result set for identical queries got my 
> attention: there were small IDF differences for exactly the same record, 
> causing a record to shift positions in the result set. During those tests no 
> records were indexed. Consecutive catch-all queries also return different 
> numDocs values.
> We're running a 10 node test cluster with 10 shards and a replication factor 
> of two and frequently reindex using a fresh build from trunk. I've not seen 
> this issue for quite some time until a few days ago.






[jira] [Comment Edited] (SOLR-4260) Inconsistent numDocs between leader and replica

2014-02-11 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13897716#comment-13897716
 ] 

Markus Jelsma edited comment on SOLR-4260 at 2/11/14 10:48 AM:
---

[~markrmil...@gmail.com] I just checked out the shards again, on one cluster, 
one replica has 1 document more (or less). They are out of sync again. I can 
open a new issue but it's really the same discussion as here. What do you 
think, reopen or new?

edit: the state of the document is deleted, it seems the delete did not happen 
on one replica.


was (Author: markus17):
[~markrmil...@gmail.com] I just checked out the shards again, on one cluster, 
one replica has 1 document more (or less). They are out of sync again. I can 
open a new issue but it's really the same discussion as here. What do you 
think, reopen or new?


> Inconsistent numDocs between leader and replica
> ---
>
> Key: SOLR-4260
> URL: https://issues.apache.org/jira/browse/SOLR-4260
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
> Environment: 5.0.0.2013.01.04.15.31.51
>Reporter: Markus Jelsma
>Assignee: Mark Miller
>Priority: Critical
> Fix For: 4.6.1, 5.0
>
> Attachments: 192.168.20.102-replica1.png, 
> 192.168.20.104-replica2.png, SOLR-4260.patch, clusterstate.png, 
> demo_shard1_replicas_out_of_sync.tgz
>
>
> After wiping all cores and reindexing some 3.3 million docs from Nutch using 
> CloudSolrServer we see inconsistencies between the leader and replica for 
> some shards.
> Each core holds about 3.3k documents. For some reason 5 out of 10 shards have 
> a small deviation in the number of documents. The leader and slave deviate 
> by roughly 10-20 documents, not more.
> Results hopping ranks in the result set for identical queries got my 
> attention: there were small IDF differences for exactly the same record, 
> causing a record to shift positions in the result set. During those tests no 
> records were indexed. Consecutive catch-all queries also return different 
> numDocs values.
> We're running a 10 node test cluster with 10 shards and a replication factor 
> of two and frequently reindex using a fresh build from trunk. I've not seen 
> this issue for quite some time until a few days ago.
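A quick way to narrow down discrepancies like the ones described above is to query each replica core directly (bypassing distributed search with distrib=false) and compare numDocs per shard. A minimal sketch, where the helper name, host names, and counts are all made up for illustration:

```python
def out_of_sync(shard_counts):
    """shard_counts: {shard: {replica_url: numDocs}}, where each numDocs
    would come from q=*:*&rows=0&distrib=false against that core.
    Returns the shards whose replicas disagree on numDocs."""
    return sorted(shard for shard, replicas in shard_counts.items()
                  if len(set(replicas.values())) > 1)

counts = {
    "shard1": {"http://192.168.20.102/solr/demo_shard1_replica1": 330001,
               "http://192.168.20.104/solr/demo_shard1_replica2": 330000},
    "shard2": {"http://192.168.20.106/solr/demo_shard2_replica1": 329950,
               "http://192.168.20.108/solr/demo_shard2_replica2": 329950},
}
print(out_of_sync(counts))  # -> ['shard1']
```

In practice each count comes from a non-distributed catch-all query against the individual core, so the comparison is not confused by request routing.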



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4260) Inconsistent numDocs between leader and replica

2014-02-11 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13897716#comment-13897716
 ] 

Markus Jelsma commented on SOLR-4260:
-

[~markrmil...@gmail.com] I just checked out the shards again: on one cluster, 
one replica has 1 document more (or less). They are out of sync again. I can 
open a new issue, but it's really the same discussion as here. What do you 
think, reopen or new?


> Inconsistent numDocs between leader and replica
> ---
>
> Key: SOLR-4260
> URL: https://issues.apache.org/jira/browse/SOLR-4260
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
> Environment: 5.0.0.2013.01.04.15.31.51
>Reporter: Markus Jelsma
>Assignee: Mark Miller
>Priority: Critical
> Fix For: 4.6.1, 5.0
>
> Attachments: 192.168.20.102-replica1.png, 
> 192.168.20.104-replica2.png, SOLR-4260.patch, clusterstate.png, 
> demo_shard1_replicas_out_of_sync.tgz
>
>
> After wiping all cores and reindexing some 3.3 million docs from Nutch using 
> CloudSolrServer, we see inconsistencies between the leader and replica for 
> some shards.
> Each core holds about 3.3k documents. For some reason 5 out of 10 shards have 
> a small deviation in the number of documents; the leader and slave deviate 
> by roughly 10-20 documents, not more.
> Results hopping ranks in the result set for identical queries got my 
> attention: there were small IDF differences for exactly the same record, 
> causing it to shift positions in the result set. During those tests no 
> records were indexed. Consecutive catch-all queries also return different 
> numDocs values.
> We're running a 10-node test cluster with 10 shards and a replication factor 
> of two, and we frequently reindex using a fresh build from trunk. I hadn't 
> seen this issue for quite some time until a few days ago.






[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.7.0_60-ea-b04) - Build # 3768 - Still Failing!

2014-02-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/3768/
Java: 64bit/jdk1.7.0_60-ea-b04 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.rest.schema.TestFieldTypeResource

Error Message:
Clean up static fields (in @AfterClass?), your test seems to hang on to 
approximately 10,553,784 bytes (threshold is 10,485,760). Field reference sizes 
(counted individually):   - 10,558,848 bytes, protected static 
org.apache.solr.util.RestTestHarness 
org.apache.solr.util.RestTestBase.restTestHarness   - 448 bytes, private static 
java.util.regex.Pattern 
org.apache.solr.SolrTestCaseJ4.nonEscapedSingleQuotePattern   - 328 bytes, 
public static org.junit.rules.TestRule 
org.apache.solr.SolrTestCaseJ4.solrClassRules   - 312 bytes, private static 
java.util.regex.Pattern 
org.apache.solr.SolrTestCaseJ4.escapedSingleQuotePattern   - 248 bytes, 
protected static java.lang.String org.apache.solr.SolrTestCaseJ4.testSolrHome   
- 144 bytes, private static java.lang.String 
org.apache.solr.SolrTestCaseJ4.factoryProp   - 80 bytes, private static 
java.lang.String org.apache.solr.SolrTestCaseJ4.coreName   - 72 bytes, public 
static java.lang.String org.apache.solr.SolrJettyTestBase.context

Stack Trace:
junit.framework.AssertionFailedError: Clean up static fields (in @AfterClass?), 
your test seems to hang on to approximately 10,553,784 bytes (threshold is 
10,485,760). Field reference sizes (counted individually):
  - 10,558,848 bytes, protected static org.apache.solr.util.RestTestHarness 
org.apache.solr.util.RestTestBase.restTestHarness
  - 448 bytes, private static java.util.regex.Pattern 
org.apache.solr.SolrTestCaseJ4.nonEscapedSingleQuotePattern
  - 328 bytes, public static org.junit.rules.TestRule 
org.apache.solr.SolrTestCaseJ4.solrClassRules
  - 312 bytes, private static java.util.regex.Pattern 
org.apache.solr.SolrTestCaseJ4.escapedSingleQuotePattern
  - 248 bytes, protected static java.lang.String 
org.apache.solr.SolrTestCaseJ4.testSolrHome
  - 144 bytes, private static java.lang.String 
org.apache.solr.SolrTestCaseJ4.factoryProp
  - 80 bytes, private static java.lang.String 
org.apache.solr.SolrTestCaseJ4.coreName
  - 72 bytes, public static java.lang.String 
org.apache.solr.SolrJettyTestBase.context
at __randomizedtesting.SeedInfo.seed([7630BB32E0F43AE3]:0)
at 
com.carrotsearch.randomizedtesting.rules.StaticFieldsInvariantRule$1.afterAlways(StaticFieldsInvariantRule.java:127)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:744)




Build Log:
[...truncated 10178 lines...]
   [junit4] Suite: org.apache.solr.rest.schema.TestFieldTypeResource
   [junit4]   2> 1402514 T4529 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl 
(false) and clientAuth (false)
   [junit4]   2> 1402518 T4529 oas.SolrTestCaseJ4.initCore initCore
   [junit4]   2> Creating dataDir: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\.\solrtest-TestFieldTypeResource-1392109798534
   [junit4]   2> 1402521 T4529 oas.SolrTestCaseJ4.initCore initCore end
   [junit4]   2> 1402521 T4529 oejs.Server.doStart jetty-8.1.10.v20130312
   [junit4]   2> 1402530 T4529 oejs.AbstractConnector.doStart Started 
SelectChannelConnector@127.0.0.1:57819
   [junit4]   2> 1402531 T4529 oass.SolrDispatchFilter.init 
SolrDispatchFilter.init()
   [junit4]   2> 1402531 T4529 oasc.SolrResourceLoader.locateSolrHome JNDI not 
configured for solr (NoInitialContextEx)
   [junit4]   2> 1402531 T4529 oasc.SolrResourceLoader.locateSolrHome using 
system property solr.solr.home: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test-files\solr
   [junit4]   2> 1402532 T4529 oasc.SolrResourceLoader.<init> new 
SolrResourceLoader for directory: 
'C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test-files\solr\'
   [junit4]   2> 1402567 T4529 oasc.ConfigSolr.fromFile Loading container 
configuration from 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test-files\solr\solr.xml
   [junit4]   2> 1402668 T4529 oasc.CoreContainer.<init> New CoreContainer 
101286608
   [junit4]   2> 1402668 T4529 oasc.CoreContainer.load Loading cores into 
CoreContainer

[jira] [Commented] (SOLR-5653) Create a RESTManager to provide REST API endpoints for reconfigurable plugins

2014-02-11 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13897670#comment-13897670
 ] 

Alan Woodward commented on SOLR-5653:
-

Or maybe add a getStorageIO() method to SolrResourceLoader itself? We already 
abstract a lot of "is this stuff in ZK or on the file system?" questions 
there.

> Create a RESTManager to provide REST API endpoints for reconfigurable plugins
> -
>
> Key: SOLR-5653
> URL: https://issues.apache.org/jira/browse/SOLR-5653
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Steve Rowe
> Attachments: SOLR-5653.patch
>
>
> It should be possible to reconfigure Solr plugins' resources and init params 
> without directly editing the serialized schema or {{solrconfig.xml}} (see 
> Hoss's arguments about this in the context of the schema, which also apply to 
> {{solrconfig.xml}}, in the description of SOLR-4658)
> The RESTManager should allow plugins declared in either the schema or in 
> {{solrconfig.xml}} to register one or more REST endpoints, one endpoint per 
> reconfigurable resource, including init params.  To allow for multiple plugin 
> instances, registering plugins will need to provide a handle of some form to 
> distinguish the instances.
> This RESTManager should also be able to create new instances of plugins that 
> it has been configured to allow.  The RESTManager will need its own 
> serialized configuration to remember these plugin declarations.
> Example endpoints:
> * SynonymFilterFactory
> ** init params: {{/solr/collection1/config/syns/myinstance/options}}
> ** synonyms resource: 
> {{/solr/collection1/config/syns/myinstance/synonyms-list}}
> * "/select" request handler
> ** init params: {{/solr/collection1/config/requestHandlers/select/options}}
> We should aim for full CRUD over init params and structured resources.  The 
> plugins will bear responsibility for handling resource modification requests, 
> though we should provide utility methods to make this easy.
> However, since we won't be directly modifying the serialized schema and 
> {{solrconfig.xml}}, anything configured in those two places can't be 
> invalidated by configuration serialized elsewhere.  As a result, it won't be 
> possible to remove plugins declared in the serialized schema or 
> {{solrconfig.xml}}.  Similarly, any init params declared in either place 
> won't be modifiable.  Instead, there should be some form of init param that 
> declares that the plugin is reconfigurable, maybe using something like 
> "managed" - note that request handlers already provide a "handle" - the 
> request handler name - and so don't need that to be separately specified:
> {code:xml}
> 
>
> 
> {code}
> and in the serialized schema - a handle needs to be specified here:
> {code:xml}
>  positionIncrementGap="100">
> ...
>   
> 
> 
> ...
> {code}
> All of the above examples use the existing plugin factory class names, but 
> we'll have to create new RESTManager-aware classes to handle registration 
> with RESTManager.
> Core/collection reloading should not be performed automatically when a REST 
> API call is made to one of these RESTManager-mediated REST endpoints, since 
> for batched config modifications, that could take way too long.  But maybe 
> reloading could be a query parameter to these REST API calls. 
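The example endpoints above share a common path shape under /solr/<collection>/config. As a rough illustration of that convention (the helper function is invented here, not part of the proposal):

```python
def config_endpoint(collection, *segments):
    """Build a RESTManager-style config path, e.g.
    /solr/<collection>/config/<handle>/.../<resource>."""
    return "/solr/{}/config/{}".format(collection, "/".join(segments))

print(config_endpoint("collection1", "syns", "myinstance", "options"))
# -> /solr/collection1/config/syns/myinstance/options
print(config_endpoint("collection1", "requestHandlers", "select", "options"))
# -> /solr/collection1/config/requestHandlers/select/options
```

The handle segment ("myinstance", or the request handler name "select") is what distinguishes multiple instances of the same plugin class.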






[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 516 - Failure

2014-02-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/516/

1 tests failed.
FAILED:  
org.apache.solr.client.solrj.impl.BasicHttpSolrServerTest.testConnectionRefused

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([74EC348EDA8D0EC:28E767593A279FE8]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.client.solrj.impl.BasicHttpSolrServerTest.testConnectionRefused(BasicHttpSolrServerTest.java:159)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:724)




Build Log:
[...truncated 11645 lines...]
   [junit4] Suite: org.apache.solr.client.solrj.impl.BasicHttpSolrServerTest
   [junit4]   2> 110200 T8079 oas.SolrTes

[jira] [Commented] (SOLR-5609) Don't let cores create slices/named replicas

2014-02-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13897645#comment-13897645
 ] 

ASF subversion and git services commented on SOLR-5609:
---

Commit 1567014 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1567014 ]

SOLR-5609 Reverting accidental commit

> Don't let cores create slices/named replicas
> 
>
> Key: SOLR-5609
> URL: https://issues.apache.org/jira/browse/SOLR-5609
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Noble Paul
> Fix For: 5.0, 4.7
>
>
> In SolrCloud, it is possible for a core to come up on any node and register 
> itself with an arbitrary slice/coreNodeName. This is a legacy requirement, and 
> we would like to make it possible only for the Overseer to initiate creation of 
> slices/replicas.
> We plan to introduce cluster-level properties at the top level,
> /cluster-props.json:
> {code:javascript}
> {
>   "noSliceOrReplicaByCores": true
> }
> {code}
> If this property is set to true, cores won't be able to send STATE commands 
> with an unknown slice/coreNodeName; those commands will fail at the Overseer. 
> This is useful for SOLR-5310 / SOLR-5311, where a core/replica is deleted by a 
> command and later comes back up and tries to create a replica/slice.
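The Overseer-side check described above can be sketched roughly as follows. This is an illustration only, not the actual patch; the function and argument names are invented:

```python
def allow_state_command(cluster_props, known_slices, known_core_node_names,
                        slice_name, core_node_name):
    """Reject STATE commands referencing an unknown slice/coreNodeName
    when the noSliceOrReplicaByCores cluster property is enabled."""
    if not cluster_props.get("noSliceOrReplicaByCores", False):
        return True  # legacy behavior: cores may register anything
    return (slice_name in known_slices
            and core_node_name in known_core_node_names)

props = {"noSliceOrReplicaByCores": True}
print(allow_state_command(props, {"shard1"}, {"core_node1"},
                          "shard1", "core_node1"))  # -> True
print(allow_state_command(props, {"shard1"}, {"core_node1"},
                          "shard9", "core_node7"))  # -> False
```

With the property unset, the legacy path is preserved, so a deleted replica that comes back up can no longer silently recreate its slice when the property is on.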






[jira] [Commented] (SOLR-5476) Overseer Role for nodes

2014-02-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13897642#comment-13897642
 ] 

ASF subversion and git services commented on SOLR-5476:
---

Commit 1567012 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1567012 ]

SOLR-5476 added testcase

> Overseer Role for nodes
> ---
>
> Key: SOLR-5476
> URL: https://issues.apache.org/jira/browse/SOLR-5476
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 5.0, 4.7
>
> Attachments: SOLR-5476.patch, SOLR-5476.patch, SOLR-5476.patch, 
> SOLR-5476.patch, SOLR-5476.patch, SOLR-5476.patch, SOLR-5476.patch
>
>
> In a very large cluster the Overseer is likely to be overloaded. If the same 
> node is also serving a few other shards, the Overseer can get slowed down by 
> GC pauses, or simply by too much work. If the cluster is really large, it is 
> possible to dedicate high-end hardware to Overseers.
> It works as a new collection admin command:
> command=addrole&role=overseer&node=192.168.1.5:8983_solr
> This results in the creation of an entry in /roles.json in ZK, which would 
> look like the following:
> {code:javascript}
> {
>   "overseer" : ["192.168.1.5:8983_solr"]
> }
> {code}
> If a node is designated for overseer, it gets preference over others when an 
> overseer election takes place. If no designated servers are available, another 
> random node becomes the Overseer.
> Later on, if one of the designated nodes is brought up, it takes over the 
> Overseer role from the current Overseer.
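The preference rule above can be sketched as: pick a live designated node if one exists, else fall back to any live node. A toy illustration (function and variable names are invented, not from the patch):

```python
import random

def elect_overseer(roles, live_nodes, rng=random):
    """roles: parsed /roles.json, e.g. {"overseer": ["192.168.1.5:8983_solr"]}.
    Live designated nodes win the election; otherwise any live node may."""
    designated = [n for n in roles.get("overseer", []) if n in live_nodes]
    if designated:
        return designated[0]
    return rng.choice(sorted(live_nodes)) if live_nodes else None

roles = {"overseer": ["192.168.1.5:8983_solr"]}
live = {"192.168.1.5:8983_solr", "192.168.1.7:8983_solr"}
print(elect_overseer(roles, live))  # -> 192.168.1.5:8983_solr
```

The real implementation works through the ZooKeeper leader-election queue rather than a direct function call, but the priority logic is the same shape.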






[jira] [Commented] (SOLR-4777) Handle SliceState in the Admin UI

2014-02-11 Thread Stefan Matheis (steffkes) (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13897630#comment-13897630
 ] 

Stefan Matheis (steffkes) commented on SOLR-4777:
-

[~shalinmangar] this week is a bit busy, but i'll see what i can do .. at least 
a quick look should be possible :)

> Handle SliceState in the Admin UI
> -
>
> Key: SOLR-4777
> URL: https://issues.apache.org/jira/browse/SOLR-4777
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud, web gui
>Affects Versions: 4.3
>Reporter: Anshum Gupta
> Fix For: 4.7
>
> Attachments: SOLR-4777.patch
>
>
> The Solr admin UI as of now does not take slice state into account.
> We need to have that differentiated.
> There are four states:
> # The default is "active"
> # "construction" (used during shard splitting for new sub-shards)
> # "recovery" (the state is changed from construction to recovery once the 
> split is complete and we are waiting for sub-shard replicas to recover from 
> their respective leaders), and
> # "inactive" - the parent shard is set to this state after the split is 
> complete
> A slice/shard which is "inactive" will not accept traffic (i.e. it will 
> re-route traffic to sub-shards) even though the nodes inside this shard show 
> up as green.
> We should show the "inactive" shards in a different color to highlight this 
> behavior.
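The state-to-color differentiation could be as simple as a lookup table; a sketch where the colors and helper name are purely illustrative, not from the attached patch:

```python
SLICE_STATE_COLORS = {
    "active": "green",
    "construction": "yellow",  # new sub-shards during a split
    "recovery": "orange",      # sub-shard replicas catching up
    "inactive": "grey",        # parent shard; re-routes to sub-shards
}

def shard_color(slice_state):
    # Fall back to red so an unknown state is visibly flagged in the UI.
    return SLICE_STATE_COLORS.get(slice_state, "red")

print(shard_color("inactive"))  # -> grey
```

The key point from the issue is that "inactive" must not render the same as "active", even when every replica node inside the shard is itself healthy.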


