Re: Problem with using classes from different modules

2013-07-04 Thread Johan Jonkers
I've tried adding the Sum class to the 2nd JCC call but it didn't solve 
the problem. The 'asSum' method isn't getting wrapped. Also, 
instantiating a SumWrapper with a Sum as argument results in the 
constructor without parameters being called, which I wasn't expecting.


 >>> c=SumWrapper(Sum(10))
Empty constructor


I tried compiling everything into one file, and then I do get the asSum 
method wrapped, but the constructor with a Sum as argument still doesn't 
seem to work. It doesn't call the empty constructor anymore, but neither 
does it seem to set the Sum object passed to it. I am a bit at a loss here 
as to what's going wrong (or what I am doing wrong).


Any thoughts on this problem would be appreciated :-)

Regards,

Johan

On 7/1/13 8:00 PM, Andi Vajda wrote:


On Mon, 1 Jul 2013, Johan Jonkers wrote:


Hello,

I have been playing around with JCC to see if it would meet the needs 
we have here at work to interface Java with Python. I have 
encountered one issue that I hope someone on this mailing list 
might be able to help me with. If this is not the right place to ask 
then I apologize in advance.


The issue I am having is that I would like to create two packages 
compiled with JCC, in which classes from one package are used by 
classes in the other package. I would like to use those classes in 
Python, but am running into problems that I don't understand yet.


In package 1 is the class shown below:

package nl.seecr.freestyle;

public class Sum {
    private int _sum;

    public Sum() {
        _sum = 0;
    }

    public void add(int value) {
        _sum += value;
    }

    public int value() {
        return _sum;
    }
}

The second package holds a class that uses the Sum class:

package org.cq2.freestyle;

import nl.seecr.freestyle.Sum;

public class SumWrapper {

    private Sum total;

    public SumWrapper() {
        this(new Sum());
        System.out.println("Empty constructor");
    }

    public SumWrapper(Sum sum) {
        total = sum;
    }

    public void add(int value) {
        total.add(value);
    }

    public int value() {
        return total.value();
    }

    public Sum asSum() {
        Sum sum = new Sum();
        sum.add(value());
        return sum;
    }

    public void printValue() {
        System.out.println(value());
    }
}

I can compile these classes into .class files and put them in jars 
and have those compiled with JCC:

python -m jcc \
--root ${ROOT} \
--use_full_names \
--shared \
--arch x86_64 \
--jar cq2.jar \
--classpath ./seecr.jar \
--python cq2 \
--build \
--install

export PYTHON_PATH=${ROOT}/usr/local/lib/python2.7/dist-packages
python -m jcc \
--root ${ROOT} \
--use_full_names \
--import cq2 \
--shared \
--arch x86_64 \
--jar seecr.jar \
--python seecr \
--build \
--install

In my understanding, the --import cq2 argument should prevent jcc from 
generating a wrapper for the Sum class in the seecr package itself, and 
make it use the one from the cq2 package instead.
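
A quick way to verify that the --import actually took effect is to check, 
after the initVM() calls in the program below, where the generated wrapper 
class comes from (a hypothetical check; a meaningful __module__ on the 
wrapper is an assumption, not something from the original mail):

from nl.seecr.freestyle import Sum
# if --import took effect, the Sum wrapper should come from the cq2
# extension rather than having been regenerated inside seecr
print Sum.__module__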


This all compiles without errors but when I run the following python 
program:


import seecr
seecr.initVM()

import cq2
cq2.initVM()

from nl.seecr.freestyle import Sum
from org.cq2.freestyle import SumWrapper

sum = Sum()
sum.add(5)
print "Sum value", sum.value()

wrapper = SumWrapper(sum)
print wrapper.value()

The 1st print shows the value 5, as expected. The 2nd print, however, 
shows 0, which I had not expected. If I run the same program in Java, 
using the jars I created earlier, the 2nd print shows 5 (as expected). 
The "Empty constructor" message is also shown when running this Python 
program, and I had not expected that to happen either.


The asSum method in the SumWrapper class is not available in the 
Python version of the class. I do not understand yet why that is.
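
One way to see which methods actually got wrapped is to inspect the 
generated class from Python (a hypothetical check, not part of the 
original mail):

from org.cq2.freestyle import SumWrapper

# list the public methods JCC generated; if Sum wasn't wrapped at
# generation time, asSum is silently skipped and won't appear here
print [m for m in dir(SumWrapper) if not m.startswith('_')]
print hasattr(SumWrapper, 'asSum')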


I haven't been able to find many examples or documentation on the 
options for compiling with JCC. I am hoping that someone here on the 
mailing list can point me in the right direction. Any help would be 
really appreciated.


I haven't had time yet to reproduce the problem, but you may want to 
try adding an explicit request to wrap the Sum class, by just listing 
it on the second jcc command line. No wrapper will be generated 
for it, because of the --import statement, but methods in the second 
jar with Sum in their signatures should then get wrapped.


python -m jcc \
--root ${ROOT} \
--use_full_names \
--import cq2 \
--shared \
--arch x86_64 \
--jar seecr.jar \
--python seecr \
--build \
--install \
nl.seecr.freestyle.Sum

Andi..



--
Johan Jonkers ♦ seecr.nl ♦ +31 (0) 655 734 175



Re: Problem with using classes from different modules

2013-07-04 Thread Andi Vajda


On Thu, 4 Jul 2013, Johan Jonkers wrote:

I've tried adding the Sum class to the 2nd JCC call but it didn't solve the 
problem. The 'asSum' method isn't getting wrapped. Also, instantiating a 
SumWrapper with a Sum as argument results in the constructor without 
parameters being called, which I wasn't expecting.



>>> c=SumWrapper(Sum(10))

Empty constructor




I tried compiling everything into one file, and then I do get the asSum method 
wrapped, but the constructor with a Sum as argument still doesn't seem to 
work. It doesn't call the empty constructor anymore, but neither does it seem 
to set the Sum object passed to it. I am a bit at a loss here as to what's 
going wrong (or what I am doing wrong).


Yeah, let's take one thing at a time, and the simpler one first.
Compiling everything into one module, I was not able to reproduce the problem 
as reported. I'm able to construct a SumWrapper(Sum(10)) just fine.


Here are the commands I used to try to reproduce this:

  - created the two source files Sum.java and SumWrapper.java in their respective
packages, as specified in your example
  - mkdir classes
  - javac -d classes *.java
  - jar -cvf sum.jar -C classes .
  - python -m jcc.__main__ --shared --arch x86_64 --use_full_names --jar 
sum.jar --classpath . --python sum --build --install
  - python
  - import sum
  - sum.initVM()
<jcc.JCCEnv object at 0x10029c0f0>
  - from nl.seecr.freestyle import Sum
  - from org.cq2.freestyle import SumWrapper
  - Sum(10)
<Sum: nl.seecr.freestyle.Sum@64fef26a>
  - SumWrapper(Sum(10))
<SumWrapper: org.cq2.freestyle.SumWrapper@70e69696>

Please try to reproduce these steps and report back.
Once that works, let's move on to the problem of compiling these into 
separate extension modules.
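
For completeness, a short continuation of the transcript above that also 
checks the value propagation would be (this assumes a Sum(int) constructor 
exists, as the snippets earlier in the thread imply; it is not part of the 
original steps):

s = Sum(10)
w = SumWrapper(s)
print w.value()                      # expect 10 if the Sum-taking constructor ran
print hasattr(SumWrapper, 'asSum')   # True once asSum got wrapped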


Andi..



[jira] [Commented] (SOLR-5002) optimize numDocs(Query,DocSet) when filterCache is null

2013-07-04 Thread Uwe Schindler (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13699788#comment-13699788 ]

Uwe Schindler commented on SOLR-5002:
-

bq. one trick is we can't actually use FilteredQuery because of some sneaky 
stuff in UninvertedField (which also uses this method). See the comment.

I am nervous that this might be a hidden bug in Solr that could affect other 
stuff. If UninvertedField.doNegative() handles deleted documents in the wrong 
way and you are able to create a filter out of it, it could lead to wrong 
results if the filter is applied down-low (random access).

 optimize numDocs(Query,DocSet) when filterCache is null
 ---

 Key: SOLR-5002
 URL: https://issues.apache.org/jira/browse/SOLR-5002
 Project: Solr
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: SOLR-5002.patch


 getDocSet(Query, DocSet) has this optimization, but numDocs does not.
 Especially in this case, where we just want the intersection count, it's 
 faster to run a filtered query with a TotalHitCountCollector and not create 
 bitsets at all...
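
A sketch of the optimization described above (illustration only; the
attached patch reportedly avoids FilteredQuery, see the comments):
{code}
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.FilteredQuery;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TotalHitCountCollector;

// count the intersection of a query and a filter without building any
// bitsets, by letting a counting collector consume the filtered query
static int intersectionCount(IndexSearcher searcher, Query query, Filter filter)
    throws java.io.IOException {
  TotalHitCountCollector collector = new TotalHitCountCollector();
  searcher.search(new FilteredQuery(query, filter), collector);
  return collector.getTotalHits();
}
{code}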
  




[jira] [Commented] (SOLR-2976) stats.facet no longer works on single valued trie fields that don't use precision step

2013-07-04 Thread Uwe Schindler (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-2976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13699790#comment-13699790 ]

Uwe Schindler commented on SOLR-2976:
-

+1, please remove the useless TrieTokenizer! The only downside is that you can 
no longer inspect trie tokens with AnalysisRequestHandler, but that's not 
really an issue, because numeric terms are an implementation detail :-) I just 
used it sometimes to show users what trie terms look like.

From ElasticSearch I know that they also added this Tokenizer (Adrien did it), 
but there it was done for highlighting. If this is the case in Solr, too, we 
should keep it. How is highlighting affected - or does highlighting on 
NumericFields not work at all in Solr?

 stats.facet no longer works on single valued trie fields that don't use 
 precision step
 --

 Key: SOLR-2976
 URL: https://issues.apache.org/jira/browse/SOLR-2976
 Project: Solr
  Issue Type: Bug
Affects Versions: 3.5
Reporter: Hoss Man
 Attachments: SOLR-2976_3.4_test.patch, SOLR-2976.patch, 
 SOLR-2976.patch


 As reported on the mailing list, 3.5 introduced a regression that prevents 
 single valued Trie fields that don't use precision steps (to add coarse 
 grained terms) from being used in stats.facet.
 Two immediately obvious problems...
 1) in 3.5 the stats component is checking if isTokenized() is true for the 
 field type (which is probably wise), but regardless of the precisionStep 
 used, TrieField.isTokenized is hardcoded to return true
 2) the 3.5 stats faceting will fail if the FieldType is multivalued - it 
 doesn't check if the SchemaField is configured to be single valued 
 (overriding the FieldType)
 so even if a user has something like this in their schema...
 {code}
 <fieldType name="long" class="solr.TrieLongField" precisionStep="0" 
 omitNorms="true" />
 <field name="ts" type="long" indexed="true" stored="true" required="true" 
 multiValued="false" />
 {code}
 ...stats.facet will not work.
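
A sketch of the kind of check points 1) and 2) call for (an illustration
against assumed 3.x-era Solr APIs, not the attached patch):
{code}
import org.apache.solr.schema.SchemaField;
import org.apache.solr.schema.TrieField;

// decide facetability from the SchemaField, not just the FieldType:
// respect multiValued="false" on the field itself, and treat trie fields
// with precisionStep=0 as effectively single-token despite isTokenized()
static boolean usableForStatsFacet(SchemaField sf) {
  if (sf.multiValued()) {
    return false;                                               // problem 2)
  }
  if (sf.getType() instanceof TrieField) {
    return ((TrieField) sf.getType()).getPrecisionStep() == 0;  // problem 1)
  }
  return !sf.getType().isTokenized();
}
{code}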




[jira] [Comment Edited] (SOLR-1093) A RequestHandler to run multiple queries in a batch

2013-07-04 Thread Thomas Scheffler (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13699300#comment-13699300 ]

Thomas Scheffler edited comment on SOLR-1093 at 7/4/13 6:55 AM:


As said by [~dsmiley] in his [comment|#comment-13620283]
{quote}
I am highly in favor of a scripting request handler in which a script runs that 
submits the searches to Solr (in-VM) and can react to the results of one 
request before making another that is formulated dynamically, and can assemble 
the response data, potentially reducing both the latency and data that would 
move over the wire if this feature didn't exist.
{quote}

I have a use-case in which every search currently requires two more searches 
that depend on the result of the first one. Doing this on the client side 
also costs two more network round trips plus the overhead of preparing the 
searches. An efficient way to specify a script (external file or CDATA 
section) could cut the time this takes and may even allow better caching.

Originally I came across this issue looking to combine the two later requests 
into one. A scriptable RequestHandler would even save that request by moving 
some simple logic to Solr.

 A RequestHandler to run multiple queries in a batch
 ---

 Key: SOLR-1093
 URL: https://issues.apache.org/jira/browse/SOLR-1093
 Project: Solr
  Issue Type: New Feature
  Components: search
Reporter: Noble Paul
Assignee: Simon Willnauer
 Attachments: SOLR-1093.patch


 It is a common requirement that a single page needs to fire multiple 
 queries, in cases where these queries are independent of each other. If there 
 is a handler which can take in multiple queries, run them in parallel and 
 send the response as one big chunk, it would be useful.
 Let us say the handler is MultiRequestHandler:
 {code}
 <requestHandler name="/multi" class="solr.MultiRequestHandler"/>
 {code}
 h2.Query Syntax
 The request must specify the number of queries as count=n.
 Each request parameter must be prefixed with a number which denotes the query 
 index. Optionally, it may also specify the handler name.
 Example:
 {code}
 /multi?count=2&1.handler=/select&1.q=a:b&2.handler=/select&2.q=a:c
 {code}
 The default handler can be '/select', so the equivalent can be:
 {code}
 /multi?count=2&1.q=a:b&2.q=a:c
 {code}
 h2.The response
 The response will be a List<NamedList> where each NamedList will be the 
 response to a query.
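
For illustration, a client call against the proposed handler could look like
this (hypothetical SolrJ sketch; nothing here is from the attached patch):
{code}
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.common.util.NamedList;

// fire two numbered sub-queries in one round trip, per the syntax above
static NamedList<Object> multiQuery(SolrServer server) throws Exception {
  ModifiableSolrParams params = new ModifiableSolrParams();
  params.set("count", "2");
  params.set("1.q", "a:b");
  params.set("2.q", "a:c");
  QueryRequest request = new QueryRequest(params);
  request.setPath("/multi");        // the proposed handler
  return server.request(request);
}
{code}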




[jira] [Updated] (LUCENE-5030) FuzzySuggester has to operate FSTs of Unicode-letters, not UTF-8, to work correctly for 1-byte (like English) and multi-byte (non-Latin) letters

2013-07-04 Thread Artem Lukanin (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Artem Lukanin updated LUCENE-5030:
--

Attachment: LUCENE-5030.patch

I have renamed the variables in comments and tests for consistency.

 FuzzySuggester has to operate FSTs of Unicode-letters, not UTF-8, to work 
 correctly for 1-byte (like English) and multi-byte (non-Latin) letters
 

 Key: LUCENE-5030
 URL: https://issues.apache.org/jira/browse/LUCENE-5030
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.3
Reporter: Artem Lukanin
Assignee: Michael McCandless
 Fix For: 5.0, 4.4

 Attachments: benchmark-INFO_SEP.txt, benchmark-old.txt, 
 benchmark-wo_convertion.txt, LUCENE-5030.patch, LUCENE-5030.patch, 
 LUCENE-5030.patch, LUCENE-5030.patch, nonlatin_fuzzySuggester1.patch, 
 nonlatin_fuzzySuggester2.patch, nonlatin_fuzzySuggester3.patch, 
 nonlatin_fuzzySuggester4.patch, nonlatin_fuzzySuggester_combo1.patch, 
 nonlatin_fuzzySuggester_combo2.patch, nonlatin_fuzzySuggester_combo.patch, 
 nonlatin_fuzzySuggester.patch, nonlatin_fuzzySuggester.patch, 
 nonlatin_fuzzySuggester.patch, run-suggest-benchmark.patch


 There is a limitation in the current FuzzySuggester implementation: it 
 computes edits in UTF-8 space instead of Unicode character (code point) 
 space. 
 This should be fixable: we'd need to fix TokenStreamToAutomaton to work in 
 Unicode character space, then fix FuzzySuggester to do the same steps that 
 FuzzyQuery does: do the LevN expansion in Unicode character space, then 
 convert that automaton to UTF-8, then intersect with the suggest FST.
 See the discussion here: 
 http://lucene.472066.n3.nabble.com/minFuzzyLength-in-FuzzySuggester-behaves-differently-for-English-and-Russian-td4067018.html#none
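
For orientation, the conversion step described there maps onto the Lucene
4.x automaton utilities roughly like this (a sketch of the direction, not
the attached patch):
{code}
import org.apache.lucene.util.automaton.Automaton;
import org.apache.lucene.util.automaton.LevenshteinAutomata;
import org.apache.lucene.util.automaton.UTF32ToUTF8;

// build the LevN automaton in Unicode code-point space first, and only
// then convert it to UTF-8 byte space for intersection with the FST
static Automaton toUtf8LevAutomaton(String key, int maxEdits) {
  LevenshteinAutomata builder = new LevenshteinAutomata(key, true);  // with transpositions
  Automaton levCodePoints = builder.toAutomaton(maxEdits);           // code-point space
  return new UTF32ToUTF8().convert(levCodePoints);                   // UTF-8 space
}
{code}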




[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 310 - Failure

2013-07-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/310/

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.cloud.BasicDistributedZkTest: 1) Thread[id=3481, 
name=recoveryCmdExecutor-1326-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest] at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
 at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
 at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)  
   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391) at 
java.net.Socket.connect(Socket.java:579) at 
org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
 at 
org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
 at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
 at 
org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
 at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
 at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291) 
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:722)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.BasicDistributedZkTest: 
   1) Thread[id=3481, name=recoveryCmdExecutor-1326-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
at java.net.Socket.connect(Socket.java:579)
at 
org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
at 
org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
at __randomizedtesting.SeedInfo.seed([EB33F0A9FDE326CA]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=3481, name=recoveryCmdExecutor-1326-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest] at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
 at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
 at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)  
   at 

[JENKINS] Lucene-Solr-4.x-MacOSX (64bit/jdk1.7.0) - Build # 606 - Still Failing!

2013-07-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-MacOSX/606/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch

Error Message:
collection already exists: awholynewcollection_0

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: 
collection already exists: awholynewcollection_0
at 
__randomizedtesting.SeedInfo.seed([729D339EF98D35AB:F37BBD868ED25597]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:424)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:264)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:318)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1522)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:438)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:146)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:835)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-1045) Build Solr index using Hadoop MapReduce

2013-07-04 Thread Furkan KAMACI (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-1045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13699860#comment-13699860 ]

Furkan KAMACI commented on SOLR-1045:
-

Is there any progress on this issue? Otherwise I can start working on it 
myself.

 Build Solr index using Hadoop MapReduce
 ---

 Key: SOLR-1045
 URL: https://issues.apache.org/jira/browse/SOLR-1045
 Project: Solr
  Issue Type: New Feature
Reporter: Ning Li
 Fix For: 4.4

 Attachments: SOLR-1045.0.patch


 The goal is a contrib module that builds a Solr index using Hadoop MapReduce.
 It is different from the Solr support in Nutch. The Solr support in Nutch 
 sends a document to a Solr server in a reduce task. Here, the goal is to 
 build/update the Solr index within map/reduce tasks. Also, it achieves better 
 parallelism when the number of map tasks is greater than the number of reduce 
 tasks, which is usually the case.
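
To make the idea concrete, a map task could feed documents into an
embedded, task-local Solr core whose index gets merged afterwards (a
hypothetical sketch; class and field names are illustrative and none of
this is from the attached patch):
{code}
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.embedded.EmbeddedSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class SolrIndexMapper extends Mapper<LongWritable, Text, NullWritable, NullWritable> {
  private EmbeddedSolrServer solr;  // assume initialized against a task-local core in setup()

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", key.toString());
    doc.addField("text", value.toString());
    try {
      solr.add(doc);                // index locally; no network hop to a server
    } catch (SolrServerException e) {
      throw new IOException(e);
    }
  }
}
{code}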




[JENKINS] Lucene-Solr-Tests-4.x-Java6 - Build # 1758 - Failure

2013-07-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java6/1758/

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.cloud.BasicDistributedZkTest: 1) Thread[id=2727, 
name=recoveryCmdExecutor-1524-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest] at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
 at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193)
 at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)  
   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384) at 
java.net.Socket.connect(Socket.java:546) at 
org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
 at 
org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
 at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
 at 
org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
 at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
 at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291) 
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:679)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.BasicDistributedZkTest: 
   1) Thread[id=2727, name=recoveryCmdExecutor-1524-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384)
at java.net.Socket.connect(Socket.java:546)
at 
org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
at 
org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:679)
at __randomizedtesting.SeedInfo.seed([5DE8A14773E51918]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=2727, name=recoveryCmdExecutor-1524-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest] at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
 at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193)
 at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)  
   at 

[jira] [Commented] (LUCENE-5086) RamUsageEstimator causes AWT classes to be loaded by calling ManagementFactory#getPlatformMBeanServer

2013-07-04 Thread Shay Banon (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-5086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13699892#comment-13699892 ]

Shay Banon commented on LUCENE-5086:


The Java version on the Mac is the latest one:

java version "1.6.0_51"
Java(TM) SE Runtime Environment (build 1.6.0_51-b11-457-11M4509)
Java HotSpot(TM) 64-Bit Server VM (build 20.51-b01-457, mixed mode)

Regarding the catch, I think Throwable is the right exception to catch here. 
Catch all, who cares: you don't want a bug in the JVM that throws an 
unexpected runtime exception to cause Lucene to break the app completely 
because it's a static block, and I have been right there a few times. But if 
you feel differently, go ahead and change it to explicitly catch what's 
needed.

 RamUsageEstimator causes AWT classes to be loaded by calling 
 ManagementFactory#getPlatformMBeanServer
 -

 Key: LUCENE-5086
 URL: https://issues.apache.org/jira/browse/LUCENE-5086
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Shay Banon
Assignee: Dawid Weiss

 Yea, that type of day and that type of title :).
 Since the last update of Java 6 on OS X, I started to see an annoying icon 
 pop up in the dock whenever running elasticsearch. By default, all of our 
 scripts add the headless AWT flag, so people will probably not encounter it, 
 but it was strange that I saw it now when before I didn't.
 I started to dig around, and saw that when RamUsageEstimator was being 
 loaded, it was causing AWT classes to be loaded. Further investigation showed 
 that, for some reason, calling ManagementFactory#getPlatformMBeanServer with 
 the new Java version now causes AWT classes to be loaded (at least on the 
 Mac; I haven't tested on other platforms yet). 
 There are several ways to try and solve it, for example by identifying the 
 bug in the JVM itself, but I think that there should be a fix for it in 
 Lucene itself, specifically since there is no need to call 
 #getPlatformMBeanServer to get the hotspot diagnostics one (it's a heavy 
 call...).
 Here is a simple call that allows getting the hotspot mxbean without using 
 the #getPlatformMBeanServer method, thereby not causing it to be loaded and 
 loading all those nasty AWT classes:
 {code}
 Object getHotSpotMXBean() {
     try {
         // Java 6
         Class sunMF = Class.forName("sun.management.ManagementFactory");
         return sunMF.getMethod("getDiagnosticMXBean").invoke(null);
     } catch (Throwable t) {
         // ignore
     }
     // potentially Java 7
     try {
         return ManagementFactory.class.getMethod("getPlatformMXBean", 
             Class.class).invoke(null, 
             Class.forName("com.sun.management.HotSpotDiagnosticMXBean"));
     } catch (Throwable t) {
         // ignore
     }
     return null;
 }
 {code}




[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.7.0_25) - Build # 3000 - Still Failing!

2013-07-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/3000/
Java: 64bit/jdk1.7.0_25 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestCoreContainer

Error Message:
Resource in scope SUITE failed to close. Resource was registered from thread 
Thread[id=6539, 
name=TEST-TestCoreContainer.testSharedLib-seed#[9F8C33A3C78D0D47], 
state=RUNNABLE, group=TGRP-TestCoreContainer], registration stack trace below.

Stack Trace:
com.carrotsearch.randomizedtesting.ResourceDisposalError: Resource in scope 
SUITE failed to close. Resource was registered from thread Thread[id=6539, 
name=TEST-TestCoreContainer.testSharedLib-seed#[9F8C33A3C78D0D47], 
state=RUNNABLE, group=TGRP-TestCoreContainer], registration stack trace below.
at __randomizedtesting.SeedInfo.seed([9F8C33A3C78D0D47]:0)
at java.lang.Thread.getStackTrace(Thread.java:1568)
at 
com.carrotsearch.randomizedtesting.RandomizedContext.closeAtEnd(RandomizedContext.java:150)
at 
org.apache.lucene.util.LuceneTestCase.closeAfterSuite(LuceneTestCase.java:545)
at org.apache.lucene.util._TestUtil.getTempDir(_TestUtil.java:131)
at 
org.apache.solr.core.TestCoreContainer.testSharedLib(TestCoreContainer.java:337)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 

[jira] [Comment Edited] (LUCENE-5086) RamUsageEstimator causes AWT classes to be loaded by calling ManagementFactory#getPlatformMBeanServer

2013-07-04 Thread Uwe Schindler (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-5086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698956#comment-13698956 ]

Uwe Schindler edited comment on LUCENE-5086 at 7/4/13 10:26 AM:


I updated MacOSX to 1.6.0_51 a few days ago on the Jenkins Server, but no 
popups :-)

Apple's MacOSX Java also starts the AWT thread when loading the Java scripting 
framework, because there is one scripting language in the SPI list (something 
like ÄppleFooBarScriptingEngine which also initializes AWT, also ignoring 
awt.headless=true)... It's all horrible.

  was (Author: thetaphi):
I updated MacOSX to 1.6.0_45 a few days ago on the Jenkins Server, but no 
popups :-)

Apple's MacOSX Java also starts the AWT thread when loading the Java scripting 
framework, because there is one scripting language in the SPI list (something 
like ÄppleFooBarScriptingEngine which also initializes AWT, also ignoring 
awt.headless=true)... It's all horrible.
  



[jira] [Commented] (LUCENE-5086) RamUsageEstimator causes AWT classes to be loaded by calling ManagementFactory#getPlatformMBeanServer

2013-07-04 Thread Uwe Schindler (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-5086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13699983#comment-13699983 ]

Uwe Schindler commented on LUCENE-5086:
---

I agree, the other code in RamUsageEstimator catches Exception, but not 
Throwable. I would use a similar chain of calls in RamUsageEstimator.

The Lucene tests use headless=true by default so are not affected by this.

I will provide a patch later.
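
For reference, the bean from the description's getHotSpotMXBean() would then
be used reflectively through the public MXBean interface, along these lines
(a sketch, not the final patch):
{code}
// read a VM option via the bean obtained above, going through the public
// com.sun.management interface to avoid access problems on the impl class
static Object readVMOption(String name) throws Exception {
  Object bean = getHotSpotMXBean();  // method from the issue description
  if (bean == null) {
    return null;
  }
  Class<?> beanClazz = Class.forName("com.sun.management.HotSpotDiagnosticMXBean");
  return beanClazz.getMethod("getVMOption", String.class).invoke(bean, name);
}
{code}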




[jira] [Commented] (LUCENE-5086) RamUsageEstimator causes AWT classes to be loaded by calling ManagementFactory#getPlatformMBeanServer

2013-07-04 Thread Dawid Weiss (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-5086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13699989#comment-13699989 ]

Dawid Weiss commented on LUCENE-5086:
-

I remember about it, but our Mac died yesterday and my wife left the charger 
at work. I'll look into it tonight, unless Uwe is faster.




[JENKINS] Lucene-Solr-Tests-4.x-Java7 - Build # 1378 - Failure

2013-07-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java7/1378/

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.cloud.BasicDistributedZkTest: 1) Thread[id=1638, 
name=recoveryCmdExecutor-594-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest] at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
 at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
 at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)  
   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391) at 
java.net.Socket.connect(Socket.java:579) at 
org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
 at 
org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
 at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
 at 
org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
 at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
 at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291) 
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:722)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.BasicDistributedZkTest: 
   1) Thread[id=1638, name=recoveryCmdExecutor-594-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
at java.net.Socket.connect(Socket.java:579)
at 
org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
at 
org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
at __randomizedtesting.SeedInfo.seed([7D22930A2FD6F8D]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=1638, name=recoveryCmdExecutor-594-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest] at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
 at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
 at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)  
   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)

[jira] [Commented] (SOLR-5002) optimize numDocs(Query,DocSet) when filterCache is null

2013-07-04 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13700014#comment-13700014
 ] 

Robert Muir commented on SOLR-5002:
---

I don't think so, but that's why I used a conjunction, so I don't have to deal 
with it on this issue and can look at it separately.

 optimize numDocs(Query,DocSet) when filterCache is null
 ---

 Key: SOLR-5002
 URL: https://issues.apache.org/jira/browse/SOLR-5002
 Project: Solr
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: SOLR-5002.patch


 getDocSet(Query, DocSet) has this opto, but numDocs does not.
 Especially in this case, where we just want the intersection count, it's 
 faster to do a filtered query with TotalHitCountCollector and not create 
 bitsets at all...
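
A minimal sketch of the optimization being suggested, assuming the DocSet is 
first exposed as a Lucene Filter (e.g. via DocSet.getTopFilter()); the helper 
name is illustrative, not the actual patch:

{code}
import java.io.IOException;

import org.apache.lucene.search.Filter;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TotalHitCountCollector;

// Count the query/docset intersection without materializing a result bitset.
static int intersectionCount(IndexSearcher searcher, Query q, Filter docSetFilter)
    throws IOException {
  TotalHitCountCollector collector = new TotalHitCountCollector();
  searcher.search(q, docSetFilter, collector); // only counts hits, never collects docs
  return collector.getTotalHits();
}
{code}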
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2976) stats.facet no longer works on single valued trie fields that don't use precision step

2013-07-04 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13700016#comment-13700016
 ] 

Robert Muir commented on SOLR-2976:
---

Highlighting NumericFields is disabled in Solr. This patch is not related to 
that.

{code}
// TODO: Currently in trunk highlighting numeric fields is broken (Lucene) -
// so we disable them until fixed (see LUCENE-3080)!
// BEGIN: Hack
final SchemaField schemaField = schema.getFieldOrNull(fieldName);
if (schemaField != null && (
  (schemaField.getType() instanceof org.apache.solr.schema.TrieField) ||
  (schemaField.getType() instanceof org.apache.solr.schema.TrieDateField)
)) return;
// END: Hack
{code}


 stats.facet no longer works on single valued trie fields that don't use 
 precision step
 --

 Key: SOLR-2976
 URL: https://issues.apache.org/jira/browse/SOLR-2976
 Project: Solr
  Issue Type: Bug
Affects Versions: 3.5
Reporter: Hoss Man
 Attachments: SOLR-2976_3.4_test.patch, SOLR-2976.patch, 
 SOLR-2976.patch


 As reported on the mailing list, 3.5 introduced a regression that prevents 
 single valued Trie fields that don't use precision steps (to add coarse 
 grained terms) from being used in stats.facet.
 two immediately obvious problems...
 1) in 3.5 the stats component is checking if isTokenized() is true for the 
 field type (which is probably wise) but regardless of the precisionStep used, 
 TrieField.isTokenized is hardcoded to return true
 2) the 3.5 stats faceting will fail if the FieldType is multivalued - it 
 doesn't check if the SchemaField is configured to be single valued 
 (overriding the FieldType)
 so even if a user has something like this in their schema...
 {code}
 <fieldType name="long" class="solr.TrieLongField" precisionStep="0" 
 omitNorms="true" />
 <field name="ts" type="long" indexed="true" stored="true" required="true" 
 multiValued="false" />
 {code}
 ...stats.facet will not work.
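
A hedged sketch of the kind of guard the fix implies: consult the SchemaField, 
which can override the FieldType, rather than TrieField.isTokenized() (which is 
hardcoded to true). Names here are illustrative, not the actual patch:

{code}
// Illustrative only: accept single-valued trie fields in stats.facet.
// 'schema' and 'facetFieldName' are assumed from the surrounding stats code.
SchemaField sf = schema.getField(facetFieldName);
if (sf.multiValued()) {
  throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
      "Stats facet cannot be computed on multi-valued field: " + sf.getName());
}
// single-valued trie fields (any precisionStep) fall through and are faceted
{code}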

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4111 - Failure

2013-07-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4111/

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.cloud.BasicDistributedZkTest: 1) Thread[id=552, 
name=recoveryCmdExecutor-137-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest] at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
 at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
 at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)  
   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391) at 
java.net.Socket.connect(Socket.java:579) at 
org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
 at 
org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
 at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
 at 
org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
 at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
 at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291) 
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:722)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.BasicDistributedZkTest: 
   1) Thread[id=552, name=recoveryCmdExecutor-137-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
at java.net.Socket.connect(Socket.java:579)
at 
org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
at 
org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
at __randomizedtesting.SeedInfo.seed([1C86F9549CDD9813]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=552, name=recoveryCmdExecutor-137-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest] at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
 at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
 at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)  
   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)

[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0-ea-b94) - Build # 6421 - Failure!

2013-07-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/6421/
Java: 64bit/jdk1.8.0-ea-b94 -XX:+UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.TestRandomDVFaceting

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.TestRandomDVFaceting: 
1) Thread[id=4381, name=IPC Parameter Sending Thread #2, state=TIMED_WAITING, 
group=TGRP-TestRecoveryHdfs] at sun.misc.Unsafe.park(Native Method) 
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)   
  at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:360)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:939) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1069)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:724)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.TestRandomDVFaceting: 
   1) Thread[id=4381, name=IPC Parameter Sending Thread #2, 
state=TIMED_WAITING, group=TGRP-TestRecoveryHdfs]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458)
at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:360)
at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:939)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1069)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
at __randomizedtesting.SeedInfo.seed([FE05332022582F89]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.TestRandomDVFaceting

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=4381, name=IPC Parameter Sending Thread #2, state=TIMED_WAITING, 
group=TGRP-TestRecoveryHdfs] at sun.misc.Unsafe.park(Native Method) 
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)   
  at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:360)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:939) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1069)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:724)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=4381, name=IPC Parameter Sending Thread #2, 
state=TIMED_WAITING, group=TGRP-TestRecoveryHdfs]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458)
at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:360)
at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:939)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1069)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
at __randomizedtesting.SeedInfo.seed([FE05332022582F89]:0)




Build Log:
[...truncated 51165 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:386: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:366: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:39: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build.xml:181: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/common-build.xml:443: 
The following error occurred while executing this line:

[jira] [Created] (SOLR-5005) ScriptRequestHandler

2013-07-04 Thread David Smiley (JIRA)
David Smiley created SOLR-5005:
--

 Summary: ScriptRequestHandler
 Key: SOLR-5005
 URL: https://issues.apache.org/jira/browse/SOLR-5005
 Project: Solr
  Issue Type: New Feature
Reporter: David Smiley


A user customizable script based request handler would be very useful.  It's 
inspired by the ScriptUpdateRequestProcessor, but on the search end. A user 
could write a script that submits searches to Solr (in-VM) and can react to the 
results of one search before making another that is formulated dynamically.  
And it can assemble the response data, potentially reducing both the latency 
and data that would move over the wire if this feature didn't exist.  It could 
also be used to easily add a user-specifiable search API at the Solr server 
with request parameters governed by what the user wants to advertise -- 
especially useful within enterprises.  And, it could be used to enforce 
security requirements on allowable parameter values to Solr, so a javascript 
based Solr client could be allowed to talk to only a script based request 
handler which enforces the rules.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0-ea-b94) - Build # 6348 - Failure!

2013-07-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6348/
Java: 32bit/jdk1.8.0-ea-b94 -server -XX:+UseG1GC

1 tests failed.
REGRESSION:  org.apache.solr.cloud.SliceStateUpdateTest.testSliceStateUpdate

Error Message:
expected:<[in]active> but was:<[]active>

Stack Trace:
org.junit.ComparisonFailure: expected:<[in]active> but was:<[]active>
at 
__randomizedtesting.SeedInfo.seed([D9ADB8C4FF227292:9CF3588A1F7A816]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.cloud.SliceStateUpdateTest.testSliceStateUpdate(SliceStateUpdateTest.java:185)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:491)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:724)




Build Log:
[...truncated 10787 lines...]
BUILD FAILED

[jira] [Commented] (SOLR-4998) Make the use of Slice and Shard consistent across the code and document base

2013-07-04 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13700049#comment-13700049
 ] 

Yonik Seeley commented on SOLR-4998:


bq. But would we want it to mean what Otis mentioned/understood?

I don't think so.  But at this point, since we've never, ever, gotten everyone 
to agree on all the terminology, 
perhaps I'll just wait for someone to put up a patch.

As Mark says, good luck ;-)

 Make the use of Slice and Shard consistent across the code and document base
 

 Key: SOLR-4998
 URL: https://issues.apache.org/jira/browse/SOLR-4998
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3, 4.3.1
Reporter: Anshum Gupta

 The interchangeable use of Slice and Shard is pretty confusing at times. We 
 should define each separately and use the apt term whenever we do so.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-3069) Lucene should have an entirely memory resident term dictionary

2013-07-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13700059#comment-13700059
 ] 

ASF subversion and git services commented on LUCENE-3069:
-

Commit 1499744 from [~billy]
[ https://svn.apache.org/r1499744 ]

LUCENE-3069: writer part

 Lucene should have an entirely memory resident term dictionary
 --

 Key: LUCENE-3069
 URL: https://issues.apache.org/jira/browse/LUCENE-3069
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index, core/search
Affects Versions: 4.0-ALPHA
Reporter: Simon Willnauer
Assignee: Han Jiang
  Labels: gsoc2013
 Fix For: 4.4


 FST based TermDictionary has been a great improvement yet it still uses a 
 delta codec file for scanning to terms. Some environments have enough memory 
 available to keep the entire FST based term dict in memory. We should add a 
 TermDictionary implementation that encodes all needed information for each 
 term into the FST (custom fst.Output) and builds a FST from the entire term 
 not just the delta.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-5006) CREATESHARD, DELETESHARD commands for 'implicit' shards

2013-07-04 Thread Noble Paul (JIRA)
Noble Paul created SOLR-5006:


 Summary: CREATESHARD, DELETESHARD commands for 'implicit' shards
 Key: SOLR-5006
 URL: https://issues.apache.org/jira/browse/SOLR-5006
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul


Custom sharding requires CREATESHARD/DELETESHARD commands.

It may not be applicable to hash-based sharding.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-5007) TestRecoveryHdfs seems to be leaking a thread occasionally that ends up failing a completely different test.

2013-07-04 Thread Mark Miller (JIRA)
Mark Miller created SOLR-5007:
-

 Summary: TestRecoveryHdfs seems to be leaking a thread 
occasionally that ends up failing a completely different test.
 Key: SOLR-5007
 URL: https://issues.apache.org/jira/browse/SOLR-5007
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller
Assignee: Mark Miller




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 623 - Failure!

2013-07-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/623/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
REGRESSION:  org.apache.solr.cloud.AliasIntegrationTest.testDistribSearch

Error Message:
Server at http://127.0.0.1:49164 returned non ok status:500, message:Server 
Error

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Server at 
http://127.0.0.1:49164 returned non ok status:500, message:Server Error
at 
__randomizedtesting.SeedInfo.seed([B5FBA37A33D63575:341D2D6244895549]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:385)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
at 
org.apache.solr.cloud.AliasIntegrationTest.deleteAlias(AliasIntegrationTest.java:253)
at 
org.apache.solr.cloud.AliasIntegrationTest.doTest(AliasIntegrationTest.java:226)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:835)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 

[jira] [Updated] (SOLR-4982) Creating a core while referencing system properties looks like it loses files.

2013-07-04 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-4982:
-

Attachment: SOLR-4982.patch

Patch with test case and fixes. All other tests pass.

I'll check this in later today, after I do a few more tests, unless there are 
objections.

 Creating a core while referencing system properties looks like it loses files.
 --

 Key: SOLR-4982
 URL: https://issues.apache.org/jira/browse/SOLR-4982
 Project: Solr
  Issue Type: Bug
  Components: multicore
Affects Versions: 4.3, 5.0
Reporter: Erick Erickson
Assignee: Erick Erickson
 Attachments: SOLR-4982.patch


 If you use the core admin handler to create a core that references system 
 properties, and then index documents without restarting Solr, your documents 
 are indexed to the wrong place.
 Say for instance I define a sys prop EOE=/Users/Erick/tmp and create a core 
 with this request
 localhost:8983/solr/admin/cores?action=CREATE&name=coreZ&instanceDir=coreZ&dataDir=%24%7BEOE%7D
 where %24%7BEOE%7D is really ${EOE} after URL escaping. What gets preserved 
 in solr.xml is correct, dataDir is set to ${EOE}. And if I restart Solr, then 
 index documents, they wind up in /Users/Erick/tmp. This is as it should be.
 HOWEVER, if rather than immediately restart Solr I index some documents to 
 CoreZ, they go in solr_home/CoreZ/${EOE}. The literal path is ${EOE}, 
 dollar sign, curly braces and all.
 How important is this to fix for 4.4?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5007) TestRecoveryHdfs seems to be leaking a thread occasionally that ends up failing a completely different test.

2013-07-04 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13700102#comment-13700102
 ] 

Dawid Weiss commented on SOLR-5007:
---

Hmm... within the same suite? If it's a different suite then this shouldn't be 
possible. Let me know, Mark.

 TestRecoveryHdfs seems to be leaking a thread occasionally that ends up 
 failing a completely different test.
 

 Key: SOLR-5007
 URL: https://issues.apache.org/jira/browse/SOLR-5007
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller
Assignee: Mark Miller



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-SmokeRelease-trunk - Build # 96 - Still Failing

2013-07-04 Thread Robert Muir
When was this dependency introduced? Is it really needed?

The smoketesting has been failing for days because of this.

On Thu, Jul 4, 2013 at 1:28 AM, Apache Jenkins Server 
jenk...@builds.apache.org wrote:

  [exec] raise RuntimeError('%s contains sheisty class %s' %
  (desc, name2))
  [exec] RuntimeError: JAR file
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/fakeReleaseTmp/unpack/solr-5.0.0/dist/test-framework/lib/jersey-core-1.16.jar
 contains sheisty class javax/ws/rs/ApplicationPath.class



[jira] [Created] (LUCENE-5092) join: don't expect all filters to be FixedBitSet instances

2013-07-04 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-5092:


 Summary: join: don't expect all filters to be FixedBitSet instances
 Key: LUCENE-5092
 URL: https://issues.apache.org/jira/browse/LUCENE-5092
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/join
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor


The join module throws exceptions when the parents filter isn't a FixedBitSet. 
The reason is that the join module relies on prevSetBit to find the first child 
document given a parent ID.

As suggested by Paul Elschot on LUCENE-5081, we could fix it by exposing 
methods in the iterators to iterate backwards. When the join module gets an 
iterator which isn't able to iterate backwards, it would just need to dump its 
content into another DocIdSet that supports backward iteration, FixedBitSet for 
example.
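
For illustration, the dump fallback mentioned above could look roughly like 
this (a sketch, not the eventual patch):

{code}
import java.io.IOException;

import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.util.FixedBitSet;

// Copy any iterator into a FixedBitSet so prevSetBit() enables backward scans.
// maxDoc is assumed to come from the segment's reader context.
static FixedBitSet dumpToFixedBitSet(DocIdSetIterator it, int maxDoc) throws IOException {
  FixedBitSet bits = new FixedBitSet(maxDoc);
  for (int doc = it.nextDoc(); doc != DocIdSetIterator.NO_MORE_DOCS; doc = it.nextDoc()) {
    bits.set(doc);
  }
  return bits;
}
{code}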

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5092) join: don't expect all filters to be FixedBitSet instances

2013-07-04 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-5092:
-

Description: 
The join module throws exceptions when the parents filter isn't a FixedBitSet. 
The reason is that the join module relies on prevSetBit to find the first child 
document given a parent ID.

As suggested by Uwe and Paul Elschot on LUCENE-5081, we could fix it by 
exposing methods in the iterators to iterate backwards. When the join module 
gets an iterator which isn't able to iterate backwards, it would just need to 
dump its content into another DocIdSet that supports backward iteration, 
FixedBitSet for example.

  was:
The join module throws exceptions when the parents filter isn't a FixedBitSet. 
The reason is that the join module relies on prevSetBit to find the first child 
document given a parent ID.

As suggested by Paul Elschot on LUCENE-5081, we could fix it by exposing 
methods in the iterators to iterate backwards. When the join module gets an 
iterator which isn't able to iterate backwards, it would just need to dump its 
content into another DocIdSet that supports backward iteration, FixedBitSet for 
example.


 join: don't expect all filters to be FixedBitSet instances
 --

 Key: LUCENE-5092
 URL: https://issues.apache.org/jira/browse/LUCENE-5092
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/join
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor

 The join module throws exceptions when the parents filter isn't a 
 FixedBitSet. The reason is that the join module relies on prevSetBit to find 
 the first child document given a parent ID.
 As suggested by Uwe and Paul Elschot on LUCENE-5081, we could fix it by 
 exposing methods in the iterators to iterate backwards. When the join module 
 gets an iterator which isn't able to iterate backwards, it would just need to 
 dump its content into another DocIdSet that supports backward iteration, 
 FixedBitSet for example.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5081) Compress doc ID sets

2013-07-04 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13700111#comment-13700111
 ] 

Adrien Grand commented on LUCENE-5081:
--

bq. The use case in BlockJoinQuery has nothing to do with filters, it needs a 
special implementation to seek backwards
bq. For backward iteration I think it would be good to extend DocIdSetIterator 
with backwards iterating method(s), possibly in a subclass, and use an 
implementation of that in the block joins.

I opened LUCENE-5092.

 Compress doc ID sets
 

 Key: LUCENE-5081
 URL: https://issues.apache.org/jira/browse/LUCENE-5081
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-5081.patch


 Our filters use bit sets a lot to store document IDs. However, it is likely 
 that most of them are sparse hence easily compressible. Having efficient 
 compressed sets would allow for caching more data.
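
As a toy illustration of why sparse sets compress well (not Lucene's actual 
encoding): storing the gaps between sorted doc IDs as variable-length ints 
costs a byte or two per document, versus maxDoc/8 bytes for a bitset.

{code}
import java.io.ByteArrayOutputStream;

// Delta + VInt encoding of a sorted doc ID list: small gaps -> few bytes.
static byte[] encodeSparseDocIds(int[] sortedDocIds) {
  ByteArrayOutputStream out = new ByteArrayOutputStream();
  int prev = 0;
  for (int doc : sortedDocIds) {
    int delta = doc - prev;            // gap since the previous document
    while ((delta & ~0x7F) != 0) {     // 7 payload bits per byte, high bit = "more"
      out.write((delta & 0x7F) | 0x80);
      delta >>>= 7;
    }
    out.write(delta);
    prev = doc;
  }
  return out.toByteArray();
}
{code}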

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: [JENKINS] Lucene-Solr-SmokeRelease-trunk - Build # 96 - Still Failing

2013-07-04 Thread Uwe Schindler
Caused by Slowdoop commit of Mark Miller! These classes are used by Java 
Enterprise Edition and should never be used on a Client VM (my opinion). If 
somebody wants to run Slowdoop, use an Enterprise JDK.

 

-

Uwe Schindler

H.-H.-Meier-Allee 63, D-28213 Bremen

http://www.thetaphi.de

eMail: u...@thetaphi.de

 

From: Robert Muir [mailto:rcm...@gmail.com] 
Sent: Thursday, July 04, 2013 4:51 PM
To: dev@lucene.apache.org
Subject: Re: [JENKINS] Lucene-Solr-SmokeRelease-trunk - Build # 96 - Still 
Failing

 

When was this dependency introduced? Is it really needed?

The smoketesting has been failing for days because of this.

On Thu, Jul 4, 2013 at 1:28 AM, Apache Jenkins Server 
jenk...@builds.apache.org wrote:

 [exec] raise RuntimeError('%s contains sheisty class %s' %  (desc, 
name2))
 [exec] RuntimeError: JAR file 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/fakeReleaseTmp/unpack/solr-5.0.0/dist/test-framework/lib/jersey-core-1.16.jar
 contains sheisty class javax/ws/rs/ApplicationPath.class



[jira] [Commented] (LUCENE-5092) join: don't expect all filters to be FixedBitSet instances

2013-07-04 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13700114#comment-13700114
 ] 

Uwe Schindler commented on LUCENE-5092:
---

There are 2 possibilities:
- Let the iterator implement an additional interface that exposes prev() or 
however we call that method. The code would then use instanceof to check if 
backwards iteration is supported on the iterator.
- Do it similarly to the random access bits() on the DocIdSet. In that case a 
consumer could ask the DocIdSet for a backwardsIterator(), which returns null 
if it does not exist.

I would prefer the first possibility, especially if you need to go both 
backwards and forwards on the same iterator instance.
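
A sketch of that first option; the interface and method names below are made 
up for illustration, nothing like them exists in the codebase yet:

{code}
import java.io.IOException;

import org.apache.lucene.search.DocIdSetIterator;

// Hypothetical marker interface: iterators that can also walk backwards.
interface BackwardsIteration {
  /** Returns the greatest docID <= target in the set, or -1 if there is none. */
  int prev(int target) throws IOException;
}

// Consumer side: probe with instanceof, per the first option above.
static int firstChildOf(DocIdSetIterator it, int parentDoc) throws IOException {
  if (it instanceof BackwardsIteration) {
    return ((BackwardsIteration) it).prev(parentDoc - 1) + 1;
  }
  // otherwise dump into a FixedBitSet (or similar) that supports prevSetBit()
  throw new UnsupportedOperationException("iterator cannot seek backwards");
}
{code}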

 join: don't expect all filters to be FixedBitSet instances
 --

 Key: LUCENE-5092
 URL: https://issues.apache.org/jira/browse/LUCENE-5092
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/join
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor

 The join module throws exceptions when the parents filter isn't a 
 FixedBitSet. The reason is that the join module relies on prevSetBit to find 
 the first child document given a parent ID.
 As suggested by Uwe and Paul Elschot on LUCENE-5081, we could fix it by 
 exposing methods in the iterators to iterate backwards. When the join module 
 gets an iterator which isn't able to iterate backwards, it would just need to 
 dump its content into another DocIdSet that supports backward iteration, 
 FixedBitSet for example.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-5092) join: don't expect all filters to be FixedBitSet instances

2013-07-04 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13700114#comment-13700114
 ] 

Uwe Schindler edited comment on LUCENE-5092 at 7/4/13 3:00 PM:
---

There are 2 possibilities:
- Let the iterator implement an additional interface that exposes prev() or 
however we call that method. The code would then use instanceof to check if 
backwards iteration is supported on the iterator.
- Do it similarly to the random access bits() on the DocIdSet. In that case a 
consumer could ask the DocIdSet for a backwardsIterator(), which returns null 
if it does not exist.

We should never add an additional method to DocIdSetIterator, because then we 
would also have Scorers or DocsEnum optionally supporting going backwards! So 
please use an interface as marker + method definition!!!

I would prefer the first possibility, especially if you need to go both 
backwards and forwards on the same iterator instance.

  was (Author: thetaphi):
There are 2 possibilities:
- Let the iterator implement an additional interface that exposes prev() or 
however we call that method. The code would then use instanceof to check if 
backwards iteration is supported on the iterator.
- Do it similarly to the random access bits() on the DocIdSet. In that case a 
consumer could ask the DocIdSet for a backwardsIterator(), which returns null 
if it does not exist.

I would prefer the first possibility, especially if you need to go both 
backwards and forwards on the same iterator instance.
  
 join: don't expect all filters to be FixedBitSet instances
 --

 Key: LUCENE-5092
 URL: https://issues.apache.org/jira/browse/LUCENE-5092
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/join
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor

 The join module throws exceptions when the parents filter isn't a 
 FixedBitSet. The reason is that the join module relies on prevSetBit to find 
 the first child document given a parent ID.
 As suggested by Uwe and Paul Elschot on LUCENE-5081, we could fix it by 
 exposing methods in the iterators to iterate backwards. When the join module 
 gets an iterator which isn't able to iterate backwards, it would just need to 
 dump its content into another DocIdSet that supports backward iteration, 
 FixedBitSet for example.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5007) TestRecoveryHdfs seems to be leaking a thread occasionally that ends up failing a completely different test.

2013-07-04 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13700117#comment-13700117
 ] 

Mark Miller commented on SOLR-5007:
---

An example of what I am seeing: 
http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/6421/

TestRandomDVFaceting fails with:

{noformat}

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.TestRandomDVFaceting: 1) 
Thread[id=4381, name=IPC Parameter Sending Thread #2, state=TIMED_WAITING, 
group=TGRP-TestRecoveryHdfs] at 
{noformat}

I assume it's due to TestRecoveryHdfs because of group=TGRP-TestRecoveryHdfs

 TestRecoveryHdfs seems to be leaking a thread occasionally that ends up 
 failing a completely different test.
 

 Key: SOLR-5007
 URL: https://issues.apache.org/jira/browse/SOLR-5007
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller
Assignee: Mark Miller



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-SmokeRelease-trunk - Build # 96 - Still Failing

2013-07-04 Thread Robert Muir
I'm confused why test-framework depends upon jersey-core, hadoop-tests,
and jetty 6. Is there really anything in test-framework that depends on
these (I looked, I see nothing).

The reason I ask is because if test-framework doesn't need this (only
*concrete* tests), we could solve that with an ivy configuration for
'tests' instead. In my opinion this would be better: we wouldn't need to
ship it in our distribution at all (e.g. tests don't work from the binary
distribution anyway).

On Thu, Jul 4, 2013 at 10:55 AM, Uwe Schindler u...@thetaphi.de wrote:

 Caused by Slowdoop commit of Mark Miller! These classes are used by Java
 Enterprise Edition and should never be used on a Client VM (my opinion). If
 somebody wants to run Slowdoop, use an Enterprise JDK.


 -

 Uwe Schindler

 H.-H.-Meier-Allee 63, D-28213 Bremen

 http://www.thetaphi.de

 eMail: u...@thetaphi.de


 *From:* Robert Muir [mailto:rcm...@gmail.com]
 *Sent:* Thursday, July 04, 2013 4:51 PM
 *To:* dev@lucene.apache.org
 *Subject:* Re: [JENKINS] Lucene-Solr-SmokeRelease-trunk - Build # 96 -
 Still Failing


 When was this dependency introduced? Is it really needed?

 The smoketesting has been failing for days because of this.

 On Thu, Jul 4, 2013 at 1:28 AM, Apache Jenkins Server 
 jenk...@builds.apache.org wrote:

  [exec] raise RuntimeError('%s contains sheisty class %s' %
  (desc, name2))
  [exec] RuntimeError: JAR file
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/fakeReleaseTmp/unpack/solr-5.0.0/dist/test-framework/lib/jersey-core-1.16.jar
 contains sheisty class javax/ws/rs/ApplicationPath.class



RE: [JENKINS] Lucene-Solr-SmokeRelease-trunk - Build # 96 - Still Failing

2013-07-04 Thread Uwe Schindler
The additional problem now: If we expose this JAR file as part of Solr, it is 
incompatible with Java Enterprise! A classpath clash would occur, leading to 
unexpected behavior in other modules that expect the original (and maybe a 
newer/older implementation as part of the Enterprise JDK).

 

Just as a WAR is not allowed to contain a servlet.jar (and the webapp container 
hides it from the WebappClassloader), this should also not be installed. The 
user should install it manually!

 

-

Uwe Schindler

H.-H.-Meier-Allee 63, D-28213 Bremen

http://www.thetaphi.de

eMail: u...@thetaphi.de

 

From: Uwe Schindler [mailto:u...@thetaphi.de] 
Sent: Thursday, July 04, 2013 4:55 PM
To: dev@lucene.apache.org
Subject: RE: [JENKINS] Lucene-Solr-SmokeRelease-trunk - Build # 96 - Still 
Failing

 

Caused by Slowdoop commit of Mark Miller! These classes are used by Java 
Enterprise Edition and should never be used on a Client VM (my opinion). If 
somebody wants to run Slowdoop, use an Enterprise JDK.

 

-

Uwe Schindler

H.-H.-Meier-Allee 63, D-28213 Bremen

http://www.thetaphi.de

eMail: u...@thetaphi.de

 

From: Robert Muir [mailto:rcm...@gmail.com] 
Sent: Thursday, July 04, 2013 4:51 PM
To: dev@lucene.apache.org
Subject: Re: [JENKINS] Lucene-Solr-SmokeRelease-trunk - Build # 96 - Still 
Failing

 

When was this dependency introduced? Is it really needed?

The smoketesting has been failing for days because of this.

On Thu, Jul 4, 2013 at 1:28 AM, Apache Jenkins Server 
jenk...@builds.apache.org wrote:

 [exec] raise RuntimeError('%s contains sheisty class %s' %  (desc, 
name2))
 [exec] RuntimeError: JAR file 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/fakeReleaseTmp/unpack/solr-5.0.0/dist/test-framework/lib/jersey-core-1.16.jar
 contains sheisty class javax/ws/rs/ApplicationPath.class



Re: [JENKINS] Lucene-Solr-SmokeRelease-trunk - Build # 96 - Still Failing

2013-07-04 Thread Mark Miller

On Jul 4, 2013, at 11:01 AM, Robert Muir rcm...@gmail.com wrote:

 I'm confused why test-framework depends upon jersey-core, hadoop-tests, 
 and jetty 6. Is there really anything in test-framework that depends on these 
 (I looked, I see nothing).
 
 The reason I ask is because if test-framework doesn't need this (only 
 *concrete* tests), we could solve that with an ivy configuration for 'tests' 
 instead. In my opinion this would be better: we wouldn't need to ship it in 
 our distribution at all (e.g. tests don't work from the binary distribution 
 anyway).

The main reason they are in test-framework is that we don't want to ship any 
of it. I don't know the build well enough to say anything for sure, but it 
seems it could be part of another module that is not shipped.

- Mark

 
 On Thu, Jul 4, 2013 at 10:55 AM, Uwe Schindler u...@thetaphi.de wrote:
 Caused by Slowdoop commit of Mark Miller! These classes are used by Java 
 Enterprise Edition and should never be used on a Client VM (my opinion). If 
 somebody wants to run Slowdoop, use an Enterprise JDK.
 
  
 
 -
 
 Uwe Schindler
 
 H.-H.-Meier-Allee 63, D-28213 Bremen
 
 http://www.thetaphi.de
 
 eMail: u...@thetaphi.de
 
  
 
 From: Robert Muir [mailto:rcm...@gmail.com] 
 Sent: Thursday, July 04, 2013 4:51 PM
 To: dev@lucene.apache.org
 Subject: Re: [JENKINS] Lucene-Solr-SmokeRelease-trunk - Build # 96 - Still 
 Failing
 
  
 
 When was this dependency introduced? Is it really needed?
 
 The smoketesting has been failing for days because of this.
 
 On Thu, Jul 4, 2013 at 1:28 AM, Apache Jenkins Server 
 jenk...@builds.apache.org wrote:
 
  [exec] raise RuntimeError('%s contains sheisty class %s' %  (desc, 
 name2))
  [exec] RuntimeError: JAR file 
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/fakeReleaseTmp/unpack/solr-5.0.0/dist/test-framework/lib/jersey-core-1.16.jar
  contains sheisty class javax/ws/rs/ApplicationPath.class
 
 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-SmokeRelease-trunk - Build # 96 - Still Failing

2013-07-04 Thread Mark Miller

On Jul 4, 2013, at 11:06 AM, Mark Miller markrmil...@gmail.com wrote:

  we could solve that with an ivy configuration for 'tests' instead

Or perhaps solved by that? I don't really know what you are suggesting though - 
ivy is half as mysterious as maven to me.

- Mark
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-SmokeRelease-trunk - Build # 96 - Still Failing

2013-07-04 Thread Mark Miller

On Jul 4, 2013, at 11:02 AM, Uwe Schindler u...@thetaphi.de wrote:

 If we expose this JAR file as part of Solr,

It's not shipped as part of Solr - it's a test dependency. Take a look before 
you go nuts.

- Mark


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.x-Linux (32bit/ibm-j9-jdk6) - Build # 6349 - Still Failing!

2013-07-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6349/
Java: 32bit/ibm-j9-jdk6 
-Xjit:exclude={org/apache/lucene/util/fst/FST.pack(IIF)Lorg/apache/lucene/util/fst/FST;}

1 tests failed.
FAILED:  org.apache.solr.cloud.BasicDistributedZk2Test.testDistribSearch

Error Message:
Server at http://127.0.0.1:58311/onenodecollectioncore returned non ok 
status:404, message:Can not find: /onenodecollectioncore/update

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Server at 
http://127.0.0.1:58311/onenodecollectioncore returned non ok status:404, 
message:Can not find: /onenodecollectioncore/update
at 
__randomizedtesting.SeedInfo.seed([6E01C3C92C57FAB:87069224E59A1F97]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:385)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
at 
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:117)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:116)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:102)
at 
org.apache.solr.cloud.BasicDistributedZk2Test.testNodeWithoutCollectionForwarding(BasicDistributedZk2Test.java:196)
at 
org.apache.solr.cloud.BasicDistributedZk2Test.doTest(BasicDistributedZk2Test.java:88)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:835)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
at java.lang.reflect.Method.invoke(Method.java:611)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 

Re: [JENKINS] Lucene-Solr-SmokeRelease-trunk - Build # 96 - Still Failing

2013-07-04 Thread Robert Muir
On Thu, Jul 4, 2013 at 11:09 AM, Mark Miller markrmil...@gmail.com wrote:


 On Jul 4, 2013, at 11:06 AM, Mark Miller markrmil...@gmail.com wrote:

   we could solve that with an ivy configuration for 'tests' instead

 Or perhaps solved by that? I don't really know what you are suggesting
 though - ivy is half as mysterious as maven to me.


Basically you have the ability to say that src/java depends on libfoo.jar
and libbar.jar, but that src/test also needs libbaz.jar (only for running
those tests).

It might add some complexity, but then the dependencies would be more
correct, I think, because test-framework doesn't really have any classes
that need these dependencies (from what I see).
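
For illustration, a minimal ivy.xml sketch of that split (the module, artifact
and conf names here are made up, not taken from the real Lucene/Solr build
files):

<ivy-module version="2.0">
  <info organisation="org.example" module="some-module"/>
  <configurations>
    <!-- what src/java compiles and runs against -->
    <conf name="compile" description="direct dependencies of src/java"/>
    <!-- only put on the classpath when the tests are run -->
    <conf name="test" extends="compile" description="extra jars for src/test"/>
  </configurations>
  <dependencies>
    <dependency org="example" name="libfoo" rev="1.0" conf="compile->default"/>
    <dependency org="example" name="libbar" rev="1.0" conf="compile->default"/>
    <!-- needed only to run the tests, never shipped -->
    <dependency org="example" name="libbaz" rev="1.0" conf="test->default"/>
  </dependencies>
</ivy-module>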


RE: [JENKINS] Lucene-Solr-SmokeRelease-trunk - Build # 96 - Still Failing

2013-07-04 Thread Uwe Schindler
Hi,

I have no problem with it if it is *not* part of the WAR or the example 
folder's lib! If it were, it would be a bug!

In general we should split all JARs needed for compilation from those for 
test runtime and production runtime, using 3 different configurations. The test 
runtime is only added to the classpath when tests are run. The production env is 
used for building the webapp/example app. The compile classpath only contains 
direct dependencies, no transitive ones.

See e.g. forbidden-apis as an example. It splits those classpaths completely 
(and uses ivy:cachepath for classpath / ivy:cachefileset for packaging zips).
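
As a sketch of what the forbidden-apis style of split looks like in an Ant
build (the target name and the path/set ids are invented here):

<project xmlns:ivy="antlib:org.apache.ivy.ant">
  <target name="resolve">
    <!-- classpaths built straight from the ivy cache: no lib/ copies -->
    <ivy:cachepath pathid="compile.classpath" conf="compile"/>
    <ivy:cachepath pathid="test.classpath" conf="test"/>
    <!-- fileset of only the production jars, used when packaging zips -->
    <ivy:cachefileset setid="package.jars" conf="runtime"/>
  </target>
</project>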

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Mark Miller [mailto:markrmil...@gmail.com]
 Sent: Thursday, July 04, 2013 5:09 PM
 To: dev@lucene.apache.org
 Subject: Re: [JENKINS] Lucene-Solr-SmokeRelease-trunk - Build # 96 - Still
 Failing
 
 
 On Jul 4, 2013, at 11:06 AM, Mark Miller markrmil...@gmail.com wrote:
 
   we could solve that with an ivy configuration for 'tests' instead
 
 Or perhaps solved by that? I don't really know what you are suggesting
 though - ivy is half as mysterious as maven to me.
 
 - Mark





Re: [JENKINS] Lucene-Solr-SmokeRelease-trunk - Build # 96 - Still Failing

2013-07-04 Thread Mark Miller

On Jul 4, 2013, at 11:12 AM, Robert Muir rcm...@gmail.com wrote:

 
 
 On Thu, Jul 4, 2013 at 11:09 AM, Mark Miller markrmil...@gmail.com wrote:
 
 On Jul 4, 2013, at 11:06 AM, Mark Miller markrmil...@gmail.com wrote:
 
   we could solve that with an ivy configuration for 'tests' instead
 
 Or perhaps solved by that? I don't really know what you are suggesting though 
 - ivy is half as mysterious as maven to me.
 
 
 Basically you have the ability to say that src/java depends on libfoo.jar and 
 libbar.jar, but that src/test also needs libbaz.jar (only for running those 
 tests).
 
 It might add some complexity, but then the dependencies would be more 
 correct, I think, because test-framework doesn't really have any classes that 
 need these dependencies (from what I see).

Yeah, it seems like that should be doable then. The tests that use those 
dependencies are all in solr/core - like I said, the reason they went into 
test-framework was to avoid having them go out as part of Solr - not because 
classes there use them.

- Mark





Re: Indexing file with security problem

2013-07-04 Thread Sanne Grinovero
To be honest I am not familiar with ManifoldCF, so I can't say whether
Hibernate Search is better or not, but this would definitely not be too
hard with Hibernate Search:

1) You annotate with @Indexed the entity referring to your PostgreSQL
table containing the metadata; with @TikaBridge you point it to the
external resource containing the document.

Returning database ids is the default behaviour.

http://docs.jboss.org/hibernate/search/4.3/reference/en-US/html_single/#d0e4244
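
A minimal sketch of point 1, assuming the Hibernate Search 4.3 annotations
(the entity and its fields are invented for illustration):

import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.search.annotations.Field;
import org.hibernate.search.annotations.Indexed;
import org.hibernate.search.annotations.TikaBridge;

@Entity
@Indexed
public class DocumentMetadata {
    @Id
    private Long id;            // the database id, returned by default

    @Field
    private String fileName;    // searchable metadata column

    // path to the external file; Tika extracts and indexes its text content
    @Field
    @TikaBridge
    private String contentPath;
}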

2) This is a bit more complex, but I don't think any more complex than it
would be with other technologies: you encode some information in the
index, then define a parametric filter on it.

http://docs.jboss.org/hibernate/search/4.3/reference/en-US/html_single/#query-filter
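
For point 2, a sketch of such a parametric filter with Hibernate Search's
full-text filter API; the "level" encoding and all the names here are
invented, not taken from the question:

import org.apache.lucene.index.Term;
import org.apache.lucene.search.CachingWrapperFilter;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.QueryWrapperFilter;
import org.apache.lucene.search.TermQuery;
import org.hibernate.search.annotations.Factory;
import org.hibernate.search.annotations.Key;
import org.hibernate.search.filter.FilterKey;
import org.hibernate.search.filter.StandardFilterKey;

// declared on the entity:
// @FullTextFilterDef(name = "security", impl = SecurityFilterFactory.class)
public class SecurityFilterFactory {
    private String level;

    public void setLevel(String level) {
        this.level = level;
    }

    @Key
    public FilterKey getKey() {
        // cache key so each parameter value gets its own cached filter
        StandardFilterKey key = new StandardFilterKey();
        key.addParameter(level);
        return key;
    }

    @Factory
    public Filter getFilter() {
        // keep only documents whose indexed "level" field matches the caller
        return new CachingWrapperFilter(
            new QueryWrapperFilter(new TermQuery(new Term("level", level))));
    }
}

// at query time:
// fullTextQuery.enableFullTextFilter("security").setParameter("level", "low");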

3) Not sure, sorry. But the automatic indexing is triggered as soon
as you store the metadata, so maybe that is good enough?

Looks interesting!

Sanne - Hibernate Search team


On 27 June 2013 03:14, Otis Gospodnetic otis.gospodne...@gmail.com wrote:
 Hi,

 I would start from ManifoldCF - it may save you some work.

 Otis
 Solr & ElasticSearch Support
 http://sematext.com/

 On Jun 26, 2013 5:01 PM, lukasw lukas...@gmail.com wrote:

 Hello

 I'll try to briefly describe my problem and task.
 My name is Lukas and I am a Java developer; my task is to create a search
 engine for different types of files (text-based types only): pdf, word, odf,
 xml, but not html.
 I have a little experience with Lucene: about a year ago I wrote a simple
 full-text search using Lucene and Hibernate Search. That was a simple project.
 But now I have a much more difficult search task.
 We are using Java 1.7 and GlassFish 3, and I have to concentrate only on the
 server-side approach, not the client UI. Here are my three major problems:

 1) All files are stored on a WebDAV server, but information about the file
 (name, id, file type, etc.) is stored in a database (PostgreSQL), so when I
 create the index I need to use both pieces of information. As a result of a
 query I need to return only the file id from the database. In summary: the
 content of a file is stored on the server but the information about the file
 is stored in the database, so we must retrieve both.

 2) The second problem is that each file has a level of secrecy, and the major
 issue is that this level is calculated dynamically. When calculating the
 security level for a file we consider several properties. The static
 properties are the file's location and the folder the file is in, but there
 is also dynamic information: user profiles, user roles and departments. So
 when user Maggie is logged in she can search only the files test.pdf,
 test2.doc, etc., but if user Stev is logged in he has different profiles
 than Maggie, so he can only search for some phrase in broken.pdf,
 mybook.odt, test2.doc, etc. I think that when, for example, a user searches
 for the phrase lucene +solr, we search all indexed documents and filter the
 results afterwards. But I think that solution is not very efficient. What if
 the result contains 100 files? Do we then filter each file step by step? I
 do not see any other solution. Maybe you can help me: do Lucene or Solr have
 a mechanism to help with this?

 3) The last problem is that some files are encrypted, so those files must be
 indexed only once, before encryption! But I think that if we index secured
 files we get a security issue, because every word from those files gets
 tokenized. I have no idea how to secure the Lucene documents and the index
 datastore. Is that possible at all?


 Also, I have a question: do I need to use Solr for my search engine, or
 should I use only Lucene and write my own search engine? So as you can see,
 my problem is not with indexing or searching but with file security and
 secrecy levels.

 Thanks for any hints and for the time you spend on this.



 --
 View this message in context:
 http://lucene.472066.n3.nabble.com/Indexing-file-with-security-problem-tp4073394.html
 Sent from the Lucene - Java Developer mailing list archive at Nabble.com.







Re: [JENKINS] Lucene-Solr-SmokeRelease-trunk - Build # 96 - Still Failing

2013-07-04 Thread Robert Muir
On Thu, Jul 4, 2013 at 11:14 AM, Uwe Schindler u...@thetaphi.de wrote:

 Hi,

 I have no problem with it if it is *not* part of the WAR or the example
 folder's lib! If it were, it would be a bug!

 In general we should split all JARs needed for compilation from those for
 test runtime and production runtime, using 3 different configurations. The
 test runtime is only added to the classpath when tests are run. The production
 env is used for building the webapp/example app. The compile classpath only
 contains direct dependencies, no transitive ones.

 See e.g. forbidden-apis as an example. It splits those classpaths
 completely (and uses ivy:cachepath for classpath / ivy:cachefileset for
 packaging zips).


And this is also much easier to implement if you are using
ivy:cachepath/cachefileset. As long as we have lib/ directories, it's harder,
because then we would have to add additional ones and make things more
complicated.

On the other hand, the cachepath/fileset approach is faster and still works
with IDEs (at least with the IntelliJ and Eclipse IDEs, it seems, but you need
IvyDE).


[jira] [Commented] (LUCENE-5092) join: don't expect all filters to be FixedBitSet instances

2013-07-04 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13700141#comment-13700141
 ] 

Adrien Grand commented on LUCENE-5092:
--

I think the methods to go forward and backward should be on the same instance,
so I don't really like the 2nd option. The only issue I see with adding a
marker interface is that it would make filtering harder to implement, e.g.
FilteredDocIdSet would need to check whether the underlying iterator supports
backwards iteration to know whether to implement {{BackwardDocIdSetIterator}}
or not.

Even if we have a marker interface, Scorers will be able to implement this 
marker interface, so how is it different from adding an optional method to 
DocIdSetIterator (for example I was thinking about two new methods on 
DocIdSetIterator: {{canIterateBackwards()}} and {{prevDoc()}}, 
{{canIterateBackwards()}} returning true meaning that {{prevDoc()}} can be used 
safely).
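
Roughly, the optional-method variant being described could look like this
(a sketch of the idea, not committed API):

import java.io.IOException;

public abstract class DocIdSetIterator {
    // ... existing docID(), nextDoc(), advance(), cost() ...

    /** Whether prevDoc() may be called; false unless overridden. */
    public boolean canIterateBackwards() {
        return false;
    }

    /** Moves to and returns the previous matching doc; only valid when
     *  canIterateBackwards() returns true. */
    public int prevDoc() throws IOException {
        throw new UnsupportedOperationException();
    }
}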

 join: don't expect all filters to be FixedBitSet instances
 --

 Key: LUCENE-5092
 URL: https://issues.apache.org/jira/browse/LUCENE-5092
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/join
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor

 The join module throws exceptions when the parents filter isn't a 
 FixedBitSet. The reason is that the join module relies on prevSetBit to find 
 the first child document given a parent ID.
 As suggested by Uwe and Paul Elschot on LUCENE-5081, we could fix it by 
 exposing methods in the iterators to iterate backwards. When the join modules 
 gets an iterator which isn't able to iterate backwards, it would just need to 
 dump its content into another DocIdSet that supports backward iteration, 
 FixedBitSet for example.




[jira] [Commented] (LUCENE-5092) join: don't expect all filters to be FixedBitSet instances

2013-07-04 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13700147#comment-13700147
 ] 

Robert Muir commented on LUCENE-5092:
-

I don't think we should add backwards iteration to DocIdSetIterator etc. This
is not really how search engines work; it's just an oddity of join.




[jira] [Commented] (LUCENE-5084) EliasFanoDocIdSet

2013-07-04 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13700152#comment-13700152
 ] 

Paul Elschot commented on LUCENE-5084:
--

Ok, I'll post a new patch that
* is in package oal.util.packed,
* uses numberOfLeadingZeros instead of a loop,
* has a correct getCost, and
* has a static method sufficientlySmallerThanBitSet, deliberately imprecise...
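
For the second bullet, the usual transformation is the following (a sketch,
not the actual patch): finding the index of the highest set bit in a long
with a scan loop versus Long.numberOfLeadingZeros.

// loop version
int msb = -1;
for (int i = 63; i >= 0; i--) {
    if (((word >>> i) & 1L) != 0) { msb = i; break; }
}

// intrinsic version: same result, including -1 for word == 0,
// since Long.numberOfLeadingZeros(0) == 64
int msb2 = 63 - Long.numberOfLeadingZeros(word);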

 EliasFanoDocIdSet
 -

 Key: LUCENE-5084
 URL: https://issues.apache.org/jira/browse/LUCENE-5084
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Paul Elschot
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-5084.patch


 DocIdSet in Elias-Fano encoding




[jira] [Commented] (LUCENE-5092) join: don't expect all filters to be FixedBitSet instances

2013-07-04 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13700154#comment-13700154
 ] 

Uwe Schindler commented on LUCENE-5092:
---

-1 to add backwards iteration as separate instance or as part of the default 
DocIdSetIterator interface.

Just define an interface with the backwards methods and let all classes 
implement it. Then you would only have to check one time for the interface and 
cast to the interface.
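
A sketch of that shape (the interface and method names are invented here):

import java.io.IOException;

/** Capability interface, implemented only by iterators that can go backwards. */
public interface BackwardsIterator {
    int prevDoc() throws IOException;
}

// caller: check once, cast once, then use it in the hot loop
DocIdSetIterator disi = docIdSet.iterator();
if (disi instanceof BackwardsIterator) {
    BackwardsIterator back = (BackwardsIterator) disi;
    // ... back.prevDoc() ... no further type checks needed
} else {
    // fall back, e.g. dump into a FixedBitSet first
}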




Re: [JENKINS] Lucene-Solr-SmokeRelease-trunk - Build # 96 - Still Failing

2013-07-04 Thread Robert Muir
I will exclude this file from the smoke tester then. I just have to wait
until it succeeds locally because my python is not very good.

On Thu, Jul 4, 2013 at 11:14 AM, Uwe Schindler u...@thetaphi.de wrote:

 Hi,

 I have no problem with it if it is *not* part of the WAR or the example
 folder's lib! If it were, it would be a bug!

 In general we should split all JARs needed for compilation from those for
 test runtime and production runtime, using 3 different configurations. The
 test runtime is only added to the classpath when tests are run. The production
 env is used for building the webapp/example app. The compile classpath only
 contains direct dependencies, no transitive ones.

 See e.g. forbidden-apis as an example. It splits those classpaths
 completely (and uses ivy:cachepath for classpath / ivy:cachefileset for
 packaging zips).

 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de


  -Original Message-
  From: Mark Miller [mailto:markrmil...@gmail.com]
  Sent: Thursday, July 04, 2013 5:09 PM
  To: dev@lucene.apache.org
  Subject: Re: [JENKINS] Lucene-Solr-SmokeRelease-trunk - Build # 96 -
 Still
  Failing
 
 
  On Jul 4, 2013, at 11:06 AM, Mark Miller markrmil...@gmail.com wrote:
 
we could solve that with an ivy configuration for 'tests' instead
 
  Or perhaps solved by that? I don't really know what you are suggesting
  though - ivy is half as mysterious as maven to me.
 
  - Mark






[jira] [Comment Edited] (LUCENE-5092) join: don't expect all filters to be FixedBitSet instances

2013-07-04 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13700154#comment-13700154
 ] 

Uwe Schindler edited comment on LUCENE-5092 at 7/4/13 3:48 PM:
---

-1 to add backwards iteration as separate instance or as part of the default 
DocIdSetIterator interface.

Just define an interface with the backwards methods and let all classes that 
want to provide it implement it. Then you would only have to check one time for 
the interface and cast to the interface.

  was (Author: thetaphi):
-1 to add backwards iteration as separate instance or as part of the 
default DocIdSetIterator interface.

Just define an interface with the backwards methods and let all classes 
implement it. Then you would only have to check one time for the interface and 
cast to the interface.
  



[jira] [Commented] (LUCENE-5092) join: don't expect all filters to be FixedBitSet instances

2013-07-04 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13700165#comment-13700165
 ] 

Uwe Schindler commented on LUCENE-5092:
---

In general I am not happy with the backwards iteration at all. I have the 
feeling that this is a bug of block join. In my opinion the order of child / 
parent documents should be reversed, so the search for the parent (or child, 
I don't know) could go forward only.

This would make existing indexes incompatible (if child/parent documents are 
reversed), but we could add an IndexSorter-based reorder mechanism.

[~mikemccand]: Any inside knowledge about this?
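
For context, a sketch of the block layout that makes the backward walk
necessary today, following the usual block-join indexing convention
(children first, parent last, all in one addDocuments call); writer is an
open IndexWriter:

Document child1 = new Document(), child2 = new Document(), parent = new Document();
List<Document> block = new ArrayList<Document>();
block.add(child1);
block.add(child2);
block.add(parent);              // parent is the last doc of the block
writer.addDocuments(block);     // docIDs: child1 < child2 < parent

// given a parent docID, the join walks backwards (prevSetBit) to find the
// start of the block; with the order reversed (parent first) it could go
// forward only, at the price of breaking existing indexes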




[JENKINS-MAVEN] Lucene-Solr-Maven-4.x #377: POMs out of sync

2013-07-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-4.x/377/

2 tests failed.
FAILED:  
org.apache.solr.cloud.BasicDistributedZkTest.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.cloud.BasicDistributedZkTest: 
   1) Thread[id=7523, name=recoveryCmdExecutor-3868-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384)
at java.net.Socket.connect(Socket.java:546)
at 
org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
at 
org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:679)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.BasicDistributedZkTest: 
   1) Thread[id=7523, name=recoveryCmdExecutor-3868-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384)
at java.net.Socket.connect(Socket.java:546)
at 
org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
at 
org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:679)
at __randomizedtesting.SeedInfo.seed([130DFC8B65F28560]:0)


FAILED:  
org.apache.solr.cloud.BasicDistributedZkTest.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
There are still zombie threads that couldn't be terminated:
   1) Thread[id=7523, name=recoveryCmdExecutor-3868-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)
at 

[jira] [Commented] (SOLR-4998) Make the use of Slice and Shard consistent across the code and document base

2013-07-04 Thread Jack Krupansky (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13700173#comment-13700173
 ] 

Jack Krupansky commented on SOLR-4998:
--

bq. As Mark says, good luck

+1

Shard is the proper term for Solr users to be using. If you hear somebody say 
"slice", simply remind them that they should be saying "shard" and otherwise 
presume that they do mean shard.

The SolrCloud wiki glossary is the current bible on the topic, coupled with 
Yonik's more concise summary: collection == List<shard> and shard == List<replica>.

We went through a previous round of terminology debates a year ago (or whenever 
it was) and Yonik updated the wiki back in January. I think it is fairly stable 
now.

The real challenge is simply to educate and otherwise cajole people to use the 
proper terminology. I'll do my part - I'll be starting to write the concepts 
and glossary sections of my coverage of SolrCloud for my book in the coming 
weeks.

I think we should follow accepted principles of interface design, and be 
careful not to blur or cross the line between interface - what the outside 
world sees - and implementation - how the interface is implemented internally.

So, at this point, I don't see any urgency to change the external definition of 
the SolrCloud interface - shard - yes, slice - no.

But if the main concern is to recast the implementation of SolrCloud as Shard 
rather than Slice, at least make it clear that that is the actual purpose of 
this Jira.

It might be useful to check any logging messages to verify that they refer to 
shard rather than slice.


 Make the use of Slice and Shard consistent across the code and document base
 

 Key: SOLR-4998
 URL: https://issues.apache.org/jira/browse/SOLR-4998
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3, 4.3.1
Reporter: Anshum Gupta

 The interchangeable use of Slice and Shard is pretty confusing at times. We 
 should define each separately and use the apt term whenever we do so.




[JENKINS] Lucene-Solr-4.x-MacOSX (64bit/jdk1.6.0) - Build # 607 - Still Failing!

2013-07-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-MacOSX/607/
Java: 64bit/jdk1.6.0 -XX:+UseCompressedOops -XX:+UseSerialGC

6 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.SolrInfoMBeanTest

Error Message:
PermGen space

Stack Trace:
java.lang.OutOfMemoryError: PermGen space
at __randomizedtesting.SeedInfo.seed([81B34E78D4E69D6B]:0)
at __randomizedtesting.SeedInfo.seed([81B34E78D4E69D6B]:0)
at __randomizedtesting.SeedInfo.seed([81B34E78D4E69D6B]:0)
at 
__randomizedtesting.SeedInfo.seed([81B34E78D4E69D6B:AF91E26B8EF29B71]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.OverseerCollectionProcessorTest

Error Message:
PermGen space

Stack Trace:
java.lang.OutOfMemoryError: PermGen space
at __randomizedtesting.SeedInfo.seed([81B34E78D4E69D6B]:0)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
at 
java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
at 
net.sf.cglib.core.KeyFactory$Generator.generateClass(KeyFactory.java:164)
at 
net.sf.cglib.core.DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25)
at 
net.sf.cglib.core.AbstractClassGenerator.create(AbstractClassGenerator.java:216)
at net.sf.cglib.core.KeyFactory$Generator.create(KeyFactory.java:144)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:116)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:108)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:104)
at net.sf.cglib.proxy.Enhancer.clinit(Enhancer.java:69)
at 
org.easymock.internal.ClassProxyFactory.createEnhancer(ClassProxyFactory.java:259)
at 
org.easymock.internal.ClassProxyFactory.createProxy(ClassProxyFactory.java:174)
at org.easymock.internal.MocksControl.createMock(MocksControl.java:60)
at org.easymock.EasyMock.createMock(EasyMock.java:104)
at 
org.apache.solr.cloud.OverseerCollectionProcessorTest.setUpOnce(OverseerCollectionProcessorTest.java:107)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:677)


REGRESSION:  org.apache.solr.search.TestFoldingMultitermQuery.testLowerTokenizer

Error Message:
PermGen space

Stack Trace:
java.lang.OutOfMemoryError: PermGen space
at 
__randomizedtesting.SeedInfo.seed([81B34E78D4E69D6B:717B40FA8D8D2CF]:0)
at sun.misc.Unsafe.defineClass(Native Method)
at sun.reflect.ClassDefiner.defineClass(ClassDefiner.java:45)
at 
sun.reflect.MethodAccessorGenerator$1.run(MethodAccessorGenerator.java:381)
at java.security.AccessController.doPrivileged(Native Method)
at 
sun.reflect.MethodAccessorGenerator.generate(MethodAccessorGenerator.java:377)
at 
sun.reflect.MethodAccessorGenerator.generateConstructor(MethodAccessorGenerator.java:76)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:30)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at java.lang.reflect.Proxy.newInstance(Proxy.java:715)
at java.lang.reflect.Proxy.newProxyInstance(Proxy.java:706)
at 
sun.reflect.annotation.AnnotationParser.annotationForMap(AnnotationParser.java:239)
at 
sun.reflect.annotation.AnnotationParser.parseAnnotation(AnnotationParser.java:229)
at 
sun.reflect.annotation.AnnotationParser.parseAnnotations2(AnnotationParser.java:69)
at 
sun.reflect.annotation.AnnotationParser.parseAnnotations(AnnotationParser.java:52)
at java.lang.reflect.Method.declaredAnnotations(Method.java:693)
at 

Re: [JENKINS] Lucene-Solr-SmokeRelease-trunk - Build # 96 - Still Failing

2013-07-04 Thread Robert Muir
I committed a fix, but there are more problems with the smoke-checking
(problems with NOTICE.txt). It's just that now the checker makes it further along.

I can look at it tomorrow maybe; it's way past beer time.

On Thu, Jul 4, 2013 at 11:46 AM, Robert Muir rcm...@gmail.com wrote:

 I will exclude this file from the smoke tester then. I just have to wait
 until it succeeds locally because my python is not very good.


 On Thu, Jul 4, 2013 at 11:14 AM, Uwe Schindler u...@thetaphi.de wrote:

 Hi,

 I have no problem with it if it is *not* part of the WAR or the example
 folder's lib! If it were, it would be a bug!

 In general we should split all JARs needed for compilation from those
 for test runtime and production runtime, using 3 different configurations.
 The test runtime is only added to the classpath when tests are run. The
 production env is used for building the webapp/example app. The compile
 classpath only contains direct dependencies, no transitive ones.

 See e.g. forbidden-apis as an example. It splits those classpaths
 completely (and uses ivy:cachepath for classpath / ivy:cachefileset for
 packaging zips).

 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de


  -Original Message-
  From: Mark Miller [mailto:markrmil...@gmail.com]
  Sent: Thursday, July 04, 2013 5:09 PM
  To: dev@lucene.apache.org
  Subject: Re: [JENKINS] Lucene-Solr-SmokeRelease-trunk - Build # 96 -
 Still
  Failing
 
 
  On Jul 4, 2013, at 11:06 AM, Mark Miller markrmil...@gmail.com wrote:
 
we could solve that with an ivy configuration for 'tests' instead
 
  Or perhaps solved by that? I don't really know what you are suggesting
  though - ivy is half as mysterious as maven to me.
 
  - Mark







[jira] [Commented] (LUCENE-5092) join: don't expect all filters to be FixedBitSet instances

2013-07-04 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13700202#comment-13700202
 ] 

Adrien Grand commented on LUCENE-5092:
--

bq. I don't think we should add backwards iteration to DocIdSetIterator etc. 
This is not really how search engines work; it's just an oddity of join.

In that case, a simple fix would just be to dump the content into a FixedBitSet 
every time we get something else.
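
That fallback is straightforward; a sketch (reader and docIdSet stand for
whatever the join is handed):

// copy an arbitrary iterator into a FixedBitSet so prevSetBit() works
FixedBitSet bits = new FixedBitSet(reader.maxDoc());
DocIdSetIterator it = docIdSet.iterator();
if (it != null) { // a null iterator means "no documents"
    int doc;
    while ((doc = it.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) {
        bits.set(doc);
    }
}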

bq. In general I am not happy with the backwards iteration at all.

Since the actual problem looks like we require random-access on top of an API 
which only supports random-access optionally, this makes me wonder whether we 
could support the same functionality on top of NumericDocValues instead of 
filters?




[JENKINS] Lucene-Solr-Tests-4.x-Java6 - Build # 1759 - Still Failing

2013-07-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java6/1759/

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.cloud.BasicDistributedZkTest: 1) Thread[id=514, 
name=recoveryCmdExecutor-81-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest] at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
 at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193)
 at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)  
   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384) at 
java.net.Socket.connect(Socket.java:546) at 
org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
 at 
org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
 at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
 at 
org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
 at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
 at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291) 
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:679)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.BasicDistributedZkTest: 
   1) Thread[id=514, name=recoveryCmdExecutor-81-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384)
at java.net.Socket.connect(Socket.java:546)
at 
org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
at 
org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:679)
at __randomizedtesting.SeedInfo.seed([255F4C624BABF64C]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=514, name=recoveryCmdExecutor-81-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest] at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
 at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193)
 at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)  
   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384) 

[jira] [Updated] (SOLR-4916) Add support to write and read Solr index files and transaction log files to and from HDFS.

2013-07-04 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-4916:
--

Attachment: SOLR-4916-ivy.patch

I don't really know ivy, but here is a patch that moves dfsminicluster 
dependencies from test-framework to core. I'm not really sure if the private 
conf stuff is working or not - I don't think we have another module that 
depends on core to check with...

 Add support to write and read Solr index files and transaction log files to 
 and from HDFS.
 --

 Key: SOLR-4916
 URL: https://issues.apache.org/jira/browse/SOLR-4916
 Project: Solr
  Issue Type: New Feature
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.4

 Attachments: SOLR-4916-ivy.patch, SOLR-4916.patch, SOLR-4916.patch, 
 SOLR-4916.patch







[jira] [Commented] (SOLR-4916) Add support to write and read Solr index files and transaction log files to and from HDFS.

2013-07-04 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13700218#comment-13700218
 ] 

Mark Miller commented on SOLR-4916:
---

Duh, of course the contribs are also ivy modules that depend on core... I'll 
mess around and see if I can get this working nicely...




[jira] [Commented] (LUCENE-5092) join: don't expect all filters to be FixedBitSet instances

2013-07-04 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13700231#comment-13700231
 ] 

Robert Muir commented on LUCENE-5092:
-

{quote}
Since the actual problem looks like we require random-access on top of an API 
which only supports random-access optionally, this makes me wonder whether we 
could support the same functionality on top of NumericDocValues instead of 
filters?
{quote}

Cool idea :) I've always been a little frustrated that parent/child docs aren't 
recorded in the index: so tools like splitters, sorters, or 
sorting-merge-policies can't avoid splitting nested documents in half by default.

Would be even better if numericDV were updatable (copy-on-write generation in 
the commit, like deleted docs/old setNorm), and deleted docs were then 
implemented as DV... I guess if we go this route we should probably think about 
whether it's worth being numericDV or a specialized bitsetDV type. I hate 
flooding the API with unnecessary types though...





[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.7.0_25) - Build # 3001 - Still Failing!

2013-07-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/3001/
Java: 64bit/jdk1.7.0_25 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestCoreContainer

Error Message:
Resource in scope SUITE failed to close. Resource was registered from thread 
Thread[id=6923, 
name=TEST-TestCoreContainer.testSharedLib-seed#[A7931A05E2162ECD], 
state=RUNNABLE, group=TGRP-TestCoreContainer], registration stack trace below.

Stack Trace:
com.carrotsearch.randomizedtesting.ResourceDisposalError: Resource in scope 
SUITE failed to close. Resource was registered from thread Thread[id=6923, 
name=TEST-TestCoreContainer.testSharedLib-seed#[A7931A05E2162ECD], 
state=RUNNABLE, group=TGRP-TestCoreContainer], registration stack trace below.
at __randomizedtesting.SeedInfo.seed([A7931A05E2162ECD]:0)
at java.lang.Thread.getStackTrace(Thread.java:1568)
at 
com.carrotsearch.randomizedtesting.RandomizedContext.closeAtEnd(RandomizedContext.java:150)
at 
org.apache.lucene.util.LuceneTestCase.closeAfterSuite(LuceneTestCase.java:545)
at org.apache.lucene.util._TestUtil.getTempDir(_TestUtil.java:131)
at 
org.apache.solr.core.TestCoreContainer.testSharedLib(TestCoreContainer.java:337)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 

[jira] [Commented] (SOLR-4916) Add support to write and read Solr index files and transaction log files to and from HDFS.

2013-07-04 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13700259#comment-13700259
 ] 

Uwe Schindler commented on SOLR-4916:
-

I don't think this solution is very nice unless we have the conf really 
working. Currently all JARs are copied to the lib folder, ignoring the conf=... 
attribute, and we have to filter them ourselves (see your patch, where you 
exclude them from the WAR file).

In this case (without ivy:cachepath/ivy:cachefileset) I would prefer the 
current solution.

IMHO, the problem with the release smoker is more the fact that it checks too 
much. The smoke tester should only deny javax classes in official lib folders 
and JAR files, not in test dependencies. I really have no problem with having 
the test dependencies in Solr's test-framework, of course not in Lucene's 
test-framework. In other lib dirs we also have transitive non-compile-time 
dependencies.

My best idea would be to add a second lib folder (test-framework/runtime-libs) 
that is not packed into the binary ZIP file distribution. It's easy to add: we 
can add a separate resolve with another target folder. In Maven it should 
definitely not be listed as a runtime dependency either!
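
A sketch of that second resolve in build.xml (the conf name is invented; the
folder name is the one proposed above):

<!-- normal libs: packed into the WAR / binary ZIP as today -->
<ivy:retrieve conf="default" pattern="lib/[artifact]-[revision].[ext]"/>

<!-- test-runtime-only libs: a folder the packaging targets never touch -->
<ivy:retrieve conf="test-runtime"
              pattern="test-framework/runtime-libs/[artifact]-[revision].[ext]"/>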




[jira] [Updated] (LUCENE-5084) EliasFanoDocIdSet

2013-07-04 Thread Paul Elschot (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Elschot updated LUCENE-5084:
-

Attachment: LUCENE-5084.patch

As announced on 4 July 2013.

 EliasFanoDocIdSet
 -

 Key: LUCENE-5084
 URL: https://issues.apache.org/jira/browse/LUCENE-5084
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Paul Elschot
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-5084.patch, LUCENE-5084.patch


 DocIdSet in Elias-Fano encoding




[jira] [Commented] (SOLR-4916) Add support to write and read Solr index files and transaction log files to and from HDFS.

2013-07-04 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13700262#comment-13700262
 ] 

Mark Miller commented on SOLR-4916:
---

Well, that's no help. Even if all of this is tied up to the classpaths used, 
it doesn't seem to be a mechanism for shielding modules from each other, 
AFAICT. I guess the main use case is for downstream projects to have the 
ability to filter out these dependencies and avoid pulling down the test-time 
dependencies, but it seems we would care about that in the maven shadow build, 
not here; we don't publish based on the ivy files, right?

In that case, it would seem we should simply do the same thing as with some of 
the other jars in core that are excluded from the webapp: exclude them in the 
build.xml and have the maven build treat them as part of a test configuration?

[~steve_rowe], does any of that make any sense?




[jira] [Commented] (SOLR-4916) Add support to write and read Solr index files and transaction log files to and from HDFS.

2013-07-04 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13700275#comment-13700275
 ] 

Uwe Schindler commented on SOLR-4916:
-

I also don't want the files in the distribution ZIP, and currently they are 
listed in the distribution ZIP's test-framework folder (and this is why the 
smoke tester fails)! Because of that, I proposed having a separate lib folder 
that is never packaged (neither into the WAR nor into bin-tgz/bin-zip).
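A minimal sketch of that separate resolve, assuming an Ivy configuration named 
test.runtime (both the configuration and folder names here are illustrative):

{code}
<!-- hypothetical build.xml fragment: retrieve test-only runtime jars into a
     folder that the WAR and bin-tgz/bin-zip packaging never picks up -->
<ivy:retrieve conf="test.runtime"
              pattern="test-framework/runtime-libs/[artifact]-[revision].[ext]"/>
{code}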

 Add support to write and read Solr index files and transaction log files to 
 and from HDFS.
 --

 Key: SOLR-4916
 URL: https://issues.apache.org/jira/browse/SOLR-4916
 Project: Solr
  Issue Type: New Feature
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.4

 Attachments: SOLR-4916-ivy.patch, SOLR-4916.patch, SOLR-4916.patch, 
 SOLR-4916.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5006) CREATESHARD , DELETESHARD commands for 'implicit' shards

2013-07-04 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13700281#comment-13700281
 ] 

Shalin Shekhar Mangar commented on SOLR-5006:
-

The deleteShard part is taken care of by SOLR-4693. It allows you to delete 
shards that have a null range (i.e. created for custom sharding) as well as 
those whose state is inactive.
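For reference, a hypothetical invocation of that API (collection and shard 
names made up):

{code}
http://localhost:8983/solr/admin/collections?action=DELETESHARD&collection=mycollection&shard=shard1
{code}

As described above, the call only removes shards that are inactive or have a 
null range.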

 CREATESHARD , DELETESHARD commands for 'implicit' shards
 

 Key: SOLR-5006
 URL: https://issues.apache.org/jira/browse/SOLR-5006
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul

 Custom sharding requires a CREATESHARD/DELETESHARD commands
 It may not be applicable to hash based sharding 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4995) Implementing a Server Capable of Propagating Requests

2013-07-04 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13700297#comment-13700297
 ] 

Shalin Shekhar Mangar commented on SOLR-4995:
-

Typically this is achieved by modifying the request handler (or 
SearchComponent) to support distributed requests. See 
http://wiki.apache.org/solr/WritingDistributedSearchComponents

This patch just queries a node of each shard and adds all the responses into 
one response. This is not ideal because:
# It doesn't really combine stats in a useful way. A client would still have to 
go through each individual sub-response and add up the statistics
# It only helps SolrJ (Java) users. On the other hand a distributed 
handler/component is useful for clients of all languages

I don't think that this is the right approach.
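To make the alternative concrete, here is a rough skeleton of the distributed 
route (the class name and merge logic are purely illustrative, not part of 
this patch):

{code}
import java.io.IOException;

import org.apache.solr.handler.component.ResponseBuilder;
import org.apache.solr.handler.component.SearchComponent;
import org.apache.solr.handler.component.ShardRequest;

// hypothetical component: merges per-shard results on the coordinating node
// so that clients of any language receive one combined response
public class PropagatingStatsComponent extends SearchComponent {
  @Override
  public void prepare(ResponseBuilder rb) throws IOException { }

  @Override
  public void process(ResponseBuilder rb) throws IOException {
    // non-distributed (single node) path
  }

  @Override
  public void handleResponses(ResponseBuilder rb, ShardRequest sreq) {
    // called as each shard responds: fold the shard's partial statistics
    // into one combined result instead of appending raw sub-responses
  }

  @Override
  public String getDescription() { return "request-propagating stats component"; }

  @Override
  public String getSource() { return null; }
}
{code}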

 Implementing a Server Capable of Propagating Requests
 -

 Key: SOLR-4995
 URL: https://issues.apache.org/jira/browse/SOLR-4995
 Project: Solr
  Issue Type: New Feature
Reporter: Furkan KAMACI
 Attachments: SOLR-4995.patch


 Currently Solr servers are interacting with only one Solr node. There should 
 be an implementation that propagates requests into multiple Solr nodes. For 
 example when Solr is used as SolrCloud sending a LukeRequest should be made 
 to one node at each shard.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4916) Add support to write and read Solr index files and transaction log files to and from HDFS.

2013-07-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13700300#comment-13700300
 ] 

ASF subversion and git services commented on SOLR-4916:
---

Commit 1499842 from [~markrmil...@gmail.com]
[ https://svn.apache.org/r1499842 ]

SOLR-4916: Fix NOTICE - take Solr entry out of Lucene section

 Add support to write and read Solr index files and transaction log files to 
 and from HDFS.
 --

 Key: SOLR-4916
 URL: https://issues.apache.org/jira/browse/SOLR-4916
 Project: Solr
  Issue Type: New Feature
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.4

 Attachments: SOLR-4916-ivy.patch, SOLR-4916.patch, SOLR-4916.patch, 
 SOLR-4916.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4112 - Still Failing

2013-07-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4112/

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.cloud.BasicDistributedZkTest: 1) Thread[id=2895, 
name=recoveryCmdExecutor-1885-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest] at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
 at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
 at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)  
   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391) at 
java.net.Socket.connect(Socket.java:579) at 
org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
 at 
org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
 at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
 at 
org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
 at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
 at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291) 
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:722)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.BasicDistributedZkTest: 
   1) Thread[id=2895, name=recoveryCmdExecutor-1885-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
at java.net.Socket.connect(Socket.java:579)
at 
org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
at 
org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
at __randomizedtesting.SeedInfo.seed([79CEA2DB77C512FB]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=2895, name=recoveryCmdExecutor-1885-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest] at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
 at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
 at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)  
   at 

[jira] [Created] (LUCENE-5093) nightly-smoke should run some fail fast checks before doing the full smoke tester

2013-07-04 Thread Mark Miller (JIRA)
Mark Miller created LUCENE-5093:
---

 Summary: nightly-smoke should run some fail fast checks before 
doing the full smoke tester
 Key: LUCENE-5093
 URL: https://issues.apache.org/jira/browse/LUCENE-5093
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, 4.4


If something like the NOTICES fail the smoke tester, it currently takes 22 
minutes to find out on my pretty fast machine. That means testing a fix also 
takes 22 minutes.

It would be nice if some of these types of checks happened right away on the 
src tree - we should also check the actual artifacts with the same check later 
- but also have this fail fast path.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5093) nightly-smoke should run some fail fast checks before doing the full smoke tester

2013-07-04 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated LUCENE-5093:


Attachment: LUCENE-5093.patch

Don't laugh, I don't know Python at all.

This adds one fail-fast check, for the testNotice check.
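For the record, the general shape of such a check, as a minimal sketch (this 
is not the actual patch, and the required-content test is an assumption for 
illustration):

{code}
# fail-fast sanity check run directly against the source tree before the
# full smoke tester; exits non-zero on the first problem found
import sys

def check_notice(path):
    with open(path) as f:
        text = f.read()
    # illustrative assumption: the Solr NOTICE must carry its own Solr
    # section (cf. the NOTICE fix committed for SOLR-4916 above)
    if 'Apache Solr' not in text:
        sys.exit('NOTICE fail-fast check failed: %s' % path)

if __name__ == '__main__':
    check_notice(sys.argv[1])
{code}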

 nightly-smoke should run some fail fast checks before doing the full smoke 
 tester
 -

 Key: LUCENE-5093
 URL: https://issues.apache.org/jira/browse/LUCENE-5093
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, 4.4

 Attachments: LUCENE-5093.patch


 If something like the NOTICES fail the smoke tester, it currently takes 22 
 minutes to find out on my pretty fast machine. That means testing a fix also 
 takes 22 minutes.
 It would be nice if some of these types of checks happened right away on the 
 src tree - we should also check the actual artifacts with the same check 
 later - but also have this fail fast path.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5093) nightly-smoke should run some fail fast checks before doing the full smoke tester

2013-07-04 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13700308#comment-13700308
 ] 

Robert Muir commented on LUCENE-5093:
-

Awesome! I think a lot of these checks do run on the source tree that's in the 
release .tar.gz.

So if we can do what you did here, and make a lot of the fast ones accessible 
directly on the source tree, we could incorporate these kinds of things even into 
every Jenkins build (via precommit, rather than just the heavy-duty check that 
takes 30 minutes and runs once a day or something).

And it makes fixing things locally faster.

 nightly-smoke should run some fail fast checks before doing the full smoke 
 tester
 -

 Key: LUCENE-5093
 URL: https://issues.apache.org/jira/browse/LUCENE-5093
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, 4.4

 Attachments: LUCENE-5093.patch


 If something like the NOTICES fail the smoke tester, it currently takes 22 
 minutes to find out on my pretty fast machine. That means testing a fix also 
 takes 22 minutes.
 It would be nice if some of these types of checks happened right away on the 
 src tree - we should also check the actual artifacts with the same check 
 later - but also have this fail fast path.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5093) nightly-smoke should run some fail fast checks before doing the full smoke tester

2013-07-04 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13700311#comment-13700311
 ] 

Uwe Schindler commented on LUCENE-5093:
---

I would prefer to have these checks in plain Ant or Ant-embedded Groovy logic. 
No need for python here... I can help with implementing those. 

 nightly-smoke should run some fail fast checks before doing the full smoke 
 tester
 -

 Key: LUCENE-5093
 URL: https://issues.apache.org/jira/browse/LUCENE-5093
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, 4.4

 Attachments: LUCENE-5093.patch


 If something like the NOTICES fail the smoke tester, it currently takes 22 
 minutes to find out on my pretty fast machine. That means testing a fix also 
 takes 22 minutes.
 It would be nice if some of these types of checks happened right away on the 
 src tree - we should also check the actual artifacts with the same check 
 later - but also have this fail fast path.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5093) nightly-smoke should run some fail fast checks before doing the full smoke tester

2013-07-04 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13700312#comment-13700312
 ] 

Robert Muir commented on LUCENE-5093:
-

another check we could add in the future like this: the smokeTester checks some 
things about changes.html.

I've had to respin (30 minutes to see the failure, then fix, then 30 more minutes 
to make sure it passes) because of a left-out or extra ':' character in 
CHANGES.txt before. Really, it would be nice if the documentation-lint that runs 
as part of precommit found that earlier...
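A sketch of what such a lint check might look like (the entry format regex 
below is an assumption, not the real documentation-lint rule):

{code}
import re
import sys

# flag CHANGES.txt issue lines whose ':' is missing or doubled
ENTRY = re.compile(r'^\* (?:LUCENE|SOLR)-\d+: [^:]')

def lint_changes(path):
    bad = [(n, line) for n, line in enumerate(open(path), 1)
           if line.startswith('* ') and not ENTRY.match(line)]
    for n, line in bad:
        print('%s:%d: suspicious entry: %s' % (path, n, line.rstrip()))
    return len(bad)

if __name__ == '__main__':
    sys.exit(1 if lint_changes(sys.argv[1]) else 0)
{code}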

 nightly-smoke should run some fail fast checks before doing the full smoke 
 tester
 -

 Key: LUCENE-5093
 URL: https://issues.apache.org/jira/browse/LUCENE-5093
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, 4.4

 Attachments: LUCENE-5093.patch


 If something like the NOTICES fail the smoke tester, it currently takes 22 
 minutes to find out on my pretty fast machine. That means testing a fix also 
 takes 22 minutes.
 It would be nice if some of these types of checks happened right away on the 
 src tree - we should also check the actual artifacts with the same check 
 later - but also have this fail fast path.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5093) nightly-smoke should run some fail fast checks before doing the full smoke tester

2013-07-04 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13700317#comment-13700317
 ] 

Mark Miller commented on LUCENE-5093:
-

bq. I would prefer to have these checks in plain Ant or Ant-embedded Groovy 
logic.

I think it depends - unless we rewrite the whole smoke tester, I think it's 
better that this logic is in one place: the smoke tester script. Otherwise, 
over time, some checks might be added to the testNotice method of the smoke 
tester but not to the fail-fast check in Ant or Groovy, and then it doesn't 
help much with fixing smoke tester failures. Any expansion of tests would have 
to be done in two places, in two non-Java languages (yuck and yuck).

 nightly-smoke should run some fail fast checks before doing the full smoke 
 tester
 -

 Key: LUCENE-5093
 URL: https://issues.apache.org/jira/browse/LUCENE-5093
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, 4.4

 Attachments: LUCENE-5093.patch


 If something like the NOTICES fail the smoke tester, it currently takes 22 
 minutes to find out on my pretty fast machine. That means testing a fix also 
 takes 22 minutes.
 It would be nice if some of these types of checks happened right away on the 
 src tree - we should also check the actual artifacts with the same check 
 later - but also have this fail fast path.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4916) Add support to write and read Solr index files and transaction log files to and from HDFS.

2013-07-04 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13700320#comment-13700320
 ] 

Mark Miller commented on SOLR-4916:
---

{quote}My best idea would be to add a second lib folder 
(test-framework/runtime-libs) that is not packed into the binary ZIP file 
distribution. It's easy to add: We can add a separate resolve with another 
target folder. In Maven it should also definitely not be listed as dependency 
for runtime, too!{quote}

I crossposted with this, so I had not read it yet. That's fine with me if 
[~rcmuir] is fine with it.

 Add support to write and read Solr index files and transaction log files to 
 and from HDFS.
 --

 Key: SOLR-4916
 URL: https://issues.apache.org/jira/browse/SOLR-4916
 Project: Solr
  Issue Type: New Feature
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.4

 Attachments: SOLR-4916-ivy.patch, SOLR-4916.patch, SOLR-4916.patch, 
 SOLR-4916.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Raise maxBooleanClauses??

2013-07-04 Thread Jack Krupansky
Anybody have any interest in pursuing SOLR-4586 - Increase default 
maxBooleanClauses?

Actually, I thought it was already done – I was checking to see when it was 
done, when I noticed it was still open.

https://issues.apache.org/jira/browse/SOLR-4586

Or...

LUCENE-4835 - Raise maxClauseCount in BooleanQuery to Integer.MAX_VALUE

https://issues.apache.org/jira/browse/LUCENE-4835
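For context, the Solr side of the limit is the solrconfig.xml setting that 
feeds Lucene's static BooleanQuery.setMaxClauseCount; the stock default looks 
like:

{code}
<!-- solrconfig.xml, inside the <query> section -->
<maxBooleanClauses>1024</maxBooleanClauses>
{code}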

-- Jack Krupansky

[jira] [Commented] (SOLR-4916) Add support to write and read Solr index files and transaction log files to and from HDFS.

2013-07-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13700321#comment-13700321
 ] 

ASF subversion and git services commented on SOLR-4916:
---

Commit 1499847 from [~markrmil...@gmail.com]
[ https://svn.apache.org/r1499847 ]

SOLR-4916: Fix NOTICE - take Solr entry out of Lucene section

 Add support to write and read Solr index files and transaction log files to 
 and from HDFS.
 --

 Key: SOLR-4916
 URL: https://issues.apache.org/jira/browse/SOLR-4916
 Project: Solr
  Issue Type: New Feature
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.4

 Attachments: SOLR-4916-ivy.patch, SOLR-4916.patch, SOLR-4916.patch, 
 SOLR-4916.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4693) Create a collections API to delete/cleanup a shard

2013-07-04 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13700325#comment-13700325
 ] 

Anshum Gupta commented on SOLR-4693:


[~shalinmangar]looks like you missed a few changes that you put up in your list 
above.
Renaming of Slice -> Shard stuff primarily.

I'll fix those as a part of SOLR-4998.

 Create a collections API to delete/cleanup a shard
 --

 Key: SOLR-4693
 URL: https://issues.apache.org/jira/browse/SOLR-4693
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Anshum Gupta
Assignee: Shalin Shekhar Mangar
 Fix For: 5.0, 4.4

 Attachments: SOLR-4693.patch, SOLR-4693.patch, SOLR-4693.patch, 
 SOLR-4693.patch, SOLR-4693.patch


 A collections API that unloads all replicas of a given shard and then removes 
 it from the cluster state.
 It will remove only those shards which are INACTIVE or have no range (created 
 for custom sharding)
 Among other places, this would be useful post the shard split call to manage 
 the parent/original shard.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-4693) Create a collections API to delete/cleanup a shard

2013-07-04 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13700325#comment-13700325
 ] 

Anshum Gupta edited comment on SOLR-4693 at 7/4/13 7:49 PM:


[~shalinmangar] looks like you missed a few changes that you put up in your 
list above.
Renaming of Slice -> Shard stuff primarily.

I'll fix those as a part of SOLR-4998.

  was (Author: anshumg):
[~shalinmangar]looks like you missed a few changes that you put up in your 
list above.
Renaming of Slice -> Shard stuff primarily.

I'll fix those as a part of SOLR-4998.
  
 Create a collections API to delete/cleanup a shard
 --

 Key: SOLR-4693
 URL: https://issues.apache.org/jira/browse/SOLR-4693
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Anshum Gupta
Assignee: Shalin Shekhar Mangar
 Fix For: 5.0, 4.4

 Attachments: SOLR-4693.patch, SOLR-4693.patch, SOLR-4693.patch, 
 SOLR-4693.patch, SOLR-4693.patch


 A collections API that unloads all replicas of a given shard and then removes 
 it from the cluster state.
 It will remove only those shards which are INACTIVE or have no range (created 
 for custom sharding)
 Among other places, this would be useful post the shard split call to manage 
 the parent/original shard.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Raise maxBooleanClauses??

2013-07-04 Thread Shawn Heisey
On 7/4/2013 1:32 PM, Jack Krupansky wrote:
 Anybody have any interest in pursuing SOLR-4586 - Increase default
 maxBooleanClauses?
  
 Actually, I thought it was already done – I was checking to see when it
 was done, when I noticed it was still open.
  
 https://issues.apache.org/jira/browse/SOLR-4586
  
 Or...
  
 LUCENE-4835 - Raise maxClauseCount in BooleanQuery to Integer.MAX_VALUE
  
 https://issues.apache.org/jira/browse/LUCENE-4835


My original proposal faced some opposition.  I think Yonik is very much
in favor of pretty much obliterating the Solr limitation, regardless of
what happens with Lucene.

Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4998) Make the use of Slice and Shard consistent across the code and document base

2013-07-04 Thread Otis Gospodnetic (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13700329#comment-13700329
 ] 

Otis Gospodnetic commented on SOLR-4998:


I am not sure what naming conventionS Solr code is using.  I know most people 
are inconsistent and so the code is also often inconsistent.  Here we see this 
inconsistency leads to a lot of confusion.  I think it's great Anshum initiated 
this. My personal preference would be to:
* pick the terminology that makes sense and is easy to explain and understand
* adjust BOTH code and documentation to match that, even if it means renaming 
classes and variables, because it's only going to get harder to do that if it's 
not done now.

OK, here is another attempt:

# A Cluster has Collections
# A Collection is a logical index
# A Collection has as many Shards as numShards
# A Shard is a logical index subset
# There are as many physical instances of a given Shard as the Collection's 
replicationFactor
# These physical instances are called Replicas
# Each Replica contains a Core
# A Core is a single physical Lucene index
# One Replica in each Shard is labeled a Leader
# Any Replica can become a Leader through election if previous Leader goes away
# Each Shard has 1 or more Replicas with exactly 1 of those Replicas acting as 
the Leader

I think this is it, no?

Visually, by logical role:
||shard 1||shard 2||shard 3||
|leader 1.1|leader 2.1|leader 3.1|
|replica 1.2|replica 2.2|replica 3.2|
|replica 1.3|replica 2.3|replica 3.3|
|replica 1.4|replica 2.4|replica 3.4|
|replica 1.5|replica 2.5|replica 3.5|

So we would say that the above Collection has:
* 3 Shards
* 5 Replicas
* in each Shard 1 Replica *acts as* a Leader

If we ignore roles then this same Collection has the following physical 
structure:

|replica 1.1|replica 2.1|replica 3.1|
|replica 1.2|replica 2.2|replica 3.2|
|replica 1.3|replica 2.3|replica 3.3|
|replica 1.4|replica 2.4|replica 3.4|
|replica 1.5|replica 2.5|replica 3.5|

Yes/no?


 Make the use of Slice and Shard consistent across the code and document base
 

 Key: SOLR-4998
 URL: https://issues.apache.org/jira/browse/SOLR-4998
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3, 4.3.1
Reporter: Anshum Gupta

 The interchangeable use of Slice and Shard is pretty confusing at times. We 
 should define each separately and use the apt term whenever we do so.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-4998) Make the use of Slice and Shard consistent across the code and document base

2013-07-04 Thread Otis Gospodnetic (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13700329#comment-13700329
 ] 

Otis Gospodnetic edited comment on SOLR-4998 at 7/4/13 8:27 PM:


I am not sure what naming conventionS Solr code is using.  I know most people 
are inconsistent and so code (in general, not referring specifically to Solr 
here) is also often inconsistent.  Here we see this inconsistency leads to a 
lot of confusion.  I think it's great Anshum initiated this. My personal 
preference would be to:
* pick the terminology that makes sense and is easy to explain and understand
* adjust BOTH code and documentation to match that, even if it means renaming 
classes and variables, because it's only going to get harder to do that if it's 
not done now.

OK, here is another attempt:

# A Cluster has Collections
# A Collection is a logical index
# A Collection has as many Shards as numShards
# A Shard is a logical index subset
# There are as many physical instances of a given Shard as the Collection's 
replicationFactor
# These physical instances are called Replicas
# Each Replica contains a Core
# A Core is a single physical Lucene index
# One Replica in each Shard is labeled a Leader
# Any Replica can become a Leader through election if previous Leader goes away
# Each Shard has 1 or more Replicas with exactly 1 of those Replicas acting as 
the Leader

I think this is it, no?

Visually, by logical role:
||shard 1||shard 2||shard 3||
|leader 1.1|leader 2.1|leader 3.1|
|replica 1.2|replica 2.2|replica 3.2|
|replica 1.3|replica 2.3|replica 3.3|
|replica 1.4|replica 2.4|replica 3.4|
|replica 1.5|replica 2.5|replica 3.5|

So we would say that the above Collection has:
* 3 Shards
* 5 Replicas
* in each Shard 1 Replica *acts as* a Leader

If we ignore roles then this same Collection has the following physical 
structure:

|replica 1.1|replica 2.1|replica 3.1|
|replica 1.2|replica 2.2|replica 3.2|
|replica 1.3|replica 2.3|replica 3.3|
|replica 1.4|replica 2.4|replica 3.4|
|replica 1.5|replica 2.5|replica 3.5|

Yes/no?


  was (Author: otis):
I am not sure what naming conventionS Solr code is using.  I know most 
people are inconsistent and so the code is also often inconsistent.  Here we 
see this inconsistency leads to a lot of confusion.  I think it's great Anshum 
initiated this. My personal preference would be to:
* pick the terminology that makes sense and is easy to explain and understand
* adjust BOTH code and documentation to match that, even if it means renaming 
classes and variables, because it's only going to get harder to do that if it's 
not done now.

OK, here is another attempt:

# A Cluster has Collections
# A Collection is a logical index
# A Collection has as many Shards as numShards
# A Shard is a logical index subset
# There are as many physical instances of a given Shard as the Collection's 
replicationFactor
# These physical instances are called Replicas
# Each Replica contains a Core
# A Core is a single physical Lucene index
# One Replica in each Shard is labeled a Leader
# Any Replica can become a Leader through election if previous Leader goes away
# Each Shard has 1 or more Replicas with exactly 1 of those Replicas acting as 
the Leader

I think this is it, no?

Visually, by logical role:
||shard 1||shard 2||shard 3||
|leader 1.1|leader 2.1|leader 3.1|
|replica 1.2|replica 2.2|replica 3.2|
|replica 1.3|replica 2.3|replica 3.3|
|replica 1.4|replica 2.4|replica 3.4|
|replica 1.5|replica 2.5|replica 3.5|

So we would say that the above Collection has:
* 3 Shards
* 5 Replicas
* in each Shard 1 Replica *acts as* a Leader

If we ignore roles then this same Collection has the following physical 
structure:

|replica 1.1|replica 2.1|replica 3.1|
|replica 1.2|replica 2.2|replica 3.2|
|replica 1.3|replica 2.3|replica 3.3|
|replica 1.4|replica 2.4|replica 3.4|
|replica 1.5|replica 2.5|replica 3.5|

Yes/no?

  
 Make the use of Slice and Shard consistent across the code and document base
 

 Key: SOLR-4998
 URL: https://issues.apache.org/jira/browse/SOLR-4998
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3, 4.3.1
Reporter: Anshum Gupta

 The interchangeable use of Slice and Shard is pretty confusing at times. We 
 should define each separately and use the apt term whenever we do so.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: 

[jira] [Commented] (LUCENE-5086) RamUsageEstimator causes AWT classes to be loaded by calling ManagementFactory#getPlatformMBeanServer

2013-07-04 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13700331#comment-13700331
 ] 

Dawid Weiss commented on LUCENE-5086:
-

Yeah, I confirm the problem :) Instead of applying the patch mentioned by Shay 
I'd like to spend some time and see what Alex Shipilev did in his tool.

He's guessing the size of the alignment here:

https://github.com/shipilev/java-object-layout/blob/master/src/main/java/org/openjdk/tools/objectlayout/VMSupport.java#L240

but also reading it directly here (hotspot and jrockit separately):

https://github.com/shipilev/java-object-layout/blob/master/src/main/java/org/openjdk/tools/objectlayout/VMSupport.java#L138

I'll dig into what he actually does and whether the results are in sync with what we 
have. He still calls getPlatformMBeanServer directly so there'll have to be a 
workaround for that anyway.
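For reference, once a HotSpotDiagnosticMXBean is in hand, the alignment can be 
read directly rather than guessed. A sketch (HotSpot-only; the flag does not 
exist on JRockit/J9, which is part of what needs checking):

{code}
import com.sun.management.HotSpotDiagnosticMXBean;

// illustrative only: read -XX:ObjectAlignmentInBytes from a HotSpot VM;
// getVMOption throws IllegalArgumentException on VMs without this flag
static int objectAlignmentInBytes(HotSpotDiagnosticMXBean bean) {
  return Integer.parseInt(bean.getVMOption("ObjectAlignmentInBytes").getValue());
}
{code}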





 RamUsageEstimator causes AWT classes to be loaded by calling 
 ManagementFactory#getPlatformMBeanServer
 -

 Key: LUCENE-5086
 URL: https://issues.apache.org/jira/browse/LUCENE-5086
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Shay Banon
Assignee: Dawid Weiss

 Yea, that type of day and that type of title :).
 Since the last update of Java 6 on OS X, I started to see an annoying icon 
 pop up at the dock whenever running elasticsearch. By default, all of our 
 scripts add headless AWT flag so people will probably not encounter it, but, 
 it was strange that I saw it when before I didn't.
 I started to dig around, and saw that when RamUsageEstimator was being 
 loaded, it was causing AWT classes to be loaded. Further investigation showed 
 that actually for some reason, calling 
 ManagementFactory#getPlatformMBeanServer now with the new Java version causes 
 AWT classes to be loaded (at least on the mac, haven't tested on other 
 platforms yet). 
 There are several ways to try and solve it, for example, by identifying the 
 bug in the JVM itself, but I think that there should be a fix for it in 
 Lucene itself, specifically since there is no need to call 
 #getPlatformMBeanServer to get the hotspot diagnostics one (it's a heavy 
 call...).
 Here is a simple call that will allow to get the hotspot mxbean without using 
 the #getPlatformMBeanServer method, and not causing it to be loaded and 
 loading all those nasty AWT classes:
 {code}
 Object getHotSpotMXBean() {
 try {
 // Java 6
 Class sunMF = Class.forName("sun.management.ManagementFactory");
 return sunMF.getMethod("getDiagnosticMXBean").invoke(null);
 } catch (Throwable t) {
 // ignore
 }
 // potentially Java 7
 try {
 return ManagementFactory.class.getMethod("getPlatformMXBean", 
 Class.class).invoke(null, 
 Class.forName("com.sun.management.HotSpotDiagnosticMXBean"));
 } catch (Throwable t) {
 // ignore
 }
 return null;
 }
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-4998) Make the use of Slice and Shard consistent across the code and document base

2013-07-04 Thread Otis Gospodnetic (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13700329#comment-13700329
 ] 

Otis Gospodnetic edited comment on SOLR-4998 at 7/4/13 8:38 PM:


I am not sure what naming conventionS Solr code is using.  I know most people 
are inconsistent and so code (in general, not referring specifically to Solr 
here) is also often inconsistent.  Here we see this inconsistency leads to a 
lot of confusion.  I think it's great Anshum initiated this. My personal 
preference would be to:
* pick the terminology that makes sense and is easy to explain and understand
* adjust BOTH code and documentation to match that, even if it means renaming 
classes and variables, because it's only going to get harder to do that if it's 
not done now.

OK, here is another attempt:

# A Cluster has Collections
# A Collection is a logical index
# A Collection has as many Shards as numShards
# A Shard is a logical index subset
# There are as many physical instances of a given Shard as the Collection's 
replicationFactor
# These physical instances are called Replicas
# The number of Replicas in a Collection equals numShards * replicationFactor 
# Each Replica contains a Core
# A Core is a single physical Lucene index
# One Replica in each Shard is labeled a Leader
# Any Replica can become a Leader through election if previous Leader goes away
# Each Shard has 1 or more Replicas with exactly 1 of those Replicas acting as 
the Leader

I think this is it, no?

Visually, by logical role:
||shard 1||shard 2||shard 3||
|leader 1.1|leader 2.1|leader 3.1|
|replica 1.2|replica 2.2|replica 3.2|
|replica 1.3|replica 2.3|replica 3.3|
|replica 1.4|replica 2.4|replica 3.4|
|replica 1.5|replica 2.5|replica 3.5|

So we would say that the above Collection has:
* 3 Shards
* 5 Replicas
* in each Shard 1 Replica *acts as* a Leader

If we ignore roles then this same Collection has the following physical 
structure:

|replica 1.1|replica 2.1|replica 3.1|
|replica 1.2|replica 2.2|replica 3.2|
|replica 1.3|replica 2.3|replica 3.3|
|replica 1.4|replica 2.4|replica 3.4|
|replica 1.5|replica 2.5|replica 3.5|

Yes/no?

So I agree, there is really no need for Slice here. I already forgot about 
that term.
Problems we'll have:
* People will refer to physical copies, those Replicas, as Shards.  When they 
say Shard they'll often refer to a specific Replica.  I know I always think 
of each cell in the above table as Shard, but that's not how we (should) use 
that term. Shards are just logical. Those cells are Replicas.
* We use Replica to refer to a physical index, but also use it to describe a 
non-Leader role.  Confusing.  If there is a Leader, where are Followers?  Would 
introducing the term Follower help?  Then we could say/teach people the 
following:
** When you say Shard it just means the logical Collection subset. It's not 
physical at all.
** If you want to talk about physical indices in a Collection use the term 
Replica. They are all Replicas.
** If you want to refer to a Replica by its role, then you've got to say either 
Leader or Follower.  Because if you say Replica we won't know whether you are 
referring to the special Replica that acts as a Leader or all the other ones.

I think we'll need to correct this in any docs and will need to correct people 
on the ML until we get everyone in sync.  Any books or articles that have been 
written with different terminology will be wrong/out of date and will confuse 
people.

Yes/no?
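A toy rendering of the same arithmetic, matching the 3 x 5 table above 
(illustrative only):

{code}
numShards = 3
replicationFactor = 5
# one Core per Replica, so this Collection holds:
total_cores = numShards * replicationFactor   # = 15
# per Shard: replicationFactor Replicas, exactly one acting as Leader
{code}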


  was (Author: otis):
I am not sure what naming conventionS Solr code is using.  I know most 
people are inconsistent and so code (in general, not referring specifically to 
Solr here) is also often inconsistent.  Here we see this inconsistency leads to 
a lot of confusion.  I think it's great Anshum initiated this. My personal 
preference would be to:
* pick the terminology that makes sense and is easy to explain and understand
* adjust BOTH code and documentation to match that, even if it means renaming 
classes and variables, because it's only going to get harder to do that if it's 
not done now.

OK, here is another attempt:

# A Cluster has Collections
# A Collection is a logical index
# A Collection has as many Shards as numShards
# A Shard is a logical index subset
# There are as many physical instances of a given Shard as the Collection's 
replicationFactor
# These physical instances are called Replicas
# Each Replica contains a Core
# A Core is a single physical Lucene index
# One Replica in each Shard is labeled a Leader
# Any Replica can become a Leader through election if previous Leader goes away
# Each Shard has 1 or more Replicas with exactly 1 of those Replicas acting as 
the Leader

I think this is it, no?

Visually, by logical role:
||shard 1||shard 2||shard 3||
|leader 1.1|leader 2.1|leader 3.1|
|replica 1.2|replica 2.2|replica 3.2|
|replica 

[jira] [Updated] (LUCENE-5086) RamUsageEstimator causes AWT classes to be loaded by calling ManagementFactory#getPlatformMBeanServer

2013-07-04 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-5086:


Attachment: LUCENE-5086.patch

Just so that it doesn't escape: this patch does not bring up the AWT icon in 
the dock for me. I checked on a Mac with the default Java (1.6).

I will still look into Alex's estimation of the alignment property, but if you 
want to commit this in, I can file another issue.

 RamUsageEstimator causes AWT classes to be loaded by calling 
 ManagementFactory#getPlatformMBeanServer
 -

 Key: LUCENE-5086
 URL: https://issues.apache.org/jira/browse/LUCENE-5086
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Shay Banon
Assignee: Dawid Weiss
 Attachments: LUCENE-5086.patch


 Yea, that type of day and that type of title :).
 Since the last update of Java 6 on OS X, I started to see an annoying icon 
 pop up at the dock whenever running elasticsearch. By default, all of our 
 scripts add headless AWT flag so people will probably not encounter it, but, 
 it was strange that I saw it when before I didn't.
 I started to dig around, and saw that when RamUsageEstimator was being 
 loaded, it was causing AWT classes to be loaded. Further investigation showed 
 that actually for some reason, calling 
 ManagementFactory#getPlatformMBeanServer now with the new Java version causes 
 AWT classes to be loaded (at least on the mac, haven't tested on other 
 platforms yet). 
 There are several ways to try and solve it, for example, by identifying the 
 bug in the JVM itself, but I think that there should be a fix for it in 
 Lucene itself, specifically since there is no need to call 
 #getPlatformMBeanServer to get the hotspot diagnostics one (it's a heavy 
 call...).
 Here is a simple call that will allow to get the hotspot mxbean without using 
 the #getPlatformMBeanServer method, and not causing it to be loaded and 
 loading all those nasty AWT classes:
 {code}
 Object getHotSpotMXBean() {
 try {
 // Java 6
 Class sunMF = Class.forName("sun.management.ManagementFactory");
 return sunMF.getMethod("getDiagnosticMXBean").invoke(null);
 } catch (Throwable t) {
 // ignore
 }
 // potentially Java 7
 try {
 return ManagementFactory.class.getMethod("getPlatformMXBean", 
 Class.class).invoke(null, 
 Class.forName("com.sun.management.HotSpotDiagnosticMXBean"));
 } catch (Throwable t) {
 // ignore
 }
 return null;
 }
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4624) Compare Lucene memory estimator with terracota's and Alex Shipilev's object-layout

2013-07-04 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-4624:


Summary: Compare Lucene memory estimator with terracota's and Alex 
Shipilev's object-layout  (was: Compare Lucene memory estimator with 
terracota's)

 Compare Lucene memory estimator with terracota's and Alex Shipilev's 
 object-layout
 --

 Key: LUCENE-4624
 URL: https://issues.apache.org/jira/browse/LUCENE-4624
 Project: Lucene - Core
  Issue Type: Task
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Minor

 Alex Snaps informed me that there's a sizeof estimator in terracota --
 http://svn.terracotta.org/svn/ehcache/trunk/ehcache/ehcache-core/src/main/java/net/sf/ehcache/pool/sizeof/
 looks interesting, they have some VM-specific methods. Didn't look too deeply 
 though; if somebody has the time to check out the differences and maybe 
 compare the estimation differences it'd be nice.
 There is also another tool by Aleksey Shipilev. It looks very good to me 
 (Aleksey has deep knowledge of JVM internals).
 https://github.com/shipilev/java-object-layout/

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5086) RamUsageEstimator causes AWT classes to be loaded by calling ManagementFactory#getPlatformMBeanServer

2013-07-04 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13700339#comment-13700339
 ] 

Uwe Schindler commented on LUCENE-5086:
---

+1

Looks like the code I started this afternoon. In Lucene trunk (Java 7) we can 
remove the first check. We should check the code with all known JVMs and check 
which part is the source of the info (maybe using some printlns).

 RamUsageEstimator causes AWT classes to be loaded by calling 
 ManagementFactory#getPlatformMBeanServer
 -

 Key: LUCENE-5086
 URL: https://issues.apache.org/jira/browse/LUCENE-5086
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Shay Banon
Assignee: Dawid Weiss
 Attachments: LUCENE-5086.patch


 Yea, that type of day and that type of title :).
 Since the last update of Java 6 on OS X, I started to see an annoying icon 
 pop up at the dock whenever running elasticsearch. By default, all of our 
 scripts add headless AWT flag so people will probably not encounter it, but, 
 it was strange that I saw it when before I didn't.
 I started to dig around, and saw that when RamUsageEstimator was being 
 loaded, it was causing AWT classes to be loaded. Further investigation showed 
 that actually for some reason, calling 
 ManagementFactory#getPlatformMBeanServer now with the new Java version causes 
 AWT classes to be loaded (at least on the mac, haven't tested on other 
 platforms yet). 
 There are several ways to try and solve it, for example, by identifying the 
 bug in the JVM itself, but I think that there should be a fix for it in 
 Lucene itself, specifically since there is no need to call 
 #getPlatformMBeanServer to get the hotspot diagnostics one (it's a heavy 
 call...).
 Here is a simple call that will allow to get the hotspot mxbean without using 
 the #getPlatformMBeanServer method, and not causing it to be loaded and 
 loading all those nasty AWT classes:
 {code}
 Object getHotSpotMXBean() {
 try {
 // Java 6
 Class sunMF = Class.forName("sun.management.ManagementFactory");
 return sunMF.getMethod("getDiagnosticMXBean").invoke(null);
 } catch (Throwable t) {
 // ignore
 }
 // potentially Java 7
 try {
 return ManagementFactory.class.getMethod("getPlatformMXBean", 
 Class.class).invoke(null, 
 Class.forName("com.sun.management.HotSpotDiagnosticMXBean"));
 } catch (Throwable t) {
 // ignore
 }
 return null;
 }
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5086) RamUsageEstimator causes AWT classes to be loaded by calling ManagementFactory#getPlatformMBeanServer

2013-07-04 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13700342#comment-13700342
 ] 

Dawid Weiss commented on LUCENE-5086:
-

Yep, I'll add a table to the issue. Going to bed now but will test tomorrow.

 RamUsageEstimator causes AWT classes to be loaded by calling 
 ManagementFactory#getPlatformMBeanServer
 -

 Key: LUCENE-5086
 URL: https://issues.apache.org/jira/browse/LUCENE-5086
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Shay Banon
Assignee: Dawid Weiss
 Attachments: LUCENE-5086.patch


 Yea, that type of day and that type of title :).
 Since the last update of Java 6 on OS X, I started to see an annoying icon 
 pop up at the dock whenever running elasticsearch. By default, all of our 
 scripts add headless AWT flag so people will probably not encounter it, but, 
 it was strange that I saw it when before I didn't.
 I started to dig around, and saw that when RamUsageEstimator was being 
 loaded, it was causing AWT classes to be loaded. Further investigation showed 
 that actually for some reason, calling 
 ManagementFactory#getPlatformMBeanServer now with the new Java version causes 
 AWT classes to be loaded (at least on the mac, haven't tested on other 
 platforms yet). 
 There are several ways to try and solve it, for example, by identifying the 
 bug in the JVM itself, but I think that there should be a fix for it in 
 Lucene itself, specifically since there is no need to call 
 #getPlatformMBeanServer to get the hotspot diagnostics one (it's a heavy 
 call...).
 Here is a simple call that will allow to get the hotspot mxbean without using 
 the #getPlatformMBeanServer method, and not causing it to be loaded and 
 loading all those nasty AWT classes:
 {code}
 Object getHotSpotMXBean() {
 try {
 // Java 6
 Class sunMF = Class.forName("sun.management.ManagementFactory");
 return sunMF.getMethod("getDiagnosticMXBean").invoke(null);
 } catch (Throwable t) {
 // ignore
 }
 // potentially Java 7
 try {
 return ManagementFactory.class.getMethod("getPlatformMXBean", 
 Class.class).invoke(null, 
 Class.forName("com.sun.management.HotSpotDiagnosticMXBean"));
 } catch (Throwable t) {
 // ignore
 }
 return null;
 }
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5086) RamUsageEstimator causes AWT classes to be loaded by calling ManagementFactory#getPlatformMBeanServer

2013-07-04 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13700345#comment-13700345
 ] 

Dawid Weiss commented on LUCENE-5086:
-

Test the patch with:

* Windows
** jrockit
** hotspot 1.6
** hotspot 1.7
** j9
* Mac
** (/) default osx 1.6 [first clause does the job]
** openjdk 1.7?
* Linux
** jrockit
** hotspot 1.6
** hotspot 1.7
** j9
* BSD
** jrockit
** hotspot 1.6
** hotspot 1.7
** j9



 RamUsageEstimator causes AWT classes to be loaded by calling 
 ManagementFactory#getPlatformMBeanServer
 -

 Key: LUCENE-5086
 URL: https://issues.apache.org/jira/browse/LUCENE-5086
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Shay Banon
Assignee: Dawid Weiss
 Attachments: LUCENE-5086.patch


 Yea, that type of day and that type of title :).
 Since the last update of Java 6 on OS X, I started to see an annoying icon 
 pop up at the dock whenever running elasticsearch. By default, all of our 
 scripts add headless AWT flag so people will probably not encounter it, but, 
 it was strange that I saw it when before I didn't.
 I started to dig around, and saw that when RamUsageEstimator was being 
 loaded, it was causing AWT classes to be loaded. Further investigation showed 
 that actually for some reason, calling 
 ManagementFactory#getPlatformMBeanServer now with the new Java version causes 
 AWT classes to be loaded (at least on the mac, haven't tested on other 
 platforms yet). 
 There are several ways to try and solve it, for example, by identifying the 
 bug in the JVM itself, but I think that there should be a fix for it in 
 Lucene itself, specifically since there is no need to call 
 #getPlatformMBeanServer to get the hotspot diagnostics one (it's a heavy 
 call...).
 Here is a simple call that will allow to get the hotspot mxbean without using 
 the #getPlatformMBeanServer method, and not causing it to be loaded and 
 loading all those nasty AWT classes:
 {code}
 Object getHotSpotMXBean() {
 try {
 // Java 6
 Class sunMF = Class.forName("sun.management.ManagementFactory");
 return sunMF.getMethod("getDiagnosticMXBean").invoke(null);
 } catch (Throwable t) {
 // ignore
 }
 // potentially Java 7
 try {
 return ManagementFactory.class.getMethod("getPlatformMXBean", 
 Class.class).invoke(null, 
 Class.forName("com.sun.management.HotSpotDiagnosticMXBean"));
 } catch (Throwable t) {
 // ignore
 }
 return null;
 }
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4982) Creating a core while referencing system properties looks like it loses files.

2013-07-04 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-4982:
-

Attachment: SOLR-4982.patch

Along the way, I wondered "gee, what happens if we create a core in discovery 
mode?" Well, we didn't preserve properties passed on the URL. This patch 
preserves any parameters passed in on the admin URL, e.g. dataDir, config, etc.

Oddly, you have to specify instanceDir even though it isn't a valid property 
for core.properties; otherwise, how would we let the user specify something not 
immediately below solr home?

But my remaining problem is that I can't exit the test gracefully: searchers 
are left hanging and we get the partial stack trace below.

This usually means we did a CoreContainer.get without a corresponding close, 
but there aren't any such unmatched calls. I think I even closed the properties 
I loaded this time, although that would be unrelated, I think.

When we create cores in old-style solr.xml land, there's no need to do anything 
special to exit the tests. So I'm hoping someone has an aha moment. Otherwise 
I need to dig into core creation in the discovery case and find out if it 
really is different.
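
For anyone less familiar with the refcounting convention being referred to, a 
minimal sketch of the matched get/close pattern (this assumes the 
CoreContainer/SolrCore API of the time; the coreContainer variable and the 
"coreZ" name are just placeholders for the example):

{code}
// Sketch: the discipline the searcher/refcount tracker enforces.
// CoreContainer.getCore() increments the SolrCore reference count,
// so every call must be paired with SolrCore.close().
SolrCore core = coreContainer.getCore("coreZ");
if (core != null) {
    try {
        // ... use the core ...
    } finally {
        core.close(); // releases the reference taken by getCore()
    }
}
{code}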



java.lang.AssertionError: ERROR: SolrIndexSearcher opens=3 closes=1
at __randomizedtesting.SeedInfo.seed([445018872D608FD5]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.apache.solr.SolrTestCaseJ4.endTrackingSearchers(SolrTestCaseJ4.java:275)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:122)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)


 Creating a core while referencing system properties looks like it loses files.
 --

 Key: SOLR-4982
 URL: https://issues.apache.org/jira/browse/SOLR-4982
 Project: Solr
  Issue Type: Bug
  Components: multicore
Affects Versions: 4.3, 5.0
Reporter: Erick Erickson
Assignee: Erick Erickson
 Attachments: SOLR-4982.patch, SOLR-4982.patch


 If you use the core admin handler to create a core that references system 
 properties, and then index files without restarting Solr, your files are 
 indexed to the wrong place.
 Say, for instance, I define a sys prop EOE=/Users/Erick/tmp and create a core 
 with this request:
 localhost:8983/solr/admin/cores?action=CREATE&name=coreZ&instanceDir=coreZ&dataDir=%24%7BEOE%7D
 where %24%7BEOE%7D is really ${EOE} after URL escaping. What gets preserved 
 in solr.xml is correct: dataDir is set to ${EOE}. And if I restart Solr, then 
 index documents, they wind up in /Users/Erick/tmp. This is as it should be.
 HOWEVER, if rather than immediately restarting Solr I index some documents to 
 CoreZ, they go in solr_home/CoreZ/${EOE}. The literal path is ${EOE}, 
 dollar sign, curly braces and all.
 How important is this to fix for 4.4?
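
The URL escaping referred to above can be reproduced with java.net.URLEncoder; 
a minimal, self-contained sketch (the class name is invented for the example):

{code}
import java.net.URLEncoder;

public class EscapeDemo {
    public static void main(String[] args) throws Exception {
        // '$' -> %24, '{' -> %7B, '}' -> %7D
        System.out.println(URLEncoder.encode("${EOE}", "UTF-8")); // %24%7BEOE%7D
    }
}
{code}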

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Persistence and created cores and discovery mode, need a bit of help

2013-07-04 Thread Erick Erickson
I'm working on SOLR-4982, which is about old-style persistence, and came up
with a question: can we create cores and have core.properties written
correctly, including parameters on the URL? Answer: NO.

So I fixed that, but can't exit the tests gracefully. Comments on the JIRA and
any new eyes gratefully welcomed...

Not asking people to spend lots of time on it, but if you know where I
should start looking, it'd be a great help.

Erick


[jira] [Commented] (SOLR-5007) TestRecoveryHdfs seems to be leaking a thread occasionally that ends up failing a completely different test.

2013-07-04 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13700348#comment-13700348
 ] 

Dawid Weiss commented on SOLR-5007:
---

This:
{code}
Thread[id=4381, name=IPC Parameter Sending Thread #2, state=TIMED_WAITING, 
group=TGRP-TestRecoveryHdfs]
{code}

means the thread is indeed part of the thread group formed for 
TestRecoveryHdfs. There is a catch though -- the thread that leaked from this 
test couldn't have existed back when the test started (because it'd have been 
marked as pre-existing then). My suspicion is that an IPC thread is created in 
some test and then spawns more threads on demand. The thread group of that 
persistent parent thread is then inherited by its children, which are detected 
as rogue threads.

I cannot explain why that parent thread isn't causing a thread leak error -- 
it'd have to be in the set of ignored threads (but its children are not). If 
anything comes to your mind, shoot. I'll try to investigate tomorrow.
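
The group-inheritance behaviour described above is plain java.lang semantics 
and easy to see in isolation; a minimal sketch (the group name and runnables 
are invented for the example, not test-framework code):

{code}
// Sketch: a thread constructed without an explicit ThreadGroup inherits
// the group of the thread creating it -- so workers spawned later by a
// long-lived IPC thread still carry the group of the test that started it.
ThreadGroup group = new ThreadGroup("TGRP-TestRecoveryHdfs");
Thread parent = new Thread(group, new Runnable() {
    public void run() {
        Thread child = new Thread(new Runnable() { public void run() {} });
        // prints TGRP-TestRecoveryHdfs, even though no group was passed
        System.out.println(child.getThreadGroup().getName());
    }
});
parent.start();
{code}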


 TestRecoveryHdfs seems to be leaking a thread occasionally that ends up 
 failing a completely different test.
 

 Key: SOLR-5007
 URL: https://issues.apache.org/jira/browse/SOLR-5007
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller
Assignee: Mark Miller



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4998) Make the use of Slice and Shard consistent across the code and document base

2013-07-04 Thread Jack Krupansky (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13700351#comment-13700351
 ] 

Jack Krupansky commented on SOLR-4998:
--

For amusement, consider the ElasticSearch terminology:

http://www.elasticsearch.org/guide/reference/glossary/

(Hmmm... just to create trouble in the spirit of independence day, maybe I'll 
consider an appendix in my book which gives a terminology mapping for 
developers looking to upgrade from ElasticSearch to SolrCloud!)


 Make the use of Slice and Shard consistent across the code and document base
 

 Key: SOLR-4998
 URL: https://issues.apache.org/jira/browse/SOLR-4998
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3, 4.3.1
Reporter: Anshum Gupta

 The interchangeable use of Slice and Shard is pretty confusing at times. We 
 should define each term separately and then use the apt one consistently.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 624 - Still Failing!

2013-07-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/624/
Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
REGRESSION:  org.apache.solr.update.SoftAutoCommitTest.testSoftAndHardCommitMaxTimeRapidAdds

Error Message:
1: soft occured too fast: 1372968537612 + (500 * 1) != 1372968538079

Stack Trace:
java.lang.AssertionError: 1: soft occured too fast: 1372968537612 + (500 * 1) != 1372968538079
at __randomizedtesting.SeedInfo.seed([7441AD2C784E8A74:2854031593CCCB0C]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.apache.solr.update.SoftAutoCommitTest.testSoftAndHardCommitMaxTimeRapidAdds(SoftAutoCommitTest.java:321)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:724)




Build Log:

[JENKINS] Lucene-Solr-NightlyTests-4.x - Build # 305 - Still Failing

2013-07-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-4.x/305/

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.cloud.BasicDistributedZkTest:
   1) Thread[id=3386, name=recoveryCmdExecutor-1047-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384)
at java.net.Socket.connect(Socket.java:546)
at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:679)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.cloud.BasicDistributedZkTest: 
   1) Thread[id=3386, name=recoveryCmdExecutor-1047-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384)
at java.net.Socket.connect(Socket.java:546)
at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:679)
at __randomizedtesting.SeedInfo.seed([EBDEE3F396FB644C]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
There are still zombie threads that couldn't be terminated:
   1) Thread[id=3386, name=recoveryCmdExecutor-1047-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)
at 
