Re: Introducing Alba, a small framework to simplify Solr plugins development

2015-09-14 Thread Leonardo Foderaro
Hi Toke,
I'm glad to know that.
That's exactly why I'm writing Alba: to lower as much as possible the
initial learning curve of the Solr plugin architecture, so you can run a
quick test and evaluate whether a custom plugin is the right tool for a
particular task (sometimes you can get the same result by working on the
config), or even just to explore what can be done with plugins.

As soon as I can allocate some time I'll try to add more features and more
examples to the Wiki.

Should you have any issues or suggestions on how to improve it, please let me
know.

Thanks
Leonardo


On Mon, Sep 14, 2015 at 9:58 AM, Toke Eskildsen wrote:

> On Thu, 2015-09-10 at 14:38 +0200, Leonardo Foderaro wrote:
> > @AlbaPlugin(name="myPluginsLibrary")
> > public class MyPlugins {
> >
> > @DocTransformer(name="helloworld")
> >  public void hello(SolrDocument doc) {
> >  doc.setField("message", "Hello, World!");
> >  }
> >
> [... http://github.com/leonardofoderaro/]
>
> This is very timely for me, as I'll have to dig into Solr plugin writing
> before the year is over.
>
> > I still have many questions about Solr, but first I'd like to ask you
> > if you think it's a good idea. Any feedback is very welcome.
>
> I know very little about writing plugins, so I am in no position to qualify
> how much alba helps with that. From what I can see in your GitHub
> repository, it seems very accessible, though.
>
> Thank you for sharing,
> Toke Eskildsen, State and University Library, Denmark
>
>
>
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Updated] (SOLR-8048) bin/solr script should accept user name and password for basicauth

2015-09-14 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-8048:
-
Issue Type: Improvement  (was: Bug)

> bin/solr script should accept user name and password for basicauth
> --
>
> Key: SOLR-8048
> URL: https://issues.apache.org/jira/browse/SOLR-8048
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> It should be possible to pass the user name as a param, say {{-user 
> solr:SolrRocks}}, or alternatively it should prompt for user name and password



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Assigned] (SOLR-8053) support basic auth in SolrJ

2015-09-14 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul reassigned SOLR-8053:


Assignee: Noble Paul

> support basic auth in SolrJ
> ---
>
> Key: SOLR-8053
> URL: https://issues.apache.org/jira/browse/SOLR-8053
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
>







[jira] [Updated] (LUCENE-6489) Move span payloads to sandbox

2015-09-14 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-6489:
--
Attachment: LUCENE-6489.patch

Patch, this time with everything compiling properly...  All tests pass.

> Move span payloads to sandbox
> -
>
> Key: LUCENE-6489
> URL: https://issues.apache.org/jira/browse/LUCENE-6489
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-6489.patch, LUCENE-6489.patch
>
>
> As mentioned on LUCENE-6371:
> {noformat}
> I've marked the new classes and methods as lucene.experimental, rather than 
> moving to the sandbox - if anyone feels strongly about that, maybe it could 
> be done in a follow up issue.
> {noformat}
> I feel strongly about this and will do the move.






Re: Introducing Alba, a small framework to simplify Solr plugins development

2015-09-14 Thread Toke Eskildsen
On Mon, 2015-09-14 at 12:34 +0200, Leonardo Foderaro wrote:

> Should you have any issue or suggestion on how to improve it please
> let me know. 

I can explain my planned project, as it seems relevant in a broader
scope. Maybe you can tell me if such a project fits into your framework?


We have a SolrCloud setup with billions of documents, with 200-300M
documents in each shard. We need to define multiple "sub-corpora", with
a granularity that can be at single-document-level. In Solr-speak that
could be done with filters. A filter could be (id:1234 OR id:5678),
which is easy enough. But that does not scale to millions of IDs.

The idea is to introduce named filters, where the construction of the
filters themselves is done internally in Solr.

Creating a filter could be a call with a user-specified name (aka
filter-ID) and a URL to a filter-setup. The filter-setup would just be
a list of queries, one on each line:
 id:1234
 id:5678
 domain:example.com
 id:7654
The lines are processed one at a time and each match is OR'ed into the
named filter being constructed. As this is a streaming process, there is
no real limit to the size.

Using a previously constructed named filter would (guessing here) be a
matter of writing a small alba-annotated class that takes the filter-ID
as input and returns the corresponding custom-made Filter, which really
is just a list of docIDs underneath (probably represented as a bitmap).
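A minimal sketch of the streaming construction described above, using a toy in-memory "index" (the Map from query line to matching docIDs stands in for real per-line query execution; all names here are hypothetical, not Alba or Solr API):

```java
import java.util.BitSet;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: build a named filter as a bitmap by streaming the
// filter-setup lines and OR'ing each line's matches into the bitmap.
public class NamedFilterSketch {

    // toyIndex stands in for executing each query line against the shard
    public static BitSet buildNamedFilter(List<String> queryLines,
                                          Map<String, int[]> toyIndex,
                                          int maxDoc) {
        BitSet filter = new BitSet(maxDoc);      // one bit per doc in the shard
        for (String line : queryLines) {         // streamed, one line at a time
            for (int docId : toyIndex.getOrDefault(line.trim(), new int[0])) {
                filter.set(docId);               // OR into the named filter
            }
        }
        return filter;
    }
}
```

Only the bitmap and the current line are held in memory at any point, which is why the size of the filter-setup is effectively unbounded.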


- Toke Eskildsen, State and University Library, Denmark







[jira] [Created] (SOLR-8053) support basic auth in SolrJ

2015-09-14 Thread Noble Paul (JIRA)
Noble Paul created SOLR-8053:


 Summary: support basic auth in SolrJ
 Key: SOLR-8053
 URL: https://issues.apache.org/jira/browse/SOLR-8053
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul









[jira] [Updated] (SOLR-8053) support basic auth in SolrJ

2015-09-14 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-8053:
-
Attachment: SOLR-8053.patch

> support basic auth in SolrJ
> ---
>
> Key: SOLR-8053
> URL: https://issues.apache.org/jira/browse/SOLR-8053
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-8053.patch
>
>







[jira] [Commented] (SOLR-8048) bin/solr script should accept user name and password for basicauth

2015-09-14 Thread Daniel Davis (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14743605#comment-14743605
 ] 

Daniel Davis commented on SOLR-8048:


True enough.

> bin/solr script should accept user name and password for basicauth
> --
>
> Key: SOLR-8048
> URL: https://issues.apache.org/jira/browse/SOLR-8048
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> It should be possible to pass the user name as a param, say {{-user 
> solr:SolrRocks}}, or alternatively it should prompt for user name and password






[jira] [Commented] (SOLR-8048) bin/solr script should accept user name and password for basicauth

2015-09-14 Thread Daniel Davis (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14743518#comment-14743518
 ] 

Daniel Davis commented on SOLR-8048:


{quote}
This is designed for the basic auth authentication scheme. For basic 
authentication, every request must carry the credentials in the header.
{quote}

Acknowledged.  It's also supposed to be over SSL, so it's not too bad.
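As a side note on what "carry the credentials in the header" amounts to: a Basic credential is just the base64-encoded user:password pair (encoded, not encrypted, which is why SSL matters). A minimal stdlib sketch; the class and method names are hypothetical, not SolrJ API:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Illustration of the Basic scheme: the user:password pair is merely
// base64-encoded and must accompany every request as the value of the
// HTTP "Authorization" header. Names here are hypothetical.
public class BasicAuthHeader {

    public static String authorizationValue(String user, String password) {
        String pair = user + ":" + password;
        return "Basic " + Base64.getEncoder()
                .encodeToString(pair.getBytes(StandardCharsets.UTF_8));
    }
}
```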

> bin/solr script should accept user name and password for basicauth
> --
>
> Key: SOLR-8048
> URL: https://issues.apache.org/jira/browse/SOLR-8048
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> It should be possible to pass the user name as a param, say {{-user 
> solr:SolrRocks}}, or alternatively it should prompt for user name and password






[jira] [Updated] (SOLR-7888) Make Lucene's AnalyzingInfixSuggester.lookup() method that takes a BooleanQuery filter parameter available in Solr

2015-09-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-7888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-7888:
--
Attachment: SOLR-7888.patch

Some details on this patch:
* Changed back to {{suggest.cfq}}
* Avoid test-schema changes, use {{my_contexts_t}} and {{my_contexts_s}} instead
* Skip (for now) the configurable contexts field name in Lucene classes
* Add the {{CONTEXTS_FIELD_NAME}} constant to AnalyzingInfixLookupFactory to 
avoid making the Lucene constant public
* Now we don't get a Solr error code when searching with invalid field names, so 
I changed the test to instead verify that we get 0 hits.

Query analyser is now always created as
{code}contextFilterQueryAnalyzer = new TokenizerChain(new 
StandardTokenizerFactory(Collections.EMPTY_MAP), null);{code}
It seems to work well; however, I'm not sure if there are use cases where it 
doesn't align with the index-analyzer of the Suggester index.

> Make Lucene's AnalyzingInfixSuggester.lookup() method that takes a 
> BooleanQuery filter parameter available in Solr
> --
>
> Key: SOLR-7888
> URL: https://issues.apache.org/jira/browse/SOLR-7888
> Project: Solr
>  Issue Type: New Feature
>  Components: Suggester
>Affects Versions: 5.2.1
>Reporter: Arcadius Ahouansou
>Assignee: Jan Høydahl
> Fix For: 5.4
>
> Attachments: SOLR-7888-7963.patch, SOLR-7888.patch
>
>
>  LUCENE-6464 has introduced a very flexible lookup method that takes as 
> parameter a BooleanQuery that is used for filtering results.
> This ticket is to expose that method to Solr.
> This would allow users to do:
> {code}
> /suggest?suggest=true=true=term=contexts:tennis
> /suggest?suggest=true=true=term=contexts:golf
>  AND contexts:football
> {code}
> etc
> Given that the context filtering is currently only implemented by the 
> {code}AnalyzingInfixSuggester{code} and by the 
> {code}BlendedInfixSuggester{code}, this initial implementation will support 
> only these 2 lookup implementations.






[jira] [Commented] (SOLR-8048) bin/solr script should accept user name and password for basicauth

2015-09-14 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14743576#comment-14743576
 ] 

Noble Paul commented on SOLR-8048:
--

A ticket is a bug when something works incorrectly. SolrJ works as advertised. BasicAuth 
is a new feature in the server and we are adding support to the client as well.

However, please feel free to submit a patch

> bin/solr script should accept user name and password for basicauth
> --
>
> Key: SOLR-8048
> URL: https://issues.apache.org/jira/browse/SOLR-8048
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> It should be possible to pass the user name as a param, say {{-user 
> solr:SolrRocks}}, or alternatively it should prompt for user name and password






[jira] [Commented] (SOLR-8048) bin/solr script should accept user name and password for basicauth

2015-09-14 Thread Daniel Davis (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14743522#comment-14743522
 ] 

Daniel Davis commented on SOLR-8048:


[~noble.paul], I'd like to take this bug, but I'm not a committer. Still, I 
see directly where in SolrCLI this would be added to HttpClient. If you have 
a patch already, no reason to wait for me. I will check this bug before 
uploading a patch.

> bin/solr script should accept user name and password for basicauth
> --
>
> Key: SOLR-8048
> URL: https://issues.apache.org/jira/browse/SOLR-8048
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> It should be possible to pass the user name as a param, say {{-user 
> solr:SolrRocks}}, or alternatively it should prompt for user name and password






[jira] [Commented] (LUCENE-6779) Reduce memory allocated by CompressingStoredFieldsWriter to write large strings

2015-09-14 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14743538#comment-14743538
 ] 

Robert Muir commented on LUCENE-6779:
-

I can't read the benchmarks (the table formatting is crazy), but now with this 
patch there are allocations for every short string where there were none before. 
This should be fixed. 


> Reduce memory allocated by CompressingStoredFieldsWriter to write large 
> strings
> ---
>
> Key: LUCENE-6779
> URL: https://issues.apache.org/jira/browse/LUCENE-6779
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Reporter: Shalin Shekhar Mangar
> Attachments: LUCENE-6779.patch, LUCENE-6779.patch, 
> LUCENE-6779_alt.patch
>
>
> In SOLR-7927, I am trying to reduce the memory required to index very large 
> documents (between 10 and 100MB), and one of the places which allocates a lot of 
> heap is the UTF8 encoding in CompressingStoredFieldsWriter. The same problem 
> existed in JavaBinCodec and we reduced its memory allocation by falling back 
> to a double pass approach in SOLR-7971 when the utf8 size of the string is 
> greater than 64KB.
> I propose to make the same changes to CompressingStoredFieldsWriter as we 
> made to JavaBinCodec in SOLR-7971.
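The double pass approach referenced above relies on being able to compute a string's exact UTF-8 byte length cheaply, without encoding into a buffer first. A rough, hypothetical sketch of that first pass (an illustration, not the actual Lucene/Solr code):

```java
// Hypothetical sketch of pass one of the double-pass idea: compute the
// exact UTF-8 byte length of a string without allocating an intermediate
// byte[], so pass two can stream or size its output precisely.
public class Utf8Length {

    public static long utf8Length(CharSequence s) {
        long bytes = 0;
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c < 0x80) {
                bytes += 1;                       // ASCII
            } else if (c < 0x800) {
                bytes += 2;                       // 2-byte sequence
            } else if (Character.isHighSurrogate(c) && i + 1 < s.length()
                       && Character.isLowSurrogate(s.charAt(i + 1))) {
                bytes += 4;                       // supplementary code point
                i++;                              // skip the low surrogate
            } else {
                bytes += 3;                       // rest of the BMP
            }
        }
        return bytes;
    }
}
```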






[jira] [Commented] (LUCENE-6489) Move span payloads to sandbox

2015-09-14 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14743549#comment-14743549
 ] 

David Smiley commented on LUCENE-6489:
--

I guess if I were to enhance a highlighter to use SpanPayloadCollector, I'd 
need to make that extension in the sandbox then?  Just want to confirm.

> Move span payloads to sandbox
> -
>
> Key: LUCENE-6489
> URL: https://issues.apache.org/jira/browse/LUCENE-6489
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-6489.patch, LUCENE-6489.patch
>
>
> As mentioned on LUCENE-6371:
> {noformat}
> I've marked the new classes and methods as lucene.experimental, rather than 
> moving to the sandbox - if anyone feels strongly about that, maybe it could 
> be done in a follow up issue.
> {noformat}
> I feel strongly about this and will do the move.






[jira] [Commented] (LUCENE-6779) Reduce memory allocated by CompressingStoredFieldsWriter to write large strings

2015-09-14 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14743570#comment-14743570
 ] 

Shalin Shekhar Mangar commented on LUCENE-6779:
---

Sure, I'll add a test.

> Reduce memory allocated by CompressingStoredFieldsWriter to write large 
> strings
> ---
>
> Key: LUCENE-6779
> URL: https://issues.apache.org/jira/browse/LUCENE-6779
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Reporter: Shalin Shekhar Mangar
> Attachments: LUCENE-6779.patch, LUCENE-6779.patch, LUCENE-6779.patch, 
> LUCENE-6779_alt.patch
>
>
> In SOLR-7927, I am trying to reduce the memory required to index very large 
> documents (between 10 and 100MB), and one of the places which allocates a lot of 
> heap is the UTF8 encoding in CompressingStoredFieldsWriter. The same problem 
> existed in JavaBinCodec and we reduced its memory allocation by falling back 
> to a double pass approach in SOLR-7971 when the utf8 size of the string is 
> greater than 64KB.
> I propose to make the same changes to CompressingStoredFieldsWriter as we 
> made to JavaBinCodec in SOLR-7971.






Re: Introducing Alba, a small framework to simplify Solr plugins development

2015-09-14 Thread Alexandre Rafalovitch
On 14 September 2015 at 07:55, Toke Eskildsen  wrote:
> The idea is to introduce named filters, where the construction of the
> filters themselves is done internally in Solr.

That would be a custom query parser, right? Just thinking out loud.

Regards,
   Alex.
P.S. Also, is this conversation suitable for the DEV mailing list? Should
it move to User?




[jira] [Updated] (LUCENE-6779) Reduce memory allocated by CompressingStoredFieldsWriter to write large strings

2015-09-14 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated LUCENE-6779:
--
Attachment: LUCENE-6779.patch

This patch is based on Robert's earlier patch, but I fall back to the double pass 
approach only if the string is larger than 64KB. This patch also moves 
GrowableByteArrayDataOutput from the util to the codecs.compressing package, as 
suggested.

I benchmarked both approaches (i.e. use double pass always vs. use single pass 
below 64KB) against test data generated using 
TestUtil.randomRealisticUnicodeString (between 5 and 64 characters), and for such 
short fields double pass is approx. 30% slower. I don't think short fields 
should pay this penalty considering they should be far more common.
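The threshold dispatch can be sketched as below. One caveat: the actual patch falls back to a literal double pass for large strings, whereas the large-string branch here encodes in bounded chunks, which caps temporary allocation in a similar spirit; the class name, threshold check, and chunk size are all illustrative, not the real patch.

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch of the hybrid strategy: short strings keep the fast
// single-pass path (one temporary byte[]), and only strings whose
// worst-case UTF-8 size exceeds the 64KB threshold take the bounded path.
public class HybridWriteString {

    static final int THRESHOLD = 64 * 1024;   // 64KB cutoff, as in the patch
    static final int CHUNK_CHARS = 4 * 1024;  // bound on temporary allocation

    public static void writeString(String s, ByteArrayOutputStream out) {
        // 3 bytes per char is a safe UTF-8 upper bound for a Java String
        if ((long) s.length() * 3 <= THRESHOLD) {
            byte[] all = s.getBytes(StandardCharsets.UTF_8);   // single pass
            out.write(all, 0, all.length);
            return;
        }
        int i = 0;
        while (i < s.length()) {                 // bounded-allocation path
            int end = Math.min(i + CHUNK_CHARS, s.length());
            // never split a surrogate pair across two chunks
            if (end < s.length() && Character.isHighSurrogate(s.charAt(end - 1))) {
                end--;
            }
            byte[] chunk = s.substring(i, end).getBytes(StandardCharsets.UTF_8);
            out.write(chunk, 0, chunk.length);
            i = end;
        }
    }
}
```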

{code}
testWriteString1 = Use double pass always
testWriteString2 = Use double pass if utf8 size is greater than 64KB
testWriteStringDefault = Use writeString from base DataOutput class

10K Randomly generated strings (5 <= len <= 64)
==
java -server -Xmx2048M -Xms2048M -Dtests.seed=18262 
-Dtests.datagen.path=./data.txt -Dtests.string.minlen=5 
-Dtests.string.maxlen=64 -Dtests.string.num=1 -jar target/benchmarks.jar 
-wi 5 -i 50 -gc true -f 2 -prof gc ".*GrowableByteArrayDataOutputBenchmark.*"

# Run complete. Total time: 00:06:41

Benchmark                                                                          Mode  Cnt        Score       Error   Units
GrowableByteArrayDataOutputBenchmark.testWriteString1                             thrpt  100  2916182.627 ±  5219.401   ops/s
GrowableByteArrayDataOutputBenchmark.testWriteString1:·gc.alloc.rate              thrpt  100        0.001 ±      0.001  MB/sec
GrowableByteArrayDataOutputBenchmark.testWriteString1:·gc.alloc.rate.norm         thrpt  100       ≈ 10⁻⁴                B/op
GrowableByteArrayDataOutputBenchmark.testWriteString1:·gc.count                   thrpt  100          ≈ 0              counts
GrowableByteArrayDataOutputBenchmark.testWriteString2                             thrpt  100  4226084.451 ±  7188.594   ops/s
GrowableByteArrayDataOutputBenchmark.testWriteString2:·gc.alloc.rate              thrpt  100      596.567 ±      1.016  MB/sec
GrowableByteArrayDataOutputBenchmark.testWriteString2:·gc.alloc.rate.norm         thrpt  100      148.060 ±      0.001    B/op
GrowableByteArrayDataOutputBenchmark.testWriteString2:·gc.count                   thrpt  100          ≈ 0              counts
GrowableByteArrayDataOutputBenchmark.testWriteStringDefault                       thrpt  100  4221729.873 ± 13558.316   ops/s
GrowableByteArrayDataOutputBenchmark.testWriteStringDefault:·gc.alloc.rate        thrpt  100      595.961 ±      1.916  MB/sec
GrowableByteArrayDataOutputBenchmark.testWriteStringDefault:·gc.alloc.rate.norm   thrpt  100      148.060 ±      0.001    B/op
GrowableByteArrayDataOutputBenchmark.testWriteStringDefault:·gc.count             thrpt  100        1.000              counts
GrowableByteArrayDataOutputBenchmark.testWriteStringDefault:·gc.time              thrpt  100       19.000                  ms

10MB latin-1 field
=
java -server -Xmx2048M -Xms2048M -Dtests.seed=18262 -Dtests.string.num=0 
-Dtests.json.path=./input14.json -jar target/benchmarks.jar -wi 5 -i 50 -gc 
true -f 2 -prof gc ".*GrowableByteArrayDataOutputBenchmark.*"

# Run complete. Total time: 00:06:47

Benchmark                                                                          Mode  Cnt     Score    Error   Units
GrowableByteArrayDataOutputBenchmark.testWriteString1                             thrpt  100    27.985 ±  0.074   ops/s
GrowableByteArrayDataOutputBenchmark.testWriteString1:·gc.alloc.rate              thrpt  100     0.001 ±  0.001  MB/sec
GrowableByteArrayDataOutputBenchmark.testWriteString1:·gc.alloc.rate.norm         thrpt  100    24.951 ± 20.652    B/op
GrowableByteArrayDataOutputBenchmark.testWriteString1:·gc.count                   thrpt  100       ≈ 0           counts
GrowableByteArrayDataOutputBenchmark.testWriteString2                             thrpt  100    28.105 ±  0.090   ops/s
GrowableByteArrayDataOutputBenchmark.testWriteString2:·gc.alloc.rate              thrpt  100     0.001 ±  0.001  MB/sec
GrowableByteArrayDataOutputBenchmark.testWriteString2:·gc.alloc.rate.norm         thrpt  100    24.888 ± 20.655    B/op
GrowableByteArrayDataOutputBenchmark.testWriteString2:·gc.count                   thrpt  100       ≈ 0           counts
GrowableByteArrayDataOutputBenchmark.testWriteStringDefault                       thrpt  100    36.185 ±  0.099   ops/s
GrowableByteArrayDataOutputBenchmark.testWriteStringDefault:·gc.alloc.rate        thrpt  100  1123.864 ±  3.077  MB/sec

[jira] [Commented] (LUCENE-6489) Move span payloads to sandbox

2015-09-14 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14743554#comment-14743554
 ] 

Alan Woodward commented on LUCENE-6489:
---

Or just write another implementation.  It's pretty lightweight...

> Move span payloads to sandbox
> -
>
> Key: LUCENE-6489
> URL: https://issues.apache.org/jira/browse/LUCENE-6489
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-6489.patch, LUCENE-6489.patch
>
>
> As mentioned on LUCENE-6371:
> {noformat}
> I've marked the new classes and methods as lucene.experimental, rather than 
> moving to the sandbox - if anyone feels strongly about that, maybe it could 
> be done in a follow up issue.
> {noformat}
> I feel strongly about this and will do the move.






[JENKINS-EA] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b78) - Build # 13928 - Failure!

2015-09-14 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/13928/
Java: 64bit/jdk1.9.0-ea-b78 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication

Error Message:
[/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_AB78227117D6942D-001/solr-instance-004/./collection1/data,
 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_AB78227117D6942D-001/solr-instance-004/./collection1/data/index.2015091414061,
 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_AB78227117D6942D-001/solr-instance-004/./collection1/data/index.20150914140111263]
 expected:<2> but was:<3>

Stack Trace:
java.lang.AssertionError: 
[/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_AB78227117D6942D-001/solr-instance-004/./collection1/data,
 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_AB78227117D6942D-001/solr-instance-004/./collection1/data/index.2015091414061,
 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_AB78227117D6942D-001/solr-instance-004/./collection1/data/index.20150914140111263]
 expected:<2> but was:<3>
at __randomizedtesting.SeedInfo.seed([AB78227117D6942D:5C0BCC29D13E3BCB]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:813)
at org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication(TestReplicationHandler.java:1243)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:504)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
[jira] [Commented] (SOLR-7888) Make Lucene's AnalyzingInfixSuggester.lookup() method that takes a BooleanQuery filter parameter available in Solr

2015-09-14 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14743511#comment-14743511
 ] 

Jan Høydahl commented on SOLR-7888:
---

Please don't delete attachments when uploading new ones.
I think it makes sense to commit a version without localParams support first, 
as there are still some unresolved issues with that integration:
* Solr's QParsers assume that you query the index specified in schema.xml, but 
we don't
* It is kind of a hack to force Lucene's AnalyzingSuggester to use the same 
contexts field name as the source schema field name we pull data from - it 
satisfies the QParser's need for a DF which exists in the schema, but there are more 
problems:
* If the source fieldType in schema.xml is e.g. {{text}}, then that Analyser is 
used for the query, with lowercasing etc. The problem is that the {{contexts}} field 
for the Suggester is *always* indexed as {{String}}, meaning that a source 
string "ABC" will not match a query "ABC", since the query will be lowercased and match 
only "abc"
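The "ABC" vs "abc" point in the last bullet comes down to an exact comparison against the raw stored value; a trivial illustration, with hypothetical names and `toLowerCase()` standing in for the query-side analysis chain:

```java
// Trivial sketch of the mismatch: the suggester's contexts field stores
// the raw String, while a text-type query analyzer lowercases the query,
// so an exact-match filter on the stored value finds nothing.
public class ContextsCaseMismatch {

    // String-typed fields match only on the exact stored value
    public static boolean matches(String storedContext, String analyzedQuery) {
        return storedContext.equals(analyzedQuery);
    }
}
```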

One solution is to extend Lucene's suggesters to be able to index contexts 
field with a custom analyzer, given in constructor. Then we could match things 
up and get it working. However, I don't like the hack of accidentally naming 
the two fields the same to get QParser working, so ideally we should then 
create a SuggesterQParser or something which accepts DF not in schema and is 
explicit about Analysers. But then letting people switch parser with localParam 
will bring them trouble again since that parser will require the field to exist 
in schema etc...

So for now let's analyze the context query as String, using Lucene's query 
parser, and leave it to a future JIRA to add more flexibility. I'll upload a patch 
shortly.

> Make Lucene's AnalyzingInfixSuggester.lookup() method that takes a 
> BooleanQuery filter parameter available in Solr
> --
>
> Key: SOLR-7888
> URL: https://issues.apache.org/jira/browse/SOLR-7888
> Project: Solr
>  Issue Type: New Feature
>  Components: Suggester
>Affects Versions: 5.2.1
>Reporter: Arcadius Ahouansou
>Assignee: Jan Høydahl
> Fix For: 5.4
>
> Attachments: SOLR-7888-7963.patch
>
>
>  LUCENE-6464 has introduced a very flexible lookup method that takes as 
> parameter a BooleanQuery that is used for filtering results.
> This ticket is to expose that method to Solr.
> This would allow users to do:
> {code}
> /suggest?suggest=true=true=term=contexts:tennis
> /suggest?suggest=true=true=term=contexts:golf
>  AND contexts:football
> {code}
> etc
> Given that the context filtering is currently only implemented by the 
> {code}AnalyzingInfixSuggester{code} and by the 
> {code}BlendedInfixSuggester{code}, this initial implementation will support 
> only these 2 lookup implementations.






[jira] [Commented] (LUCENE-6779) Reduce memory allocated by CompressingStoredFieldsWriter to write large strings

2015-09-14 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14743566#comment-14743566
 ] 

Shalin Shekhar Mangar commented on LUCENE-6779:
---

In these results, testWriteString3 is the one which uses scratch bytes to 
encode small strings:
{code}
java -server -Xmx2048M -Xms2048M -Dtests.seed=18262 
-Dtests.datagen.path=./data.txt -Dtests.string.minlen=5 
-Dtests.string.maxlen=64 -Dtests.string.num=1 -jar target/benchmarks.jar 
-wi 5 -i 50 -gc true -f 2 -prof gc ".*GrowableByteArrayDataOutputBenchmark.*"

# Run complete. Total time: 00:08:55

Benchmark                                                                                 Mode  Cnt        Score       Error   Units
GrowableByteArrayDataOutputBenchmark.testWriteString1                                    thrpt  100  2915687.179 ±  4981.266   ops/s
GrowableByteArrayDataOutputBenchmark.testWriteString1:·gc.alloc.rate                     thrpt  100        0.001 ±     0.001  MB/sec
GrowableByteArrayDataOutputBenchmark.testWriteString1:·gc.alloc.rate.norm                thrpt  100       ≈ 10⁻⁴                B/op
GrowableByteArrayDataOutputBenchmark.testWriteString1:·gc.count                          thrpt  100          ≈ 0              counts
GrowableByteArrayDataOutputBenchmark.testWriteString2                                    thrpt  100  4216428.245 ±  7822.793   ops/s
GrowableByteArrayDataOutputBenchmark.testWriteString2:·gc.alloc.rate                     thrpt  100      595.210 ±     1.103  MB/sec
GrowableByteArrayDataOutputBenchmark.testWriteString2:·gc.alloc.rate.norm                thrpt  100      148.060 ±     0.001    B/op
GrowableByteArrayDataOutputBenchmark.testWriteString2:·gc.count                          thrpt  100        1.000              counts
GrowableByteArrayDataOutputBenchmark.testWriteString2:·gc.time                           thrpt  100        2.000                  ms
GrowableByteArrayDataOutputBenchmark.testWriteString3                                    thrpt  100  4581362.617 ± 11683.766   ops/s
GrowableByteArrayDataOutputBenchmark.testWriteString3:·gc.alloc.rate                     thrpt  100        0.001 ±     0.001  MB/sec
GrowableByteArrayDataOutputBenchmark.testWriteString3:·gc.alloc.rate.norm                thrpt  100       ≈ 10⁻⁴                B/op
GrowableByteArrayDataOutputBenchmark.testWriteString3:·gc.count                          thrpt  100          ≈ 0              counts
GrowableByteArrayDataOutputBenchmark.testWriteStringDefault                              thrpt  100  4277669.423 ± 19742.963   ops/s
GrowableByteArrayDataOutputBenchmark.testWriteStringDefault:·gc.alloc.rate               thrpt  100      603.855 ±     2.787  MB/sec
GrowableByteArrayDataOutputBenchmark.testWriteStringDefault:·gc.alloc.rate.norm          thrpt  100      148.060 ±     0.001    B/op
GrowableByteArrayDataOutputBenchmark.testWriteStringDefault:·gc.churn.PS_Eden_Space      thrpt  100        5.947 ±    20.169  MB/sec
GrowableByteArrayDataOutputBenchmark.testWriteStringDefault:·gc.churn.PS_Eden_Space.norm thrpt  100        1.462 ±     4.958    B/op
GrowableByteArrayDataOutputBenchmark.testWriteStringDefault:·gc.count                    thrpt  100        2.000              counts
GrowableByteArrayDataOutputBenchmark.testWriteStringDefault:·gc.time                     thrpt  100        3.000                  ms
{code}
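For reference, the two strategies being benchmarked -- a reusable scratch buffer for small strings, and a two-pass length-then-encode path for large ones -- can be sketched in plain JDK Java. The class and constant names below are illustrative, not Lucene's actual GrowableByteArrayDataOutput, and surrogate pairs are omitted for brevity (BMP chars only):

```java
import java.util.Arrays;

/**
 * Sketch of the two encoding strategies discussed above. Small strings
 * are UTF-8 encoded into a scratch buffer that is reused across calls
 * (amortizing allocation toward zero, as in the testWriteString3
 * numbers); strings whose worst-case encoding exceeds the cutoff use a
 * two-pass approach in the style of SOLR-7971: first compute the exact
 * UTF-8 length, then encode directly into the output buffer, never
 * allocating a full-size temporary array.
 */
public class ScratchStringWriter {
    private static final int SCRATCH_LIMIT = 64 * 1024; // 64KB cutoff, as in SOLR-7971

    private byte[] scratch = new byte[0]; // reused across writeString calls
    private byte[] bytes = new byte[16];  // the growing output buffer
    private int upto;

    public void writeString(String s) {
        int maxLen = s.length() * 3; // UTF-8 worst case for BMP chars
        if (maxLen <= SCRATCH_LIMIT) {
            if (scratch.length < maxLen) {
                scratch = new byte[maxLen]; // grows rarely, then is reused
            }
            int len = encode(s, scratch, 0);
            ensureCapacity(upto + len);
            System.arraycopy(scratch, 0, bytes, upto, len);
            upto += len;
        } else {
            // Pass 1: measure. Pass 2: encode in place into the output.
            ensureCapacity(upto + utf8Length(s));
            upto += encode(s, bytes, upto);
        }
    }

    public byte[] toBytes() {
        return Arrays.copyOf(bytes, upto);
    }

    private void ensureCapacity(int cap) {
        if (bytes.length < cap) {
            bytes = Arrays.copyOf(bytes, Math.max(cap, 2 * bytes.length));
        }
    }

    private static int utf8Length(String s) {
        int n = 0;
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            n += c < 0x80 ? 1 : (c < 0x800 ? 2 : 3);
        }
        return n;
    }

    private static int encode(String s, byte[] dst, int offset) {
        int p = offset;
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i); // surrogate pairs omitted for brevity
            if (c < 0x80) {
                dst[p++] = (byte) c;
            } else if (c < 0x800) {
                dst[p++] = (byte) (0xC0 | (c >> 6));
                dst[p++] = (byte) (0x80 | (c & 0x3F));
            } else {
                dst[p++] = (byte) (0xE0 | (c >> 12));
                dst[p++] = (byte) (0x80 | ((c >> 6) & 0x3F));
                dst[p++] = (byte) (0x80 | (c & 0x3F));
            }
        }
        return p - offset;
    }
}
```

The small-string path still copies once from scratch into the output, but the scratch array itself is the only per-writer allocation, which is what the allocation-rate columns above are measuring.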

> Reduce memory allocated by CompressingStoredFieldsWriter to write large 
> strings
> ---
>
> Key: LUCENE-6779
> URL: https://issues.apache.org/jira/browse/LUCENE-6779
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Reporter: Shalin Shekhar Mangar
> Attachments: LUCENE-6779.patch, LUCENE-6779.patch, LUCENE-6779.patch, 
> LUCENE-6779_alt.patch
>
>
> In SOLR-7927, I am trying to reduce the memory required to index very large 
> documents (between 10 to 100MB) and one of the places which allocate a lot of 
> heap is the UTF8 encoding in CompressingStoredFieldsWriter. The same problem 
> existed in JavaBinCodec and we reduced its memory allocation by falling back 
> to a double pass approach in SOLR-7971 when the utf8 size of the string is 
> greater than 64KB.
> I propose to make the same changes to CompressingStoredFieldsWriter as we 
> made to JavaBinCodec in SOLR-7971.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6779) Reduce memory allocated by CompressingStoredFieldsWriter to write large strings

2015-09-14 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14743567#comment-14743567
 ] 

Robert Muir commented on LUCENE-6779:
-

OK, this is starting to look good. I personally don't mind the complexity (even 
though I feel huge documents are not really a use case: "here is your search 
result, somewhere in this 10MB document"), as long as we add unit tests for this 
output class that stress the string writing.

The thing is, otherwise that logic is basically untested, because tests rarely 
make long strings. That is ripe for bugs now or in the future.

> Reduce memory allocated by CompressingStoredFieldsWriter to write large 
> strings
> ---
>
> Key: LUCENE-6779
> URL: https://issues.apache.org/jira/browse/LUCENE-6779
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Reporter: Shalin Shekhar Mangar
> Attachments: LUCENE-6779.patch, LUCENE-6779.patch, LUCENE-6779.patch, 
> LUCENE-6779_alt.patch
>
>
> In SOLR-7927, I am trying to reduce the memory required to index very large 
> documents (between 10 to 100MB) and one of the places which allocate a lot of 
> heap is the UTF8 encoding in CompressingStoredFieldsWriter. The same problem 
> existed in JavaBinCodec and we reduced its memory allocation by falling back 
> to a double pass approach in SOLR-7971 when the utf8 size of the string is 
> greater than 64KB.
> I propose to make the same changes to CompressingStoredFieldsWriter as we 
> made to JavaBinCodec in SOLR-7971.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6779) Reduce memory allocated by CompressingStoredFieldsWriter to write large strings

2015-09-14 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated LUCENE-6779:
--
Attachment: LUCENE-6779.patch

This patch adds scratch bytes to GrowableArrayDataOutput itself to reduce 
allocations for small strings.

> Reduce memory allocated by CompressingStoredFieldsWriter to write large 
> strings
> ---
>
> Key: LUCENE-6779
> URL: https://issues.apache.org/jira/browse/LUCENE-6779
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Reporter: Shalin Shekhar Mangar
> Attachments: LUCENE-6779.patch, LUCENE-6779.patch, LUCENE-6779.patch, 
> LUCENE-6779_alt.patch
>
>
> In SOLR-7927, I am trying to reduce the memory required to index very large 
> documents (between 10 to 100MB) and one of the places which allocate a lot of 
> heap is the UTF8 encoding in CompressingStoredFieldsWriter. The same problem 
> existed in JavaBinCodec and we reduced its memory allocation by falling back 
> to a double pass approach in SOLR-7971 when the utf8 size of the string is 
> greater than 64KB.
> I propose to make the same changes to CompressingStoredFieldsWriter as we 
> made to JavaBinCodec in SOLR-7971.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6780) GeoPointDistanceQuery doesn't work with a large radius?

2015-09-14 Thread Nicholas Knize (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Knize updated LUCENE-6780:
---
Attachment: LUCENE-6780.patch

Next iteration patch off the feature branch.

bq. OK I committed a new randomized test 

The test was buggy using maxLon where it expected minLon.  I also updated it to 
handle the case where the circle is fully contained by the rectangle.  All 
beasting passed.

bq. I'm seeing very slow query execution times

I had noticed this before posting the initial patch and isolated one test that 
took ~45 secs to complete. The slow times were related to recomputing the bbox 
and high resolution ranges for every segment on very large distance queries. 
The new patch relaxes the distance criteria for the range resolution.  There 
are still some boundary outliers that take ~15 seconds (instead of 60+) to 
complete.  I can further improve this by optimizing the 
GeoPointDistanceQuery.computeBBox method... or, what are your thoughts on 
computing the BBox and reusing across segments??

bq. I still think there's really a bug here  I committed a failing test on the 
branch

bleh...indeed!! Fixed it up.

> GeoPointDistanceQuery doesn't work with a large radius?
> ---
>
> Key: LUCENE-6780
> URL: https://issues.apache.org/jira/browse/LUCENE-6780
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
> Attachments: LUCENE-6780.patch, LUCENE-6780.patch, LUCENE-6780.patch, 
> LUCENE-6780.patch, LUCENE-6780.patch
>
>
> I'm working on LUCENE-6698 but struggling with test failures ...
> Then I noticed that TestGeoPointQuery's test never tests on large distances, 
> so I modified the test to sometimes do so (like TestBKDTree) and hit test 
> failures.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5209) last replica removal cascades to remove shard from clusterstate

2015-09-14 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14743672#comment-14743672
 ] 

Shawn Heisey commented on SOLR-5209:


On IRC today:

{noformat}
09:14 < yriveiro> Hi, I unloaded by accident the las replica of a shard in a
  collection
09:14 < yriveiro> How can I recreate the shard?
{noformat}


> last replica removal cascades to remove shard from clusterstate
> ---
>
> Key: SOLR-5209
> URL: https://issues.apache.org/jira/browse/SOLR-5209
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.4
>Reporter: Christine Poerschke
>Assignee: Mark Miller
>Priority: Blocker
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-5209.patch
>
>
> The problem we saw was that unloading of an only replica of a shard deleted 
> that shard's info from the clusterstate. Once it was gone then there was no 
> easy way to re-create the shard (other than dropping and re-creating the 
> whole collection's state).
> This seems like a bug?
> Overseer.java around line 600 has a comment and commented out code:
> // TODO TODO TODO!!! if there are no replicas left for the slice, and the 
> slice has no hash range, remove it
> // if (newReplicas.size() == 0 && slice.getRange() == null) {
> // if there are no replicas left for the slice remove it



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8046) HdfsCollectionsAPIDistributedZkTest checks that no transaction logs failed to be opened during the test but does not isolate this to the test and could fail due to other tests

2015-09-14 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8046:
--
Attachment: SOLR-8046.patch

> HdfsCollectionsAPIDistributedZkTest checks that no transaction logs failed to 
> be opened during the test but does not isolate this to the test and could 
> fail due to other tests.
> 
>
> Key: SOLR-8046
> URL: https://issues.apache.org/jira/browse/SOLR-8046
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
> Attachments: SOLR-8046.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 956 - Still Failing

2015-09-14 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/956/

2 tests failed.
FAILED:  org.apache.solr.cloud.hdfs.HdfsCollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=2758, name=collection3, 
state=RUNNABLE, group=TGRP-HdfsCollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=2758, name=collection3, state=RUNNABLE, 
group=TGRP-HdfsCollectionsAPIDistributedZkTest]
Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:38206: Could not find collection : 
awholynewstresscollection_collection3_0
at __randomizedtesting.SeedInfo.seed([25F1FD9DB73A8F95]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:560)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:372)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:325)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1099)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:895)


FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=9518, name=collection5, 
state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=9518, name=collection5, state=RUNNABLE, 
group=TGRP-CollectionsAPIDistributedZkTest]
Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:52357/_gha/f: collection already exists: 
awholynewstresscollection_collection5_0
at __randomizedtesting.SeedInfo.seed([25F1FD9DB73A8F95]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:560)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:372)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:325)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1099)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1574)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1595)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:888)




Build Log:
[...truncated 10342 lines...]
   [junit4] Suite: 
org.apache.solr.cloud.hdfs.HdfsCollectionsAPIDistributedZkTest
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.HdfsCollectionsAPIDistributedZkTest_25F1FD9DB73A8F95-001/init-core-data-001
   [junit4]   2> 130268 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[25F1FD9DB73A8F95]-worker) [
] o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false)
   [junit4]   2> 130268 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[25F1FD9DB73A8F95]-worker) [
] o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 131013 WARN  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[25F1FD9DB73A8F95]-worker) [
] o.a.h.u.NativeCodeLoader Unable to load native-hadoop library for your 
platform... using builtin-java classes where applicable
   [junit4]   1> Formatting using clusterid: testClusterID
   [junit4]   2> 132289 WARN  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[25F1FD9DB73A8F95]-worker) [
] o.a.h.m.i.MetricsConfig Cannot locate configuration: tried 

[jira] [Resolved] (LUCENE-6789) change IndexSearcher default similarity to BM25

2015-09-14 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-6789.
-
Resolution: Fixed

Thanks Adrien: I applied those fixes.

> change IndexSearcher default similarity to BM25
> ---
>
> Key: LUCENE-6789
> URL: https://issues.apache.org/jira/browse/LUCENE-6789
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 6.0
>
> Attachments: LUCENE-6789.patch
>
>
> Since Lucene 4.0, the statistics needed for this are always present, so we 
> can make the change without any degradation.
> I think the change should be a 6.0 change only: it will prevent any 
> surprises. DefaultSimilarity is renamed to ClassicSimilarity to prevent 
> confusion. No indexing change is needed as we use the same norm format, its 
> just a runtime switch. Users can just do IndexSearcher.setSimilarity(new 
> ClassicSimilarity()) to get the old behavior.  I did not change solr's 
> default here, I think that should be a separate issue, since it has more 
> concerns: e.g. factories in configuration files and so on.
> One issue was the generation of synonym queries (posinc=0) by QueryBuilder 
> (used by parsers). This is kind of a corner case (query-time synonyms), but 
> we should make it nicer. The current code in trunk disables coord, which 
> makes no sense for anything but the vector space impl. Instead, this patch 
> adds a SynonymQuery which treats occurrences of any term as a single 
> pseudoterm. With english wordnet as a query-time synonym dict, this query 
> gives 12% improvement in MAP for title queries on BM25, and 2% with Classic 
> (not significant). So its a better generic approach for synonyms that works 
> with all scoring models.
> I wanted to use BlendedTermQuery, but it seems to have problems at a glance, 
> it tries to "take on the world", it has problems like not working with 
> distributed scoring (doesn't consult indexsearcher for stats). Anyway this 
> one is a different, simpler approach, which only works for a single field, 
> and which calls tf(sum) a single time. 
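The pseudoterm idea -- apply the tf saturation once over the summed frequency of all synonyms -- can be illustrated numerically with a standalone BM25 term scorer. The constants and statistics below are made up for the example; this is a sketch, not Lucene's implementation:

```java
/**
 * Minimal BM25 term scorer, used to contrast scoring each synonym as a
 * separate term (and summing the scores) with the pseudoterm approach,
 * which sums the term frequencies and saturates once.
 */
public class Bm25Sketch {
    static final double K1 = 1.2, B = 0.75; // common BM25 defaults

    /** BM25 contribution of one term with frequency tf in one document. */
    static double score(double tf, double docLen, double avgDocLen, double idf) {
        double norm = K1 * (1 - B + B * docLen / avgDocLen);
        return idf * tf * (K1 + 1) / (tf + norm);
    }

    /** Pseudoterm: one saturation over the summed tf of all synonyms. */
    static double synonymScore(double[] tfs, double docLen, double avgDocLen, double idf) {
        double sum = 0;
        for (double tf : tfs) {
            sum += tf;
        }
        return score(sum, docLen, avgDocLen, idf);
    }
}
```

Because tf/(tf + norm) saturates, a document containing several different synonyms of the same concept scores only slightly higher under the pseudoterm than a single strong match would -- whereas summing per-term BM25 scores over-rewards it.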



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6789) change IndexSearcher default similarity to BM25

2015-09-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14744370#comment-14744370
 ] 

ASF subversion and git services commented on LUCENE-6789:
-

Commit 1703070 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1703070 ]

LUCENE-6789: change IndexSearcher default similarity to BM25

> change IndexSearcher default similarity to BM25
> ---
>
> Key: LUCENE-6789
> URL: https://issues.apache.org/jira/browse/LUCENE-6789
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 6.0
>
> Attachments: LUCENE-6789.patch
>
>
> Since Lucene 4.0, the statistics needed for this are always present, so we 
> can make the change without any degradation.
> I think the change should be a 6.0 change only: it will prevent any 
> surprises. DefaultSimilarity is renamed to ClassicSimilarity to prevent 
> confusion. No indexing change is needed as we use the same norm format, its 
> just a runtime switch. Users can just do IndexSearcher.setSimilarity(new 
> ClassicSimilarity()) to get the old behavior.  I did not change solr's 
> default here, I think that should be a separate issue, since it has more 
> concerns: e.g. factories in configuration files and so on.
> One issue was the generation of synonym queries (posinc=0) by QueryBuilder 
> (used by parsers). This is kind of a corner case (query-time synonyms), but 
> we should make it nicer. The current code in trunk disables coord, which 
> makes no sense for anything but the vector space impl. Instead, this patch 
> adds a SynonymQuery which treats occurrences of any term as a single 
> pseudoterm. With english wordnet as a query-time synonym dict, this query 
> gives 12% improvement in MAP for title queries on BM25, and 2% with Classic 
> (not significant). So its a better generic approach for synonyms that works 
> with all scoring models.
> I wanted to use BlendedTermQuery, but it seems to have problems at a glance, 
> it tries to "take on the world", it has problems like not working with 
> distributed scoring (doesn't consult indexsearcher for stats). Anyway this 
> one is a different, simpler approach, which only works for a single field, 
> and which calls tf(sum) a single time. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8054) Add a GET command to ConfigSets API

2015-09-14 Thread Gregory Chanan (JIRA)
Gregory Chanan created SOLR-8054:


 Summary: Add a GET command to ConfigSets API
 Key: SOLR-8054
 URL: https://issues.apache.org/jira/browse/SOLR-8054
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Gregory Chanan
Assignee: Gregory Chanan


It would be useful to have a command that allows you to view a ConfigSet via 
the API rather than going to ZooKeeper directly, mainly for security reasons, 
e.g.
- Solr may have different security requirements than the znodes, e.g. only Solr 
can view znodes but any authenticated user can call the ConfigSet API
- it's nicer than pointing users at the web UI's ZooKeeper viewer, because of 
the same security concerns as above and because you don't have to know the 
internal ZooKeeper paths.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6780) GeoPointDistanceQuery doesn't work with a large radius?

2015-09-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14744288#comment-14744288
 ] 

ASF subversion and git services commented on LUCENE-6780:
-

Commit 1703062 from [~mikemccand] in branch 'dev/branches/lucene6780'
[ https://svn.apache.org/r1703062 ]

LUCENE-6780: fix test bug; fix bug in closestPoinOnBBox; add some nocommits

> GeoPointDistanceQuery doesn't work with a large radius?
> ---
>
> Key: LUCENE-6780
> URL: https://issues.apache.org/jira/browse/LUCENE-6780
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
> Attachments: LUCENE-6780.patch, LUCENE-6780.patch, LUCENE-6780.patch, 
> LUCENE-6780.patch, LUCENE-6780.patch
>
>
> I'm working on LUCENE-6698 but struggling with test failures ...
> Then I noticed that TestGeoPointQuery's test never tests on large distances, 
> so I modified the test to sometimes do so (like TestBKDTree) and hit test 
> failures.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



License in Solr

2015-09-14 Thread vetrik kumaran murugesan
Hi Team,


I see GPL and MIT licenses mentioned in Solr. Does the Apache 2.0 license
cover all of the underlying licenses?

For example, junit4-ant 2.1.13 lists both GPL and MIT.

Please let me know if this is the appropriate group to send these
questions to.

Regards,

Vetrikkumaran Murugesan


Re: Introducing Alba, a small framework to simplify Solr plugins development

2015-09-14 Thread Leonardo Foderaro
thank you for sharing, it looks like a challenging project.
I'm not sure whether Alba is the right tool,
but if you want to give it a try for a simple proof of concept
I will gladly help you decide whether it is.
I also agree with Alexandre: I'm not sure whether this thread
is more appropriate here on the dev list or on the users list;
if need be, we can continue it there.

thanks
leo

On Mon, Sep 14, 2015 at 4:27 PM, Alexandre Rafalovitch 
wrote:

> On 14 September 2015 at 07:55, Toke Eskildsen 
> wrote:
> > The idea is to introduce named filters, where the construction of the
> > filters themselves is done internally in Solr.
>
> That would be a custom query parser, right? Just thinking out loud.
>
> Regards,
>Alex.
> P.s. Also, is this conversation suitable for DEV mailing list? Should
> it move to User?
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[JENKINS] Lucene-Solr-5.x-Solaris (multiarch/jdk1.7.0) - Build # 53 - Failure!

2015-09-14 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Solaris/53/
Java: multiarch/jdk1.7.0 -d64 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.update.AutoCommitTest.testCommitWithin

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([E16A6EF5DB6D4A08:5BB8018D5843A41D]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:765)
at 
org.apache.solr.update.AutoCommitTest.testCommitWithin(AutoCommitTest.java:321)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: 

00


request was:start=0=standard=id:529=20=2.2
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:758)
... 40 more




Build Log:
[...truncated 10262 lines...]
   [junit4] Suite: 

[jira] [Updated] (SOLR-7995) Add a LIST command to ConfigSets API

2015-09-14 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated SOLR-7995:
-
Issue Type: New Feature  (was: Bug)

> Add a LIST command to ConfigSets API
> 
>
> Key: SOLR-7995
> URL: https://issues.apache.org/jira/browse/SOLR-7995
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>
> It would be useful to have a LIST command in the ConfigSets API so that 
> clients do not have to access zookeeper in order to get the ConfigSets to use 
> for the other operations (create, delete).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: License in Solr

2015-09-14 Thread Upayavira
On Mon, Sep 14, 2015, at 10:41 PM, vetrik kumaran murugesan wrote:
> Hi Team,
>
>
> I see GPL and MIT license used in SOLR. So apache 2.0 holds good for
> all underlying licenses?
>
> example  :  junit4-ant 2.1.13 has  GPL and MIT .
>
>
>
> Please let me know if this is the aproriate group who should I send
> these questions to..

Can you give a specific file in a specific version of Solr? An Apache
project should not be shipping GPL code. I would be exceptionally
surprised were this the case. And anyway, the file you refer to is
Apache Licensed as it comes from the Apache Ant project.

Provide links to the license file you are taking issue with, and perhaps
we can help you understand it better.

Upayavira


Re: License in Solr

2015-09-14 Thread vetrik kumaran murugesan
Thanks Upayavira,

I was using apache solr 5.3.0.

2015-09-14 17:28 GMT-05:00 Upayavira :

> On Mon, Sep 14, 2015, at 10:41 PM, vetrik kumaran murugesan wrote:
>
> Hi Team,
>
>
> I see GPL and MIT license used in SOLR. So apache 2.0 holds good for all
> underlying licenses?
>
> example  :  junit4-ant 2.1.13 has  GPL and MIT .
>
>
>
> Please let me know if this is the appropriate group to send these
> questions to.
>
>
> Can you give a specific file in a specific version of Solr? An Apache
> project should not be shipping GPL code. I would be exceptionally surprised
> were this the case. And anyway, the file you refer to is Apache Licensed as
> it comes from the Apache Ant project.
>
> Provide links to the license file you are taking issue with, and perhaps
> we can help you understand it better.
>
> Upayavira
>


[jira] [Commented] (LUCENE-6780) GeoPointDistanceQuery doesn't work with a large radius?

2015-09-14 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14744297#comment-14744297
 ] 

Michael McCandless commented on LUCENE-6780:


bq. The test was buggy using maxLon where it expected minLon

Duh!  Thanks for fixing :)

I committed the patch w/ some minor code style changes, but added some 
nocommits, e.g. I'm not sure how {{circleFullInside}} testing helps since it 
seems to always assert to true in that case.

I'll beast the new test!

> GeoPointDistanceQuery doesn't work with a large radius?
> ---
>
> Key: LUCENE-6780
> URL: https://issues.apache.org/jira/browse/LUCENE-6780
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
> Attachments: LUCENE-6780.patch, LUCENE-6780.patch, LUCENE-6780.patch, 
> LUCENE-6780.patch, LUCENE-6780.patch
>
>
> I'm working on LUCENE-6698 but struggling with test failures ...
> Then I noticed that TestGeoPointQuery's test never tests on large distances, 
> so I modified the test to sometimes do so (like TestBKDTree) and hit test 
> failures.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5147) Support child documents in DIH

2015-09-14 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14744852#comment-14744852
 ] 

Mikhail Khludnev commented on SOLR-5147:


I see a minor usability issue: nested documents aren't shown when debugging DIH 
in SolrAdmin, because {{SolrInputDocument}} is rendered as {{Map}} w/o 
children. It's a simple issue; raise it if you wish it to be addressed.

> Support child documents in DIH
> --
>
> Key: SOLR-5147
> URL: https://issues.apache.org/jira/browse/SOLR-5147
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Vadim Kirilchuk
>Assignee: Noble Paul
> Fix For: 5.1, Trunk
>
> Attachments: SOLR-5147-5x.patch, SOLR-5147.patch, SOLR-5147.patch, 
> dih-oome-fix.patch
>
>
> DIH should be able to index hierarchical documents, i.e. it should be able to 
> work with SolrInputDocuments#addChildDocument.
> There was patch in SOLR-3076: 
> https://issues.apache.org/jira/secure/attachment/12576960/dih-3076.patch
> But it is not up to date and far from complete.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 3523 - Failure

2015-09-14 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/3523/

1 tests failed.
REGRESSION:  org.apache.solr.update.AutoCommitTest.testCommitWithin

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([2F8A1CCC80D81627:955873B403F6F832]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:765)
at 
org.apache.solr.update.AutoCommitTest.testCommitWithin(AutoCommitTest.java:321)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: 

00


request was:rows=20&start=0&q=id:529&version=2.2&qt=standard
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:758)
... 40 more




Build Log:
[...truncated 10082 lines...]
   [junit4] Suite: org.apache.solr.update.AutoCommitTest
   [junit4]   2> Creating dataDir: 

[jira] [Commented] (LUCENE-6804) TestIndexWriterOutOfFileDescriptors nightly failure

2015-09-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14744636#comment-14744636
 ] 

ASF subversion and git services commented on LUCENE-6804:
-

Commit 1703081 from [~mikemccand] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1703081 ]

LUCENE-6804: fix test bug, to properly handle tragic merge exceptions

> TestIndexWriterOutOfFileDescriptors nightly failure
> ---
>
> Key: LUCENE-6804
> URL: https://issues.apache.org/jira/browse/LUCENE-6804
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 5.4
>Reporter: Steve Rowe
>
> My Jenkins found a seed 
> [http://jenkins.sarowe.net/job/Lucene-Solr-Nightly-5.x-Java8/16] that 
> reproduces for me about 5% of the time via beasting:
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestIndexWriterOutOfFileDescriptors -Dtests.method=test 
> -Dtests.seed=AE5CB745082C8FEA -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=tr -Dtests.timezone=Europe/Copenhagen -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] ERROR   0.74s J6  | TestIndexWriterOutOfFileDescriptors.test <<<
>[junit4]> Throwable #1: java.lang.IllegalStateException: this writer 
> hit an unrecoverable error; cannot commit
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([AE5CB745082C8FEA:2608889FA6D0E212]:0)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:2775)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2961)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.shutdown(IndexWriter.java:1080)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1123)
>[junit4]>  at 
> org.apache.lucene.index.TestIndexWriterOutOfFileDescriptors.test(TestIndexWriterOutOfFileDescriptors.java:87)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
>[junit4]> Caused by: java.io.FileNotFoundException: a random 
> IOException (_c.cfs)
>[junit4]>  at 
> org.apache.lucene.store.MockDirectoryWrapper.maybeThrowIOExceptionOnOpen(MockDirectoryWrapper.java:458)
>[junit4]>  at 
> org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:635)
>[junit4]>  at 
> org.apache.lucene.codecs.lucene50.Lucene50CompoundReader.<init>(Lucene50CompoundReader.java:71)
>[junit4]>  at 
> org.apache.lucene.codecs.lucene50.Lucene50CompoundFormat.getCompoundReader(Lucene50CompoundFormat.java:71)
>[junit4]>  at 
> org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:93)
>[junit4]>  at 
> org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:65)
>[junit4]>  at 
> org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145)
>[junit4]>  at 
> org.apache.lucene.index.ReadersAndUpdates.getReaderForMerge(ReadersAndUpdates.java:617)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4001)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3648)
>[junit4]>  at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588)
>[junit4]>  at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626)
>[junit4]   2> NOTE: leaving temporary files on disk at: 
> /var/lib/jenkins/jobs/Lucene-Solr-Nightly-5.x-Java8/workspace/lucene/build/core/test/J6/temp/lucene.index.TestIndexWriterOutOfFileDescriptors_AE5CB745082C8FEA-001
>[junit4]   2> NOTE: test params are: codec=Lucene53, 
> sim=DefaultSimilarity, locale=tr, timezone=Europe/Copenhagen
>[junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
> 1.8.0_45 (64-bit)/cpus=16,threads=1,free=311559920,total=328728576
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6804) TestIndexWriterOutOfFileDescriptors nightly failure

2015-09-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14744639#comment-14744639
 ] 

ASF subversion and git services commented on LUCENE-6804:
-

Commit 1703082 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1703082 ]

LUCENE-6804: fix test bug, to properly handle tragic merge exceptions

> TestIndexWriterOutOfFileDescriptors nightly failure
> ---
>
> Key: LUCENE-6804
> URL: https://issues.apache.org/jira/browse/LUCENE-6804
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 5.4
>Reporter: Steve Rowe
>
> My Jenkins found a seed 
> [http://jenkins.sarowe.net/job/Lucene-Solr-Nightly-5.x-Java8/16] that 
> reproduces for me about 5% of the time via beasting:
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestIndexWriterOutOfFileDescriptors -Dtests.method=test 
> -Dtests.seed=AE5CB745082C8FEA -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=tr -Dtests.timezone=Europe/Copenhagen -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] ERROR   0.74s J6  | TestIndexWriterOutOfFileDescriptors.test <<<
>[junit4]> Throwable #1: java.lang.IllegalStateException: this writer 
> hit an unrecoverable error; cannot commit
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([AE5CB745082C8FEA:2608889FA6D0E212]:0)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:2775)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2961)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.shutdown(IndexWriter.java:1080)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1123)
>[junit4]>  at 
> org.apache.lucene.index.TestIndexWriterOutOfFileDescriptors.test(TestIndexWriterOutOfFileDescriptors.java:87)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
>[junit4]> Caused by: java.io.FileNotFoundException: a random 
> IOException (_c.cfs)
>[junit4]>  at 
> org.apache.lucene.store.MockDirectoryWrapper.maybeThrowIOExceptionOnOpen(MockDirectoryWrapper.java:458)
>[junit4]>  at 
> org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:635)
>[junit4]>  at 
> org.apache.lucene.codecs.lucene50.Lucene50CompoundReader.<init>(Lucene50CompoundReader.java:71)
>[junit4]>  at 
> org.apache.lucene.codecs.lucene50.Lucene50CompoundFormat.getCompoundReader(Lucene50CompoundFormat.java:71)
>[junit4]>  at 
> org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:93)
>[junit4]>  at 
> org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:65)
>[junit4]>  at 
> org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145)
>[junit4]>  at 
> org.apache.lucene.index.ReadersAndUpdates.getReaderForMerge(ReadersAndUpdates.java:617)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4001)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3648)
>[junit4]>  at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588)
>[junit4]>  at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626)
>[junit4]   2> NOTE: leaving temporary files on disk at: 
> /var/lib/jenkins/jobs/Lucene-Solr-Nightly-5.x-Java8/workspace/lucene/build/core/test/J6/temp/lucene.index.TestIndexWriterOutOfFileDescriptors_AE5CB745082C8FEA-001
>[junit4]   2> NOTE: test params are: codec=Lucene53, 
> sim=DefaultSimilarity, locale=tr, timezone=Europe/Copenhagen
>[junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
> 1.8.0_45 (64-bit)/cpus=16,threads=1,free=311559920,total=328728576
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8047) large longs is being saved incorrect

2015-09-14 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14744772#comment-14744772
 ] 

Erick Erickson commented on SOLR-8047:
--

Please bring this kind of thing up on the user's list first, as it tends to get 
more eyeballs. In this case I vaguely recall something similar that indicated 
this was a browser issue rather than a problem with Solr. Since you're using 
SolrJ, you can test whether this is the case pretty easily by querying for the 
document and examining the results. I'd query two ways
1> query the doc ID and examine the long field
2> query for the value as you indexed it

In fact this latter could be done from a URL, just 
q=source_raw_hash:3954983690244748504, and see if the ID of the doc in question 
comes back. It'd be particularly telling if the browser then displayed 
3954983690244748300.
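For reference, the browser-side rounding Erick suspects is easy to reproduce outside Solr: JSON numbers in a browser are IEEE-754 doubles, which cannot exactly represent every 64-bit long above 2^53. A minimal sketch (the constant is the value from this report; the method is illustrative, not Solr code):

```java
public class LongPrecisionDemo {
    // The value reported in this issue, used purely for illustration.
    static final long ORIGINAL = 3954983690244748504L;

    // Round-trip the long through an IEEE-754 double, as a browser's
    // JSON parser does with every numeric literal.
    static long roundTripThroughDouble(long value) {
        return (long) (double) value;
    }

    public static void main(String[] args) {
        long seen = roundTripThroughDouble(ORIGINAL);
        System.out.println("stored:    " + ORIGINAL);
        System.out.println("displayed: " + seen);
        // Longs above 2^53 are not all representable as doubles,
        // so the round trip changes this particular value.
        if (seen == ORIGINAL) {
            throw new AssertionError("expected precision loss");
        }
    }
}
```

If the round trip changes the value while the query-by-value approach above still finds the document, the index holds the correct long and only the display is lossy.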

> large longs is being saved incorrect
> 
>
> Key: SOLR-8047
> URL: https://issues.apache.org/jira/browse/SOLR-8047
> Project: Solr
>  Issue Type: Bug
>  Components: Schema and Analysis
>Affects Versions: 5.3
>Reporter: Manhal
>
> I have a solr schema with a field in long type
> 
> the long type in the schema is the default:
> <fieldType name="long" class="solr.TrieLongField" precisionStep="0" positionIncrementGap="0"/>
> I am saving to the index in solrJ, the value: 3954983690244748504 to this 
> field, but it's being saved as 3954983690244748300
> I am having the same with different large values.
> I also tested it from admin UI, adding the same long and it's being saved 
> incorrect



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Introducing Alba, a small framework to simplify Solr plugins development

2015-09-14 Thread Erik Hatcher
Toke - this (named filters that can be combined in boolean expressions) sounds 
like https://issues.apache.org/jira/browse/SOLR-7276 - whatcha think?

Erik




> On Sep 14, 2015, at 7:55 AM, Toke Eskildsen  wrote:
> 
> On Mon, 2015-09-14 at 12:34 +0200, Leonardo Foderaro wrote:
> 
>> Should you have any issue or suggestion on how to improve it please
>> let me know. 
> 
> I can explain my planned project, as it seems relevant in a broader
> scope. Maybe you can tell me if such a project fits into your framework?
> 
> 
> We have a SolrCloud setup with billions of documents, with 2-300M
> documents in each shard. We need to define multiple "sub-corpora", with
> a granularity that can be at single-document-level. In Solr-speak that
> could be done with filters. A filter could be (id:1234 OR id:5678),
> which is easy enough. But that does not scale to millions of IDs.
> 
> The idea is to introduce named filters, where the construction of the
> filters themselves is done internally in Solr.
> 
> Creating a filter could be a call with a user-specified name (aka
> filter-ID) and a URL to a filter-setup. The filter-setup would just be
> a list of queries, one on each line:
> id:1234
> id:5678
> domain:example.com
> id:7654
> The lines are processed one at a time and each match is OR'ed to the
> named filter being constructed. As this is a streaming process, there is
> no real limit to the size.
> 
> Using a previously constructed named filter would (guessing here) be a
> matter of writing a small alba-annotated class that takes the filter-ID
> as input and returns the corresponding custom-made Filter, which really
> is just a list of docIDs underneath (probably represented as a bitmap).
> 
> 
> - Toke Eskildsen, State and University Library, Denmark
> 
> 
> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 
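The streaming construction Toke describes above can be sketched in plain Java. This is only a toy model: the two arrays stand in for index fields, and exact-match string comparison stands in for running each line as a real Lucene/Solr query. The point is the OR-into-a-bitmap accumulation, which keeps memory bounded by the bitmap size regardless of how many query lines are streamed:

```java
import java.util.Arrays;
import java.util.BitSet;
import java.util.List;

public class NamedFilterSketch {

    // Toy stand-in for an index segment: two parallel "fields".
    // A real implementation would execute each line as a Lucene query.
    static final String[] ID     = {"1234", "5678", "9999", "7654", "0001"};
    static final String[] DOMAIN = {"a.com", "b.com", "example.com", "c.com", "d.com"};

    // Process the filter-setup one query line at a time; every match is
    // OR'ed into the named filter's bitmap as it streams past.
    static BitSet buildFilter(List<String> queryLines) {
        BitSet filter = new BitSet(ID.length);
        for (String line : queryLines) {
            String[] kv = line.split(":", 2);            // e.g. "id:1234"
            String[] field = kv[0].equals("id") ? ID : DOMAIN;
            for (int doc = 0; doc < field.length; doc++) {
                if (field[doc].equals(kv[1])) {
                    filter.set(doc);                     // OR into the filter
                }
            }
        }
        return filter;
    }

    public static void main(String[] args) {
        BitSet f = buildFilter(Arrays.asList(
                "id:1234", "id:5678", "domain:example.com", "id:7654"));
        System.out.println(f);   // docs 0..3 set, doc 4 not
    }
}
```

Stored per shard, such a bitmap is exactly the docID set a custom Filter (or post-filter) would consult at query time.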



[jira] [Resolved] (LUCENE-6804) TestIndexWriterOutOfFileDescriptors nightly failure

2015-09-14 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-6804.

   Resolution: Fixed
Fix Version/s: 5.4
   Trunk

Thanks [~sar...@syr.edu]!

> TestIndexWriterOutOfFileDescriptors nightly failure
> ---
>
> Key: LUCENE-6804
> URL: https://issues.apache.org/jira/browse/LUCENE-6804
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 5.4
>Reporter: Steve Rowe
> Fix For: Trunk, 5.4
>
>
> My Jenkins found a seed 
> [http://jenkins.sarowe.net/job/Lucene-Solr-Nightly-5.x-Java8/16] that 
> reproduces for me about 5% of the time via beasting:
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestIndexWriterOutOfFileDescriptors -Dtests.method=test 
> -Dtests.seed=AE5CB745082C8FEA -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=tr -Dtests.timezone=Europe/Copenhagen -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] ERROR   0.74s J6  | TestIndexWriterOutOfFileDescriptors.test <<<
>[junit4]> Throwable #1: java.lang.IllegalStateException: this writer 
> hit an unrecoverable error; cannot commit
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([AE5CB745082C8FEA:2608889FA6D0E212]:0)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:2775)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2961)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.shutdown(IndexWriter.java:1080)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1123)
>[junit4]>  at 
> org.apache.lucene.index.TestIndexWriterOutOfFileDescriptors.test(TestIndexWriterOutOfFileDescriptors.java:87)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
>[junit4]> Caused by: java.io.FileNotFoundException: a random 
> IOException (_c.cfs)
>[junit4]>  at 
> org.apache.lucene.store.MockDirectoryWrapper.maybeThrowIOExceptionOnOpen(MockDirectoryWrapper.java:458)
>[junit4]>  at 
> org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:635)
>[junit4]>  at 
> org.apache.lucene.codecs.lucene50.Lucene50CompoundReader.<init>(Lucene50CompoundReader.java:71)
>[junit4]>  at 
> org.apache.lucene.codecs.lucene50.Lucene50CompoundFormat.getCompoundReader(Lucene50CompoundFormat.java:71)
>[junit4]>  at 
> org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:93)
>[junit4]>  at 
> org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:65)
>[junit4]>  at 
> org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145)
>[junit4]>  at 
> org.apache.lucene.index.ReadersAndUpdates.getReaderForMerge(ReadersAndUpdates.java:617)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4001)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3648)
>[junit4]>  at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588)
>[junit4]>  at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626)
>[junit4]   2> NOTE: leaving temporary files on disk at: 
> /var/lib/jenkins/jobs/Lucene-Solr-Nightly-5.x-Java8/workspace/lucene/build/core/test/J6/temp/lucene.index.TestIndexWriterOutOfFileDescriptors_AE5CB745082C8FEA-001
>[junit4]   2> NOTE: test params are: codec=Lucene53, 
> sim=DefaultSimilarity, locale=tr, timezone=Europe/Copenhagen
>[junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
> 1.8.0_45 (64-bit)/cpus=16,threads=1,free=311559920,total=328728576
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 2676 - Still Failing!

2015-09-14 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2676/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=10023, name=collection0, 
state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=10023, name=collection0, state=RUNNABLE, 
group=TGRP-CollectionsAPIDistributedZkTest]
Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:56525/qw: Could not find collection : 
awholynewstresscollection_collection0_0
at __randomizedtesting.SeedInfo.seed([7F8823C02409DBC0]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:560)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:372)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:325)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1099)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:895)




Build Log:
[...truncated 10252 lines...]
   [junit4] Suite: org.apache.solr.cloud.CollectionsAPIDistributedZkTest
   [junit4]   2> Creating dataDir: 
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/build/solr-core/test/J1/temp/solr.cloud.CollectionsAPIDistributedZkTest_7F8823C02409DBC0-001/init-core-data-001
   [junit4]   2> 1284347 INFO  
(SUITE-CollectionsAPIDistributedZkTest-seed#[7F8823C02409DBC0]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false)
   [junit4]   2> 1284347 INFO  
(SUITE-CollectionsAPIDistributedZkTest-seed#[7F8823C02409DBC0]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /qw/
   [junit4]   2> 1284350 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[7F8823C02409DBC0]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 1284350 INFO  (Thread-3557) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1284350 INFO  (Thread-3557) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 1284451 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[7F8823C02409DBC0]) [] 
o.a.s.c.ZkTestServer start zk server on port:56418
   [junit4]   2> 1284451 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[7F8823C02409DBC0]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 1284452 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[7F8823C02409DBC0]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 1284458 INFO  (zkCallback-1514-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@762c76b6 
name:ZooKeeperConnection Watcher:127.0.0.1:56418 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 1284458 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[7F8823C02409DBC0]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 1284459 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[7F8823C02409DBC0]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2> 1284459 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[7F8823C02409DBC0]) [] 
o.a.s.c.c.SolrZkClient makePath: /solr
   [junit4]   2> 1284463 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[7F8823C02409DBC0]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 1284464 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[7F8823C02409DBC0]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 1284465 INFO  (zkCallback-1515-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@56ad1c7b 
name:ZooKeeperConnection Watcher:127.0.0.1:56418/solr got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 1284465 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[7F8823C02409DBC0]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 

Re: License in Solr

2015-09-14 Thread Upayavira



On Mon, Sep 14, 2015, at 11:28 PM, Upayavira wrote:
> On Mon, Sep 14, 2015, at 10:41 PM, vetrik kumaran murugesan wrote:
>> Hi Team,
>>
>>
>> I see GPL and MIT license used in SOLR. So apache 2.0 holds good for
>> all underlying licenses?
>>
>> example  :  junit4-ant 2.1.13 has  GPL and MIT .
>>
>>
>>
>> Please let me know if this is the appropriate group to send
>> these questions to.
>
> Can you give a specific file in a specific version of Solr? An Apache
> project should not be shipping GPL code. I would be exceptionally
> surprised were this the case. And anyway, the file you refer to is
> Apache Licensed as it comes from the Apache Ant project.
>
> Provide links to the license file you are taking issue with, and
> perhaps we can help you understand it better.

Also, all references I can see to GPL in this directory:

http://svn.apache.org/viewvc/lucene/dev/branches/branch_5x/solr/licenses/

are for dependencies that are dual-licensed. Apache is consuming them on
the basis of the other (non-GPL) license, but reporting both licenses
for the benefit of downstream consumers who may choose to use it on the
basis of the GPL license instead.

Upayavira


Re: Introducing Alba, a small framework to simplify Solr plugins development

2015-09-14 Thread Toke Eskildsen
On Thu, 2015-09-10 at 14:38 +0200, Leonardo Foderaro wrote:
> @AlbaPlugin(name="myPluginsLibrary")
> public class MyPlugins {
> 
> @DocTransformer(name="helloworld")
>  public void hello(SolrDocument doc) {
>  doc.setField("message", "Hello, World!");
>  }
> 
[... http://github.com/leonardofoderaro/]

This is very timely for me, as I'll have to dig into Solr plugin writing
before the year is over. 

> I still have many questions about Solr, but first I'd like to ask you
> if you think it's a good idea. Any feedback is very welcome.

I know very little about writing plugins, so I am in no position to judge
how much alba helps with that. From what I can see in your GitHub
repository, it seems very accessible though.

Thank you for sharing,
Toke Eskildsen, State and University Library, Denmark
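For readers curious how the annotation-driven registration quoted above might work under the hood, here is a minimal reflection sketch. The @DocTransformer annotation and MyPlugins class below are local stand-ins written for this example, not Alba's actual classes, and a plain Map stands in for SolrDocument:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

public class AnnotationDispatchSketch {

    // Local stand-in for an Alba-style annotation.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface DocTransformer { String name(); }

    // Local stand-in for a user's plugin library class.
    public static class MyPlugins {
        @DocTransformer(name = "helloworld")
        public void hello(Map<String, Object> doc) {
            doc.put("message", "Hello, World!");
        }
    }

    // Scan a plugin class for annotated methods and index them by name,
    // roughly what a framework would do once at load time.
    static Map<String, Method> scan(Class<?> pluginClass) {
        Map<String, Method> byName = new HashMap<>();
        for (Method m : pluginClass.getMethods()) {
            DocTransformer t = m.getAnnotation(DocTransformer.class);
            if (t != null) {
                byName.put(t.name(), m);
            }
        }
        return byName;
    }

    public static void main(String[] args) throws Exception {
        Map<String, Method> transformers = scan(MyPlugins.class);
        Map<String, Object> doc = new HashMap<>();
        transformers.get("helloworld").invoke(new MyPlugins(), doc);
        System.out.println(doc.get("message"));
    }
}
```

The scan runs once; dispatching a transformer is then a map lookup plus Method.invoke, which keeps the per-document overhead small.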





-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8050) Partial update on document with multivalued date field fails

2015-09-14 Thread Burkhard Buelte (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14743947#comment-14743947
 ] 

Burkhard Buelte commented on SOLR-8050:
---

According to my research, the value.toString() call at line 715 in
TrieField.createFields (see screenshot-1.png) is the cause: value is of type
Date, and Date.toString() does not produce the expected date string format.
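For context, Solr's date fields expect ISO-8601 instants in UTC, while Date.toString() yields the locale-style form seen in the exception message. A minimal client-side sketch of the mismatch and a formatting workaround (helper name hypothetical):

```java
import java.time.format.DateTimeFormatter;
import java.util.Date;

public class SolrDateSketch {
    // Solr parses date strings like 1970-01-01T00:00:00Z (ISO-8601, UTC).
    // Date.toString() instead produces e.g. "Mon Sep 14 01:48:38 CEST 2015",
    // which DateFormatUtil.parseMath rejects -- hence the reported exception.
    static String toSolrDate(Date d) {
        return DateTimeFormatter.ISO_INSTANT.format(d.toInstant());
    }

    public static void main(String[] args) {
        System.out.println(toSolrDate(new Date(0L))); // prints 1970-01-01T00:00:00Z
    }
}
```

Formatting Date values this way before they are re-sent in the partial-update document sidesteps the toString() path, though the proper fix is presumably in TrieField itself.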

> Partial update on document with multivalued date field fails
> 
>
> Key: SOLR-8050
> URL: https://issues.apache.org/jira/browse/SOLR-8050
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java, SolrJ
>Affects Versions: 5.2.1
> Environment: embedded solr
> java 1.7
> win
>Reporter: Burkhard Buelte
> Attachments: screenshot-1.png
>
>
> When updating a document with a multivalued date field, Solr throws an 
> exception like: org.apache.solr.common.SolrException: Invalid Date String:'Mon Sep 
> 14 01:48:38 CEST 2015'
> even if the update document doesn't contain any datefield.
> See following code snippet to reproduce 
> 1. create a doc with multivalued date field (here dynamic field _dts)
> SolrInputDocument doc = new SolrInputDocument();
> String id = Long.toString(System.currentTimeMillis());
> System.out.println("testUpdate: adding test document to solr ID=" + 
> id);
> doc.addField(CollectionSchema.id.name(), id);
> doc.addField(CollectionSchema.title.name(), "Lorem ipsum");
> doc.addField(CollectionSchema.host_s.name(), "yacy.net");
> doc.addField(CollectionSchema.text_t.name(), "Lorem ipsum dolor sit 
> amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut 
> labore et dolore magna aliqua.");
> doc.addField(CollectionSchema.dates_in_content_dts.name(), new 
> Date());
> solr.add(doc);
> solr.commit(true);
> 2. update any field on this doc via partial update
> SolrInputDocument sid = new SolrInputDocument();
> sid.addField(CollectionSchema.id.name(), 
> doc.getFieldValue(CollectionSchema.id.name()));
> sid.addField(CollectionSchema.host_s.name(), "yacy.yacy");
> solr.update(sid);
> solr.commit(true);
> Result
> Caused by: org.apache.solr.common.SolrException: Invalid Date String:'Mon Sep 
> 14 01:48:38 CEST 2015'
>   at org.apache.solr.util.DateFormatUtil.parseMath(DateFormatUtil.java:87)
>   at 
> org.apache.solr.schema.TrieField.readableToIndexed(TrieField.java:473)
>   at org.apache.solr.schema.TrieField.createFields(TrieField.java:715)
>   at 
> org.apache.solr.update.DocumentBuilder.addField(DocumentBuilder.java:48)
>   at 
> org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:123)
>   at 
> org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:83)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:237)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:163)
>   at 
> org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:955)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1110)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:706)
>   at 
> org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:104)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
>   at 
> org.apache.solr.update.processor.LanguageIdentifierUpdateProcessor.processAdd(LanguageIdentifierUpdateProcessor.java:207)
>   at 
> org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:250)
>   at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:177)
>   at 
> org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:98)
>   at 
> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2068)
>   at 
> org.apache.solr.client.solrj.embedded.EmbeddedSolrServer.request(EmbeddedSolrServer.java:179)
>   at 
> org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
>   at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:174)
>   at 

[jira] [Updated] (SOLR-8050) Partial update on document with multivalued date field fails

2015-09-14 Thread Burkhard Buelte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Burkhard Buelte updated SOLR-8050:
--
Attachment: screenshot-1.png

> Partial update on document with multivalued date field fails
> 
>
> Key: SOLR-8050
> URL: https://issues.apache.org/jira/browse/SOLR-8050
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java, SolrJ
>Affects Versions: 5.2.1
> Environment: embedded solr
> java 1.7
> win
>Reporter: Burkhard Buelte
> Attachments: screenshot-1.png
>
>
> When updating a document with a multivalued date field, Solr throws an 
> exception like: org.apache.solr.common.SolrException: Invalid Date String:'Mon Sep 
> 14 01:48:38 CEST 2015'
> even if the update document doesn't contain any datefield.
> See following code snippet to reproduce 
> 1. create a doc with multivalued date field (here dynamic field _dts)
> SolrInputDocument doc = new SolrInputDocument();
> String id = Long.toString(System.currentTimeMillis());
> System.out.println("testUpdate: adding test document to solr ID=" + 
> id);
> doc.addField(CollectionSchema.id.name(), id);
> doc.addField(CollectionSchema.title.name(), "Lorem ipsum");
> doc.addField(CollectionSchema.host_s.name(), "yacy.net");
> doc.addField(CollectionSchema.text_t.name(), "Lorem ipsum dolor sit 
> amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut 
> labore et dolore magna aliqua.");
> doc.addField(CollectionSchema.dates_in_content_dts.name(), new 
> Date());
> solr.add(doc);
> solr.commit(true);
> 2. update any field on this doc via partial update
> SolrInputDocument sid = new SolrInputDocument();
> sid.addField(CollectionSchema.id.name(), 
> doc.getFieldValue(CollectionSchema.id.name()));
> sid.addField(CollectionSchema.host_s.name(), "yacy.yacy");
> solr.update(sid);
> solr.commit(true);
> Result
> Caused by: org.apache.solr.common.SolrException: Invalid Date String:'Mon Sep 
> 14 01:48:38 CEST 2015'
>   at org.apache.solr.util.DateFormatUtil.parseMath(DateFormatUtil.java:87)
>   at 
> org.apache.solr.schema.TrieField.readableToIndexed(TrieField.java:473)
>   at org.apache.solr.schema.TrieField.createFields(TrieField.java:715)
>   at 
> org.apache.solr.update.DocumentBuilder.addField(DocumentBuilder.java:48)
>   at 
> org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:123)
>   at 
> org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:83)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:237)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:163)
>   at 
> org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:955)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1110)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:706)
>   at 
> org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:104)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
>   at 
> org.apache.solr.update.processor.LanguageIdentifierUpdateProcessor.processAdd(LanguageIdentifierUpdateProcessor.java:207)
>   at 
> org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:250)
>   at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:177)
>   at 
> org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:98)
>   at 
> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2068)
>   at 
> org.apache.solr.client.solrj.embedded.EmbeddedSolrServer.request(EmbeddedSolrServer.java:179)
>   at 
> org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
>   at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:174)
>   at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:191)
> P.S. the line "solr.update" takes care to create a partial update document, 
> with proper {"set":[fieldname:value]}



--
This message was sent by Atlassian JIRA

Having trouble importing PyLucene packages

2015-09-14 Thread Blaž Štempelj
Dear PyLucene development team,

I am writing to you because I have a problem importing packages into my
project. All the details are already described in this StackOverflow
question:

http://stackoverflow.com/questions/32537381/importing-packages-from-pylucene-does-not-work

I look forward to hearing from you.

--

Sincerely

Blaž Štempelj


[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.8.0) - Build # 2675 - Failure!

2015-09-14 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2675/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.DeleteShardTest.test

Error Message:
Timeout occured while waiting response from server at: 
https://127.0.0.1:55836/qe_/sq

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: https://127.0.0.1:55836/qe_/sq
at 
__randomizedtesting.SeedInfo.seed([ACD0D80D618DC84B:2484E7D7CF71A5B3]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:572)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.DeleteShardTest.deleteShard(DeleteShardTest.java:122)
at org.apache.solr.cloud.DeleteShardTest.test(DeleteShardTest.java:64)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 

[jira] [Commented] (SOLR-7932) Solr replication relies on timestamps to sync across machines

2015-09-14 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14744143#comment-14744143
 ] 

Mark Miller commented on SOLR-7932:
---

I think when in SolrCloud mode perhaps it's just best to always replicate and 
count on peer sync as the short circuit. If replication is really not needed, 
most of the work will be skipped properly via filenames and checksums anyway, 
rather than this sloppy way that may miss a replication in rare cases.

> Solr replication relies on timestamps to sync across machines
> -
>
> Key: SOLR-7932
> URL: https://issues.apache.org/jira/browse/SOLR-7932
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java)
>Reporter: Ramkumar Aiyengar
> Attachments: SOLR-7932.patch, SOLR-7932.patch
>
>
> Spinning off SOLR-7859, noticed there that wall time recorded as commit data 
> on a commit to check if replication needs to be done. In IndexFetcher, there 
> is this code:
> {code}
>   if (!forceReplication && 
> IndexDeletionPolicyWrapper.getCommitTimestamp(commit) == latestVersion) {
> //master and slave are already in sync just return
> LOG.info("Slave in sync with master.");
> successfulInstall = true;
> return true;
>   }
> {code}
> It appears as if we are checking wall times across machines to determine if we 
> are in sync; this could go wrong.
> Once a decision is made to replicate, we do seem to use generations instead, 
> except for this place below checks both generations and timestamps to see if 
> a full copy is needed..
> {code}
>   // if the generation of master is older than that of the slave , it 
> means they are not compatible to be copied
>   // then a new index directory to be created and all the files need to 
> be copied
>   boolean isFullCopyNeeded = IndexDeletionPolicyWrapper
>   .getCommitTimestamp(commit) >= latestVersion
>   || commit.getGeneration() >= latestGeneration || forceReplication;
> {code}
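To illustrate the concern in the issue above, here is a small self-contained sketch (values hypothetical) contrasting a wall-clock equality check, which breaks when clocks are skewed between machines, with a generation comparison, which is monotonic per index and does not depend on clocks:

```java
public class SyncCheckSketch {
    // Two machines with skewed wall clocks can record different timestamps
    // for the very same commit, so an equality check on timestamps can
    // falsely report "out of sync" (or, with coincidental skew, "in sync").
    static boolean inSyncByTimestamp(long masterCommitTime, long slaveLatestVersion) {
        return masterCommitTime == slaveLatestVersion; // fragile across machines
    }

    // Generations increment with each commit on the index itself, so they
    // compare safely regardless of either machine's clock.
    static boolean inSyncByGeneration(long masterGen, long slaveGen) {
        return masterGen == slaveGen;
    }

    public static void main(String[] args) {
        long commitOnMaster = 1_442_188_800_000L;        // master's wall clock
        long sameCommitOnSlave = commitOnMaster + 350L;  // slave clock skewed 350 ms
        System.out.println(inSyncByTimestamp(commitOnMaster, sameCommitOnSlave)); // false
        System.out.println(inSyncByGeneration(42L, 42L)); // true
    }
}
```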



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 2727 - Failure!

2015-09-14 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2727/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.solr.security.TestAuthorizationFramework.authorizationFrameworkTest

Error Message:
There are still nodes recoverying - waited for 10 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 10 
seconds
at 
__randomizedtesting.SeedInfo.seed([981C1633367404D3:D6BF63E027AF15C3]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:172)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:836)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForThingsToLevelOut(AbstractFullDistribZkTestBase.java:1393)
at 
org.apache.solr.security.TestAuthorizationFramework.authorizationFrameworkTest(TestAuthorizationFramework.java:50)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 

[JENKINS-EA] Lucene-Solr-5.3-Linux (32bit/jdk1.9.0-ea-b78) - Build # 163 - Failure!

2015-09-14 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.3-Linux/163/
Java: 32bit/jdk1.9.0-ea-b78 -client -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SaslZkACLProviderTest

Error Message:
5 threads leaked from SUITE scope at 
org.apache.solr.cloud.SaslZkACLProviderTest: 1) Thread[id=210, 
name=apacheds, state=WAITING, group=TGRP-SaslZkACLProviderTest] at 
java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:516) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)2) Thread[id=214, 
name=changePwdReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:746)3) Thread[id=213, 
name=kdcReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:746)4) Thread[id=211, 
name=ou=system.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:746)5) Thread[id=212, 
name=groupCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
 at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:746)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.SaslZkACLProviderTest: 
   1) Thread[id=210, name=apacheds, state=WAITING, 
group=TGRP-SaslZkACLProviderTest]
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:516)
at java.util.TimerThread.mainLoop(Timer.java:526)
at java.util.TimerThread.run(Timer.java:505)
   2) Thread[id=214, name=changePwdReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest]
at sun.misc.Unsafe.park(Native Method)
at 

[jira] [Commented] (LUCENE-5205) SpanQueryParser with recursion, analysis and syntax very similar to classic QueryParser

2015-09-14 Thread Modassar Ather (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14742980#comment-14742980
 ] 

Modassar Ather commented on LUCENE-5205:


Got the point. I think LUCENE-6796 will take care of this issue.

> SpanQueryParser with recursion, analysis and syntax very similar to classic 
> QueryParser
> ---
>
> Key: LUCENE-5205
> URL: https://issues.apache.org/jira/browse/LUCENE-5205
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/queryparser
>Reporter: Tim Allison
>  Labels: patch
> Attachments: LUCENE-5205-cleanup-tests.patch, 
> LUCENE-5205-date-pkg-prvt.patch, LUCENE-5205.patch.gz, LUCENE-5205.patch.gz, 
> LUCENE-5205_dateTestReInitPkgPrvt.patch, 
> LUCENE-5205_improve_stop_word_handling.patch, 
> LUCENE-5205_smallTestMods.patch, LUCENE_5205.patch, 
> SpanQueryParser_v1.patch.gz, patch.txt
>
>
> This parser extends QueryParserBase and includes functionality from:
> * Classic QueryParser: most of its syntax
> * SurroundQueryParser: recursive parsing for "near" and "not" clauses.
> * ComplexPhraseQueryParser: can handle "near" queries that include multiterms 
> (wildcard, fuzzy, regex, prefix),
> * AnalyzingQueryParser: has an option to analyze multiterms.
> At a high level, there's a first pass BooleanQuery/field parser and then a 
> span query parser handles all terminal nodes and phrases.
> Same as classic syntax:
> * term: test 
> * fuzzy: roam~0.8, roam~2
> * wildcard: te?t, test*, t*st
> * regex: /\[mb\]oat/
> * phrase: "jakarta apache"
> * phrase with slop: "jakarta apache"~3
> * default "or" clause: jakarta apache
> * grouping "or" clause: (jakarta apache)
> * boolean and +/-: (lucene OR apache) NOT jakarta; +lucene +apache -jakarta
> * multiple fields: title:lucene author:hatcher
>  
> Main additions in SpanQueryParser syntax vs. classic syntax:
> * Can require "in order" for phrases with slop with the \~> operator: 
> "jakarta apache"\~>3
> * Can specify "not near": "fever bieber"!\~3,10 ::
> find "fever" but not if "bieber" appears within 3 words before or 10 
> words after it.
> * Fully recursive phrasal queries with \[ and \]; as in: \[\[jakarta 
> apache\]~3 lucene\]\~>4 :: 
> find "jakarta" within 3 words of "apache", and that hit has to be within 
> four words before "lucene"
> * Can also use \[\] for single level phrasal queries instead of " as in: 
> \[jakarta apache\]
> * Can use "or grouping" clauses in phrasal queries: "apache (lucene solr)"\~3 
> :: find "apache" and then either "lucene" or "solr" within three words.
> * Can use multiterms in phrasal queries: "jakarta\~1 ap*che"\~2
> * Did I mention full recursion: \[\[jakarta\~1 ap*che\]\~2 (solr~ 
> /l\[ou\]\+\[cs\]\[en\]\+/)]\~10 :: Find something like "jakarta" within two 
> words of "ap*che" and that hit has to be within ten words of something like 
> "solr" or that "lucene" regex.
> * Can require at least x number of hits at boolean level: "apache AND (lucene 
> solr tika)~2
> * Can use negative only query: -jakarta :: Find all docs that don't contain 
> "jakarta"
> * Can use an edit distance > 2 for fuzzy query via SlowFuzzyQuery (beware of 
> potential performance issues!).
> Trivial additions:
> * Can specify prefix length in fuzzy queries: jakarta~1,2 (edit distance =1, 
> prefix =2)
> * Can specify Optimal String Alignment (OSA) vs Levenshtein for distance 
> <=2: (jakarta~1 (OSA) vs jakarta~>1(Levenshtein)
> This parser can be very useful for concordance tasks (see also LUCENE-5317 
> and LUCENE-5318) and for analytical search.  
> Until LUCENE-2878 is closed, this might have a use for fans of SpanQuery.
> Most of the documentation is in the javadoc for SpanQueryParser.
> Any and all feedback is welcome.  Thank you.
> Until this is added to the Lucene project, I've added a standalone 
> lucene-addons repo (with jars compiled for the latest stable build of Lucene) 
>  on [github|https://github.com/tballison/lucene-addons].
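As a toy illustration of the "not near" operator described above ("fever bieber"!~3,10), the following self-contained sketch checks token positions directly; it models only the semantics, not the parser's actual SpanQuery implementation:

```java
import java.util.Arrays;
import java.util.List;

public class NotNearSketch {
    // Match `hit` unless `avoid` occurs within `before` tokens before it or
    // `after` tokens after it -- the semantics of "fever bieber"!~3,10.
    static boolean notNearMatch(List<String> tokens, String hit, String avoid,
                                int before, int after) {
        for (int i = 0; i < tokens.size(); i++) {
            if (!tokens.get(i).equals(hit)) continue;
            boolean blocked = false;
            int lo = Math.max(0, i - before);
            int hi = Math.min(tokens.size() - 1, i + after);
            for (int j = lo; j <= hi; j++) {
                if (tokens.get(j).equals(avoid)) { blocked = true; break; }
            }
            if (!blocked) return true; // at least one unblocked occurrence
        }
        return false;
    }

    public static void main(String[] args) {
        List<String> a = Arrays.asList("fever", "symptoms", "persist");
        List<String> b = Arrays.asList("fever", "by", "bieber");
        System.out.println(notNearMatch(a, "fever", "bieber", 3, 10)); // true
        System.out.println(notNearMatch(b, "fever", "bieber", 3, 10)); // false
    }
}
```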



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8051) Global stats NPE

2015-09-14 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14743183#comment-14743183
 ] 

Markus Jelsma commented on SOLR-8051:
-

102:  // TODO: nl == null if not all shards respond (no server hosting 
shard)
103:  String termStatsString = (String) nl.get(TERM_STATS_KEY);
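A minimal sketch of the guard the TODO suggests, using a plain Map in place of Solr's NamedList so the example stays self-contained; the real fix would live in ExactStatsCache.mergeToGlobalStats:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class NullGuardSketch {
    static final String TERM_STATS_KEY = "terms";

    // Hypothetical merge loop: tolerate shards that returned no response
    // (a null entry) instead of dereferencing it, per the TODO at line 102.
    static int mergeTermStats(List<Map<String, String>> shardResponses) {
        int merged = 0;
        for (Map<String, String> nl : shardResponses) {
            if (nl == null) {
                continue; // no server hosting this shard responded; skip it
            }
            String termStats = nl.get(TERM_STATS_KEY);
            if (termStats != null) {
                merged++;
            }
        }
        return merged;
    }

    public static void main(String[] args) {
        Map<String, String> ok = new HashMap<>();
        ok.put(TERM_STATS_KEY, "{df:1}");
        // One healthy shard response, one missing shard.
        System.out.println(mergeTermStats(Arrays.asList(ok, null))); // prints 1
    }
}
```

Whether skipping the missing shard (as opposed to failing the request) is the right behavior is a design question for the issue; the sketch only shows where the null check would go.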

> Global stats NPE
> 
>
> Key: SOLR-8051
> URL: https://issues.apache.org/jira/browse/SOLR-8051
> Project: Solr
>  Issue Type: Bug
>Reporter: Markus Jelsma
>
> NPE is thrown when not all cores are up.
> {code}
> null:java.lang.NullPointerException
>   at 
> org.apache.solr.search.stats.ExactStatsCache.mergeToGlobalStats(ExactStatsCache.java:103)
>   at 
> org.apache.solr.handler.component.QueryComponent.updateStats(QueryComponent.java:846)
>   at 
> org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:758)
>   at 
> org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:733)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:410)
>   at 
> io.openindex.solr.handler.SitesearchSearchHandler.handleRequestBody(SitesearchSearchHandler.java:43)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2068)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:669)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:462)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:210)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at org.eclipse.jetty.server.Server.handle(Server.java:499)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)
> {code}






[jira] [Updated] (SOLR-8052) Kerberos auth plugin does not work with Java 9 Jigsaw

2015-09-14 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated SOLR-8052:

Component/s: (was: hentication)
 Authentication

> Kerberos auth plugin does not work with Java 9 Jigsaw
> -
>
> Key: SOLR-8052
> URL: https://issues.apache.org/jira/browse/SOLR-8052
> Project: Solr
>  Issue Type: Bug
>  Components: Authentication
>Affects Versions: 5.3
>Reporter: Uwe Schindler
>
> As described in my status update yesterday, there are some problems in 
> dependencies shipped with Solr that don't work with Java 9 Jigsaw builds.
> org.apache.solr.cloud.SaslZkACLProviderTest.testSaslZkACLProvider
> {noformat}
>[junit4]> Throwable #1: java.lang.RuntimeException: 
> java.lang.IllegalAccessException: Class org.apache.hadoop.minikdc.MiniKdc can 
> not access a member of class sun.security.krb5.Config (module 
> java.security.jgss) with modifiers "public static", module java.security.jgss 
> does not export sun.security.krb5 to 
>[junit4]>at 
> org.apache.solr.cloud.SaslZkACLProviderTest$SaslZkTestServer.run(SaslZkACLProviderTest.java:211)
>[junit4]>at 
> org.apache.solr.cloud.SaslZkACLProviderTest.setUp(SaslZkACLProviderTest.java:81)
>[junit4]>at java.lang.Thread.run(java.base@9.0/Thread.java:746)
>[junit4]> Caused by: java.lang.IllegalAccessException: Class 
> org.apache.hadoop.minikdc.MiniKdc can not access a member of class 
> sun.security.krb5.Config (module java.security.jgss) with modifiers "public 
> static", module java.security.jgss does not export sun.security.krb5 to 
> 
>[junit4]>at 
> java.lang.reflect.AccessibleObject.slowCheckMemberAccess(java.base@9.0/AccessibleObject.java:384)
>[junit4]>at 
> java.lang.reflect.AccessibleObject.checkAccess(java.base@9.0/AccessibleObject.java:376)
>[junit4]>at 
> org.apache.hadoop.minikdc.MiniKdc.initKDCServer(MiniKdc.java:478)
>[junit4]>at 
> org.apache.hadoop.minikdc.MiniKdc.start(MiniKdc.java:320)
>[junit4]>at 
> org.apache.solr.cloud.SaslZkACLProviderTest$SaslZkTestServer.run(SaslZkACLProviderTest.java:204)
>[junit4]>... 38 more
>[junit4]> Throwable #2: java.lang.NullPointerException
>[junit4]>at 
> org.apache.solr.cloud.ZkTestServer$ZKServerMain.shutdown(ZkTestServer.java:334)
>[junit4]>at 
> org.apache.solr.cloud.ZkTestServer.shutdown(ZkTestServer.java:526)
>[junit4]>at 
> org.apache.solr.cloud.SaslZkACLProviderTest$SaslZkTestServer.shutdown(SaslZkACLProviderTest.java:218)
>[junit4]>at 
> org.apache.solr.cloud.SaslZkACLProviderTest.tearDown(SaslZkACLProviderTest.java:116)
>[junit4]>at java.lang.Thread.run(java.base@9.0/Thread.java:746)
> {noformat}
> This is really bad, bad, bad! All security-related stuff should never ever 
> be reflected on!
> So we have to open an issue in the MiniKdc project so they remove the 
> "hacks". Elasticsearch had similar problems with Amazon's AWS API. They 
> worked around it with a funny hack in their SecurityPolicy 
> (https://github.com/elastic/elasticsearch/pull/13538). But as Solr does not 
> run with a SecurityManager in production, there is no way to do that.
> We should report an issue to the MiniKdc project, so they fix their code and 
> remove the really bad reflection on Java's internal classes.
> FYI, my 
> [conclusion|http://mail-archives.apache.org/mod_mbox/lucene-dev/201509.mbox/%3C014801d0ee23%245c8f5df0%2415ae19d0%24%40thetaphi.de%3E]
>  from yesterday.






[jira] [Updated] (SOLR-7858) Make Angular UI default

2015-09-14 Thread Upayavira (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Upayavira updated SOLR-7858:

Attachment: original UI link.png

> Make Angular UI default
> ---
>
> Key: SOLR-7858
> URL: https://issues.apache.org/jira/browse/SOLR-7858
> Project: Solr
>  Issue Type: Bug
>  Components: web gui
>Reporter: Upayavira
>Assignee: Upayavira
>Priority: Minor
> Attachments: original UI link.png
>
>
> Angular UI is very close to feature complete. Once SOLR-7856 is dealt with, 
> it should function well in most cases. I propose that, as soon as 5.3 has 
> been released, we make the Angular UI default, ready for the 5.4 release. We 
> can then fix any more bugs as they are found, but more importantly start 
> working on the features that were the reason for doing this work in the first 
> place.






[jira] [Updated] (SOLR-7858) Make Angular UI default

2015-09-14 Thread Upayavira (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Upayavira updated SOLR-7858:

Attachment: new ui link.png

> Make Angular UI default
> ---
>
> Key: SOLR-7858
> URL: https://issues.apache.org/jira/browse/SOLR-7858
> Project: Solr
>  Issue Type: Bug
>  Components: web gui
>Reporter: Upayavira
>Assignee: Upayavira
>Priority: Minor
> Attachments: new ui link.png, original UI link.png
>
>
> Angular UI is very close to feature complete. Once SOLR-7856 is dealt with, 
> it should function well in most cases. I propose that, as soon as 5.3 has 
> been released, we make the Angular UI default, ready for the 5.4 release. We 
> can then fix any more bugs as they are found, but more importantly start 
> working on the features that were the reason for doing this work in the first 
> place.






[jira] [Created] (LUCENE-6805) Add a general purpose readonly interface to BitSet

2015-09-14 Thread Selva Kumar (JIRA)
Selva Kumar created LUCENE-6805:
---

 Summary: Add a general purpose readonly interface to BitSet
 Key: LUCENE-6805
 URL: https://issues.apache.org/jira/browse/LUCENE-6805
 Project: Lucene - Core
  Issue Type: Wish
Affects Versions: 5.2, 5.0
Reporter: Selva Kumar
Priority: Minor


BitSet has many more readonly methods compared to Bits. Similarly, BitSet has 
many more write methods compared to MutableBits.

This Jira issue is to add a new ImmutableBits interface to BitSet that includes 
all read-only methods of BitSet.
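One possible shape for such a view — purely illustrative, since neither the interface name nor the method set has been settled in Lucene — would be:

```java
// Illustrative sketch of the proposed read-only view: query methods only,
// no mutators. The interface name and method set are hypothetical.
interface ImmutableBits {
    boolean get(int index); // read one bit
    int length();           // number of bits
    int cardinality();      // number of set bits
}

// Trivial array-backed implementation for demonstration purposes.
final class ArrayBits implements ImmutableBits {
    private final boolean[] bits;

    ArrayBits(boolean[] bits) {
        this.bits = bits.clone(); // defensive copy keeps the view immutable
    }

    @Override public boolean get(int index) { return bits[index]; }

    @Override public int length() { return bits.length; }

    @Override public int cardinality() {
        int count = 0;
        for (boolean b : bits) if (b) count++;
        return count;
    }
}

class ImmutableBitsDemo {
    public static void main(String[] args) {
        ImmutableBits view = new ArrayBits(new boolean[] {true, false, true});
        System.out.println(view.length());      // 3
        System.out.println(view.cardinality()); // 2
    }
}
```

Callers that only need to query bits could then accept ImmutableBits, making it impossible to mutate the set through that reference.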






[jira] [Updated] (SOLR-7858) Make Angular UI default

2015-09-14 Thread Upayavira (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Upayavira updated SOLR-7858:

Attachment: SOLR-7858.patch

Patch that adds simple links between the old and new UIs.

> Make Angular UI default
> ---
>
> Key: SOLR-7858
> URL: https://issues.apache.org/jira/browse/SOLR-7858
> Project: Solr
>  Issue Type: Bug
>  Components: web gui
>Reporter: Upayavira
>Assignee: Upayavira
>Priority: Minor
> Attachments: SOLR-7858.patch, new ui link.png, original UI link.png
>
>
> Angular UI is very close to feature complete. Once SOLR-7856 is dealt with, 
> it should function well in most cases. I propose that, as soon as 5.3 has 
> been released, we make the Angular UI default, ready for the 5.4 release. We 
> can then fix any more bugs as they are found, but more importantly start 
> working on the features that were the reason for doing this work in the first 
> place.






[jira] [Commented] (SOLR-7858) Make Angular UI default

2015-09-14 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14743281#comment-14743281
 ] 

Upayavira commented on SOLR-7858:
-

I'm pretty much ready to switch to the Angular UI for 5.4. With the attached 
patch, I'll add links between the old and new UIs in case people find issues in 
the new one. I've also added an (i) information link that I plan to point at a 
wiki page (not Confluence) describing the changes between them and asking 
people to report issues in the new one.

I am also planning that SOLR-4388 be completed for 5.4, at least a first pass.

I'd appreciate a few votes for this ticket - to tell me that some people have 
actually played with the new UI rather than just liking the idea!!

> Make Angular UI default
> ---
>
> Key: SOLR-7858
> URL: https://issues.apache.org/jira/browse/SOLR-7858
> Project: Solr
>  Issue Type: Bug
>  Components: web gui
>Reporter: Upayavira
>Assignee: Upayavira
>Priority: Minor
> Attachments: SOLR-7858.patch, new ui link.png, original UI link.png
>
>
> Angular UI is very close to feature complete. Once SOLR-7856 is dealt with, 
> it should function well in most cases. I propose that, as soon as 5.3 has 
> been released, we make the Angular UI default, ready for the 5.4 release. We 
> can then fix any more bugs as they are found, but more importantly start 
> working on the features that were the reason for doing this work in the first 
> place.






[jira] [Commented] (LUCENE-6716) Improve SpanPayloadCheckQuery API

2015-09-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14743174#comment-14743174
 ] 

ASF subversion and git services commented on LUCENE-6716:
-

Commit 1702872 from [~romseygeek] in branch 'dev/trunk'
[ https://svn.apache.org/r1702872 ]

LUCENE-6716: Change SpanPayloadCheckQuery to take List&lt;BytesRef&gt;

> Improve SpanPayloadCheckQuery API
> -
>
> Key: LUCENE-6716
> URL: https://issues.apache.org/jira/browse/LUCENE-6716
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Attachments: LUCENE-6716.patch
>
>
> SpanPayloadCheckQuery currently takes a Collection&lt;byte[]&gt; to check its 
> payloads against.  This is suboptimal a) because payloads internally use 
> BytesRef rather than byte[] and b) Collection is unordered, but the 
> implementation does actually care about the order in which the payloads 
> appear.
> We should change the constructor to take a List&lt;BytesRef&gt; instead.
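The ordering point can be seen with a tiny positional comparison of the kind the query performs. This is a sketch only — the real code compares BytesRef instances; plain byte[] is used here to stay standalone:

```java
import java.util.Arrays;
import java.util.List;

// Sketch of why an ordered List matters: the check is positional, so the
// same payloads in a different order must not match. (Real Lucene code
// compares BytesRef instances; byte[] is used here for self-containment.)
class PayloadOrderDemo {
    static boolean payloadsMatch(List<byte[]> expected, List<byte[]> actual) {
        if (expected.size() != actual.size()) return false;
        for (int i = 0; i < expected.size(); i++) {
            // payload at position i must equal expected payload at position i
            if (!Arrays.equals(expected.get(i), actual.get(i))) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        List<byte[]> ab = List.of(new byte[] {1}, new byte[] {2});
        List<byte[]> ba = List.of(new byte[] {2}, new byte[] {1});
        System.out.println(payloadsMatch(ab, ab)); // true
        System.out.println(payloadsMatch(ab, ba)); // false: same elements, wrong order
    }
}
```

With an unordered Collection, the second comparison's result would depend on iteration order — exactly the ambiguity the switch to List removes.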






[jira] [Created] (SOLR-8051) Global stats NPE

2015-09-14 Thread Markus Jelsma (JIRA)
Markus Jelsma created SOLR-8051:
---

 Summary: Global stats NPE
 Key: SOLR-8051
 URL: https://issues.apache.org/jira/browse/SOLR-8051
 Project: Solr
  Issue Type: Bug
Reporter: Markus Jelsma


NPE is thrown when not all cores are up.

{code}
null:java.lang.NullPointerException
at 
org.apache.solr.search.stats.ExactStatsCache.mergeToGlobalStats(ExactStatsCache.java:103)
at 
org.apache.solr.handler.component.QueryComponent.updateStats(QueryComponent.java:846)
at 
org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:758)
at 
org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:733)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:410)
at 
io.openindex.solr.handler.SitesearchSearchHandler.handleRequestBody(SitesearchSearchHandler.java:43)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2068)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:669)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:462)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:210)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.eclipse.jetty.server.Server.handle(Server.java:499)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
at 
org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:745)
{code}






[jira] [Created] (SOLR-8052) Kerberos auth plugin does not work with Java 9 Jigsaw

2015-09-14 Thread Uwe Schindler (JIRA)
Uwe Schindler created SOLR-8052:
---

 Summary: Kerberos auth plugin does not work with Java 9 Jigsaw
 Key: SOLR-8052
 URL: https://issues.apache.org/jira/browse/SOLR-8052
 Project: Solr
  Issue Type: Bug
  Components: hentication
Affects Versions: 5.3
Reporter: Uwe Schindler


As described in my status update yesterday, there are some problems in 
dependencies shipped with Solr that don't work with Java 9 Jigsaw builds.

org.apache.solr.cloud.SaslZkACLProviderTest.testSaslZkACLProvider

{noformat}
   [junit4]> Throwable #1: java.lang.RuntimeException: 
java.lang.IllegalAccessException: Class org.apache.hadoop.minikdc.MiniKdc can 
not access a member of class sun.security.krb5.Config (module 
java.security.jgss) with modifiers "public static", module java.security.jgss 
does not export sun.security.krb5 to 
   [junit4]>at 
org.apache.solr.cloud.SaslZkACLProviderTest$SaslZkTestServer.run(SaslZkACLProviderTest.java:211)
   [junit4]>at 
org.apache.solr.cloud.SaslZkACLProviderTest.setUp(SaslZkACLProviderTest.java:81)
   [junit4]>at java.lang.Thread.run(java.base@9.0/Thread.java:746)
   [junit4]> Caused by: java.lang.IllegalAccessException: Class 
org.apache.hadoop.minikdc.MiniKdc can not access a member of class 
sun.security.krb5.Config (module java.security.jgss) with modifiers "public 
static", module java.security.jgss does not export sun.security.krb5 to 

   [junit4]>at 
java.lang.reflect.AccessibleObject.slowCheckMemberAccess(java.base@9.0/AccessibleObject.java:384)
   [junit4]>at 
java.lang.reflect.AccessibleObject.checkAccess(java.base@9.0/AccessibleObject.java:376)
   [junit4]>at 
org.apache.hadoop.minikdc.MiniKdc.initKDCServer(MiniKdc.java:478)
   [junit4]>at 
org.apache.hadoop.minikdc.MiniKdc.start(MiniKdc.java:320)
   [junit4]>at 
org.apache.solr.cloud.SaslZkACLProviderTest$SaslZkTestServer.run(SaslZkACLProviderTest.java:204)
   [junit4]>... 38 more
   [junit4]> Throwable #2: java.lang.NullPointerException
   [junit4]>at 
org.apache.solr.cloud.ZkTestServer$ZKServerMain.shutdown(ZkTestServer.java:334)
   [junit4]>at 
org.apache.solr.cloud.ZkTestServer.shutdown(ZkTestServer.java:526)
   [junit4]>at 
org.apache.solr.cloud.SaslZkACLProviderTest$SaslZkTestServer.shutdown(SaslZkACLProviderTest.java:218)
   [junit4]>at 
org.apache.solr.cloud.SaslZkACLProviderTest.tearDown(SaslZkACLProviderTest.java:116)
   [junit4]>at java.lang.Thread.run(java.base@9.0/Thread.java:746)
{noformat}

This is really bad, bad, bad! All security-related stuff should never ever be 
reflected on!
So we have to open an issue in the MiniKdc project so they remove the "hacks". 
Elasticsearch had similar problems with Amazon's AWS API. They worked around 
it with a funny hack in their SecurityPolicy 
(https://github.com/elastic/elasticsearch/pull/13538). But as Solr does not 
run with a SecurityManager in production, there is no way to do that.

We should report an issue to the MiniKdc project, so they fix their code and 
remove the really bad reflection on Java's internal classes.

FYI, my 
[conclusion|http://mail-archives.apache.org/mod_mbox/lucene-dev/201509.mbox/%3C014801d0ee23%245c8f5df0%2415ae19d0%24%40thetaphi.de%3E]
 from yesterday.






[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_60) - Build # 14208 - Failure!

2015-09-14 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14208/
Java: 64bit/jdk1.8.0_60 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.CdcrReplicationHandlerTest.doTest

Error Message:
There are still nodes recoverying - waited for 330 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 330 
seconds
at 
__randomizedtesting.SeedInfo.seed([DE9B608DC3C722C7:79DFD829AE7C317E]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:172)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:133)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:128)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForRecoveriesToFinish(BaseCdcrDistributedZkTest.java:465)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.clearSourceCollection(BaseCdcrDistributedZkTest.java:319)
at 
org.apache.solr.cloud.CdcrReplicationHandlerTest.doTestPartialReplicationAfterPeerSync(CdcrReplicationHandlerTest.java:158)
at 
org.apache.solr.cloud.CdcrReplicationHandlerTest.doTest(CdcrReplicationHandlerTest.java:53)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-6716) Improve SpanPayloadCheckQuery API

2015-09-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14743198#comment-14743198
 ] 

ASF subversion and git services commented on LUCENE-6716:
-

Commit 1702877 from [~romseygeek] in branch 'dev/trunk'
[ https://svn.apache.org/r1702877 ]

LUCENE-6716: Better toString() for SpanPayloadCheckQuery

> Improve SpanPayloadCheckQuery API
> -
>
> Key: LUCENE-6716
> URL: https://issues.apache.org/jira/browse/LUCENE-6716
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Attachments: LUCENE-6716.patch
>
>
> SpanPayloadCheckQuery currently takes a Collection&lt;byte[]&gt; to check its 
> payloads against.  This is suboptimal a) because payloads internally use 
> BytesRef rather than byte[] and b) Collection is unordered, but the 
> implementation does actually care about the order in which the payloads 
> appear.
> We should change the constructor to take a List&lt;BytesRef&gt; instead.






[jira] [Commented] (LUCENE-6716) Improve SpanPayloadCheckQuery API

2015-09-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14743293#comment-14743293
 ] 

ASF subversion and git services commented on LUCENE-6716:
-

Commit 1702892 from [~romseygeek] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1702892 ]

LUCENE-6716: Change SpanPayloadCheckQuery to take List&lt;BytesRef&gt;

> Improve SpanPayloadCheckQuery API
> -
>
> Key: LUCENE-6716
> URL: https://issues.apache.org/jira/browse/LUCENE-6716
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Attachments: LUCENE-6716.patch
>
>
> SpanPayloadCheckQuery currently takes a Collection&lt;byte[]&gt; to check its 
> payloads against.  This is suboptimal a) because payloads internally use 
> BytesRef rather than byte[] and b) Collection is unordered, but the 
> implementation does actually care about the order in which the payloads 
> appear.
> We should change the constructor to take a List&lt;BytesRef&gt; instead.






[jira] [Resolved] (LUCENE-6716) Improve SpanPayloadCheckQuery API

2015-09-14 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward resolved LUCENE-6716.
---
Resolution: Fixed

Thanks for the review David.  PayloadSpanCollector will be dealt with in 
LUCENE-6489, which I'm working on now.

> Improve SpanPayloadCheckQuery API
> -
>
> Key: LUCENE-6716
> URL: https://issues.apache.org/jira/browse/LUCENE-6716
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Attachments: LUCENE-6716.patch
>
>
> SpanPayloadCheckQuery currently takes a Collection&lt;byte[]&gt; to check its 
> payloads against.  This is suboptimal a) because payloads internally use 
> BytesRef rather than byte[] and b) Collection is unordered, but the 
> implementation does actually care about the order in which the payloads 
> appear.
> We should change the constructor to take a List&lt;BytesRef&gt; instead.






[jira] [Updated] (LUCENE-6489) Move span payloads to sandbox

2015-09-14 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-6489:
--
Attachment: LUCENE-6489.patch

Here is a patch that:
* moves PayloadScoreQuery, SpanPayloadCheckQuery, and the various 
PayloadFunction implementations to the queries module under 
org.apache.lucene.queries.payloads
* moves PayloadSpanUtil and SpanPayloadCollector to sandbox

> Move span payloads to sandbox
> -
>
> Key: LUCENE-6489
> URL: https://issues.apache.org/jira/browse/LUCENE-6489
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-6489.patch
>
>
> As mentioned on LUCENE-6371:
> {noformat}
> I've marked the new classes and methods as lucene.experimental, rather than 
> moving to the sandbox - if anyone feels strongly about that, maybe it could 
> be done in a follow up issue.
> {noformat}
> I feel strongly about this and will do the move.


