Re: Pylucene release

2012-11-13 Thread Andi Vajda


 Hi Shawn,

On Tue, 13 Nov 2012, Shawn Grant wrote:

Hi Andi, I was just wondering if Pylucene is on its usual schedule to release 
4-6 weeks after Lucene.  I didn't see any discussion of it on the mailing 
list or elsewhere.  I'm looking forward to 4.0!


Normally, PyLucene is released a few days after a Lucene release, but 4.0 has 
seen so many API changes and removals that all tests and samples need to be 
ported to the new API. Last weekend I ported a few, but lots remain to be done.


If no one helps, it either means that no one cares enough or that everyone 
is willing to be patient :-)


The PyLucene trunk svn repository is currently tracking the Lucene Core 4.x 
branch and you're welcome to use it out of svn. In the ten or so unit tests 
I ported so far, I didn't find any issues with PyLucene proper (or JCC). All 
changes were due to the tests being out of date or using deprecated APIs now 
removed. You might find that PyLucene out-of-trunk is quite usable.


If people want to help with porting the PyLucene unit tests (the ones under its 
'test' directory that are not yet ported), feel free to ask questions here.

The gist of it is:
  - fix the imports (look at the first few tests, alphabetically, for examples)
  - fix the tests to pass by looking at the original Java tests for changes,
as most of these tests were originally ported from Java Lucene.
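To make the first step concrete, here is a hedged sketch of a tiny helper for rewriting old flat-namespace imports into Lucene 4.x package-style imports. The mapping table is a small illustrative sample (check the already-ported tests for the authoritative paths), and the helper itself is hypothetical, not part of PyLucene:

```python
# Hypothetical aid for porting PyLucene tests: map a class that used to be
# imported from the flat 'lucene' namespace to its Lucene 4.x Java package.
# These three entries are illustrative; the real list is much longer.
OLD_TO_NEW = {
    "StandardAnalyzer": "org.apache.lucene.analysis.standard",
    "IndexSearcher": "org.apache.lucene.search",
    "Document": "org.apache.lucene.document",
}

def rewrite_import(class_name):
    """Return a 4.x-style import line for a class formerly imported flat."""
    package = OLD_TO_NEW.get(class_name)
    if package is None:
        # No known mapping: this import has to be ported by hand.
        raise KeyError("no mapping for %s; port it by hand" % class_name)
    return "from %s import %s" % (package, class_name)
```

In practice you would run something like this over the import blocks of the unported tests and fix the remainder manually against the Java 4.x sources.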

Once you're familiar with the new APIs, porting the sample code in samples 
and in LuceneInAction should be fairly straightforward. It's just that there is 
a lot to port.


Andi..


[jira] [Updated] (SOLR-2141) NullPointerException when using escapeSql function

2012-11-13 Thread Vadim Kirilchuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Kirilchuk updated SOLR-2141:
--

Attachment: SOLR-2141-test.patch

Hi,

Here is a patch to an existing test that will help reproduce the issue.

If you execute this test you will see the already mentioned stack trace in the 
console output:

Caused by: java.lang.RuntimeException: org.apache.solr.handler.dataimport.DataImportHandlerException: java.lang.NullPointerException
    at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:413)
    at org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:326)
    at org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:234)
    ... 47 more
Caused by: org.apache.solr.handler.dataimport.DataImportHandlerException: java.lang.NullPointerException
    at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:556)
    at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:411)
    ... 49 more
Caused by: java.lang.NullPointerException
    at org.apache.solr.client.solrj.util.ClientUtils.escapeQueryChars(ClientUtils.java:206)
    at org.apache.solr.handler.dataimport.EvaluatorBag$3.evaluate(EvaluatorBag.java:101)
    at org.apache.solr.handler.dataimport.EvaluatorBag$6.get(EvaluatorBag.java:223)
    at org.apache.solr.handler.dataimport.EvaluatorBag$6.get(EvaluatorBag.java:1)
    at org.apache.solr.handler.dataimport.VariableResolverImpl.resolve(VariableResolverImpl.java:113)
    at org.apache.solr.handler.dataimport.TemplateString.fillTokens(TemplateString.java:81)
    at org.apache.solr.handler.dataimport.TemplateString.replaceTokens(TemplateString.java:75)
    at org.apache.solr.handler.dataimport.VariableResolverImpl.replaceTokens(VariableResolverImpl.java:91)
    at org.apache.solr.handler.dataimport.ContextImpl.getResolvedEntityAttribute(ContextImpl.java:81)
    at org.apache.solr.handler.dataimport.LineEntityProcessor.init(LineEntityProcessor.java:84)
    at org.apache.solr.handler.dataimport.EntityProcessorWrapper.init(EntityProcessorWrapper.java:74)
    at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:430)
    at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:511)

 NullPointerException when using escapeSql function
 --

 Key: SOLR-2141
 URL: https://issues.apache.org/jira/browse/SOLR-2141
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler
Affects Versions: 1.4.1
 Environment: openjdk 1.6.0 b12
Reporter: Edward Rudd
Assignee: Koji Sekiguchi
 Attachments: dih-config.xml, dih-file.xml, SOLR-2141-sample.patch, 
 SOLR-2141-test.patch


 I have two entities defined, nested in each other:

   <entity name="article" query="select category, subcategory from articles">
     <entity name="other" query="select other from othertable where
         category='${dataimporter.functions.escapeSql(article.category)}'
         AND subcategory='${dataimporter.functions.escapeSql(article.subcategory)}'">
     </entity>
   </entity>
 Now, when I run that, it bombs on any article where subcategory = '' (it's a 
 NOT NULL column, so the empty string is there). If I add where subcategory != '' 
 to the article query, it works fine (aside from not pulling in all of the 
 articles).
 org.apache.solr.handler.dataimport.DataImportHandlerException: java.lang.NullPointerException
     at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:424)
     at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:383)
     at org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:242)
     at org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:180)
     at org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:331)
     at org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:389)
     at org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:370)
 Caused by: java.lang.NullPointerException
     at org.apache.solr.handler.dataimport.EvaluatorBag$1.evaluate(EvaluatorBag.java:75)
     at org.apache.solr.handler.dataimport.EvaluatorBag$5.get(EvaluatorBag.java:216)
     at org.apache.solr.handler.dataimport.EvaluatorBag$5.get(EvaluatorBag.java:204)
     at org.apache.solr.handler.dataimport.VariableResolverImpl.resolve(VariableResolverImpl.java:107)
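The trace above bottoms out with escapeSql being handed a null resolved value. Below is a minimal Python sketch of the null-guard idea; this is illustrative only (the real code is Java, in EvaluatorBag/ClientUtils), and returning an empty string for null is one possible behavior, not necessarily the committed fix:

```python
def escape_sql(value):
    # Null-guarded sketch of DIH's escapeSql semantics: double up single
    # quotes and backslashes so the value can be embedded in a SQL string
    # literal. Treating None as an empty string is the guard whose absence
    # triggers the NullPointerException reported above.
    if value is None:
        return ""
    return value.replace("\\", "\\\\").replace("'", "''")
```

Whether the evaluator should instead skip the row or raise a clearer error is a design question for the actual patch.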
  

[jira] [Comment Edited] (SOLR-2141) NullPointerException when using escapeSql function

2012-11-13 Thread Vadim Kirilchuk (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-2141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13496033#comment-13496033 ]

Vadim Kirilchuk edited comment on SOLR-2141 at 11/13/12 8:28 AM:
-

Hi,

I am attaching the patch 
([SOLR-2141-test.patch|https://issues.apache.org/jira/secure/attachment/12553286/SOLR-2141-test.patch])
 to an existing test that will help reproduce the issue. (I'd like someone to 
write a new test instead of modifying the existing one.)

If you execute this test you will see the already mentioned stack trace in the 
console output:

Caused by: java.lang.RuntimeException: org.apache.solr.handler.dataimport.DataImportHandlerException: java.lang.NullPointerException
    at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:413)
    at org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:326)
    at org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:234)
    ... 47 more
Caused by: org.apache.solr.handler.dataimport.DataImportHandlerException: java.lang.NullPointerException
    at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:556)
    at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:411)
    ... 49 more
Caused by: java.lang.NullPointerException
    at org.apache.solr.client.solrj.util.ClientUtils.escapeQueryChars(ClientUtils.java:206)
    at org.apache.solr.handler.dataimport.EvaluatorBag$3.evaluate(EvaluatorBag.java:101)
    at org.apache.solr.handler.dataimport.EvaluatorBag$6.get(EvaluatorBag.java:223)
    at org.apache.solr.handler.dataimport.EvaluatorBag$6.get(EvaluatorBag.java:1)
    at org.apache.solr.handler.dataimport.VariableResolverImpl.resolve(VariableResolverImpl.java:113)
    at org.apache.solr.handler.dataimport.TemplateString.fillTokens(TemplateString.java:81)
    at org.apache.solr.handler.dataimport.TemplateString.replaceTokens(TemplateString.java:75)
    at org.apache.solr.handler.dataimport.VariableResolverImpl.replaceTokens(VariableResolverImpl.java:91)
    at org.apache.solr.handler.dataimport.ContextImpl.getResolvedEntityAttribute(ContextImpl.java:81)
    at org.apache.solr.handler.dataimport.LineEntityProcessor.init(LineEntityProcessor.java:84)
    at org.apache.solr.handler.dataimport.EntityProcessorWrapper.init(EntityProcessorWrapper.java:74)
    at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:430)
    at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:511)

P.S. My Solr is branch_4x at commit 7da8d768b8b08923b274fcf32ca28edd1a78e8eb.


[jira] [Resolved] (SOLR-4022) Allow sorting on ExternalFileFields

2012-11-13 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward resolved SOLR-4022.
-

Resolution: Fixed

Committed, trunk: 1408646,1408649; branch_4x: 1408655

 Allow sorting on ExternalFileFields
 ---

 Key: SOLR-4022
 URL: https://issues.apache.org/jira/browse/SOLR-4022
 Project: Solr
  Issue Type: Improvement
  Components: Schema and Analysis
Reporter: Alan Woodward
Assignee: Alan Woodward
Priority: Minor
 Fix For: 4.1, 5.0

 Attachments: SOLR-4022.patch


 At the moment, you can't sort on ExternalFileFields, but plumbing this in 
 turns out to be pretty trivial.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-4071) CollectionsHandler.handleCreateAction() doesn't validate parameter count and type

2012-11-13 Thread Po Rui (JIRA)
Po Rui created SOLR-4071:


 Summary: CollectionsHandler.handleCreateAction() doesn't validate 
parameter count and type
 Key: SOLR-4071
 URL: https://issues.apache.org/jira/browse/SOLR-4071
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0, 4.0-BETA
Reporter: Po Rui
Priority: Critical
 Fix For: 4.0, 4.0-BETA


CollectionsHandler.handleCreateAction() doesn't validate parameter count and 
type. numShards's type isn't checked, and the parameter count may be less than 
required.
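To make the requested checks concrete, here is a hedged Python sketch of the validation logic; the function name, parameter names, and return convention are illustrative assumptions, not Solr's actual CollectionsHandler API:

```python
def validate_create_params(params):
    # Illustrative sketch (not Solr's Java code) of the checks this issue asks
    # for: required parameters present, and numShards parseable as a positive
    # integer, before the create request is acted on.
    for name in ("name", "numShards"):
        if name not in params:
            raise ValueError("missing required parameter: %s" % name)
    try:
        num_shards = int(params["numShards"])
    except ValueError:
        raise ValueError("numShards must be an integer")
    if num_shards < 1:
        raise ValueError("numShards must be >= 1")
    return num_shards
```

Rejecting the request up front like this yields a clear client-side error instead of a failure deep inside collection creation.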




[jira] [Updated] (SOLR-4071) CollectionsHandler.handleCreateAction() doesn't validate parameter count and type

2012-11-13 Thread Po Rui (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Po Rui updated SOLR-4071:
-

Attachment: SOLR-4071.patch

 CollectionsHandler.handleCreateAction() doesn't validate parameter count and 
 type
 -

 Key: SOLR-4071
 URL: https://issues.apache.org/jira/browse/SOLR-4071
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0-BETA, 4.0
Reporter: Po Rui
Priority: Critical
 Fix For: 4.0-BETA, 4.0

 Attachments: SOLR-4071.patch


 CollectionsHandler.handleCreateAction() doesn't validate parameter count and 
 type. numShards's type isn't checked, and the parameter count may be less than 
 required.




[jira] [Comment Edited] (SOLR-4071) CollectionsHandler.handleCreateAction() doesn't validate parameter count and type

2012-11-13 Thread Po Rui (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-4071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13496077#comment-13496077 ]

Po Rui edited comment on SOLR-4071 at 11/13/12 10:26 AM:
-

CollectionHandler validates parameter count and type.
OverseerCollectionProcessor validates against an existing collection name and a 
nonexistent config path.

  was (Author: brui):
CollectionHandler validates parameter count and type.
OverseerCollectionProcessor validates against an existing collection name and 
config path.
  
 CollectionsHandler.handleCreateAction() doesn't validate parameter count and 
 type
 -

 Key: SOLR-4071
 URL: https://issues.apache.org/jira/browse/SOLR-4071
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0-BETA, 4.0
Reporter: Po Rui
Priority: Critical
 Fix For: 4.0-BETA, 4.0

 Attachments: SOLR-4071.patch


 CollectionsHandler.handleCreateAction() doesn't validate parameter count and 
 type. numShards's type isn't checked, and the parameter count may be less than 
 required.




[jira] [Updated] (SOLR-4031) Rare mixup of request content

2012-11-13 Thread Per Steffensen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Per Steffensen updated SOLR-4031:
-

Attachment: SOLR-4031.patch

Here is the patch we made. But it sounds like you already did it. Our patch applies 
on top of Solr 4.0.0 (actually revision 1394844 from the lucene_solr_4_0 branch).

  Rare mixup of request content
 --

 Key: SOLR-4031
 URL: https://issues.apache.org/jira/browse/SOLR-4031
 Project: Solr
  Issue Type: Bug
  Components: multicore, search, SolrCloud
Affects Versions: 4.0
Reporter: Per Steffensen
Assignee: Yonik Seeley
  Labels: bug, data-integrity, mixup, request, security
 Fix For: 4.1

 Attachments: SOLR-4031.patch


 We are using Solr 4.0 and run intensive performance/data-integrity/endurance 
 tests on it. On very rare occasions the content of two concurrent requests to 
 Solr gets mixed up. We have spent a lot of time narrowing down this issue and 
 found that it is a bug in Jetty 8.1.2. Therefore, of course, we have filed it 
 as a bug with Jetty.
 Official bugzilla: https://bugs.eclipse.org/bugs/show_bug.cgi?id=392936
 Mailing list thread: 
 http://dev.eclipse.org/mhonarc/lists/jetty-dev/threads.html#01530
 The reports to Jetty are very detailed, so you can go and read about it there. 
 We have found that the problem seems to be solved in Jetty 8.1.7. Therefore 
 we are now running Solr 4.0 (plus our additional changes) on top of Jetty 
 8.1.7 instead of 8.1.2. You probably want to do the same upgrade on the 
 Apache side sometime soon.
 At least now you know what to tell people if they start complaining about 
 mixed-up requests in Solr 4.0: upgrade the Jetty underneath to 8.1.7 (or run 
 Tomcat or something).




[jira] [Updated] (SOLR-4070) OverSeerCollectionProcessor.createCollection() parameters issue.

2012-11-13 Thread Po Rui (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Po Rui updated SOLR-4070:
-

Description: Parameters aren't validated, so the same collectionName may be 
created more than once. Check for an existing collectionName and a nonexistent 
config path before creating a collection.  (was: Parameters aren't validated; the 
same collectionName may be created more than once. Needs parameter validation.)

 OverSeerCollectionProcessor.createCollection()  parameters issue. 
 --

 Key: SOLR-4070
 URL: https://issues.apache.org/jira/browse/SOLR-4070
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0-BETA, 4.0
Reporter: Po Rui
 Fix For: 4.0-BETA, 4.0

 Attachments: SOLR-4070.patch


 Parameters aren't validated, so the same collectionName may be created more 
 than once. Check for an existing collectionName and a nonexistent config path 
 before creating a collection.




[JENKINS] Lucene-Solr-4.x-Windows (64bit/jdk1.7.0_09) - Build # 1590 - Failure!

2012-11-13 Thread Policeman Jenkins Server
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-4.x-Windows/1590/
Java: 64bit/jdk1.7.0_09 -XX:+UseSerialGC

1 tests failed.
REGRESSION:  
org.apache.lucene.index.TestConcurrentMergeScheduler.testMaxMergeCount

Error Message:
count should be <= maxMergeCount (= 3)

Stack Trace:
java.lang.IllegalArgumentException: count should be <= maxMergeCount (= 3)
at 
__randomizedtesting.SeedInfo.seed([F2AF183E6E89DD89:25C8898DD248420F]:0)
at 
org.apache.lucene.index.ConcurrentMergeScheduler.setMaxThreadCount(ConcurrentMergeScheduler.java:90)
at 
org.apache.lucene.index.TestConcurrentMergeScheduler.testMaxMergeCount(TestConcurrentMergeScheduler.java:303)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:722)




Build Log:
[...truncated 834 lines...]
[junit4:junit4] Suite: org.apache.lucene.index.TestConcurrentMergeScheduler
[junit4:junit4]   2 NOTE: reproduce with: ant test  
-Dtestcase=TestConcurrentMergeScheduler -Dtests.method=testMaxMergeCount 
-Dtests.seed=F2AF183E6E89DD89 -Dtests.slow=true -Dtests.locale=th_TH 
-Dtests.timezone=Africa/Timbuktu -Dtests.file.encoding=Cp1252
[junit4:junit4] ERROR   0.01s | 

[jira] [Commented] (LUCENE-4555) Partial matches in DisjunctionIntervalQueries trip assertions when collected

2012-11-13 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496127#comment-13496127
 ] 

Alan Woodward commented on LUCENE-4555:
---

The TestOr* queries failing were due to bugs in the tests, of course...

The actual problem here is that DisjunctionIntervalIterator doesn't take 
snapshots of its intervals, so the iterator is already positioned when we 
collect.  I'll abstract out a SnapshotCollectingIntervalIterator, and both 
ConjunctionII and DisjunctionII can subclass from that.
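
The snapshot idea can be illustrated generically. This is a toy sketch of the pattern (copy the iterator's mutable position into an immutable value before handing it to a collector), not the actual positions-branch interval API:

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration of the snapshot-before-collect pattern; not the Lucene API.
public class SnapshotDemo {

    /** Immutable copy of an iterator's current interval. */
    static final class Interval {
        final int begin, end;
        Interval(int begin, int end) { this.begin = begin; this.end = end; }
    }

    /** Mutable iterator whose "current" state changes on every advance. */
    static final class IntervalIterator {
        private final int[][] data;
        private int pos = -1;
        IntervalIterator(int[][] data) { this.data = data; }
        boolean next() { return ++pos < data.length; }
        /** Freeze the current position so later advances cannot corrupt what a collector saw. */
        Interval snapshot() { return new Interval(data[pos][0], data[pos][1]); }
    }

    /** Collect snapshots: safe even though the iterator keeps moving. */
    static List<Interval> collectAll(IntervalIterator it) {
        List<Interval> collected = new ArrayList<Interval>();
        while (it.next()) {
            collected.add(it.snapshot());
        }
        return collected;
    }

    public static void main(String[] args) {
        List<Interval> got = collectAll(new IntervalIterator(new int[][] {{1, 2}, {4, 6}}));
        System.out.println(got.size()); // prints 2
    }
}
```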

 Partial matches in DisjunctionIntervalQueries trip assertions when collected
 

 Key: LUCENE-4555
 URL: https://issues.apache.org/jira/browse/LUCENE-4555
 Project: Lucene - Core
  Issue Type: Sub-task
  Components: core/search
Reporter: Alan Woodward
Priority: Minor
 Fix For: Positions Branch


 See, eg, all the TestOr* tests in TestBasicIntervals.  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS-MAVEN] Lucene-Solr-Maven-4.x #153: POMs out of sync

2012-11-13 Thread Steve Rowe
I committed a fix to the Maven configuration: Derby is now a DIH test dependency

On Nov 13, 2012, at 5:31 AM, Apache Jenkins Server jenk...@builds.apache.org 
wrote:
 Build: https://builds.apache.org/job/Lucene-Solr-Maven-4.x/153/
 
 9 tests failed.
 FAILED:  
 org.apache.solr.handler.dataimport.TestSqlEntityProcessorDelta.org.apache.solr.handler.dataimport.TestSqlEntityProcessorDelta
 
 Error Message:
 org.apache.derby.jdbc.EmbeddedDriver
 
 Stack Trace:
 java.lang.ClassNotFoundException: org.apache.derby.jdbc.EmbeddedDriver


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3926) solrj should support better way of finding active sorts

2012-11-13 Thread Eirik Lygre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496137#comment-13496137
 ] 

Eirik Lygre commented on SOLR-3926:
---

So, the patch solves my problem :-), and I'm not sure what the other use cases 
really are, so I'll be looking at input from others ([~yo...@apache.org], 
[~otis]) in order to understand the requirements. My Solr skills do not extend 
to functions, augmenters, etc. -- at least not yet!

Also, to aid the discussion, this is what we have today:

* SolrQuery stores the sort fields as a comma-separated string of "field 
direction" pairs
* SolrQuery.getSortField() returns the full string, e.g. "price asc, date desc, 
qty desc"
* SolrQuery.getSortFields() yields [ "price asc", " date desc", " qty desc" ], 
including extraneous whitespace

Can you provide a couple of examples of how that would/should work with the 
API, extending my examples below? Then I'll see if I feel qualified, and if 
you guys promise to guide with QA, I'll do my best.

{code}
q.addSortField("price", SolrQuery.ORDER.asc);
q.addSortField("date", SolrQuery.ORDER.desc);
q.addSortField("qty", SolrQuery.ORDER.desc);
q.removeSortField("date", SolrQuery.ORDER.desc);
{code}
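
The parsing step the proposed getSortFieldMap() performs can be tried standalone. This sketch uses plain Strings for the direction instead of SolrQuery.ORDER so it runs without SolrJ on the classpath:

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

// Standalone re-implementation of the "field direction" parsing, without SolrJ.
public class SortFieldParser {

    /** Parses SolrJ-style "field direction" strings (e.g. "price asc") into an ordered map. */
    public static Map<String, String> parse(String[] sortFields) {
        if (sortFields == null || sortFields.length == 0) {
            return Collections.emptyMap();
        }
        Map<String, String> map = new LinkedHashMap<String, String>();
        for (String sortField : sortFields) {
            String[] spec = sortField.trim().split(" ");
            map.put(spec[0], spec[1]);
        }
        return Collections.unmodifiableMap(map);
    }

    public static void main(String[] args) {
        // Note the extraneous whitespace getSortFields() is reported to return.
        Map<String, String> m = parse(new String[] {"price asc", " date desc", " qty desc"});
        System.out.println(m); // prints {price=asc, date=desc, qty=desc}
    }
}
```

The LinkedHashMap preserves the sort order, which is exactly why the issue proposes an ordered map rather than a plain HashMap.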


 solrj should support better way of finding active sorts
 ---

 Key: SOLR-3926
 URL: https://issues.apache.org/jira/browse/SOLR-3926
 Project: Solr
  Issue Type: Improvement
  Components: clients - java
Affects Versions: 4.0
Reporter: Eirik Lygre
Priority: Minor
 Fix For: 4.1

 Attachments: SOLR-3926.patch


 The SolrJ API uses orthogonal concepts for setting/removing and getting sort 
 information. Setting/removing uses a combination of (name, order), while the 
 getters return a String "name order":
 {code}
 public SolrQuery setSortField(String field, ORDER order);
 public SolrQuery addSortField(String field, ORDER order);
 public SolrQuery removeSortField(String field, ORDER order);
 public String[] getSortFields();
 public String getSortField();
 {code}
 If you want to use the current sort information to present a list of active 
 sorts, with the possibility to remove then, you need to manually parse the 
 string(s) returned from getSortFields, to recreate the information required 
 by removeSortField(). Not difficult, but not convenient either :-)
 Therefore this suggestion: Add a new method {{public Map<String,ORDER> 
 getSortFieldMap();}} which returns an ordered map of active sort fields. This 
 will make introspection of the current sort setup much easier.
 {code}
   public Map<String, ORDER> getSortFieldMap() {
     String[] actualSortFields = getSortFields();
     if (actualSortFields == null || actualSortFields.length == 0)
       return Collections.emptyMap();
     Map<String, ORDER> sortFieldMap = new LinkedHashMap<String, ORDER>();
     for (String sortField : actualSortFields) {
       String[] fieldSpec = sortField.trim().split(" ");
       sortFieldMap.put(fieldSpec[0], ORDER.valueOf(fieldSpec[1]));
     }
     return Collections.unmodifiableMap(sortFieldMap);
   }
 {code}
 For what it's worth, this is possible client code:
 {code}
 System.out.println("Active sorts");
 Map<String, ORDER> fieldMap = getSortFieldMap(query);
 for (String field : fieldMap.keySet()) {
   System.out.println("- " + field + "; dir=" + fieldMap.get(field));
 }
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-3926) solrj should support better way of finding active sorts

2012-11-13 Thread Eirik Lygre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496137#comment-13496137
 ] 

Eirik Lygre edited comment on SOLR-3926 at 11/13/12 12:36 PM:
--

So, the patch solves my problem :-), and I'm not sure what the other use cases 
really are, so I'll be looking at input from others ([~yo...@apache.org], 
[~otis]?) in order to understand the requirements. My Solr skills do not extend 
to functions, augmenters, etc. -- at least not yet!

Also, to aid the discussion, this is what we have today:

* SolrQuery stores the sort fields as a comma-separated string of "field 
direction" pairs
* SolrQuery.getSortField() returns the full string, e.g. "price asc, date desc, 
qty desc"
* SolrQuery.getSortFields() yields [ "price asc", " date desc", " qty desc" ], 
including extraneous whitespace

Can you provide a couple of examples of how that would/should work with the 
API, extending my examples below? Then I'll see if I feel qualified, and if 
you guys promise to guide with QA, I'll do my best.

{code}
q.addSortField("price", SolrQuery.ORDER.asc);
q.addSortField("date", SolrQuery.ORDER.desc);
q.addSortField("qty", SolrQuery.ORDER.desc);
q.removeSortField("date", SolrQuery.ORDER.desc);
{code}


  was (Author: elygre):
So, the patch solves my problem :-), and I'm not sure what the other use 
cases really are, so i'll be looking at input from others ([~yo...@apache.org], 
[~otis]) in order to understand the requirements. My Solr skills do not extend 
to functions, augmenters, etc -- at least not yet!

Also, to aid the discussion, this is what we got today:

* SolrQuery stores the sort fields as a comma-separated string of "field 
direction" pairs
* SolrQuery.getSortField() returns the full string, e.g. "price asc, date desc, 
qty desc"
* SolrQuery.getSortFields() yields [ "price asc", " date desc", " qty desc" ], 
including extraneous whitespace

Can you provide a couple of examples of how that would/should work with the 
api, extending my examples below. Then, I'll see if I feel qualified, and if 
you guys promise to guide with qa, I'll do my best.

{code}
q.addSortField("price", SolrQuery.ORDER.asc);
q.addSortField("date", SolrQuery.ORDER.desc);
q.addSortField("qty", SolrQuery.ORDER.desc);
q.removeSortField("date", SolrQuery.ORDER.desc);
{code}

  
 solrj should support better way of finding active sorts
 ---

 Key: SOLR-3926
 URL: https://issues.apache.org/jira/browse/SOLR-3926
 Project: Solr
  Issue Type: Improvement
  Components: clients - java
Affects Versions: 4.0
Reporter: Eirik Lygre
Priority: Minor
 Fix For: 4.1

 Attachments: SOLR-3926.patch


 The SolrJ API uses orthogonal concepts for setting/removing and getting sort 
 information. Setting/removing uses a combination of (name, order), while the 
 getters return a String "name order":
 {code}
 public SolrQuery setSortField(String field, ORDER order);
 public SolrQuery addSortField(String field, ORDER order);
 public SolrQuery removeSortField(String field, ORDER order);
 public String[] getSortFields();
 public String getSortField();
 {code}
 If you want to use the current sort information to present a list of active 
 sorts, with the possibility to remove then, you need to manually parse the 
 string(s) returned from getSortFields, to recreate the information required 
 by removeSortField(). Not difficult, but not convenient either :-)
 Therefore this suggestion: Add a new method {{public Map<String,ORDER> 
 getSortFieldMap();}} which returns an ordered map of active sort fields. This 
 will make introspection of the current sort setup much easier.
 {code}
   public Map<String, ORDER> getSortFieldMap() {
     String[] actualSortFields = getSortFields();
     if (actualSortFields == null || actualSortFields.length == 0)
       return Collections.emptyMap();
     Map<String, ORDER> sortFieldMap = new LinkedHashMap<String, ORDER>();
     for (String sortField : actualSortFields) {
       String[] fieldSpec = sortField.trim().split(" ");
       sortFieldMap.put(fieldSpec[0], ORDER.valueOf(fieldSpec[1]));
     }
     return Collections.unmodifiableMap(sortFieldMap);
   }
 {code}
 For what it's worth, this is possible client code:
 {code}
 System.out.println("Active sorts");
 Map<String, ORDER> fieldMap = getSortFieldMap(query);
 for (String field : fieldMap.keySet()) {
   System.out.println("- " + field + "; dir=" + fieldMap.get(field));
 }
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional 

[jira] [Commented] (SOLR-4007) Morfologik dictionaries not available in Solr field type

2012-11-13 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496143#comment-13496143
 ] 

Markus Jelsma commented on SOLR-4007:
-

Although CHANGES.txt mentions this for trunk, I'm still getting it despite 
having the three Morfologik jars in the lib dir and both analysis-extras jars.

 Morfologik dictionaries not available in Solr field type
 

 Key: SOLR-4007
 URL: https://issues.apache.org/jira/browse/SOLR-4007
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Affects Versions: 4.1
Reporter: Lance Norskog
Assignee: Dawid Weiss
Priority: Minor
 Fix For: 4.1


 The Polish Morfologik type does not find its dictionaries when used in Solr. 
 To demonstrate:
 1) Add this to example/solr/collection1/conf/schema.xml:
 {noformat}
 <!-- Polish -->
 <fieldType name="text_pl" class="solr.TextField" positionIncrementGap="100">
   <analyzer>
     <tokenizer class="solr.StandardTokenizerFactory"/>
     <filter class="solr.MorfologikFilterFactory" dictionary="MORFOLOGIK" />
   </analyzer>
 </fieldType>
 {noformat}
 2) Add this to example/solr/collection1/conf/solrconfig.xml:
 {noformat}
   <lib dir="../../../../lucene/build/analysis/morfologik/" regex=".*\.jar" />
   <lib dir="../../../contrib/analysis-extras/lib" regex=".*\.jar" />
   <lib dir="../../../dist/" regex="apache-solr-analysis-extras-\d.*\.jar" />
 {noformat}
 3) Test 'text_pl' in the analysis page. You will get an exception.
 {noformat}
 Oct 28, 2012 8:27:19 PM org.apache.solr.core.SolrCore execute
 INFO: [collection1] webapp=/solr path=/analysis/field 
 params={analysis.showmatch=true&analysis.query=&wt=json&analysis.fieldvalue=blah+blah&analysis.fieldtype=text_pl}
  status=500 QTime=26 
 Oct 28, 2012 8:27:19 PM org.apache.solr.common.SolrException log
 SEVERE: null:java.lang.RuntimeException: Default dictionary resource for 
 language 'pl' not found.
   at morfologik.stemming.Dictionary.getForLanguage(Dictionary.java:163)
   at morfologik.stemming.PolishStemmer.<init>(PolishStemmer.java:64)
   at 
 org.apache.lucene.analysis.morfologik.MorfologikFilter.<init>(MorfologikFilter.java:70)
   at 
 org.apache.lucene.analysis.morfologik.MorfologikFilterFactory.create(MorfologikFilterFactory.java:63)
   at 
 org.apache.solr.handler.AnalysisRequestHandlerBase.analyzeValue(AnalysisRequestHandlerBase.java:125)
   at 
 org.apache.solr.handler.FieldAnalysisRequestHandler.analyzeValues(FieldAnalysisRequestHandler.java:220)
   at 
 org.apache.solr.handler.FieldAnalysisRequestHandler.handleAnalysisRequest(FieldAnalysisRequestHandler.java:181)
   at 
 org.apache.solr.handler.FieldAnalysisRequestHandler.doAnalysis(FieldAnalysisRequestHandler.java:100)
   at 
 [...]
 Caused by: java.io.IOException: Could not locate resource: 
 morfologik/dictionaries/pl.dict
   at morfologik.util.ResourceUtils.openInputStream(ResourceUtils.java:56)
   at morfologik.stemming.Dictionary.getForLanguage(Dictionary.java:156)
   ... 38 more
 {noformat}
 {{morfologik-polish-1.5.3.jar}} has {{morfologik/dictionaries/pl.dict}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-1306) Support pluggable persistence/loading of solr.xml details

2012-11-13 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496145#comment-13496145
 ] 

Erick Erickson commented on SOLR-1306:
--

I'm thinking of committing this this weekend (to trunk, not 4.x yet) unless 
people object. I want to write a stress test and bang away at this thing first, 
and reconcile the CoreDescriptorProvider I came up with with the one already in 
there for Zookeeper.

Let me know
Erick

 Support pluggable persistence/loading of solr.xml details
 -

 Key: SOLR-1306
 URL: https://issues.apache.org/jira/browse/SOLR-1306
 Project: Solr
  Issue Type: New Feature
  Components: multicore
Reporter: Noble Paul
Assignee: Erick Erickson
 Fix For: 4.1

 Attachments: SOLR-1306.patch, SOLR-1306.patch, SOLR-1306.patch, 
 SOLR-1306.patch


 Persisting and loading details from one XML file is fine if the number of cores 
 is small and fixed. If there are tens of thousands of cores in a single box, 
 adding a new core (with persistent=true) becomes very expensive because every 
 core creation has to write this huge XML file. 
 Moreover, there is a good chance that the file gets corrupted and all the 
 cores become unusable. In that case I would prefer it to be stored in a 
 centralized DB which is backed up/replicated and all the information is 
 available in a centralized location. 
 We may need to refactor CoreContainer to have a pluggable implementation 
 which can load/persist the details. The default implementation should 
 write/read from/to solr.xml, and the class should be pluggable as follows in 
 solr.xml:
 {code:xml}
 <solr>
   <dataProvider class="com.foo.FooDataProvider" attr1="val1" attr2="val2"/>
 </solr>
 {code}
 There will be a new interface (or abstract class) called SolrDataProvider 
 which this class must implement.
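
 The proposed plug point could look roughly like the sketch below. The interface name comes from the issue text, but the method shapes are pure guesswork for illustration:

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical plug point for persisting core definitions outside solr.xml. */
interface SolrDataProvider {
    void persistCore(String coreName, Map<String, String> properties);
    Map<String, String> loadCore(String coreName);
}

/** Toy in-memory implementation standing in for a DB-backed provider. */
public class InMemoryDataProvider implements SolrDataProvider {
    private final Map<String, Map<String, String>> store =
            new HashMap<String, Map<String, String>>();

    public void persistCore(String coreName, Map<String, String> properties) {
        // Copy so later mutation by the caller cannot change what was "persisted".
        store.put(coreName, new HashMap<String, String>(properties));
    }

    public Map<String, String> loadCore(String coreName) {
        return store.get(coreName);
    }

    public static void main(String[] args) {
        InMemoryDataProvider provider = new InMemoryDataProvider();
        Map<String, String> props = new HashMap<String, String>();
        props.put("instanceDir", "core1/");
        provider.persistCore("core1", props);
        System.out.println(provider.loadCore("core1")); // prints {instanceDir=core1/}
    }
}
```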

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4007) Morfologik dictionaries not available in Solr field type

2012-11-13 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496154#comment-13496154
 ] 

Dawid Weiss commented on SOLR-4007:
---

Hi Markus. I believe the fix was all right -- maybe it's something else. Can 
you provide a repeatable scenario of this failure?

 Morfologik dictionaries not available in Solr field type
 

 Key: SOLR-4007
 URL: https://issues.apache.org/jira/browse/SOLR-4007
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Affects Versions: 4.1
Reporter: Lance Norskog
Assignee: Dawid Weiss
Priority: Minor
 Fix For: 4.1


 The Polish Morfologik type does not find its dictionaries when used in Solr. 
 To demonstrate:
 1) Add this to example/solr/collection1/conf/schema.xml:
 {noformat}
 <!-- Polish -->
 <fieldType name="text_pl" class="solr.TextField" positionIncrementGap="100">
   <analyzer>
     <tokenizer class="solr.StandardTokenizerFactory"/>
     <filter class="solr.MorfologikFilterFactory" dictionary="MORFOLOGIK" />
   </analyzer>
 </fieldType>
 {noformat}
 2) Add this to example/solr/collection1/conf/solrconfig.xml:
 {noformat}
   <lib dir="../../../../lucene/build/analysis/morfologik/" regex=".*\.jar" />
   <lib dir="../../../contrib/analysis-extras/lib" regex=".*\.jar" />
   <lib dir="../../../dist/" regex="apache-solr-analysis-extras-\d.*\.jar" />
 {noformat}
 3) Test 'text_pl' in the analysis page. You will get an exception.
 {noformat}
 Oct 28, 2012 8:27:19 PM org.apache.solr.core.SolrCore execute
 INFO: [collection1] webapp=/solr path=/analysis/field 
 params={analysis.showmatch=true&analysis.query=&wt=json&analysis.fieldvalue=blah+blah&analysis.fieldtype=text_pl}
  status=500 QTime=26 
 Oct 28, 2012 8:27:19 PM org.apache.solr.common.SolrException log
 SEVERE: null:java.lang.RuntimeException: Default dictionary resource for 
 language 'pl' not found.
   at morfologik.stemming.Dictionary.getForLanguage(Dictionary.java:163)
   at morfologik.stemming.PolishStemmer.<init>(PolishStemmer.java:64)
   at 
 org.apache.lucene.analysis.morfologik.MorfologikFilter.<init>(MorfologikFilter.java:70)
   at 
 org.apache.lucene.analysis.morfologik.MorfologikFilterFactory.create(MorfologikFilterFactory.java:63)
   at 
 org.apache.solr.handler.AnalysisRequestHandlerBase.analyzeValue(AnalysisRequestHandlerBase.java:125)
   at 
 org.apache.solr.handler.FieldAnalysisRequestHandler.analyzeValues(FieldAnalysisRequestHandler.java:220)
   at 
 org.apache.solr.handler.FieldAnalysisRequestHandler.handleAnalysisRequest(FieldAnalysisRequestHandler.java:181)
   at 
 org.apache.solr.handler.FieldAnalysisRequestHandler.doAnalysis(FieldAnalysisRequestHandler.java:100)
   at 
 [...]
 Caused by: java.io.IOException: Could not locate resource: 
 morfologik/dictionaries/pl.dict
   at morfologik.util.ResourceUtils.openInputStream(ResourceUtils.java:56)
   at morfologik.stemming.Dictionary.getForLanguage(Dictionary.java:156)
   ... 38 more
 {noformat}
 {{morfologik-polish-1.5.3.jar}} has {{morfologik/dictionaries/pl.dict}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-2141) NullPointerException when using escapeSql function

2012-11-13 Thread Dominik Siebel (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dominik Siebel updated SOLR-2141:
-

Attachment: SOLR-2141.b341f5b.patch

 NullPointerException when using escapeSql function
 --

 Key: SOLR-2141
 URL: https://issues.apache.org/jira/browse/SOLR-2141
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler
Affects Versions: 1.4.1
 Environment: openjdk 1.6.0 b12
Reporter: Edward Rudd
Assignee: Koji Sekiguchi
 Attachments: dih-config.xml, dih-file.xml, SOLR-2141.b341f5b.patch, 
 SOLR-2141-sample.patch, SOLR-2141-test.patch


 I have two entities defined, nested in each other..
 <entity name="article" query="select category, subcategory from articles">
   <entity name="other" query="select other from othertable where 
     category='${dataimporter.functions.escapeSql(article.category)}' 
     AND subcategory='${dataimporter.functions.escapeSql(article.subcategory)}'">
   </entity>
 </entity>
 Now, when I run that it bombs on any article where subcategory = '' (it's a 
 NOT NULL column, so the empty string is there). If I add where subcategory != '' 
 to the article query it works fine (aside from not pulling in all of the 
 articles).
 org.apache.solr.handler.dataimport.DataImportHandlerException: 
 java.lang.NullPointerException
 at 
 org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:424)
 at 
 org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:383)
 at 
 org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:242)
 at 
 org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:180)
 at 
 org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:331)
 at 
 org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:389)
 at 
 org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:370)
 Caused by: java.lang.NullPointerException
 at 
 org.apache.solr.handler.dataimport.EvaluatorBag$1.evaluate(EvaluatorBag.java:75)
 at 
 org.apache.solr.handler.dataimport.EvaluatorBag$5.get(EvaluatorBag.java:216)
 at 
 org.apache.solr.handler.dataimport.EvaluatorBag$5.get(EvaluatorBag.java:204)
 at 
 org.apache.solr.handler.dataimport.VariableResolverImpl.resolve(VariableResolverImpl.java:107)
 at 
 org.apache.solr.handler.dataimport.TemplateString.fillTokens(TemplateString.java:81)
 at 
 org.apache.solr.handler.dataimport.TemplateString.replaceTokens(TemplateString.java:75)
 at 
 org.apache.solr.handler.dataimport.VariableResolverImpl.replaceTokens(VariableResolverImpl.java:87)
 at 
 org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:71)
 at 
 org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:237)
 at 
 org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:357)
 ... 6 more
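
 For reference, escapeSql-style helpers conventionally double embedded single quotes (this is what Commons Lang's StringEscapeUtils.escapeSql does). A minimal null-safe sketch of the escaping itself, not of the DIH variable-resolution path where the stack trace above actually originates:

```java
// Minimal sketch of SQL single-quote escaping; illustrative, not the DIH code.
public class SqlEscaper {

    /** Doubles embedded single quotes, null-safe. */
    public static String escapeSql(String value) {
        if (value == null) {
            return null; // a missing null guard here is the classic source of an NPE
        }
        return value.replace("'", "''");
    }

    public static void main(String[] args) {
        System.out.println(escapeSql("O'Reilly")); // prints O''Reilly
        System.out.println(escapeSql(""));         // empty string passes through unchanged
    }
}
```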

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2141) NullPointerException when using escapeSql function

2012-11-13 Thread Dominik Siebel (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496160#comment-13496160
 ] 

Dominik Siebel commented on SOLR-2141:
--

I just added a patch for commit b341f5b (github mirror of branch_4x).
Might this have caused the error?
The way I see it, the EvaluatorBag loses the current context when used in a 
nested entity.
The commit was executed in connection with SOLR-2542 as git log shows:

b341f5b - SOLR-2542: Fixed DIH Context variables which were broken for all 
scopes other then SCOPE_ENTITY (10 months ago) Chris M. Hostetter

 NullPointerException when using escapeSql function
 --

 Key: SOLR-2141
 URL: https://issues.apache.org/jira/browse/SOLR-2141
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler
Affects Versions: 1.4.1
 Environment: openjdk 1.6.0 b12
Reporter: Edward Rudd
Assignee: Koji Sekiguchi
 Attachments: dih-config.xml, dih-file.xml, SOLR-2141.b341f5b.patch, 
 SOLR-2141-sample.patch, SOLR-2141-test.patch


 I have two entities defined, nested in each other..
 <entity name="article" query="select category, subcategory from articles">
   <entity name="other" query="select other from othertable where 
     category='${dataimporter.functions.escapeSql(article.category)}' 
     AND subcategory='${dataimporter.functions.escapeSql(article.subcategory)}'">
   </entity>
 </entity>
 Now, when I run that it bombs on any article where subcategory = '' (it's a 
 NOT NULL column, so the empty string is there). If I add where subcategory != '' 
 to the article query it works fine (aside from not pulling in all of the 
 articles).
 org.apache.solr.handler.dataimport.DataImportHandlerException: 
 java.lang.NullPointerException
 at 
 org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:424)
 at 
 org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:383)
 at 
 org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:242)
 at 
 org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:180)
 at 
 org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:331)
 at 
 org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:389)
 at 
 org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:370)
 Caused by: java.lang.NullPointerException
 at 
 org.apache.solr.handler.dataimport.EvaluatorBag$1.evaluate(EvaluatorBag.java:75)
 at 
 org.apache.solr.handler.dataimport.EvaluatorBag$5.get(EvaluatorBag.java:216)
 at 
 org.apache.solr.handler.dataimport.EvaluatorBag$5.get(EvaluatorBag.java:204)
 at 
 org.apache.solr.handler.dataimport.VariableResolverImpl.resolve(VariableResolverImpl.java:107)
 at 
 org.apache.solr.handler.dataimport.TemplateString.fillTokens(TemplateString.java:81)
 at 
 org.apache.solr.handler.dataimport.TemplateString.replaceTokens(TemplateString.java:75)
 at 
 org.apache.solr.handler.dataimport.VariableResolverImpl.replaceTokens(VariableResolverImpl.java:87)
 at 
 org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:71)
 at 
 org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:237)
 at 
 org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:357)
 ... 6 more

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-2141) NullPointerException when using escapeSql function

2012-11-13 Thread Dominik Siebel (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496160#comment-13496160
 ] 

Dominik Siebel edited comment on SOLR-2141 at 11/13/12 12:56 PM:
-

I just added the patch SOLR-2141.b341f5b.patch for commit b341f5b (github 
mirror of branch_4x).
Might this commit have caused the error?
The way I see it, the EvaluatorBag loses the current context when used in a 
nested entity.
The commit was executed in connection with SOLR-2542 as git log shows:

b341f5b - SOLR-2542: Fixed DIH Context variables which were broken for all 
scopes other then SCOPE_ENTITY (10 months ago) Chris M. Hostetter

  was (Author: dsiebel):
I just added a path for commit b341f5b (github mirror of branch_4x).
Might this have caused the error?
The way I see it the EvaluatorBag looses the current context when used in a 
nested entity.
The commit was executed in connection with SOLR-2542 as git log shows:

b341f5b - SOLR-2542: Fixed DIH Context variables which were broken for all 
scopes other then SCOPE_ENTITY (10 months ago) Chris M. Hostetter
  
 NullPointerException when using escapeSql function
 --

 Key: SOLR-2141
 URL: https://issues.apache.org/jira/browse/SOLR-2141
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler
Affects Versions: 1.4.1
 Environment: openjdk 1.6.0 b12
Reporter: Edward Rudd
Assignee: Koji Sekiguchi
 Attachments: dih-config.xml, dih-file.xml, SOLR-2141.b341f5b.patch, 
 SOLR-2141-sample.patch, SOLR-2141-test.patch


 I have two entities defined, nested in each other..
 <entity name="article" query="select category, subcategory from articles">
   <entity name="other" query="select other from othertable where 
     category='${dataimporter.functions.escapeSql(article.category)}'
     AND subcategory='${dataimporter.functions.escapeSql(article.subcategory)}'">
   </entity>
 </entity>
 Now, when I run that, it bombs on any article where subcategory = '' (it's a 
 NOT NULL column, so empty strings are present). If I add where subcategory != '' to 
 the article query it works fine (aside from not pulling in all of the 
 articles).
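 The NPE surfaces in EvaluatorBag's escapeSql evaluation when the column value
 is empty/null. A minimal null-tolerant sketch of the escaping logic involved
 (class and method names here are illustrative assumptions, not the actual
 Solr DIH internals):

```java
// Hypothetical sketch of a null-tolerant escapeSql, illustrating the guard
// the reported NPE suggests is missing. Not the actual Solr implementation.
public class SqlEscapeUtil {

    /**
     * Escapes a value for embedding in a single-quoted SQL literal.
     * A null input is treated as the empty string instead of throwing.
     */
    public static String escapeSql(String value) {
        if (value == null) {
            return "";  // guard: the reported failure is an NPE on a null value
        }
        // Double backslashes first, then single quotes, so the quote escaping
        // is not itself re-escaped.
        return value.replace("\\", "\\\\").replace("'", "''");
    }
}
```

 With such a guard, the nested-entity query above would receive '' for empty
 subcategories instead of aborting the import.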
 org.apache.solr.handler.dataimport.DataImportHandlerException: 
 java.lang.NullPointerException
 at 
 org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:424)
 at 
 org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:383)
 at 
 org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:242)
 at 
 org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:180)
 at 
 org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:331)
 at 
 org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:389)
 at 
 org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:370)
 Caused by: java.lang.NullPointerException
 at 
 org.apache.solr.handler.dataimport.EvaluatorBag$1.evaluate(EvaluatorBag.java:75)
 at 
 org.apache.solr.handler.dataimport.EvaluatorBag$5.get(EvaluatorBag.java:216)
 at 
 org.apache.solr.handler.dataimport.EvaluatorBag$5.get(EvaluatorBag.java:204)
 at 
 org.apache.solr.handler.dataimport.VariableResolverImpl.resolve(VariableResolverImpl.java:107)
 at 
 org.apache.solr.handler.dataimport.TemplateString.fillTokens(TemplateString.java:81)
 at 
 org.apache.solr.handler.dataimport.TemplateString.replaceTokens(TemplateString.java:75)
 at 
 org.apache.solr.handler.dataimport.VariableResolverImpl.replaceTokens(VariableResolverImpl.java:87)
 at 
 org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:71)
 at 
 org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:237)
 at 
 org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:357)
 ... 6 more




[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.6.0_37) - Build # 2436 - Failure!

2012-11-13 Thread Policeman Jenkins Server
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-trunk-Linux/2436/
Java: 32bit/jdk1.6.0_37 -server -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 18997 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:62: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build.xml:524: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/common-build.xml:1962:
 java.net.ConnectException: Connection timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351)
at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:213)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
at java.net.Socket.connect(Socket.java:529)
at 
com.sun.net.ssl.internal.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:570)
at 
com.sun.net.ssl.internal.ssl.BaseSSLSocketImpl.connect(BaseSSLSocketImpl.java:141)
at sun.net.NetworkClient.doConnect(NetworkClient.java:163)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:388)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:523)
at sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:272)
at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:329)
at 
sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:172)
at 
sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:911)
at 
sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:158)
at 
sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:133)
at 
org.apache.tools.ant.taskdefs.Get$GetThread.openConnection(Get.java:660)
at org.apache.tools.ant.taskdefs.Get$GetThread.get(Get.java:579)
at org.apache.tools.ant.taskdefs.Get$GetThread.run(Get.java:569)

Total time: 17 minutes 51 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Description set: Java: 32bit/jdk1.6.0_37 -server -XX:+UseSerialGC
Email was triggered for: Failure
Sending email for trigger: Failure




[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.7.0_09) - Build # 1596 - Failure!

2012-11-13 Thread Policeman Jenkins Server
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-trunk-Windows/1596/
Java: 32bit/jdk1.7.0_09 -client -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 19690 lines...]
BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:62: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\build.xml:524: 
The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\common-build.xml:1962:
 java.net.ConnectException: Connection timed out: connect
at java.net.DualStackPlainSocketImpl.connect0(Native Method)
at 
java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:69)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:157)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
at java.net.Socket.connect(Socket.java:579)
at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:618)
at 
sun.security.ssl.BaseSSLSocketImpl.connect(BaseSSLSocketImpl.java:160)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:378)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:473)
at sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:270)
at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:327)
at 
sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:191)
at 
sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:931)
at 
sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:177)
at 
sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:153)
at 
org.apache.tools.ant.taskdefs.Get$GetThread.openConnection(Get.java:660)
at org.apache.tools.ant.taskdefs.Get$GetThread.get(Get.java:579)
at org.apache.tools.ant.taskdefs.Get$GetThread.run(Get.java:569)

Total time: 24 minutes 39 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Description set: Java: 32bit/jdk1.7.0_09 -client -XX:+UseConcMarkSweepGC
Email was triggered for: Failure
Sending email for trigger: Failure




[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0-ea-b58) - Build # 2427 - Failure!

2012-11-13 Thread Policeman Jenkins Server
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-4.x-Linux/2427/
Java: 32bit/jdk1.8.0-ea-b58 -client -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 17850 lines...]
  [javadoc] Generating Javadoc
  [javadoc] Javadoc execution
  [javadoc] warning: [options] bootstrap class path not set in conjunction with 
-source 1.7
  [javadoc] Loading source files for package org.apache.lucene...
  [javadoc] Loading source files for package org.apache.lucene.analysis...
  [javadoc] Loading source files for package 
org.apache.lucene.analysis.tokenattributes...
  [javadoc] Loading source files for package org.apache.lucene.codecs...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.lucene3x...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.lucene40...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.lucene40.values...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.lucene41...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.perfield...
  [javadoc] Loading source files for package org.apache.lucene.document...
  [javadoc] Loading source files for package org.apache.lucene.index...
  [javadoc] Loading source files for package org.apache.lucene.search...
  [javadoc] Loading source files for package 
org.apache.lucene.search.payloads...
  [javadoc] Loading source files for package 
org.apache.lucene.search.similarities...
  [javadoc] Loading source files for package org.apache.lucene.search.spans...
  [javadoc] Loading source files for package org.apache.lucene.store...
  [javadoc] Loading source files for package org.apache.lucene.util...
  [javadoc] Loading source files for package org.apache.lucene.util.automaton...
  [javadoc] Loading source files for package org.apache.lucene.util.fst...
  [javadoc] Loading source files for package org.apache.lucene.util.mutable...
  [javadoc] Loading source files for package org.apache.lucene.util.packed...
  [javadoc] Constructing Javadoc information...
  [javadoc] Standard Doclet version 1.8.0-ea
  [javadoc] Building tree for all the packages and classes...
  [javadoc] Building index for all the packages and classes...
  [javadoc] Building index for all classes...
  [javadoc] Generating 
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/docs/core/help-doc.html...
  [javadoc] 1 warning

[...truncated 44 lines...]
  [javadoc] Generating Javadoc
  [javadoc] Javadoc execution
  [javadoc] Loading source files for package org.apache.lucene.analysis.ar...
  [javadoc] warning: [options] bootstrap class path not set in conjunction with 
-source 1.7
  [javadoc] Loading source files for package org.apache.lucene.analysis.bg...
  [javadoc] Loading source files for package org.apache.lucene.analysis.br...
  [javadoc] Loading source files for package org.apache.lucene.analysis.ca...
  [javadoc] Loading source files for package 
org.apache.lucene.analysis.charfilter...
  [javadoc] Loading source files for package org.apache.lucene.analysis.cjk...
  [javadoc] Loading source files for package org.apache.lucene.analysis.cn...
  [javadoc] Loading source files for package 
org.apache.lucene.analysis.commongrams...
  [javadoc] Loading source files for package 
org.apache.lucene.analysis.compound...
  [javadoc] Loading source files for package 
org.apache.lucene.analysis.compound.hyphenation...
  [javadoc] Loading source files for package org.apache.lucene.analysis.core...
  [javadoc] Loading source files for package org.apache.lucene.analysis.cz...
  [javadoc] Loading source files for package org.apache.lucene.analysis.da...
  [javadoc] Loading source files for package org.apache.lucene.analysis.de...
  [javadoc] Loading source files for package org.apache.lucene.analysis.el...
  [javadoc] Loading source files for package org.apache.lucene.analysis.en...
  [javadoc] Loading source files for package org.apache.lucene.analysis.es...
  [javadoc] Loading source files for package org.apache.lucene.analysis.eu...
  [javadoc] Loading source files for package org.apache.lucene.analysis.fa...
  [javadoc] Loading source files for package org.apache.lucene.analysis.fi...
  [javadoc] Loading source files for package org.apache.lucene.analysis.fr...
  [javadoc] Loading source files for package org.apache.lucene.analysis.ga...
  [javadoc] Loading source files for package org.apache.lucene.analysis.gl...
  [javadoc] Loading source files for package org.apache.lucene.analysis.hi...
  [javadoc] Loading source files for package org.apache.lucene.analysis.hu...
  [javadoc] Loading source files for package 
org.apache.lucene.analysis.hunspell...
  [javadoc] Loading source files for package org.apache.lucene.analysis.hy...
  [javadoc] Loading source files for package org.apache.lucene.analysis.id...
  [javadoc] Loading source files for package org.apache.lucene.analysis.in...
  [javadoc] Loading source files for package org.apache.lucene.analysis.it...
  [javadoc] Loading source files 

[jira] [Commented] (SOLR-4007) Morfologik dictionaries not available in Solr field type

2012-11-13 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496183#comment-13496183
 ] 

Markus Jelsma commented on SOLR-4007:
-

David, the fix is indeed alright; it appears I had a stale jar hanging around.
Thanks

 Morfologik dictionaries not available in Solr field type
 

 Key: SOLR-4007
 URL: https://issues.apache.org/jira/browse/SOLR-4007
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Affects Versions: 4.1
Reporter: Lance Norskog
Assignee: Dawid Weiss
Priority: Minor
 Fix For: 4.1


 The Polish Morfologik type does not find its dictionaries when used in Solr. 
 To demonstrate:
 1) Add this to example/solr/collection1/conf/schema.xml:
 {noformat}
 <!-- Polish -->
 <fieldType name="text_pl" class="solr.TextField"
     positionIncrementGap="100">
   <analyzer>
     <tokenizer class="solr.StandardTokenizerFactory"/>
     <filter class="solr.MorfologikFilterFactory" dictionary="MORFOLOGIK"/>
   </analyzer>
 </fieldType>
 {noformat}
 2) Add this to example/solr/collection1/conf/solrconfig.xml:
 {noformat}
   <lib dir="../../../../lucene/build/analysis/morfologik/" regex=".*\.jar"/>
   <lib dir="../../../contrib/analysis-extras/lib" regex=".*\.jar"/>
   <lib dir="../../../dist/" regex="apache-solr-analysis-extras-\d.*\.jar"/>
 {noformat}
 3) Test 'text_pl' in the analysis page. You will get an exception.
 {noformat}
 Oct 28, 2012 8:27:19 PM org.apache.solr.core.SolrCore execute
 INFO: [collection1] webapp=/solr path=/analysis/field 
 params={analysis.showmatch=true&analysis.query=&wt=json&analysis.fieldvalue=blah+blah&analysis.fieldtype=text_pl}
  status=500 QTime=26 
 Oct 28, 2012 8:27:19 PM org.apache.solr.common.SolrException log
 SEVERE: null:java.lang.RuntimeException: Default dictionary resource for 
 language 'pl' not found.
   at morfologik.stemming.Dictionary.getForLanguage(Dictionary.java:163)
   at morfologik.stemming.PolishStemmer.<init>(PolishStemmer.java:64)
   at 
 org.apache.lucene.analysis.morfologik.MorfologikFilter.<init>(MorfologikFilter.java:70)
   at 
 org.apache.lucene.analysis.morfologik.MorfologikFilterFactory.create(MorfologikFilterFactory.java:63)
   at 
 org.apache.solr.handler.AnalysisRequestHandlerBase.analyzeValue(AnalysisRequestHandlerBase.java:125)
   at 
 org.apache.solr.handler.FieldAnalysisRequestHandler.analyzeValues(FieldAnalysisRequestHandler.java:220)
   at 
 org.apache.solr.handler.FieldAnalysisRequestHandler.handleAnalysisRequest(FieldAnalysisRequestHandler.java:181)
   at 
 org.apache.solr.handler.FieldAnalysisRequestHandler.doAnalysis(FieldAnalysisRequestHandler.java:100)
   at 
 [...]
 Caused by: java.io.IOException: Could not locate resource: 
 morfologik/dictionaries/pl.dict
   at morfologik.util.ResourceUtils.openInputStream(ResourceUtils.java:56)
   at morfologik.stemming.Dictionary.getForLanguage(Dictionary.java:156)
   ... 38 more
 {noformat}
 {{morfologik-polish-1.5.3.jar}} has {{morfologik/dictionaries/pl.dict}}.
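 The root-cause line ("Could not locate resource: morfologik/dictionaries/pl.dict")
 is a classloader resource lookup failing. A small probe like the following
 (a hypothetical helper, not part of Solr or Morfologik) can confirm whether
 the dictionary jar is actually visible on the classpath before wiring up the
 field type:

```java
import java.io.IOException;
import java.io.InputStream;

// Hypothetical classpath probe; not part of Solr or Morfologik.
public class ResourceProbe {

    /** Returns true if the named resource is visible to this class's classloader. */
    public static boolean onClasspath(String resource) {
        InputStream in = ResourceProbe.class.getClassLoader().getResourceAsStream(resource);
        if (in == null) {
            return false;  // the same lookup failure that the stack trace reports
        }
        try {
            in.close();
        } catch (IOException ignored) {
            // closing a probe stream; a failure here is irrelevant
        }
        return true;
    }
}
```

 If the probe returns false inside the webapp, the lib directives above are not
 putting the dictionary jar on the classloader that loads the filter.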




[jira] [Updated] (SOLR-3178) Versioning - optimistic locking

2012-11-13 Thread Per Steffensen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Per Steffensen updated SOLR-3178:
-

Attachment: SOLR-3173_3178_3382_3428_plus.patch

Updated patch SOLR-3173_3178_3382_3428_plus.patch to fit on top of Solr 4.0.0. 
Actually updated to fit on top of revision 1394844 of branch lucene_solr_4_0, 
but I believe this is the same as Solr 4.0.0.

 Versioning - optimistic locking
 ---

 Key: SOLR-3178
 URL: https://issues.apache.org/jira/browse/SOLR-3178
 Project: Solr
  Issue Type: New Feature
  Components: update
Affects Versions: 3.5
 Environment: All
Reporter: Per Steffensen
Assignee: Per Steffensen
  Labels: RDBMS, insert, locking, nosql, optimistic, uniqueKey, 
 update, versioning
 Fix For: 4.1

 Attachments: SOLR-3173_3178_3382_3428_plus.patch, 
 SOLR-3173_3178_3382_3428_plus.patch, SOLR_3173_3178_3382_plus.patch, 
 SOLR-3178.patch

   Original Estimate: 168h
  Remaining Estimate: 168h

 In order to increase the ability of Solr to be used as a NoSQL database (lots of 
 concurrent inserts, updates, deletes, and queries over the entire lifetime of 
 the index) instead of just a search index (first everything is indexed in one 
 thread, then only queries), I would like Solr to support versioning to be 
 used for optimistic locking.
 When my intent (see SOLR-3173) is to update an existing document, I will need 
 to provide a version-number equal to the version number I got when I fetched 
 the existing document for update, plus one. If this provided version-number 
 does not equal the newest version-number of that document at the time of the 
 update plus one, I will get a VersionConflict error. If it does, the document 
 will be updated with the new one, so that the newest version-number of that 
 document is now one higher than before the update. Correct but efficient 
 concurrency handling.
 When my intent (see SOLR-3173) is to insert a new document, the version 
 number provided will not be used - instead a version-number 0 will be used. 
 According to SOLR-3173 insert will only succeed if a document with the same 
 value on uniqueKey-field does not already exist.
 In general, when talking about different versions of the same document, we of 
 course need to be able to identify when a document is the same - that, 
 by definition, is when the values of the uniqueKey-fields are equal. 
 The functionality provided by this issue is only really meaningful when you 
 run with the updateLog activated.
 This issue might be solved more or less at the same time as SOLR-3173, and 
 only one single SVN patch might be given to cover both issues.
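 The compare-version-then-bump rule described above can be sketched in miniature
 as follows (illustrative only; class names and the exact version arithmetic
 are assumptions, not the patch's actual code):

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of optimistic locking keyed on a uniqueKey; not Solr code.
public class VersionedStore {
    private final Map<String, Long> versions = new HashMap<>();  // uniqueKey -> current version
    private final Map<String, String> docs = new HashMap<>();

    /** Insert succeeds only if no document with this uniqueKey exists yet. */
    public synchronized boolean insert(String key, String doc) {
        if (versions.containsKey(key)) {
            return false;  // already exists (per the SOLR-3173 insert semantics)
        }
        versions.put(key, 1L);  // first stored version (the proposal uses 0 as the *provided* version)
        docs.put(key, doc);
        return true;
    }

    /** Update succeeds only if the caller's fetched version matches; bumps on success. */
    public synchronized boolean update(String key, long fetchedVersion, String doc) {
        Long current = versions.get(key);
        if (current == null || current != fetchedVersion) {
            return false;  // VersionConflict: someone else updated in between
        }
        versions.put(key, current + 1);
        docs.put(key, doc);
        return true;
    }

    public synchronized long versionOf(String key) {
        Long v = versions.get(key);
        return v == null ? 0L : v;
    }
}
```

 A stale update (one still holding an old fetched version) is rejected instead
 of silently overwriting a newer document.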




[JENKINS] Lucene-Solr-4.x-Windows (64bit/jdk1.7.0_09) - Build # 1592 - Failure!

2012-11-13 Thread Policeman Jenkins Server
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-4.x-Windows/1592/
Java: 64bit/jdk1.7.0_09 -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 19516 lines...]
BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\build.xml:62: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\lucene\build.xml:523: 
The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\lucene\common-build.xml:1961:
 java.net.ConnectException: Connection timed out: connect
at java.net.DualStackPlainSocketImpl.connect0(Native Method)
at 
java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:69)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:157)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
at java.net.Socket.connect(Socket.java:579)
at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:618)
at 
sun.security.ssl.BaseSSLSocketImpl.connect(BaseSSLSocketImpl.java:160)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:378)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:473)
at sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:270)
at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:327)
at 
sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:191)
at 
sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:931)
at 
sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:177)
at 
sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:153)
at 
org.apache.tools.ant.taskdefs.Get$GetThread.openConnection(Get.java:660)
at org.apache.tools.ant.taskdefs.Get$GetThread.get(Get.java:579)
at org.apache.tools.ant.taskdefs.Get$GetThread.run(Get.java:569)

Total time: 25 minutes 17 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Description set: Java: 64bit/jdk1.7.0_09 -XX:+UseSerialGC
Email was triggered for: Failure
Sending email for trigger: Failure




[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.7.0_09) - Build # 2437 - Still Failing!

2012-11-13 Thread Policeman Jenkins Server
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-trunk-Linux/2437/
Java: 64bit/jdk1.7.0_09 -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 19687 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:62: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build.xml:524: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/common-build.xml:1962:
 java.net.ConnectException: Connection timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
at java.net.Socket.connect(Socket.java:579)
at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:618)
at 
sun.security.ssl.BaseSSLSocketImpl.connect(BaseSSLSocketImpl.java:160)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:378)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:473)
at sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:270)
at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:327)
at 
sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:191)
at 
sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:931)
at 
sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:177)
at 
sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:153)
at 
org.apache.tools.ant.taskdefs.Get$GetThread.openConnection(Get.java:660)
at org.apache.tools.ant.taskdefs.Get$GetThread.get(Get.java:579)
at org.apache.tools.ant.taskdefs.Get$GetThread.run(Get.java:569)

Total time: 17 minutes 50 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Description set: Java: 64bit/jdk1.7.0_09 -XX:+UseConcMarkSweepGC
Email was triggered for: Failure
Sending email for trigger: Failure




Re: [JENKINS] Lucene-Solr-4.x-Windows (64bit/jdk1.7.0_09) - Build # 1590 - Failure!

2012-11-13 Thread Michael McCandless
Whoops, I forgot to merge this fix back from trunk yesterday ... I just
did it now ...

Mike McCandless

http://blog.mikemccandless.com


On Tue, Nov 13, 2012 at 6:28 AM, Policeman Jenkins Server
jenk...@sd-datasolutions.de wrote:
 Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-4.x-Windows/1590/
 Java: 64bit/jdk1.7.0_09 -XX:+UseSerialGC

 1 tests failed.
 REGRESSION:  
 org.apache.lucene.index.TestConcurrentMergeScheduler.testMaxMergeCount

 Error Message:
 count should be <= maxMergeCount (= 3)

 Stack Trace:
 java.lang.IllegalArgumentException: count should be <= maxMergeCount (= 3)
 at 
 __randomizedtesting.SeedInfo.seed([F2AF183E6E89DD89:25C8898DD248420F]:0)
 at 
 org.apache.lucene.index.ConcurrentMergeScheduler.setMaxThreadCount(ConcurrentMergeScheduler.java:90)
 at 
 org.apache.lucene.index.TestConcurrentMergeScheduler.testMaxMergeCount(TestConcurrentMergeScheduler.java:303)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
 at 
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 at 
 org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
 at 
 org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
 at 
 org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
 at java.lang.Thread.run(Thread.java:722)




 Build Log:
 [...truncated 834 lines...]
 [junit4:junit4] Suite: org.apache.lucene.index.TestConcurrentMergeScheduler
 

Re: [JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_09) - Build # 1582 - Failure!

2012-11-13 Thread Michael McCandless
Yeah, sorry, you have to have >= 2 (maybe 4) cores in your machine to
reproduce this!  (Because ConcurrentMergeScheduler sets its default
maxMergeCount/Threads based on this.)
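For context, the invariant the failing test tripped over can be sketched as a
pair of coupled setters (a rough, illustrative sketch of the kind of check
involved; not the actual Lucene ConcurrentMergeScheduler source):

```java
// Illustrative sketch: merge thread count must never exceed the allowed
// number of simultaneous merges. Not the real Lucene implementation.
public class MergeLimits {
    private int maxMergeCount = 2;   // the real scheduler derives defaults from core count
    private int maxThreadCount = 1;

    public void setMaxMergeCount(int count) {
        if (count < maxThreadCount) {
            throw new IllegalArgumentException(
                "count should be >= maxThreadCount (= " + maxThreadCount + ")");
        }
        maxMergeCount = count;
    }

    public void setMaxThreadCount(int count) {
        // the check that fired in the test: thread count must be <= maxMergeCount
        if (count > maxMergeCount) {
            throw new IllegalArgumentException(
                "count should be <= maxMergeCount (= " + maxMergeCount + ")");
        }
        maxThreadCount = count;
    }

    public int getMaxThreadCount() { return maxThreadCount; }
}
```

On a single-core box the derived defaults are small enough that the test's
setter calls never violate the invariant, which is why the failure needs
multiple cores to reproduce.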

I backported the fix ...

Mike McCandless

http://blog.mikemccandless.com

On Mon, Nov 12, 2012 at 11:23 PM, Robert Muir rcm...@gmail.com wrote:
 I can't reproduce this, but I think the fix is to just merge mike's
 test fix (r1408251) ?

 On Mon, Nov 12, 2012 at 11:17 PM, Policeman Jenkins Server
 jenk...@sd-datasolutions.de wrote:
 Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-4.x-Windows/1582/
 Java: 32bit/jdk1.7.0_09 -client -XX:+UseParallelGC

 1 tests failed.
 REGRESSION:  
 org.apache.lucene.index.TestConcurrentMergeScheduler.testMaxMergeCount

 Error Message:
 count should be <= maxMergeCount (= 3)

 Stack Trace:
 java.lang.IllegalArgumentException: count should be <= maxMergeCount (= 3)
 at 
 __randomizedtesting.SeedInfo.seed([F94951BDB9CDA2D0:2E2EC00E050C3D56]:0)
 at 
 org.apache.lucene.index.ConcurrentMergeScheduler.setMaxThreadCount(ConcurrentMergeScheduler.java:90)
 at 
 org.apache.lucene.index.TestConcurrentMergeScheduler.testMaxMergeCount(TestConcurrentMergeScheduler.java:303)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
 at 
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 at 
 org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
 at 
 org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
 at 
 org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 

[jira] [Assigned] (SOLR-4070) OverSeerCollectionProcessor.createCollection() parameters issue.

2012-11-13 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reassigned SOLR-4070:
-

Assignee: Mark Miller

 OverSeerCollectionProcessor.createCollection()  parameters issue. 
 --

 Key: SOLR-4070
 URL: https://issues.apache.org/jira/browse/SOLR-4070
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0-BETA, 4.0
Reporter: Po Rui
Assignee: Mark Miller
 Fix For: 4.1, 5.0

 Attachments: SOLR-4070.patch


 parameters aren't validated. The same collectionName may be created more 
 than once. Check for an existing collectionName and for nonexistent config files 
 before creating a collection

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4070) OverSeerCollectionProcessor.createCollection() parameters issue.

2012-11-13 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-4070:
--

 Priority: Minor  (was: Major)
Fix Version/s: (was: 4.0)
   (was: 4.0-BETA)
   5.0
   4.1

 OverSeerCollectionProcessor.createCollection()  parameters issue. 
 --

 Key: SOLR-4070
 URL: https://issues.apache.org/jira/browse/SOLR-4070
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0-BETA, 4.0
Reporter: Po Rui
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.1, 5.0

 Attachments: SOLR-4070.patch


 parameters aren't validated. The same collectionName may be created more 
 than once. Check for an existing collectionName and for nonexistent config files 
 before creating a collection

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4070) OverSeerCollectionProcessor.createCollection() parameters issue.

2012-11-13 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496227#comment-13496227
 ] 

Mark Miller commented on SOLR-4070:
---

Thanks Po! I can write a test for this.

 OverSeerCollectionProcessor.createCollection()  parameters issue. 
 --

 Key: SOLR-4070
 URL: https://issues.apache.org/jira/browse/SOLR-4070
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0-BETA, 4.0
Reporter: Po Rui
 Fix For: 4.1, 5.0

 Attachments: SOLR-4070.patch


 parameters aren't validated. The same collectionName may be created more 
 than once. Check for an existing collectionName and for nonexistent config files 
 before creating a collection

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-4069) ShardLeaderElectionContext.rejoinLeaderElection() doesn't clear the leader in clusterstate

2012-11-13 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reassigned SOLR-4069:
-

Assignee: Mark Miller

 ShardLeaderElectionContext.rejoinLeaderElection() doesn't clear the leader in 
 clusterstate
 --

 Key: SOLR-4069
 URL: https://issues.apache.org/jira/browse/SOLR-4069
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0-ALPHA, 4.0-BETA, 4.0
Reporter: Po Rui
Assignee: Mark Miller
 Fix For: 4.1, 5.0

 Attachments: SOLR-4069.patch


 ShardLeaderElectionContext.rejoinLeaderElection() doesn't clear the leader in 
 clusterstate

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4069) ShardLeaderElectionContext.rejoinLeaderElection() doesn't clear the leader in clusterstate

2012-11-13 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-4069:
--

Fix Version/s: (was: 4.0)
   (was: 4.0-BETA)
   (was: 4.0-ALPHA)
   5.0
   4.1

 ShardLeaderElectionContext.rejoinLeaderElection() doesn't clear the leader in 
 clusterstate
 --

 Key: SOLR-4069
 URL: https://issues.apache.org/jira/browse/SOLR-4069
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0-ALPHA, 4.0-BETA, 4.0
Reporter: Po Rui
Assignee: Mark Miller
 Fix For: 4.1, 5.0

 Attachments: SOLR-4069.patch


 ShardLeaderElectionContext.rejoinLeaderElection() doesn't clear the leader in 
 clusterstate

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-4556) FuzzyTermsEnum creates tons of objects

2012-11-13 Thread Simon Willnauer (JIRA)
Simon Willnauer created LUCENE-4556:
---

 Summary: FuzzyTermsEnum creates tons of objects
 Key: LUCENE-4556
 URL: https://issues.apache.org/jira/browse/LUCENE-4556
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search, modules/spellchecker
Affects Versions: 4.0
Reporter: Simon Willnauer
Assignee: Simon Willnauer
Priority: Critical
 Fix For: 4.1, 5.0


I ran into this problem in production using the DirectSpellchecker. The number 
of objects created by the spellchecker shoots through the roof very 
quickly. We ran about 130 queries and ended up with > 2M transitions / states. 
We spent 50% of the time in GC just because of transitions. Other parts of the 
system behave just fine here.

I talked quickly to Robert and gave a POC a shot, providing a 
LevenshteinAutomaton#toRunAutomaton(prefix, n) method to optimize this case and 
build an array-based structure converted into UTF-8 directly instead of going 
through the object-based APIs. This involved quite a few changes, but they 
are all package private at this point. I have a patch that still has a fair set 
of nocommits, but it shows that it's possible and IMO worth the trouble to make 
this really usable in production. All tests pass with the patch - it's a 
start

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-4071) CollectionsHandler.handleCreateAction() doesn't validate parameter count and type

2012-11-13 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reassigned SOLR-4071:
-

Assignee: Mark Miller

 CollectionsHandler.handleCreateAction() doesn't validate parameter count and 
 type
 -

 Key: SOLR-4071
 URL: https://issues.apache.org/jira/browse/SOLR-4071
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0-BETA, 4.0
Reporter: Po Rui
Assignee: Mark Miller
Priority: Critical
 Fix For: 4.1, 5.0

 Attachments: SOLR-4071.patch


 CollectionsHandler.handleCreateAction() doesn't validate parameter count and 
 type. numShards's type isn't checked, and the parameter count may be less than 
 required 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4071) CollectionsHandler.handleCreateAction() doesn't validate parameter count and type

2012-11-13 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-4071:
--

Fix Version/s: (was: 4.0)
   (was: 4.0-BETA)
   5.0
   4.1

 CollectionsHandler.handleCreateAction() doesn't validate parameter count and 
 type
 -

 Key: SOLR-4071
 URL: https://issues.apache.org/jira/browse/SOLR-4071
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0-BETA, 4.0
Reporter: Po Rui
Assignee: Mark Miller
Priority: Critical
 Fix For: 4.1, 5.0

 Attachments: SOLR-4071.patch


 CollectionsHandler.handleCreateAction() doesn't validate parameter count and 
 type. numShards's type isn't checked, and the parameter count may be less than 
 required 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-3989) RuntimeException thrown by SolrZkClient should wrap cause, have a message, or be SolrException

2012-11-13 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-3989:
--

Fix Version/s: 5.0
 Assignee: Yonik Seeley

 RuntimeException thrown by SolrZkClient should wrap cause, have a message, or 
 be SolrException
 --

 Key: SOLR-3989
 URL: https://issues.apache.org/jira/browse/SOLR-3989
 Project: Solr
  Issue Type: Improvement
  Components: clients - java
Affects Versions: 4.0
Reporter: Colin Bartolome
Assignee: Yonik Seeley
 Fix For: 4.1, 5.0


 In a few spots, but notably in the constructor for SolrZkClient, a try-catch 
 block will catch Throwable and throw a new RuntimeException with no cause or 
 message. Either the RuntimeException should wrap the Throwable that was 
 caught, some sort of message should be added, or the type of the exception 
 should be changed to SolrException so calling code can catch these exceptions 
 without casting too broad of a net.
 Reproduce this by creating a CloudSolrServer that points to a URL that is 
 valid, but has no server running:
 CloudSolrServer server = new CloudSolrServer("localhost:9983");
 server.connect();
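
The fix the issue asks for can be sketched as follows (a minimal self-contained illustration of cause-chaining; the method names and messages are hypothetical, not Solr's actual code):

```java
public class WrapCauseDemo {
    // The pattern SOLR-3989 asks for: wrap the caught Throwable so callers
    // can still see the root cause, instead of `throw new RuntimeException();`.
    static void connectBroken() {
        try {
            // Stand-in for the failure inside the SolrZkClient constructor.
            throw new IllegalStateException("ZooKeeper not reachable");
        } catch (Throwable t) {
            throw new RuntimeException("Could not connect to ZooKeeper", t);
        }
    }

    static String rootCauseMessage() {
        try {
            connectBroken();
            return null;
        } catch (RuntimeException e) {
            return e.getCause().getMessage(); // original failure preserved
        }
    }

    public static void main(String[] args) {
        System.out.println(rootCauseMessage()); // ZooKeeper not reachable
    }
}
```

With the bare `new RuntimeException()` the issue describes, `getCause()` returns null and the stack trace says nothing about why the connection failed.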

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4556) FuzzyTermsEnum creates tons of objects

2012-11-13 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-4556:


Attachment: LUCENE-4556.patch

here is a patch ...scary™

 FuzzyTermsEnum creates tons of objects
 --

 Key: LUCENE-4556
 URL: https://issues.apache.org/jira/browse/LUCENE-4556
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search, modules/spellchecker
Affects Versions: 4.0
Reporter: Simon Willnauer
Assignee: Simon Willnauer
Priority: Critical
 Fix For: 4.1, 5.0

 Attachments: LUCENE-4556.patch


 I ran into this problem in production using the DirectSpellchecker. The 
 number of objects created by the spellchecker shoots through the roof very 
 quickly. We ran about 130 queries and ended up with > 2M transitions / 
 states. We spent 50% of the time in GC just because of transitions. Other 
 parts of the system behave just fine here.
 I talked quickly to Robert and gave a POC a shot, providing a 
 LevenshteinAutomaton#toRunAutomaton(prefix, n) method to optimize this case 
 and build an array-based structure converted into UTF-8 directly instead of 
 going through the object-based APIs. This involved quite a few changes, but 
 they are all package private at this point. I have a patch that still has a 
 fair set of nocommits, but it shows that it's possible and IMO worth the 
 trouble to make this really usable in production. All tests pass with the 
 patch - it's a start

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[Heads-up] Index file format change on trunk

2012-11-13 Thread Adrien Grand
Hi,

I just committed LUCENE-4509 [1] which changes the default
StoredFieldsFormat on trunk (backport to come on branch 4.x in a few
minutes). You should reindex.

 [1] https://issues.apache.org/jira/browse/LUCENE-4509

-- 
Adrien


[jira] [Commented] (SOLR-1306) Support pluggable persistence/loading of solr.xml details

2012-11-13 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496241#comment-13496241
 ] 

Yonik Seeley commented on SOLR-1306:


At first blush, this seems to go in the wrong direction.
Rather than keep meta-data about a core/directory further away from the actual 
index for that directory, it seems like we should move it closer (i.e. the 
meta-data for collection1 should be kept under the collection1 directory or 
even the collection1/data directory).

Wouldn't it be nice to be able to back up a collection/shard by simply copying 
a single directory?
This applies to cloud too - it seems like info about the shard / collection the 
index belongs to should ride around next to the index.
One should be able to bring down two solr servers, move a directory from one 
server to another, then start back up and have everything just work.


 Support pluggable persistence/loading of solr.xml details
 -

 Key: SOLR-1306
 URL: https://issues.apache.org/jira/browse/SOLR-1306
 Project: Solr
  Issue Type: New Feature
  Components: multicore
Reporter: Noble Paul
Assignee: Erick Erickson
 Fix For: 4.1

 Attachments: SOLR-1306.patch, SOLR-1306.patch, SOLR-1306.patch, 
 SOLR-1306.patch


 Persisting and loading details from one xml file is fine if the no: of cores is 
 small and fixed. If there are 10's of thousands of 
 cores in a single box, adding a new core (with persistent=true) becomes very 
 expensive because every core creation has to write this huge xml. 
 Moreover , there is a good chance that the file gets corrupted and all the 
 cores become unusable . In that case I would prefer it to be stored in a 
 centralized DB which is backed up/replicated and all the information is 
 available in a centralized location. 
 We may need to refactor CoreContainer to have a pluggable implementation 
 which can load/persist the details . The default implementation should 
 write/read from/to solr.xml . And the class should be pluggable as follows in 
 solr.xml
 {code:xml}
 <solr>
   <dataProvider class="com.foo.FooDataProvider" attr1="val1" attr2="val2"/>
 </solr>
 {code}
 There will be a new interface (or abstract class ) called SolrDataProvider 
 which this class must implement
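
A minimal sketch of what such a provider could look like (the interface name comes from the issue, but the methods and the in-memory stand-in for com.foo.FooDataProvider are assumptions):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical shape of the pluggable provider the issue proposes; the
// default implementation would read/write solr.xml, others could use a DB.
interface SolrDataProvider {
    void persist(String coreName, String instanceDir);
    String load(String coreName);
}

// Toy in-memory provider, illustrating that per-core persistence no longer
// has to rewrite one huge xml file on every core creation.
class InMemoryDataProvider implements SolrDataProvider {
    private final Map<String, String> cores = new HashMap<>();

    public void persist(String coreName, String instanceDir) {
        cores.put(coreName, instanceDir);
    }

    public String load(String coreName) {
        return cores.get(coreName);
    }
}
```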

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [Heads-up] Index file format change on trunk

2012-11-13 Thread Jack Krupansky
Hmmm... does this mean that 4.1 would REQUIRE reindexing?!?! If so, that would 
strongly argue for there to be a 4.0.1 release that has both bug fixes and the 
incremental feature additions of the 4x branch but DOESN’T require a reindex.

Who made this decision that it was A-OK for 4.1 to require a reindex? I don’t 
recall seeing any discussion here about it.

-- Jack Krupansky

From: Adrien Grand 
Sent: Tuesday, November 13, 2012 7:00 AM
To: dev@lucene.apache.org 
Subject: [Heads-up] Index file format change on trunk

Hi,

I just committed LUCENE-4509 [1] which changes the default StoredFieldsFormat 
on trunk (backport to come on branch 4.x in a few minutes). You should reindex.


[1] https://issues.apache.org/jira/browse/LUCENE-4509

-- 
Adrien


RE: [JENKINS-MAVEN] Lucene-Solr-Maven-4.x #153: POMs out of sync

2012-11-13 Thread Dyer, James
Thank you!

James Dyer
E-Commerce Systems
Ingram Content Group
(615) 213-4311


-Original Message-
From: Steve Rowe [mailto:sar...@gmail.com] 
Sent: Tuesday, November 13, 2012 6:07 AM
To: dev@lucene.apache.org
Subject: Re: [JENKINS-MAVEN] Lucene-Solr-Maven-4.x #153: POMs out of sync

I committed a fix to the Maven configuration: Derby is now a DIH test dependency

On Nov 13, 2012, at 5:31 AM, Apache Jenkins Server jenk...@builds.apache.org 
wrote:
 Build: https://builds.apache.org/job/Lucene-Solr-Maven-4.x/153/
 
 9 tests failed.
 FAILED:  
 org.apache.solr.handler.dataimport.TestSqlEntityProcessorDelta.org.apache.solr.handler.dataimport.TestSqlEntityProcessorDelta
 
 Error Message:
 org.apache.derby.jdbc.EmbeddedDriver
 
 Stack Trace:
 java.lang.ClassNotFoundException: org.apache.derby.jdbc.EmbeddedDriver


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org




-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [Heads-up] Index file format change on trunk

2012-11-13 Thread Adrien Grand
Hi Jack,

On Tue, Nov 13, 2012 at 4:06 PM, Jack Krupansky j...@basetechnology.comwrote:

   Hmmm... does this mean that 4.1 would REQUIRE reindexing?


No it doesn't. This is a notice for people who would already use the
(unreleased) Lucene41Codec for some test indexes to tell them that they
should reindex if they want their code to keep working with the latest
trunk revision.

-- 
Adrien


[jira] [Updated] (SOLR-4061) CREATE action in Collections API should allow to upload a new configuration

2012-11-13 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-4061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-4061:


Attachment: SOLR-4061.patch

Attached proposed solution. The configuration is uploaded from the node serving 
the request. Even if there is a problem with the collection creation, the 
configuration files may have been uploaded anyway.

 CREATE action in Collections API should allow to upload a new configuration
 ---

 Key: SOLR-4061
 URL: https://issues.apache.org/jira/browse/SOLR-4061
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Tomás Fernández Löbbe
Priority: Minor
 Attachments: SOLR-4061.patch


 When creating new collections with the Collection API, the only option is to 
 point to an existing configuration in ZK. It would be nice to be able to 
 upload a new configuration in the same command. 
 For more details see 
 http://lucene.472066.n3.nabble.com/Error-with-SolrCloud-td4019351.html

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4061) CREATE action in Collections API should allow to upload a new configuration

2012-11-13 Thread Eric Pugh (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496265#comment-13496265
 ] 

Eric Pugh commented on SOLR-4061:
-

Tomas,

Have you given any thought to being able to select a directory of configuration 
on your local box and upload that directly into ZK as part of a create?  
SOLR-4052 was my first cut at it, but unfortunately you can only upload files, 
you can't seem to select a directory and upload everything in it into ZK.

 CREATE action in Collections API should allow to upload a new configuration
 ---

 Key: SOLR-4061
 URL: https://issues.apache.org/jira/browse/SOLR-4061
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Tomás Fernández Löbbe
Priority: Minor
 Attachments: SOLR-4061.patch


 When creating new collections with the Collection API, the only option is to 
 point to an existing configuration in ZK. It would be nice to be able to 
 upload a new configuration in the same command. 
 For more details see 
 http://lucene.472066.n3.nabble.com/Error-with-SolrCloud-td4019351.html

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-4509) Make CompressingStoredFieldsFormat the new default StoredFieldsFormat impl

2012-11-13 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-4509.
--

   Resolution: Fixed
Fix Version/s: 4.1

Thanks Robert for your comments, I replaced "documents" with "individual 
documents" and added a link to the protobuf docs.

Committed:
 - trunk  r1408762
 - branch 4.x r1408796

 Make CompressingStoredFieldsFormat the new default StoredFieldsFormat impl
 --

 Key: LUCENE-4509
 URL: https://issues.apache.org/jira/browse/LUCENE-4509
 Project: Lucene - Core
  Issue Type: Wish
  Components: core/store
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 4.1

 Attachments: LUCENE-4509.patch, LUCENE-4509.patch


 What would you think of making CompressingStoredFieldsFormat the new default 
 StoredFieldsFormat?
 Stored fields compression has many benefits :
  - it makes the I/O cache work for us,
  - file-based index replication/backup becomes cheaper.
 Things to know:
  - even with incompressible data, there is less than 0.5% overhead with LZ4,
  - LZ4 compression requires ~ 16kB of memory and LZ4 HC compression requires 
 ~ 256kB,
  - LZ4 uncompression has almost no memory overhead,
  - on my low-end laptop, the LZ4 impl in Lucene uncompresses at ~ 300mB/s.
 I think we could use the same default parameters as in CompressingCodec :
  - LZ4 compression,
  - in-memory stored fields index that is very memory-efficient (less than 12 
 bytes per block of compressed docs) and uses binary search to locate 
 documents in the fields data file,
  - 16 kB blocks (small enough so that there is no major slow down when the 
 whole index would fit into the I/O cache anyway, and large enough to provide 
 interesting compression ratios ; for example Robert got a 0.35 compression 
 ratio with the geonames.org database).
 Any concerns?
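
 The block index described above (first docID of each compressed block, located 
 via binary search) can be sketched like this -- a simplification for 
 illustration, not the actual Lucene 4.1 file format:

```java
import java.util.Arrays;

public class BlockIndexDemo {
    // Returns the index of the compressed block holding docID `doc`, i.e.
    // the greatest i with startDocs[i] <= doc. startDocs holds the first
    // docID of each block, in increasing order.
    public static int blockFor(long[] startDocs, int doc) {
        int i = Arrays.binarySearch(startDocs, doc);
        // On a miss, binarySearch returns -(insertionPoint) - 1; the block
        // we want starts just before the insertion point.
        return i >= 0 ? i : -i - 2;
    }

    public static void main(String[] args) {
        long[] startDocs = {0, 16, 42}; // three blocks of compressed docs
        System.out.println(blockFor(startDocs, 5));   // block 0
        System.out.println(blockFor(startDocs, 16));  // block 1
        System.out.println(blockFor(startDocs, 100)); // block 2
    }
}
```

 Once the block is found, only that ~16 kB block needs to be uncompressed to 
 read the stored fields of one document.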

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4061) CREATE action in Collections API should allow to upload a new configuration

2012-11-13 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496270#comment-13496270
 ] 

Mark Miller commented on SOLR-4061:
---

I think most tools that let you upload directories end up using flash or some 
other hack, unfortunately - at least when it's a GUI selector.

 CREATE action in Collections API should allow to upload a new configuration
 ---

 Key: SOLR-4061
 URL: https://issues.apache.org/jira/browse/SOLR-4061
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Tomás Fernández Löbbe
Priority: Minor
 Attachments: SOLR-4061.patch


 When creating new collections with the Collection API, the only option is to 
 point to an existing configuration in ZK. It would be nice to be able to 
 upload a new configuration in the same command. 
 For more details see 
 http://lucene.472066.n3.nabble.com/Error-with-SolrCloud-td4019351.html

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-3927) Ability to use CompressingStoredFieldsFormat

2012-11-13 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved SOLR-3927.


Resolution: Not A Problem

CompressingStoredFieldsFormat is now the default stored fields format, see 
LUCENE-4509.

 Ability to use CompressingStoredFieldsFormat
 

 Key: SOLR-3927
 URL: https://issues.apache.org/jira/browse/SOLR-3927
 Project: Solr
  Issue Type: Task
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Trivial

 It would be nice to let Solr users use {{CompressingStoredFieldsFormat}} to 
 compress their stored fields (with warnings given that this feature is 
 experimental and that we don't guarantee backwards compat for it).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [Heads-up] Index file format change on trunk

2012-11-13 Thread Jack Krupansky
Thank you for that clarification!

-- Jack Krupansky

From: Adrien Grand 
Sent: Tuesday, November 13, 2012 7:10 AM
To: dev@lucene.apache.org 
Subject: Re: [Heads-up] Index file format change on trunk

Hi Jack, 

On Tue, Nov 13, 2012 at 4:06 PM, Jack Krupansky j...@basetechnology.com wrote:

  Hmmm... does this mean that 4.1 would REQUIRE reindexing?

No it doesn't. This is a notice for people who would already use the 
(unreleased) Lucene41Codec for some test indexes to tell them that they should 
reindex if they want their code to keep working with the latest trunk revision.

-- 
Adrien


[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.7.0_09) - Build # 1599 - Failure!

2012-11-13 Thread Policeman Jenkins Server
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-trunk-Windows/1599/
Java: 64bit/jdk1.7.0_09 -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 20071 lines...]
-documentation-lint:
 [echo] checking for broken html...
[jtidy] Checking for broken html (such as invalid tags)...
   [delete] Deleting directory 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\build\jtidy_tmp
 [echo] Checking for broken links...
 [exec] 
 [exec] Crawl/parse...
 [exec] 
 [exec] Verify...
 [echo] Checking for missing docs...
 [exec] 
 [exec] 
build/docs/core/org/apache/lucene/codecs\lucene41/Lucene41StoredFieldsFormat.html
 [exec]   missing Constructors: Lucene41StoredFieldsFormat()
 [exec] 
 [exec] Missing javadocs were found!

BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:62: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\build.xml:281: 
The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\common-build.xml:1944:
 exec returned: 1

Total time: 28 minutes 58 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Description set: Java: 64bit/jdk1.7.0_09 -XX:+UseG1GC
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Commented] (SOLR-4061) CREATE action in Collections API should allow to upload a new configuration

2012-11-13 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13496289#comment-13496289
 ] 

Mark Miller commented on SOLR-4061:
---

Patch looks good.

 CREATE action in Collections API should allow to upload a new configuration
 ---

 Key: SOLR-4061
 URL: https://issues.apache.org/jira/browse/SOLR-4061
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Tomás Fernández Löbbe
Priority: Minor
 Attachments: SOLR-4061.patch


 When creating new collections with the Collection API, the only option is to 
 point to an existing configuration in ZK. It would be nice to be able to 
 upload a new configuration in the same command. 
 For more details see 
 http://lucene.472066.n3.nabble.com/Error-with-SolrCloud-td4019351.html




[jira] [Updated] (SOLR-3854) SolrCloud does not work with https

2012-11-13 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-3854:
--

Fix Version/s: 5.0

this one ready Sami?

 SolrCloud does not work with https
 --

 Key: SOLR-3854
 URL: https://issues.apache.org/jira/browse/SOLR-3854
 Project: Solr
  Issue Type: Bug
Reporter: Sami Siren
Assignee: Sami Siren
 Fix For: 4.1, 5.0

 Attachments: SOLR-3854.patch


 There are a few places in current codebase that assume http is used. This 
 prevents using https when running solr in cloud mode.




[jira] [Updated] (SOLR-3507) Refactor parts of solr doing inter node communication to use shardhandlerfactory/shardhandler

2012-11-13 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-3507:
--

Fix Version/s: 5.0
   4.1

 Refactor parts of solr doing inter node communication to use 
 shardhandlerfactory/shardhandler
 -

 Key: SOLR-3507
 URL: https://issues.apache.org/jira/browse/SOLR-3507
 Project: Solr
  Issue Type: Improvement
Reporter: Sami Siren
Assignee: Sami Siren
Priority: Minor
 Fix For: 4.1, 5.0

 Attachments: SOLR-3507.patch, SOLR-3507.patch, SOLR-3507.patch


 Sequel to SOLR-3480: the aim is to change most (all?) parts of Solr that need 
 to talk to different nodes to use ShardHandlerFactory from CoreContainer.




Re: [JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.7.0_09) - Build # 1599 - Failure!

2012-11-13 Thread Adrien Grand
I committed a fix.

-- 
Adrien


[jira] [Commented] (SOLR-3854) SolrCloud does not work with https

2012-11-13 Thread Sami Siren (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13496294#comment-13496294
 ] 

Sami Siren commented on SOLR-3854:
--

I was planning to add some tests too, but got distracted with something else. 
I'll try to get to this again in the near future.

 SolrCloud does not work with https
 --

 Key: SOLR-3854
 URL: https://issues.apache.org/jira/browse/SOLR-3854
 Project: Solr
  Issue Type: Bug
Reporter: Sami Siren
Assignee: Sami Siren
 Fix For: 4.1, 5.0

 Attachments: SOLR-3854.patch


 There are a few places in current codebase that assume http is used. This 
 prevents using https when running solr in cloud mode.




[jira] [Commented] (SOLR-4052) Upload files to ZooKeeper from Solr Admin interface

2012-11-13 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-4052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13496295#comment-13496295
 ] 

Tomás Fernández Löbbe commented on SOLR-4052:
-

Maybe taking a zip/tar file? It may also be useful to have a way to download a 
zip/tar file of a collection from the UI.

 Upload files to ZooKeeper from Solr Admin interface
 ---

 Key: SOLR-4052
 URL: https://issues.apache.org/jira/browse/SOLR-4052
 Project: Solr
  Issue Type: Improvement
Reporter: Eric Pugh
 Attachments: zookeeper_edit.patch, ZookeeperInfoServletTest.java


 It would be nice if you could add files to ZooKeeper through the solr admin 
 tool instead of having to use the zkCli.  Steffan and I talked about this at 
 ApacheCon Euro, and he suggested that if I put the java code in place, he'll 
 put in the pretty GUI aspects!  This patch is based around using a tool like 
 http://blueimp.github.com/jQuery-File-Upload/ to upload to a java servlet.  I 
 hung this code off the ZookeeperInfoServlet doPost method mostly b/c I didn't 
 have a better sense of where it should go.   A *very* annoying thing is that 
 it seems like from the browser side you can't select a directory of files and 
 upload it, which would make loading a new solr core configuration split 
 across many directories VERY annoying.   Also, this doesn't really feel like a 
 solid solution to just pulling up a file in the ZK tree browser webpage, 
 editing it (maybe via a big text box) and then posting the contents back.




Solr FieldType support for multiple values

2012-11-13 Thread David Smiley (@MITRE.org)
I'm working with Lucene 4's DocValues in a Solr app, but using it to store
multiple values encoded into a byte array.  Unfortunately, Solr's
DocumentBuilder code passes each value individually to the FieldType calling
createField() for each value, and so the FieldType never sees all the values
for a given document at once.  My short-term solution that avoids hacking
Solr is to have a special UpdateRequestProcessor that will combine the
values into a special object that the FieldType will get.  It's inelegant.

Instead, perhaps have a method on FieldType like isMultiValueAware() to
indicate that the value parameter to createField(s) is to be a Collection
when multiple values are passed.  Just an idea, there are other ways to do
it.
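As a rough sketch of that proposal, assuming invented names throughout (this is not Solr's actual FieldType or DocumentBuilder API):

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch only: a field type that opts in via isMultiValueAware()
// receives all values for a document in a single createField() call (as a
// List); other field types keep the existing one-call-per-value contract.
public class MultiValueAwareSketch {

    interface FieldType {
        default boolean isMultiValueAware() { return false; }
        String createField(Object value);
    }

    // Hypothetical type that packs all values into one stored payload.
    static class PackedValuesType implements FieldType {
        @Override public boolean isMultiValueAware() { return true; }
        @Override public String createField(Object value) {
            return "packed:" + value; // value is the whole List here
        }
    }

    // DocumentBuilder-style dispatch: one call with the whole collection,
    // or one call per value, depending on the field type.
    static void addField(FieldType ft, List<Object> values) {
        if (ft.isMultiValueAware()) {
            System.out.println(ft.createField(values));
        } else {
            for (Object v : values) {
                System.out.println(ft.createField(v));
            }
        }
    }

    public static void main(String[] args) {
        addField(new PackedValuesType(), Arrays.asList(1, 2, 3));
    }
}
```

The default of `false` would keep every existing field type working unchanged.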

Comments?

~ David



-
 Author: http://www.packtpub.com/apache-solr-3-enterprise-search-server/book
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-FieldType-support-for-multiple-values-tp4020106.html
Sent from the Lucene - Java Developer mailing list archive at Nabble.com.




[jira] [Commented] (SOLR-4069) ShardLeaderElectionContext.rejoinLeaderElection() doesn't clear the leader in clusterstate

2012-11-13 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13496304#comment-13496304
 ] 

Mark Miller commented on SOLR-4069:
---

Is that really necessary though? If we are rejoining the election, I think we 
never would have registered as the leader and so there should be nothing new in 
the cluster state we should have to clear?

 ShardLeaderElectionContext.rejoinLeaderElection() doesn't clear the leader in 
 clusterstate
 --

 Key: SOLR-4069
 URL: https://issues.apache.org/jira/browse/SOLR-4069
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0-ALPHA, 4.0-BETA, 4.0
Reporter: Po Rui
Assignee: Mark Miller
 Fix For: 4.1, 5.0

 Attachments: SOLR-4069.patch


 ShardLeaderElectionContext.rejoinLeaderElection() doesn't clear the leader in 
 clusterstate




[jira] [Commented] (SOLR-4071) CollectionsHandler.handleCreateAction() doesn't validate parameter count and type

2012-11-13 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13496305#comment-13496305
 ] 

Mark Miller commented on SOLR-4071:
---

This is likely a general issue with most of the 'admin' type apis - something 
we should improve though.

 CollectionsHandler.handleCreateAction() doesn't validate parameter count and 
 type
 -

 Key: SOLR-4071
 URL: https://issues.apache.org/jira/browse/SOLR-4071
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0-BETA, 4.0
Reporter: Po Rui
Assignee: Mark Miller
Priority: Critical
 Fix For: 4.1, 5.0

 Attachments: SOLR-4071.patch


 CollectionsHandler.handleCreateAction() doesn't validate parameter count and 
 type. numShards's type isn't checked, and the parameter count may be less 
 than required 




[jira] [Commented] (SOLR-4066) SolrZKClient changed interface

2012-11-13 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13496307#comment-13496307
 ] 

Mark Miller commented on SOLR-4066:
---

Looks like this is a dupe of SOLR-3989? And Yonik just committed a fix that 
simply adds the exception to the RuntimeException.

Any particular reason we should be more granular here Trym?

 SolrZKClient changed interface
 --

 Key: SOLR-4066
 URL: https://issues.apache.org/jira/browse/SOLR-4066
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0, 4.0.1, 4.1
 Environment: Any
Reporter: Trym Møller
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.1, 5.0

 Attachments: SOLR-4066.patch


 The constructor of SolrZKClient has changed, I expect to ensure clean up of 
 resources. The strategy is as follows:
 {code}
 connManager = new ConnectionManager(...)
 try {
...
 } catch (Throwable e) {
   connManager.close();
   throw new RuntimeException();
 }
 try {
   connManager.waitForConnected(clientConnectTimeout);
 } catch (Throwable e) {
   connManager.close();
   throw new RuntimeException();
 }
 {code}
 This results in a different exception (RuntimeException) returned from the 
 constructor as earlier (nice exceptions as UnknownHostException, 
 TimeoutException).
 Can this be changed so we keep the old nice exceptions e.g. as follows 
 (requiring the constructor to declare these) or at least include them as 
 cause in the RuntimeException?
 {code}
 boolean closeBecauseOfException = true;
 try {
   ...
   connManager.waitForConnected(clientConnectTimeout);
   closeBecauseOfException = false;
 } finally {
   if (closeBecauseOfException) {
     connManager.close();
   }
 }
 {code}
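A self-contained sketch of that pattern, with the original exception preserved as the cause of the RuntimeException (the `Resource` class is a stand-in for ConnectionManager; all names here are illustrative):

```java
// Illustrative close-on-failure pattern: close the resource only when
// construction failed, and keep the informative original exception as the
// cause. "Resource" stands in for ConnectionManager.
public class CloseOnFailureSketch {

    static class Resource implements AutoCloseable {
        void connect(boolean fail) throws Exception {
            if (fail) {
                throw new Exception("connect timeout");
            }
        }
        @Override public void close() { /* release sockets, threads, ... */ }
    }

    static Resource open(boolean fail) {
        Resource r = new Resource();
        boolean success = false;
        try {
            r.connect(fail);
            success = true;
        } catch (Exception e) {
            // wrap, but keep the original exception as the cause
            throw new RuntimeException(e);
        } finally {
            if (!success) {
                r.close(); // clean up only when something went wrong
            }
        }
        return r;
    }

    public static void main(String[] args) {
        try {
            open(true);
        } catch (RuntimeException e) {
            System.out.println("cause: " + e.getCause().getMessage());
        }
        System.out.println("ok: " + (open(false) != null));
    }
}
```

Callers still get a RuntimeException, but `getCause()` now reveals the nice original exception (UnknownHostException, TimeoutException, and so on).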




[jira] [Updated] (SOLR-4043) Add ability to get success/failure responses from Collections API.

2012-11-13 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-4043:
--

Issue Type: Improvement  (was: Bug)
   Summary: Add ability to get success/failure responses from Collections 
API.  (was: It isn't right response for create/delete/reload collections )

 Add ability to get success/failure responses from Collections API.
 --

 Key: SOLR-4043
 URL: https://issues.apache.org/jira/browse/SOLR-4043
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.0-ALPHA, 4.0-BETA, 4.0
 Environment: Solr cloud cluster
Reporter: Raintung Li
Assignee: Mark Miller
 Fix For: 4.1, 5.0

 Attachments: patch-4043.txt


 Create/delete/reload collection operations are asynchronous, so the client 
 can't get the actual result; it can only be sure the request has been saved 
 into the OverseerCollectionQueue. The client gets a response immediately, 
 without waiting for the outcome of the operation (create/delete/reload 
 collection), successful or not. 
 The easy solution is for the client to wait until the asynchronous process 
 finishes: the create/delete/reload collection thread saves the response into 
 the OverseerCollectionQueue, then notifies the client to fetch it. 




[jira] [Updated] (SOLR-4003) The SolrZKClient clean method should not try and clear zk paths that start with /zookeeper

2012-11-13 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-4003:
--

Summary: The SolrZKClient clean method should not try and clear zk paths 
that start with /zookeeper  (was: The zkcli cmd line tool should not try and 
clear zk paths that start with /zookeeper)

 The SolrZKClient clean method should not try and clear zk paths that start 
 with /zookeeper
 --

 Key: SOLR-4003
 URL: https://issues.apache.org/jira/browse/SOLR-4003
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.1, 5.0


 These will fail and stop the removal of other nodes.




[jira] [Commented] (LUCENE-4556) FuzzyTermsEnum creates tons of objects

2012-11-13 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13496315#comment-13496315
 ] 

Michael McCandless commented on LUCENE-4556:


I hit this test failure:
{noformat}
ant test  -Dtestcase=TestSlowFuzzyQuery2 -Dtests.method=testFromTestData 
-Dtests.seed=6019B3869272BDF0 -Dtests.locale=el_CY -Dtests.timezone=Asia/Anadyr 
-Dtests.file.encoding=ANSI_X3.4-1968
{noformat}


 FuzzyTermsEnum creates tons of objects
 --

 Key: LUCENE-4556
 URL: https://issues.apache.org/jira/browse/LUCENE-4556
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search, modules/spellchecker
Affects Versions: 4.0
Reporter: Simon Willnauer
Assignee: Simon Willnauer
Priority: Critical
 Fix For: 4.1, 5.0

 Attachments: LUCENE-4556.patch


 I ran into this problem in production using the DirectSpellchecker. The 
 number of objects created by the spellchecker shoots through the roof very 
 quickly. We ran about 130 queries and ended up with 2M transitions / 
 states. We spent 50% of the time in GC just because of transitions. Other 
 parts of the system behave just fine here.
 I talked quickly to Robert and gave a POC a shot, providing a 
 LevenshteinAutomaton#toRunAutomaton(prefix, n) method to optimize this case 
 and build an array-based structure converted into UTF-8 directly instead of 
 going through the object-based APIs. This involved quite a few changes, but 
 they are all package-private at this point. I have a patch that still has a 
 fair set of nocommits, but it shows that it's possible and IMO worth the 
 trouble to make this really usable in production. All tests pass with the 
 patch - it's a start




[jira] [Updated] (SOLR-4003) The SolrZKClient clean method should not try and clear zk paths that start with /zookeeper

2012-11-13 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-4003:
--

Attachment: SOLR-4003.patch

 The SolrZKClient clean method should not try and clear zk paths that start 
 with /zookeeper
 --

 Key: SOLR-4003
 URL: https://issues.apache.org/jira/browse/SOLR-4003
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.1, 5.0

 Attachments: SOLR-4003.patch


 These will fail and stop the removal of other nodes.




[jira] [Commented] (SOLR-4003) The SolrZKClient clean method should not try and clear zk paths that start with /zookeeper

2012-11-13 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13496335#comment-13496335
 ] 

Commit Tag Bot commented on SOLR-4003:
--

[trunk commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revision&revision=1408831

SOLR-4003: The SolrZKClient clean method should not try and clear zk paths that 
start with /zookeeper, as this can fail and stop the removal of further nodes. 
(Mark Miller)




 The SolrZKClient clean method should not try and clear zk paths that start 
 with /zookeeper
 --

 Key: SOLR-4003
 URL: https://issues.apache.org/jira/browse/SOLR-4003
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.1, 5.0

 Attachments: SOLR-4003.patch


 These will fail and stop the removal of other nodes.




[jira] [Commented] (SOLR-4061) CREATE action in Collections API should allow to upload a new configuration

2012-11-13 Thread Eric Pugh (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13496338#comment-13496338
 ] 

Eric Pugh commented on SOLR-4061:
-

What do we think about using Flash though to allow this to happen?  Requiring 
access to the filesystem that Solr is running on can be a pain.  For example, 
in our hack projects we put a single Solr up on Amazon EC2.  And I've used the 
zkcli client to upload configs to that shared Solr so I didn't need filesystem 
access.  

Maybe though I am haring off into the weeds of a ZK admin interface, not Solr 
specific enough.

One thought I have is what if you can supply a URI instead?  Use commons-vfs 
maybe, so you can pass in a 

 CREATE action in Collections API should allow to upload a new configuration
 ---

 Key: SOLR-4061
 URL: https://issues.apache.org/jira/browse/SOLR-4061
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Tomás Fernández Löbbe
Priority: Minor
 Attachments: SOLR-4061.patch


 When creating new collections with the Collection API, the only option is to 
 point to an existing configuration in ZK. It would be nice to be able to 
 upload a new configuration in the same command. 
 For more details see 
 http://lucene.472066.n3.nabble.com/Error-with-SolrCloud-td4019351.html




[jira] [Commented] (LUCENE-4555) Partial matches in DisjunctionIntervalQueries trip assertions when collected

2012-11-13 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13496349#comment-13496349
 ] 

Alan Woodward commented on LUCENE-4555:
---

OK, the relevant test here is 
TestNestedIntervalFilterQueries.testOrNearNearQuery.  It seems as though the 
bug is actually in PositionFilterScorer (which should be called 
IntervalFilterScorer, but anyway).  Will try and chase it down tonight.

 Partial matches in DisjunctionIntervalQueries trip assertions when collected
 

 Key: LUCENE-4555
 URL: https://issues.apache.org/jira/browse/LUCENE-4555
 Project: Lucene - Core
  Issue Type: Sub-task
  Components: core/search
Reporter: Alan Woodward
Priority: Minor
 Fix For: Positions Branch


 See, eg, all the TestOr* tests in TestBasicIntervals.  




[jira] [Commented] (SOLR-3866) CoreAdmin SWAP and RENAME need fixed/defined when using SolrCloud

2012-11-13 Thread Ryan Josal (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13496352#comment-13496352
 ] 

Ryan Josal commented on SOLR-3866:
--

The use case I (and it appears the author of the instigating thread for this 
issue: http://osdir.com/ml/solr-user.lucene.apache.org/2012-09/msg00893.html) 
have for CoreAdmin SWAP is to support a backup core (on deck core) that I can 
rebuild the index from scratch on.

 CoreAdmin SWAP and RENAME need fixed/defined when using SolrCloud
 -

 Key: SOLR-3866
 URL: https://issues.apache.org/jira/browse/SOLR-3866
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.0-ALPHA, 4.0-BETA, 4.0
Reporter: Hoss Man

 We need to define what the expected behavior of using the CoreAdminHandler's 
 SWAP and RENAME commands is if you are running in SolrCloud mode.
 At the moment, it seems to introduce a disconnect between which collection 
 the SolrCore thinks it's a part of (and what appears in the persisted 
 solr.xml) vs what collection ZooKeeper thinks the SolrCore(s) that were 
 swaped/renamed are a part of.  We should either fix this, or document it if 
 it as actually consistent and intentional for low level controls, or disable 
 commands like SWAP/RENAME in SolrCloud mode
 https://mail-archives.apache.org/mod_mbox/lucene-solr-user/201209.mbox/%3CCALB4QrP2GZAwLeAiy%3DfcmOLYbc5r0i9Tp1DquyPS8mMJPwCgnw%40mail.gmail.com%3E




[jira] [Commented] (SOLR-4003) The SolrZKClient clean method should not try and clear zk paths that start with /zookeeper

2012-11-13 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13496353#comment-13496353
 ] 

Commit Tag Bot commented on SOLR-4003:
--

[branch_4x commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revision&revision=1408838

SOLR-4003: The SolrZKClient clean method should not try and clear zk paths that 
start with /zookeeper, as this can fail and stop the removal of further nodes.




 The SolrZKClient clean method should not try and clear zk paths that start 
 with /zookeeper
 --

 Key: SOLR-4003
 URL: https://issues.apache.org/jira/browse/SOLR-4003
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.1, 5.0

 Attachments: SOLR-4003.patch


 These will fail and stop the removal of other nodes.




Re: Apache Git mirror

2012-11-13 Thread Mark Miller
Or perhaps someone needs to optimize that repo? (-gc or something?)

It's so damn slow to pick up a couple updates, there is no way it
could have been this slow before without me noticing I think. It's
ridiculous - it may have been slow previously, but I don't think it was this
slow.

This happens over time to me on local repos - and then I
clean/optimize whatever it's called - and things speed up again.

Just a guess.

- Mark

On Sun, Nov 11, 2012 at 8:26 PM, Mark Miller markrmil...@gmail.com wrote:

 On Nov 11, 2012, at 4:08 PM, Dawid Weiss dawid.we...@cs.put.poznan.pl wrote:

 It's always been kind of slow for me,

 Okay - perhaps I just didn't notice before - but that would be odd. I never 
 worried much about its speed, but I also have not used it in a while. This 
 is the first time I said jesus, this is slow. But who knows - perhaps I 
 just didn't pay attention in the past.

 - Mark



-- 
- Mark




[jira] [Resolved] (SOLR-4003) The SolrZKClient clean method should not try and clear zk paths that start with /zookeeper

2012-11-13 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-4003.
---

Resolution: Fixed

 The SolrZKClient clean method should not try and clear zk paths that start 
 with /zookeeper
 --

 Key: SOLR-4003
 URL: https://issues.apache.org/jira/browse/SOLR-4003
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.1, 5.0

 Attachments: SOLR-4003.patch


 These will fail and stop the removal of other nodes.




Re: Optimize facets when actually single valued?

2012-11-13 Thread Ryan McKinley
If the only motivation for adding 'multiValued=flexible' is the response
format, what about just changing the response format version number and
writing the wrapping list based on that?

Allowing multiple values, but behaving like single value fields when only
one value exists would be a *huge* simplification for my app!

ryan




On Sun, Nov 11, 2012 at 7:09 AM, Yonik Seeley yo...@lucidworks.com wrote:

 On Sun, Nov 11, 2012 at 3:33 AM, Robert Muir rcm...@gmail.com wrote:
  I am guessing at times people are lazy about schema definition. But, I think
  with lucene 4 stats we can detect if a field is actually single valued...
  Something like terms.size == terms.doccount == terms.sumdocfreq. I have to
  think about it a bit, maybe its even simpler than this? Anyway, this could
  be used instead of actual schema def to just build a fieldcache instead of
  uninverted field I think... Should be a simple opto but maybe potent...

 Funny you should mention this now - I was thinking exactly the same
 thing on the flight home from ApacheCon!

 This detect single-valued also has implications for things other
 than faceting as well - as you say, people can be lazy about the
 schema definition and having things just work is a good thing.

 I've thought about a more flexible field that acts like a single
 valued field when you use it like that, and a multi-valued field
 otherwise.  There won't quite be back compat with responses though
 (since multiValued fields with single values now look like
 foo:[single_value] instead of foo:single_value.)  Perhaps we
 could add something like multiValued=flexible or something (and switch
 to that by default), while retaining back compat for
 multiValued=true/false.  Either that or bump version of the schema
 or response.  This is actually pretty important if we ever want to do
 more schema-less (i.e. type guessing based on input), since it
 allows us to only guess type and not have to deal with figuring out
 multiValued.  It could lower the number of dynamic field definitions
 necessary and make choosing the correct one simpler.

 -Yonik
 http://lucidworks.com

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




[jira] [Commented] (SOLR-1306) Support pluggable persistence/loading of solr.xml details

2012-11-13 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496370#comment-13496370
 ] 

Erick Erickson commented on SOLR-1306:
--

Well, the use case here is explicitly that the core information is kept in a 
completely extra-solr repository (extra ZK too for that matter). Managing 100K 
cores by moving directories around is non-trivial, especially since there will 
probably be some system-of-record for where all the information lives anyway.

As it stands, this patch doesn't really affect the way Solr works OOB. It only 
comes into play if the people implementing the provider _require_ it (and want 
to implement the complexity).

But let me think about this a bit. Are you suggesting that the whole notion of 
solr.xml be replaced by some kind of crawl/discovery process? Off the top of my 
head, I can imagine a degenerate solr.xml that just lists one or more 
directories. Then the load process consists of crawling those directories 
looking for cores and loading them, possibly with some kind of configuration 
files at the core level. For the 10s-of-K-cores/machine case we don't want to 
put the data in solrconfig.xml or anything like that; I'm thinking of something 
very much simpler, on the order of a java.properties file. I've skipped 
thinking about how to find a core, or how that plays with using common schemas, 
until I see whether this is along the lines you're thinking of: getting the 
meta-data closer to the index.

It does make the whole coordination issue a lot easier, though. You no longer 
have the loose coupling between having core information in solr.xml and then 
having to be sure the files/dirs corresponding to what's in solr.xml just 
happen to map to what's actually on disk. Moving something from one place 
to another would consist of:
1. shutting down the servers
2. moving the core directory from one server to another
3. starting up the servers again.

I can imagine doing this a bit differently...
1. copy the core from one server to another
2. issue an unload for the core on the source server
3. issue a create for the core on the dest server

There'd probably have to be some kind of background loading, but we're already 
talking about parallelizing multicore loads...

From an admin perspective, the poor soul trying to maintain this all could 
pretty easily enumerate where all the cores were just by asking each server 
for a list of where things are.

Anyway, is this in the vicinity of moving the metadata closer to the index?
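
The crawl/discovery idea sketched above could look roughly like the following. This is purely hypothetical code, including the core.properties marker-file name; nothing here is existing Solr behavior:

```java
import java.io.File;
import java.util.*;

// Hypothetical discovery pass: instead of enumerating cores in solr.xml,
// crawl one or more root directories and treat any subdirectory holding a
// marker properties file as a core. All names here are illustrative.
public class CoreDiscovery {
    static final String MARKER = "core.properties";  // illustrative marker name

    static List<String> discoverCores(File... roots) {
        List<String> cores = new ArrayList<>();
        for (File root : roots) {
            File[] children = root.listFiles();
            if (children == null) continue;  // root missing or not a directory
            for (File dir : children) {
                if (dir.isDirectory() && new File(dir, MARKER).isFile()) {
                    cores.add(dir.getName());
                }
            }
        }
        Collections.sort(cores);
        return cores;
    }

    public static void main(String[] args) throws Exception {
        // Build a throwaway layout: two cores plus one plain directory.
        File root = new File(System.getProperty("java.io.tmpdir"),
                             "solr-discovery-demo-" + System.nanoTime());
        for (String name : new String[] {"core1", "core2"}) {
            File d = new File(root, name);
            d.mkdirs();
            new File(d, MARKER).createNewFile();
        }
        new File(root, "not-a-core").mkdirs();
        System.out.println(discoverCores(root));  // [core1, core2]
    }
}
```

An admin enumerating cores across servers then reduces to running exactly this scan on each one, which matches the "ask each server for a list" workflow above.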

 Support pluggable persistence/loading of solr.xml details
 -

 Key: SOLR-1306
 URL: https://issues.apache.org/jira/browse/SOLR-1306
 Project: Solr
  Issue Type: New Feature
  Components: multicore
Reporter: Noble Paul
Assignee: Erick Erickson
 Fix For: 4.1

 Attachments: SOLR-1306.patch, SOLR-1306.patch, SOLR-1306.patch, 
 SOLR-1306.patch


 Persisting and loading details from one xml is fine if the no. of cores is 
 small and the cores are few/fixed. If there are 10's of thousands of 
 cores in a single box, adding a new core (with persistent=true) becomes very 
 expensive because every core creation has to write this huge xml. 
 Moreover, there is a good chance that the file gets corrupted and all the 
 cores become unusable. In that case I would prefer it to be stored in a 
 centralized DB which is backed up/replicated and all the information is 
 available in a centralized location. 
 We may need to refactor CoreContainer to have a pluggable implementation 
 which can load/persist the details. The default implementation should 
 write/read from/to solr.xml, and the class should be pluggable as follows in 
 solr.xml:
 {code:xml}
 <solr>
   <dataProvider class="com.foo.FooDataProvider" attr1="val1" attr2="val2"/>
 </solr>
 {code}
 There will be a new interface (or abstract class) called SolrDataProvider 
 which this class must implement

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4051) DIH Delta updates do not work for all locales

2012-11-13 Thread James Dyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Dyer updated SOLR-4051:
-

Attachment: SOLR-4051.patch

Improved patch with a unit test.  I plan on committing this shortly.

 DIH Delta updates do not work for all locales
 -

 Key: SOLR-4051
 URL: https://issues.apache.org/jira/browse/SOLR-4051
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler
Affects Versions: 4.0
Reporter: James Dyer
Priority: Minor
 Attachments: SOLR-4051.patch, SOLR-4051.patch


 DIH Writes the last modified date to a Properties file using the default 
 locale.  This gets sent in plaintext to the database at the next delta 
 update.  DIH does not use prepared statements but just puts the date in an 
 SQL Statement in yyyy-mm-dd hh:mm:ss format.  It would probably be best to 
 always format this date in JDBC escape syntax 
 (http://docs.oracle.com/javase/1.4.2/docs/guide/jdbc/getstart/statement.html#999472)
  and java.sql.Timestamp#toString().  To do this, we'd need to parse the 
 user's query and remove the single quotes likely there (and now the quotes 
 would be optional and undesired).  
 It might just be simpler to change the SimpleDateFormat to use the root 
 locale as this appears to be the original intent here anyhow.  Affected 
 locales include ja_JP_JP , hi_IN , th_TH
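
The root-locale fix described here amounts to pinning the formatter's locale (and, for reproducibility below, its time zone). This is a sketch of the intent, not the committed patch:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

// Pinning SimpleDateFormat to Locale.ROOT makes the date written to the
// properties file stable regardless of the JVM's default locale; under
// ja_JP_JP or th_TH the default calendar is not Gregorian, so the year
// written (and later sent to the database) would be wrong.
public class RootLocaleFormat {
    public static void main(String[] args) {
        SimpleDateFormat df =
            new SimpleDateFormat("yyyy-MM-dd HH:mm:ss", Locale.ROOT);
        df.setTimeZone(TimeZone.getTimeZone("UTC"));  // fixed zone for the demo
        System.out.println(df.format(new Date(0L)));  // 1970-01-01 00:00:00
    }
}
```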

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Commit-free ExternalFileField

2012-11-13 Thread Mikhail Khludnev
Community,
I have a scratch implementation of the subject (with great test coverage,
btw), but before attaching it to JIRA, where it might sit for ages, I want
to know your opinion:
- do you think it will be useful and widely adopted by many users?
- do you think it's necessary to provide consistency/atomicity for a case
like {!func}sum(foo_extf, foo_extf), where the same field is mentioned twice
in a query and the first reference can be evaluated differently than the
second if the field is reloaded during query parsing/evaluation?

Looking forward to your feedback!
-- 
Sincerely yours
Mikhail Khludnev
Principal Engineer,
Grid Dynamics

http://www.griddynamics.com
 mkhlud...@griddynamics.com


Re: Optimize facets when actually single valued?

2012-11-13 Thread Yonik Seeley
On Tue, Nov 13, 2012 at 6:37 PM, Ryan McKinley ryan...@gmail.com wrote:
 If the only motivation for adding 'multiValued=flexible' is the response
 format, what about just changing the response format version number and
 writing the wrapping list based on that?

The original version of Solr (SOLAR when it was still inside CNET) did
this - a multiValued field with a single value was output as a single
value, not an array containing a single value.  Some people wanted
more predictability (always an array or never an array).

-Yonik
http://lucidworks.com


 Allowing multiple values, but behaving like single value fields when only
 one value exists would be a *huge* simplification for my app!

 ryan

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-4051) DIH Delta updates do not work for all locales

2012-11-13 Thread James Dyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Dyer resolved SOLR-4051.
--

Resolution: Fixed
  Assignee: James Dyer

committed.

Trunk: r1408873 / r1408880 (CHANGES.txt)
4x: r1408883

I will also update the wiki.

 DIH Delta updates do not work for all locales
 -

 Key: SOLR-4051
 URL: https://issues.apache.org/jira/browse/SOLR-4051
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler
Affects Versions: 4.0
Reporter: James Dyer
Assignee: James Dyer
Priority: Minor
 Attachments: SOLR-4051.patch, SOLR-4051.patch


 DIH Writes the last modified date to a Properties file using the default 
 locale.  This gets sent in plaintext to the database at the next delta 
 update.  DIH does not use prepared statements but just puts the date in an 
 SQL Statement in yyyy-mm-dd hh:mm:ss format.  It would probably be best to 
 always format this date in JDBC escape syntax 
 (http://docs.oracle.com/javase/1.4.2/docs/guide/jdbc/getstart/statement.html#999472)
  and java.sql.Timestamp#toString().  To do this, we'd need to parse the 
 user's query and remove the single quotes likely there (and now the quotes 
 would be optional and undesired).  
 It might just be simpler to change the SimpleDateFormat to use the root 
 locale as this appears to be the original intent here anyhow.  Affected 
 locales include ja_JP_JP , hi_IN , th_TH

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-1970) need to customize location of dataimport.properties

2012-11-13 Thread James Dyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Dyer resolved SOLR-1970.
--

   Resolution: Fixed
Fix Version/s: 5.0
   4.1
 Assignee: James Dyer

Fixed as part of SOLR-4051.  

This adds a <propertyWriter/> element to DIH's data-config.xml file, allowing 
the user to specify the location, filename and Locale for the 
data-config.properties file.  Alternatively, users can specify their own 
property writer implementation for greater control.

 need to customize location of dataimport.properties
 ---

 Key: SOLR-1970
 URL: https://issues.apache.org/jira/browse/SOLR-1970
 Project: Solr
  Issue Type: Improvement
  Components: contrib - DataImportHandler
Affects Versions: 1.4
Reporter: Chris Book
Assignee: James Dyer
 Fix For: 4.1, 5.0


 By default dataimport.properties is written to {solr.home}/conf/.  However 
 when using multiple solr cores, it is currently useful to use the same conf 
 directory for all of the cores and use solr.xml to specify a different 
 schema.xml.  I can then specify a different data-config.xml for each core to 
 define how the data gets from the database to each core's shema.
 However, all the solr cores will fight over writing to the 
 dataimport.properties file.  There should be an option in solrconfig.xml to 
 specify the location or name of this file so that a different one can be used 
 for each core.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-2658) dataimport.properties : change datetime format

2012-11-13 Thread James Dyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Dyer resolved SOLR-2658.
--

   Resolution: Fixed
Fix Version/s: 5.0
   4.1
 Assignee: James Dyer

Fixed as part of SOLR-4051.  

This adds a <propertyWriter/> element to DIH's data-config.xml file, allowing 
the user to specify the location, filename and Locale for the 
data-config.properties file.  Alternatively, users can specify their own 
property writer implementation for greater control.

One nice thing that can be done with this:  you can use JDBC escape syntax by 
specifying {'ts' ''yyyy-MM-dd HH:mm:ss.SS''} as the dateFormat.
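
For illustration, the JDBC escape trick builds on java.sql.Timestamp#toString(), which always emits the ANSI yyyy-mm-dd hh:mm:ss.fffffffff form regardless of locale, so wrapping it in the {ts '...'} escape yields a literal any compliant driver understands:

```java
import java.sql.Timestamp;

// Build a locale-independent JDBC timestamp literal: Timestamp.toString()
// is specified to print yyyy-mm-dd hh:mm:ss.fffffffff, and the {ts '...'}
// escape tells the driver to treat it as a timestamp.
public class JdbcEscape {
    static String toJdbcLiteral(Timestamp ts) {
        return "{ts '" + ts.toString() + "'}";
    }

    public static void main(String[] args) {
        Timestamp ts = Timestamp.valueOf("2012-11-13 10:15:30.123");
        System.out.println(toJdbcLiteral(ts));
        // {ts '2012-11-13 10:15:30.123'}
    }
}
```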

 dataimport.properties : change datetime format 
 ---

 Key: SOLR-2658
 URL: https://issues.apache.org/jira/browse/SOLR-2658
 Project: Solr
  Issue Type: Improvement
  Components: contrib - DataImportHandler
Affects Versions: 1.4.1, 3.3
Reporter: Frédéric AUGUSTE
Assignee: James Dyer
Priority: Minor
 Fix For: 4.1, 5.0


 I have to use URLDataSource  in order to index an Atom Feed.
 The REST API provided specifies a start time parameter formatted in UTC-Z 
 (RFC 3339).
 The dataimporter last index time parameter is not in this format.
 Is it possible to improve Solr in order to specify the datetime format?
 Class : org.apache.solr.handler.dataimport.DataImporter#DATE_TIME_FORMAT
 Current value :
   return new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
 Regards,

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4556) FuzzyTermsEnum creates tons of objects

2012-11-13 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496439#comment-13496439
 ] 

Michael McCandless commented on LUCENE-4556:


What spooks me about this patch is this code (LevenshteinAutomaton) is already 
REALLY hairy ... and this change would add yet more hair to it (when really we 
need to be doing the reverse, so the code becomes approachable to new eyeballs).

Also: are we sure the objects created here are really such a heavy GC load...?

I ran a quick test, respelling (using DirectSpellChecker() w/ its defaults) a 
set of 500 5-character terms against the full Wikipedia English (33.M docs) 
index, using concurrent mark/sweep collector w/ 2 GB heap and I couldn't see 
any difference in the net throughput on a 24 core box ... both got ~780 
respells/sec.

Simon can you describe what use case you're seeing where GC is cutting 
throughput by 50%?

 FuzzyTermsEnum creates tons of objects
 --

 Key: LUCENE-4556
 URL: https://issues.apache.org/jira/browse/LUCENE-4556
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search, modules/spellchecker
Affects Versions: 4.0
Reporter: Simon Willnauer
Assignee: Simon Willnauer
Priority: Critical
 Fix For: 4.1, 5.0

 Attachments: LUCENE-4556.patch


 I ran into this problem in production using the DirectSpellchecker. The 
 number of objects created by the spellchecker shoots through the roof very 
 very quickly. We ran about 130 queries and ended up with >2M transitions / 
 states. We spent 50% of the time in GC just because of transitions. Other 
 parts of the system behave just fine here.
 I talked quickly to Robert and gave a POC a shot, providing a 
 LevenshteinAutomaton#toRunAutomaton(prefix, n) method to optimize this case 
 and build an array-based structure converted into UTF-8 directly instead of 
 going through the object-based APIs. This involved quite a few changes, but 
 they are all package-private at this point. I have a patch that still has a 
 fair set of nocommits, but it shows that it's possible and IMO worth the 
 trouble to make this really usable in production. All tests pass with the 
 patch - it's a start

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-1920) Need generic placemarker for DIH delta-import

2012-11-13 Thread James Dyer (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496441#comment-13496441
 ] 

James Dyer commented on SOLR-1920:
--

Shawn,

Can you give more detail as to what kind of data you want to save in the 
properties file and where that data comes from?  

 Need generic placemarker for DIH delta-import
 -

 Key: SOLR-1920
 URL: https://issues.apache.org/jira/browse/SOLR-1920
 Project: Solr
  Issue Type: Improvement
  Components: contrib - DataImportHandler
Reporter: Shawn Heisey
Priority: Minor
 Fix For: 4.1


 The dataimporthandler currently is only capable of saving the index timestamp 
 for later use in delta-import commands.  It should be extended to allow any 
 arbitrary data to be used as a placemarker for the next import.
 It is possible to use externally supplied variables in data-config.xml and 
 send values in via the URL that starts the import, but if the config can 
 support it natively, that is better.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-2790) DataImportHandler last_index_time does not update on delta-imports

2012-11-13 Thread James Dyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Dyer resolved SOLR-2790.
--

Resolution: Invalid

I agree with Chad:  DocBuilder does update dataimport.properties at the end of 
a delta-import.  SOLR-2551 added a warning message if the properties file is 
not writable.  SOLR-4051 makes it possible to configure the file location so 
non-writable locations can be easily worked around.  

 DataImportHandler last_index_time does not update on delta-imports
 --

 Key: SOLR-2790
 URL: https://issues.apache.org/jira/browse/SOLR-2790
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler
Affects Versions: 3.4
 Environment: Windows 7, Java version 1.6.0_26
Reporter: Greg Martin
 Fix For: 3.4


 When a full-index is run using the DataImportHandler, the last_index_time is 
 updated.  But it is not updated when a delta-import is run.  Same issue 
 reported here: 
 http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201104.mbox/%3CBANLkTi=cunkz26aj8wcyfp7ujbjpnw6...@mail.gmail.com%3E
 Note that the DataImportHandler entry on the wiki states that the 
 last_index_time should update on both delta-import and full-import.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



minShouldMatch can't leapfrog

2012-11-13 Thread Mikhail Khludnev
Developers,

I want to discuss a few points regarding the disjunction form of BooleanQuery
with a minShouldMatch constraint. I'm talking about doc-at-a-time evaluation
only (BooleanScorer2).
Look into a conjunction query which has a disjunction as one of its clauses,
e.g. +foo +(bar baz …). If the disjunction (bar baz …) has a high
minShouldMatch constraint, this query performs quite badly even if the
conjunction clause +foo is highly selective. It also happens if instead of
+foo you have a filter. Once again: it's reasonable that a disjunction with
high minShouldMatch is expensive if the core disjunction with
minShouldMatch=1 matches a few million docs. The problem is that I can't
speed it up by supplying a highly selective filter.
From my POV there are two points in the Lucene API which make leapfrog
impossible:
- advance() is obliged to return the next matching doc, which causes a scan
in the nextDoc() loop. It would be great to have something like
advanceExact(), or to return some magic value from advance() that says:
failed to advance, here is a proposed next doc for the leapfrog;
- a Scorer is obliged to jump to the first matching doc right after it's
created, which leads to scanning many docs in the nextDoc() loop;
- a ConjunctiveScorer can't know which of its legs are not able to leapfrog
and would prefer to decline advance();
- Stefan spotted one more gain for minShouldMatch in DisjunctionSumScorer:
we don't need to walk through the top of the heap after the top scorer is
advanced. Instead, let's pop minShouldMatch scorers from the heap and look
at the top after this - that top doc can be evaluated as a potential match
which satisfies the minShouldMatch constraint. After that, let's push those
guys back onto the heap, advancing them to that candidate docnum. It's not
connected to the leapfrog problem, though.
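
To make the leapfrog idea concrete, here is a toy doc-at-a-time conjunction over plain sorted doc-ID arrays. This is illustrative only, not the Lucene Scorer API: the point is that the selective side pulls the other side forward via advance() instead of walking nextDoc() one document at a time.

```java
import java.util.*;

// Toy leapfrog conjunction over sorted doc-ID arrays (not Lucene code).
public class Leapfrog {
    // advance(docs, target): smallest docID >= target, or MAX_VALUE if exhausted.
    static int advance(int[] docs, int target) {
        int i = Arrays.binarySearch(docs, target);
        if (i < 0) i = -i - 1;  // insertion point when target is absent
        return i < docs.length ? docs[i] : Integer.MAX_VALUE;
    }

    // Conjunction by leapfrogging: each side jumps to the other's candidate,
    // so large gaps in either postings list are skipped, never scanned.
    static List<Integer> intersect(int[] a, int[] b) {
        List<Integer> out = new ArrayList<>();
        int doc = 0;
        while (true) {
            int da = advance(a, doc);
            if (da == Integer.MAX_VALUE) break;
            int db = advance(b, da);
            if (db == Integer.MAX_VALUE) break;
            if (da == db) { out.add(da); doc = da + 1; }
            else doc = db;  // leapfrog: skip straight to the other side's doc
        }
        return out;
    }

    public static void main(String[] args) {
        int[] foo = {5, 900, 5000};                      // highly selective clause
        int[] bar = {1, 2, 3, 5, 900, 901, 4999, 5000};  // the expensive clause
        System.out.println(intersect(foo, bar));         // [5, 900, 5000]
    }
}
```

The complaint in the message is that a minShouldMatch disjunction cannot play the role of either array here: its advance() must fully evaluate the next real match rather than cheaply proposing a candidate.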

This last one looks impressive, but I'm not smart enough to tell whether it
gives a real performance gain. Do you think it's a valuable optimization for
Lucene users? How popular is minShouldMatch, btw? To be honest, I don't
really suffer from minShouldMatch itself; I have a query with my own
match-verification logic, and therefore the lack of leapfrog bothers me a lot.

Looking forward to your feedback!
-- 
Sincerely yours
Mikhail Khludnev
Principal Engineer,
Grid Dynamics

http://www.griddynamics.com
 mkhlud...@griddynamics.com


[jira] [Commented] (SOLR-1306) Support pluggable persistence/loading of solr.xml details

2012-11-13 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496496#comment-13496496
 ] 

Mark Miller commented on SOLR-1306:
---

I'm with yonik on this one - I think we should drop the top level config (eg 
solr.xml). Instead, we should auto load folders - no config required, but if 
you want to override some things, the config lives with the core folder. If you 
want to be able to place core folders in other locations, we could have a sys 
prop that added locations. Anything required for settings (like zkHost) would 
be passed on startup as sys props instead.

You can still load cores in parallel this way.

 Support pluggable persistence/loading of solr.xml details
 -

 Key: SOLR-1306
 URL: https://issues.apache.org/jira/browse/SOLR-1306
 Project: Solr
  Issue Type: New Feature
  Components: multicore
Reporter: Noble Paul
Assignee: Erick Erickson
 Fix For: 4.1

 Attachments: SOLR-1306.patch, SOLR-1306.patch, SOLR-1306.patch, 
 SOLR-1306.patch


 Persisting and loading details from one xml is fine if the no. of cores is 
 small and the cores are few/fixed. If there are 10's of thousands of 
 cores in a single box, adding a new core (with persistent=true) becomes very 
 expensive because every core creation has to write this huge xml. 
 Moreover, there is a good chance that the file gets corrupted and all the 
 cores become unusable. In that case I would prefer it to be stored in a 
 centralized DB which is backed up/replicated and all the information is 
 available in a centralized location. 
 We may need to refactor CoreContainer to have a pluggable implementation 
 which can load/persist the details. The default implementation should 
 write/read from/to solr.xml, and the class should be pluggable as follows in 
 solr.xml:
 {code:xml}
 <solr>
   <dataProvider class="com.foo.FooDataProvider" attr1="val1" attr2="val2"/>
 </solr>
 {code}
 There will be a new interface (or abstract class) called SolrDataProvider 
 which this class must implement

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Commit-free ExternalFileField

2012-11-13 Thread Alan Woodward
Hi Mikhail,

I would definitely be interested!  Open a JIRA, and let's see what you've come 
up with :-)

-Alan

On 13 Nov 2012, at 18:48, Mikhail Khludnev wrote:

 Community,
  I have a scratch implementation of the subject (with great test coverage, btw), 
  but before attaching it to JIRA, where it might sit for ages, I want to know 
  your opinion:
  - do you think it will be useful and widely adopted by many users?
  - do you think it's necessary to provide consistency/atomicity for a case like 
  {!func}sum(foo_extf, foo_extf), where the same field is mentioned twice in a 
  query and the first reference can be evaluated differently than the second if 
  the field is reloaded during query parsing/evaluation?
 
 Looking forward for your feedback! 
 -- 
 Sincerely yours
 Mikhail Khludnev
 Principal Engineer,
 Grid Dynamics
 
 
 



[jira] [Commented] (SOLR-1920) Need generic placemarker for DIH delta-import

2012-11-13 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496531#comment-13496531
 ] 

Shawn Heisey commented on SOLR-1920:


In the MySQL database where my data originates, the field that I use for 
tracking what's new is an autoincrement field, mapped to a tlong in Solr.  New 
documents added to the database just get assigned the next autoincrement 
number.  If Solr could be informed that field X is the tracking field, the 
highest value encountered during an import (according to that field's sort 
mechanism) could be stored in dataimport.properties and re-used during the next 
delta-import.

If DIH is sufficiently disconnected from Solr schema internals (which actually 
seems likely), you'd have to base your sort on the SQL data type, because it 
would have no way to know what kind of field Solr has.

I currently do all delta tracking outside of Solr, so I'm already covered.  The 
generic idea seemed worthy of opening an issue two years ago, because other 
people may run into situations where they cannot use a timestamp for delta 
tracking.

I have no idea what kind of tracking problems you'd encounter when dealing with 
soft commits.  Without a transaction log, that could get ugly. For performance 
reasons, I am initially deploying 4.x with no transaction log (see SOLR-3954).


 Need generic placemarker for DIH delta-import
 -

 Key: SOLR-1920
 URL: https://issues.apache.org/jira/browse/SOLR-1920
 Project: Solr
  Issue Type: Improvement
  Components: contrib - DataImportHandler
Reporter: Shawn Heisey
Priority: Minor
 Fix For: 4.1


 The dataimporthandler currently is only capable of saving the index timestamp 
 for later use in delta-import commands.  It should be extended to allow any 
 arbitrary data to be used as a placemarker for the next import.
 It is possible to use externally supplied variables in data-config.xml and 
 send values in via the URL that starts the import, but if the config can 
 support it natively, that is better.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-1920) Need generic placemarker for DIH delta-import

2012-11-13 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496533#comment-13496533
 ] 

Shawn Heisey commented on SOLR-1920:


Additional note: At one time I did all document indexing using DIH -- 
full-import for reindexes and delta-import for everything else.  Now I only use 
DIH for full reindexes and SolrJ for everything else.


 Need generic placemarker for DIH delta-import
 -

 Key: SOLR-1920
 URL: https://issues.apache.org/jira/browse/SOLR-1920
 Project: Solr
  Issue Type: Improvement
  Components: contrib - DataImportHandler
Reporter: Shawn Heisey
Priority: Minor
 Fix For: 4.1


 The dataimporthandler currently is only capable of saving the index timestamp 
 for later use in delta-import commands.  It should be extended to allow any 
 arbitrary data to be used as a placemarker for the next import.
 It is possible to use externally supplied variables in data-config.xml and 
 send values in via the URL that starts the import, but if the config can 
 support it natively, that is better.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-1920) Need generic placemarker for DIH delta-import

2012-11-13 Thread James Dyer (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496541#comment-13496541
 ] 

James Dyer commented on SOLR-1920:
--

Possibly, the fields/values from the last document added could all be stored in 
the properties file.  This would mean you'd have to sort your documents by the 
autoincrement field so that the highest one is guaranteed to be last.

Another possibility is to have it go out and execute a query to get the values 
from the database in a special "get the properties" query.  This would be like 
SOLR-3365, where the user wants to get last_index_time from the database 
rather than from the Solr server's clock as currently done.  To implement 
SOLR-3365, I imagine letting the user specify a query like "select 
CURRENT_TIMESTAMP as 'last_index_time' from dual".  It would be just as easy 
to let any set of properties be put in such a query, for instance, "select 
max(autoincrement_field) from mytable".  So maybe something can be written that 
would solve both issues?
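
For context, the timestamp mechanism being generalized here is the standard DIH delta configuration, where ${dataimporter.last_index_time} is the only value read back from the properties file; table and column names below are illustrative:

```xml
<entity name="item" pk="ID"
        query="SELECT * FROM item"
        deltaQuery="SELECT ID FROM item
                    WHERE last_modified &gt; '${dataimporter.last_index_time}'"
        deltaImportQuery="SELECT * FROM item
                          WHERE ID = '${dataimporter.delta.ID}'"/>
```

The proposals above would let an arbitrary stored property, such as a max autoincrement value, play the same role as last_index_time in the deltaQuery.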

 Need generic placemarker for DIH delta-import
 -

 Key: SOLR-1920
 URL: https://issues.apache.org/jira/browse/SOLR-1920
 Project: Solr
  Issue Type: Improvement
  Components: contrib - DataImportHandler
Reporter: Shawn Heisey
Priority: Minor
 Fix For: 4.1


 The dataimporthandler currently is only capable of saving the index timestamp 
 for later use in delta-import commands.  It should be extended to allow any 
 arbitrary data to be used as a placemarker for the next import.
 It is possible to use externally supplied variables in data-config.xml and 
 send values in via the URL that starts the import, but if the config can 
 support it natively, that is better.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Update Solr website with description of 4.0 features

2012-11-13 Thread Jan Høydahl
Hi,

The Solr web page http://lucene.apache.org/solr/ is not yet updated with the 
Solr 4.0 features.

I've taken a stab at updating the top-text and the features article with the 
most important updates. Please review and comment. I have pushed my changes to 
the staging site for you to review:

http://lucene.staging.apache.org/solr/index.html
http://lucene.staging.apache.org/solr/features.html

If you want to edit yourself, please consult 
http://lucene.apache.org/site-instructions.html

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com
Solr Training - www.solrtraining.com





[jira] [Created] (SOLR-4072) Error message is incorrect for linkconfig in ZkCLI

2012-11-13 Thread Adam Hahn (JIRA)
Adam Hahn created SOLR-4072:
---

 Summary: Error message is incorrect for linkconfig in ZkCLI
 Key: SOLR-4072
 URL: https://issues.apache.org/jira/browse/SOLR-4072
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.0
Reporter: Adam Hahn
Priority: Trivial


If you don't include both the collection and confname when doing a linkconfig, 
it shows you an incorrect error message stating that the CONFDIR is required 
for linkconfig.  That should be changed to COLLECTION.  The incorrect code is 
below.

else if (line.getOptionValue(CMD).equals(LINKCONFIG)) {
  if (!line.hasOption(COLLECTION) || !line.hasOption(CONFNAME)) {
    System.out.println("-" + {color:red}CONFDIR{color} + " and -" + CONFNAME
        + " are required for " + LINKCONFIG);
    System.exit(1);
  }
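A self-contained sketch of the corrected check, with the message naming COLLECTION as described above. The constant values below are illustrative stand-ins, not ZkCLI's real constants:

```java
// Stand-alone sketch of the fix: the error message should name COLLECTION,
// not CONFDIR. Constant values are illustrative stand-ins for ZkCLI's.
public class LinkConfigMessage {
    static final String COLLECTION = "collection";
    static final String CONFNAME = "confname";
    static final String LINKCONFIG = "linkconfig";

    static String missingOptionsMessage() {
        return "-" + COLLECTION + " and -" + CONFNAME
                + " are required for " + LINKCONFIG;
    }

    public static void main(String[] args) {
        System.out.println(missingOptionsMessage());
    }
}
```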




[jira] [Commented] (SOLR-4040) SolrCloud deleteByQuery requires multiple commits

2012-11-13 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496755#comment-13496755
 ] 

Yonik Seeley commented on SOLR-4040:


Does anyone have step-by-step instructions on how to reproduce this with the 
example data?

 SolrCloud deleteByQuery requires multiple commits
 -

 Key: SOLR-4040
 URL: https://issues.apache.org/jira/browse/SOLR-4040
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 4.0
 Environment: OSX
Reporter: Darin Plutchok
  Labels: SolrCloud, commit, delete
 Fix For: 4.0


 I am using embedded zookeeper and my cloud layout is shown below (all actions 
 are done on the 'patents' collection only).
 First commit/delete works for a single shard only, dropping query results by 
 about a third. Second commit/delete drops query results to zero.
 http://127.0.0.1:8893/solr/patents/update?commit=true&stream.body=<delete><query>dogs</query></delete>
 http://localhost:8893/solr/patents/select?q=dogs&rows=0 (results drop by a 
 third)
 http://127.0.0.1:8893/solr/patents/update?commit=true&stream.body=<delete><query>dogs</query></delete>
 http://localhost:8893/solr/patents/select?q=dogs&rows=0 (results drop to zero)
 Note that a delete without a commit followed by a commit drops query results 
 to zero, as it should:
 http://127.0.0.1:8893/solr/patents/update/?stream.body=<delete><query>dogs</query></delete>
 http://localhost:8893/solr/patents/select?q=dogs&rows=0 (full count as no 
 commit yet)
 http://127.0.0.1:8893/solr/patents/update/?commit=true
 http://localhost:8893/solr/patents/select?q=dogs&rows=0   (results drop to 
 zero)
 One workaround (produces zero hits in one shot):
 http://127.0.0.1:8893/solr/patents/update?commit=true&stream.body=<outer><delete><query>sun</query></delete><commit/></outer>
 The workaround I am using for now (produces zero hits in one shot):
 http://127.0.0.1:8893/solr/patents/update?stream.body=<outer><delete><query>knee</query></delete><commit/><commit/></outer>
 {code}
 {
   "otherdocs":{"slice0":{"replicas":{"Darins-MacBook-Pro.local:8893_solr_otherdocs_shard0":{
     "shard":"slice0",
     "roles":null,
     "state":"active",
     "core":"otherdocs_shard0",
     "collection":"otherdocs",
     "node_name":"Darins-MacBook-Pro.local:8893_solr",
     "base_url":"http://Darins-MacBook-Pro.local:8893/solr",
     "leader":"true"}}}},
   "patents":{
     "slice0":{"replicas":{"Darins-MacBook-Pro.local:8893_solr_patents_shard0":{
       "shard":"slice0",
       "roles":null,
       "state":"active",
       "core":"patents_shard0",
       "collection":"patents",
       "node_name":"Darins-MacBook-Pro.local:8893_solr",
       "base_url":"http://Darins-MacBook-Pro.local:8893/solr",
       "leader":"true"}}},
     "slice1":{"replicas":{"Darins-MacBook-Pro.local:8893_solr_patents_shard1":{
       "shard":"slice1",
       "roles":null,
       "state":"active",
       "core":"patents_shard1",
       "collection":"patents",
       "node_name":"Darins-MacBook-Pro.local:8893_solr",
       "base_url":"http://Darins-MacBook-Pro.local:8893/solr",
       "leader":"true"}}},
     "slice2":{"replicas":{"Darins-MacBook-Pro.local:8893_solr_patents_shard2":{
       "shard":"slice2",
       "roles":null,
       "state":"active",
       "core":"patents_shard2",
       "collection":"patents",
       "node_name":"Darins-MacBook-Pro.local:8893_solr",
       "base_url":"http://Darins-MacBook-Pro.local:8893/solr",
       "leader":"true"}
 {code}
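One practical note: the XML passed via stream.body has to be URL-encoded when the request is built by hand. A hedged Java sketch of constructing such a delete-by-query URL (the helper name is mine; only the URL shape comes from the report):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

// Sketch: build a delete-by-query update URL with the XML body URL-encoded.
// Helper name is hypothetical; the URL layout follows the bug report.
public class DeleteByQueryUrl {
    static String buildUrl(String hostPort, String collection, String query)
            throws UnsupportedEncodingException {
        String body = "<delete><query>" + query + "</query></delete>";
        return "http://" + hostPort + "/solr/" + collection
                + "/update?commit=true&stream.body="
                + URLEncoder.encode(body, "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(buildUrl("127.0.0.1:8893", "patents", "dogs"));
    }
}
```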




[jira] [Commented] (SOLR-3963) SOLR: map() does not allow passing sub-functions in 4,5 parameters

2012-11-13 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496777#comment-13496777
 ] 

Hoss Man commented on SOLR-3963:


Hey Bill,

First off, some specific comments on your patch...

* your tests look pretty good to me
* your description and toString methods don't look quite right -- you should be 
calling target.description() and target.toString(doc)
** both should also be using defaultVal -- that looks like an oversight in 
RangeMapFloatFunction that you copied over
* the class level javadoc comment makes no sense ... also a bug copy/pasted 
from RangeMapFloatFunction, it seems
* instead of a new RangeMapFloatFunction2, I think it would make a lot more 
sense to just change RangeMapFloatFunction to use your new code and modify 
the existing constructor to call your new constructor, wrapping the constants in 
ConstValueSource instances
** while we're at it, we can fix that javadoc bug and the oversight of ignoring 
defaultVal in description & toString
* if we're going to support ValueSources in target & default, is there any 
reason not to support ValueSources for min & max as well?

Second: some broader comments on the overall idea that occurred to me while 
reading your patch...

The changes you are proposing are definitely more general purpose than the 
current implementation -- but the trade-off is that (in theory) using constant 
values is faster than dealing with nested ValueSource objects because of the 
method call overhead, so (in theory) making this change adversely affects 
people who are currently using constant values.  That doesn't mean it shouldn't 
be done -- but it's worthwhile taking a moment to think about whether there is a 
best-of-both-worlds situation.

Unless I'm missing something, what you want to do...

{noformat}map(nested(...),min,max,target(...),default(...)){noformat}

...should already be possible using something like...

{noformat}if(map(nested(...),min,max,1,0),target(...),default(...)){noformat}

...and that should have roughly the same performance as your suggested 
change, without affecting the performance for people who are only using constants.

So perhaps we should actually just leave the code in the {{map(...)}} function 
alone, and instead improve its docs to clarify that people who want more 
complex non-constant values can use that {{if(...)}} pattern.

what do you think?
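To make the equivalence concrete, the two-step evaluation described above can be sketched in plain Java; the function names are illustrative, and plain doubles stand in for Solr's per-document ValueSource evaluation:

```java
// Illustrative sketch of if(map(v,min,max,1,0), target, default):
// first map the nested value to a 1/0 flag, then select target or default.
// Plain doubles stand in for Solr's per-document ValueSource calls.
public class MapIfSketch {
    static double mapToFlag(double v, double min, double max) {
        return (v >= min && v <= max) ? 1.0 : 0.0;
    }

    static double ifThenElse(double flag, double target, double dflt) {
        return flag != 0.0 ? target : dflt;
    }

    public static void main(String[] args) {
        // value inside [1,1000] -> use the "target" function's value
        System.out.println(ifThenElse(mapToFlag(500, 1, 1000), 42.0, 1.0));
        // value outside the range -> fall back to the default
        System.out.println(ifThenElse(mapToFlag(2000, 1, 1000), 42.0, 1.0));
    }
}
```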


 SOLR: map() does not allow passing sub-functions in 4,5 parameters
 --

 Key: SOLR-3963
 URL: https://issues.apache.org/jira/browse/SOLR-3963
 Project: Solr
  Issue Type: Improvement
Affects Versions: 4.0
Reporter: Bill Bell
Assignee: Hoss Man
Priority: Minor
 Fix For: 4.0

 Attachments: SOLR-3963.2.patch


 I want to do:
 boost=map(achievement_count,1,1000,recip(achievement_count,-.5,10,25),1)
 I want to return recip(achievement_count,-.5,10,25) if achievement_count is 
 between 1 and 1,000. For any other values I want to return 1.
 I cannot get it to work. I get the error below. Interestingly, this does work:
 boost=recip(map(achievement_count,0,0,-200),-.5,10,25)
 It almost appears that map() cannot take a function.
  Specified argument was out of the range of valid values.
 Parameter name: value
 Description: An unhandled exception occurred during the execution of the 
 current web request. Please review the stack trace for more information about 
 the error and where it originated in the code.
 Exception Details: System.ArgumentOutOfRangeException: Specified argument was 
 out of the range of valid values.
 Parameter name: value
 Source Error:
 An unhandled exception was generated during the execution of the current web 
 request. Information regarding the origin and location of the exception can 
 be identified using the exception stack trace below.
 Stack Trace:
 [ArgumentOutOfRangeException: Specified argument was out of the range of 
 valid values.
 Parameter name: value]
System.Web.HttpResponse.set_StatusDescription(String value) +5200522
FacilityService.Controllers.FacilityController.ActionCompleted(String 
 actionName, IFacilityResults results) +265

 FacilityService.Controllers.FacilityController.SearchByPointCompleted(IFacilityResults
  results) +25
lambda_method(Closure , ControllerBase , Object[] ) +114
 System.Web.Mvc.Async.<>c__DisplayClass7.<BeginExecute>b__5(IAsyncResult 
  asyncResult) +283
 
  System.Web.Mvc.Async.<>c__DisplayClass41.<BeginInvokeAsynchronousActionMethod>b__40(IAsyncResult
   asyncResult) +22
 
  System.Web.Mvc.Async.<>c__DisplayClass3b.<BeginInvokeActionMethodWithFilters>b__35()
   +120
 
  System.Web.Mvc.Async.<>c__DisplayClass51.<InvokeActionMethodFilterAsynchronously>b__4b()
   +452

 

[jira] [Commented] (SOLR-4040) SolrCloud deleteByQuery requires multiple commits

2012-11-13 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496817#comment-13496817
 ] 

Mark Miller commented on SOLR-4040:
---

This is prob a dupe of the issue around the commit race?




[jira] [Commented] (SOLR-1306) Support pluggable persistence/loading of solr.xml details

2012-11-13 Thread Lance Norskog (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496844#comment-13496844
 ] 

Lance Norskog commented on SOLR-1306:
-

bq. I think we should drop the top level config (eg solr.xml). Instead, we 
should auto load folders 
+1 

There are often groups of cores with the same schema -- shards in the same Solr, 
for example. How would this dynamic discovery support groups of collections?



 Support pluggable persistence/loading of solr.xml details
 -

 Key: SOLR-1306
 URL: https://issues.apache.org/jira/browse/SOLR-1306
 Project: Solr
  Issue Type: New Feature
  Components: multicore
Reporter: Noble Paul
Assignee: Erick Erickson
 Fix For: 4.1

 Attachments: SOLR-1306.patch, SOLR-1306.patch, SOLR-1306.patch, 
 SOLR-1306.patch


 Persisting and loading details from one xml file is fine if the number of cores is 
 small and fixed. If there are tens of thousands of 
 cores in a single box, adding a new core (with persistent=true) becomes very 
 expensive because every core creation has to write this huge xml. 
 Moreover, there is a good chance that the file gets corrupted and all the 
 cores become unusable. In that case I would prefer it to be stored in a 
 centralized DB which is backed up/replicated and all the information is 
 available in a centralized location. 
 We may need to refactor CoreContainer to have a pluggable implementation 
 which can load/persist the details. The default implementation should 
 read/write from/to solr.xml, and the class should be pluggable as follows in 
 solr.xml:
 {code:xml}
 <solr>
   <dataProvider class="com.foo.FooDataProvider" attr1="val1" attr2="val2"/>
 </solr>
 {code}
 There will be a new interface (or abstract class ) called SolrDataProvider 
 which this class must implement
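A minimal sketch of what such a SolrDataProvider could look like, with an in-memory stand-in for the DB-backed implementation. The method names are assumptions for illustration, not a proposed Solr API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the proposed pluggable interface;
// method names are assumptions, not an actual Solr API.
interface SolrDataProvider {
    void persist(String coreName);
    List<String> loadCoreNames();
}

// In-memory stand-in for a DB-backed provider, for illustration only.
class InMemoryDataProvider implements SolrDataProvider {
    private final List<String> cores = new ArrayList<String>();
    public void persist(String coreName) { cores.add(coreName); }
    public List<String> loadCoreNames() { return cores; }
}

public class DataProviderSketch {
    public static void main(String[] args) {
        SolrDataProvider provider = new InMemoryDataProvider();
        provider.persist("core0");
        provider.persist("core1");
        System.out.println(provider.loadCoreNames());
    }
}
```

CoreContainer would only talk to the interface, so swapping solr.xml for a DB or ZooKeeper store becomes a configuration change.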




[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.6.0_37) - Build # 1614 - Failure!

2012-11-13 Thread Policeman Jenkins Server
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-trunk-Windows/1614/
Java: 32bit/jdk1.6.0_37 -server -XX:+UseSerialGC

1 tests failed.
REGRESSION:  org.apache.lucene.analysis.core.TestRandomChains.testRandomChains

Error Message:
stage 3: inconsistent endOffset at pos=5: 15 vs 10; token=ޞ i ivqk_i

Stack Trace:
java.lang.IllegalStateException: stage 3: inconsistent endOffset at pos=5: 15 
vs 10; token=ޞ i ivqk_i
at 
__randomizedtesting.SeedInfo.seed([5D0182766DE523A4:60E0AB172AF73E64]:0)
at 
org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:135)
at 
org.apache.lucene.analysis.hi.HindiNormalizationFilter.incrementToken(HindiNormalizationFilter.java:51)
at 
org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:78)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:632)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:542)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:443)
at 
org.apache.lucene.analysis.core.TestRandomChains.testRandomChains(TestRandomChains.java:859)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 

[jira] [Commented] (SOLR-4066) SolrZKClient changed interface

2012-11-13 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-4066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496912#comment-13496912
 ] 

Trym Møller commented on SOLR-4066:
---

The provided solution of 3989 is fine with me; my proposal was just to keep the 
original behaviour.

Best regards, Trym

 SolrZKClient changed interface
 --

 Key: SOLR-4066
 URL: https://issues.apache.org/jira/browse/SOLR-4066
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0, 4.0.1, 4.1
 Environment: Any
Reporter: Trym Møller
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.1, 5.0

 Attachments: SOLR-4066.patch


 The constructor of SolrZKClient has changed, I expect in order to ensure cleanup 
 of resources. The strategy is as follows:
 {code}
 connManager = new ConnectionManager(...)
 try {
...
 } catch (Throwable e) {
   connManager.close();
   throw new RuntimeException();
 }
 try {
   connManager.waitForConnected(clientConnectTimeout);
 } catch (Throwable e) {
   connManager.close();
   throw new RuntimeException();
 }
 {code}
 This results in a different exception (RuntimeException) returned from the 
 constructor than earlier (nice exceptions such as UnknownHostException, 
 TimeoutException).
 Can this be changed so we keep the old nice exceptions, e.g. as follows 
 (requiring the constructor to declare these), or at least include them as the 
 cause in the RuntimeException?
 {code}
 boolean closeBecauseOfException = true;
 try {
 ...
    connManager.waitForConnected(clientConnectTimeout);
    closeBecauseOfException = false;
 } finally {
 if (closeBecauseOfException) {
 connManager.close();
 }
 } 
 {code}
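The "include them as cause" option is the cheapest fix: wrapping with RuntimeException(e) keeps the informative original exception reachable via getCause(). A small self-contained sketch (the connect method and host name are stand-ins, not SolrZKClient code):

```java
import java.net.UnknownHostException;

// Sketch of the "include the cause" suggestion: RuntimeException(e)
// preserves the informative original exception via getCause().
public class CauseWrappingSketch {
    static void connect() {
        try {
            // stand-in for the ZooKeeper connection failing
            throw new UnknownHostException("zk-host");
        } catch (Exception e) {
            throw new RuntimeException(e); // cause preserved
        }
    }

    public static void main(String[] args) {
        try {
            connect();
        } catch (RuntimeException e) {
            System.out.println(e.getCause().getClass().getSimpleName());
        }
    }
}
```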



