[jira] Commented: (SOLR-1447) Simple property injection

2009-09-23 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12758608#action_12758608
 ] 

Noble Paul commented on SOLR-1447:
--

committed r817976

thanks Jason

 Simple property injection 
 --

 Key: SOLR-1447
 URL: https://issues.apache.org/jira/browse/SOLR-1447
 Project: Solr
  Issue Type: Improvement
Affects Versions: 1.4
Reporter: Jason Rutherglen
Assignee: Noble Paul
 Fix For: 1.4

 Attachments: SOLR-1447.patch, SOLR-1447.patch, SOLR-1447.patch, 
 SOLR-1447.patch, SOLR-1447.patch, SOLR-1447.patch

   Original Estimate: 48h
  Remaining Estimate: 48h

 MergePolicy and MergeScheduler require property injection.  We'll allow these 
 and probably other cases in this patch using Java reflection.
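A minimal sketch of the kind of reflection-based setter injection the issue
describes (the helper below is illustrative, not the attached patch):

{code}
import java.lang.reflect.Method;
import java.util.Map;

public class PropertyInjector {
    /** Sets bean-style properties (e.g. mergeFactor=10) on a target object. */
    public static void inject(Object target, Map<String, String> props) throws Exception {
        for (Map.Entry<String, String> e : props.entrySet()) {
            String setter = "set" + Character.toUpperCase(e.getKey().charAt(0))
                    + e.getKey().substring(1);
            for (Method m : target.getClass().getMethods()) {
                if (!m.getName().equals(setter) || m.getParameterTypes().length != 1) {
                    continue;
                }
                Class<?> t = m.getParameterTypes()[0];
                Object val;
                if (t == int.class || t == Integer.class) {
                    val = Integer.valueOf(e.getValue());
                } else if (t == boolean.class || t == Boolean.class) {
                    val = Boolean.valueOf(e.getValue());
                } else if (t == double.class || t == Double.class) {
                    val = Double.valueOf(e.getValue());
                } else {
                    val = e.getValue();
                }
                m.invoke(target, val); // reflection auto-unboxes wrappers for primitives
                break;
            }
        }
    }
}
{code}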

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Error Compiling the Source ExtractRequestHandler

2009-09-23 Thread busbus

Hello All,

I got good help from this community; thanks for that.

Now I need some simple help.

I got the source files for uploading documents to Solr for indexing (thanks
ryguasu).

ExtractRequestHandler is the class which does the upload operation. I got the
source files and their corresponding support files; now I need to compile them
to make Solr run.

But in the source file, a method implements the class
ContentStreamHandlerBase, which requires ContentStreamLoader. There is also
a class for dateFormat that I am missing.

I am not able to find those classes, either standalone or inside any JAR files.

I think I am at the final checkbox to complete my work.

Any help would be greatly appreciated.

Thank you.
-- 
View this message in context: 
http://www.nabble.com/Error-Compiling-the-Source-ExtractRequestHandler-tp25531022p25531022.html
Sent from the Solr - Dev mailing list archive at Nabble.com.



Solr nightly build failure

2009-09-23 Thread solr-dev

init-forrest-entities:
[mkdir] Created dir: /tmp/apache-solr-nightly/build
[mkdir] Created dir: /tmp/apache-solr-nightly/build/web

compile-solrj:
[mkdir] Created dir: /tmp/apache-solr-nightly/build/solrj
[javac] Compiling 86 source files to /tmp/apache-solr-nightly/build/solrj
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.

compile:
[mkdir] Created dir: /tmp/apache-solr-nightly/build/solr
[javac] Compiling 387 source files to /tmp/apache-solr-nightly/build/solr
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.

compileTests:
[mkdir] Created dir: /tmp/apache-solr-nightly/build/tests
[javac] Compiling 176 source files to /tmp/apache-solr-nightly/build/tests
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.

solr-cell-example:

init:
[mkdir] Created dir: 
/tmp/apache-solr-nightly/contrib/extraction/build/classes
[mkdir] Created dir: /tmp/apache-solr-nightly/build/docs/api

init-forrest-entities:

compile-solrj:

compile:
[javac] Compiling 1 source file to /tmp/apache-solr-nightly/build/solr
[javac] Note: 
/tmp/apache-solr-nightly/src/java/org/apache/solr/search/DocSetHitCollector.java
 uses or overrides a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.

make-manifest:
[mkdir] Created dir: /tmp/apache-solr-nightly/build/META-INF

compile:
[javac] Compiling 6 source files to 
/tmp/apache-solr-nightly/contrib/extraction/build/classes
[javac] Note: 
/tmp/apache-solr-nightly/contrib/extraction/src/main/java/org/apache/solr/handler/extraction/ExtractingDocumentLoader.java
 uses unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.

build:
  [jar] Building jar: 
/tmp/apache-solr-nightly/contrib/extraction/build/apache-solr-cell-nightly.jar

example:
 [copy] Copying 1 file to /tmp/apache-solr-nightly/example/solr/lib
 [copy] Copying 26 files to /tmp/apache-solr-nightly/example/solr/lib

junit:
[mkdir] Created dir: /tmp/apache-solr-nightly/build/test-results
[junit] Running org.apache.solr.BasicFunctionalityTest
[junit] Tests run: 20, Failures: 0, Errors: 0, Time elapsed: 47.527 sec
[junit] Running org.apache.solr.ConvertedLegacyTest
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 30.287 sec
[junit] Running org.apache.solr.DisMaxRequestHandlerTest
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 16.619 sec
[junit] Running org.apache.solr.EchoParamsTest
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 4.605 sec
[junit] Running org.apache.solr.MinimalSchemaTest
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 7.492 sec
[junit] Running org.apache.solr.OutputWriterTest
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 5.861 sec
[junit] Running org.apache.solr.SampleTest
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 5.863 sec
[junit] Running org.apache.solr.SolrInfoMBeanTest
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.284 sec
[junit] Running org.apache.solr.TestDistributedSearch
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 100.19 sec
[junit] Running org.apache.solr.TestPluginEnable
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 5.439 sec
[junit] Running org.apache.solr.TestSolrCoreProperties
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 6.128 sec
[junit] Running org.apache.solr.TestTrie
[junit] Tests run: 7, Failures: 0, Errors: 0, Time elapsed: 14.724 sec
[junit] Running org.apache.solr.analysis.CommonGramsFilterFactoryTest
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 5.355 sec
[junit] Running org.apache.solr.analysis.CommonGramsFilterTest
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.606 sec
[junit] Running org.apache.solr.analysis.CommonGramsQueryFilterFactoryTest
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 11.631 sec
[junit] Running org.apache.solr.analysis.DoubleMetaphoneFilterFactoryTest
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.506 sec
[junit] Running org.apache.solr.analysis.DoubleMetaphoneFilterTest
[junit] Tests run: 6, Failures: 0, Errors: 0, Time elapsed: 0.657 sec
[junit] Running 

Hudson build is back to normal: Solr-trunk #933

2009-09-23 Thread Apache Hudson Server
See http://hudson.zones.apache.org/hudson/job/Solr-trunk/933/changes




[jira] Resolved: (SOLR-1453) Tree facet with stats component

2009-09-23 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-1453.
-

Resolution: Invalid

Please ask questions on the solr-user mailing list before creating JIRA issues.

 Tree facet with stats component
 ---

 Key: SOLR-1453
 URL: https://issues.apache.org/jira/browse/SOLR-1453
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 1.3
Reporter: sumit biyani
Priority: Minor

 Hi,
 My requirement is to fetch results like the following SQL query:
 SELECT txn_code,
        txn_server,
        MAX(total_time),
        MIN(total_time),
        MAX(db_time),
        MIN(db_time)
 FROM txn_table
 GROUP BY txn_code, txn_server;
 StatsComponent helps to get aggregate results, and stats.facet is available
 for a single field, but my requirement is to have a multi-level facet on more
 than one field while fetching results.
 Please let me know how to get this using Solr. Is this planned in any
 upcoming release, or is any patch available?

 Thanks & Regards,
 Sumit.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-1437) DIH: Enhance XPathRecordReader to deal with //tagname and other improvements.

2009-09-23 Thread Fergus McMenemie (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12758652#action_12758652
 ] 

Fergus McMenemie commented on SOLR-1437:


Noble,

Playing with the code... some observations I would like confirmed.

1) Inside parse(), the valuesAddedinThisFrame HashSet and the Stack<Set<String>>
stack variables are only used to aid in the clean-up after outputting a record.

2) The code seems unable to collect text for a forEach xpath. So for the
following fragment of code

{code}
String xml = "<root>\n"
 + "  <status>live</status>\n"
 + "  <contenido id=\"10097\" idioma=\"cat\">\n"
 + "    Cats can be cute\n"
 + "    <antetitulo></antetitulo>\n"
 + "    <titulo>\n   This is my title\n    </titulo>\n"
 + "    <resumen>\n  This is my summary\n   </resumen>\n"
 + "    <texto>\n This is the body of my text\n   </texto>\n"
 + "  </contenido>\n"
 + "</root>";
XPathRecordReader rr = new XPathRecordReader("/root/contenido");
rr.addField("cat", "/root/contenido", false);      //  * FAILS *
rr.addField("id",  "/root/contenido/@id", false);
{code}

we can get the string associated with the id attribute of contenido but not
its child text! Is this a design goal, or just the way the code ended up
behaving? Do we want it to continue to work this way?

 DIH: Enhance XPathRecordReader to deal with //tagname and other improvements.
 

 Key: SOLR-1437
 URL: https://issues.apache.org/jira/browse/SOLR-1437
 Project: Solr
  Issue Type: Improvement
  Components: contrib - DataImportHandler
Affects Versions: 1.4
Reporter: Fergus McMenemie
Assignee: Noble Paul
Priority: Minor
 Fix For: 1.5

 Attachments: SOLR-1437.patch, SOLR-1437.patch

   Original Estimate: 672h
  Remaining Estimate: 672h

 As per 
 http://www.nabble.com/Re%3A-Extract-info-from-parent-node-during-data-import-%28redirect%3A%29-td25471162.html
  it would be nice to be able to use expressions such as //tagname when 
 parsing XML documents.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: Error Compiling the Source ExtractRequestHandler

2009-09-23 Thread Grant Ingersoll
Where/How did you get the source for the ExtractingRequestHandler?  It  
sounds like you are missing the core.  I'd recommend checking out the  
source from trunk using SVN.


Also, this type of question is best asked on the user mailing list in  
the future.





--
Grant Ingersoll
http://www.lucidimagination.com/

Search the Lucene ecosystem (Lucene/Solr/Nutch/Mahout/Tika/Droids)  
using Solr/Lucene:

http://www.lucidimagination.com/search



[jira] Commented: (SOLR-1437) DIH: Enhance XPathRecordReader to deal with //tagname and other improvements.

2009-09-23 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12758663#action_12758663
 ] 

Noble Paul commented on SOLR-1437:
--

It is not intentional.

 DIH: Enhance XPathRecordReader to deal with //tagname and other improvements.
 

 Key: SOLR-1437
 URL: https://issues.apache.org/jira/browse/SOLR-1437
 Project: Solr
  Issue Type: Improvement
  Components: contrib - DataImportHandler
Affects Versions: 1.4
Reporter: Fergus McMenemie
Assignee: Noble Paul
Priority: Minor
 Fix For: 1.5

 Attachments: SOLR-1437.patch, SOLR-1437.patch

   Original Estimate: 672h
  Remaining Estimate: 672h

 As per 
 http://www.nabble.com/Re%3A-Extract-info-from-parent-node-during-data-import-%28redirect%3A%29-td25471162.html
  it would be nice to be able to use expressions such as //tagname when 
 parsing XML documents.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-1437) DIH: Enhance XPathRecordReader to deal with //tagname and other improvements.

2009-09-23 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12758664#action_12758664
 ] 

Noble Paul commented on SOLR-1437:
--

bq. Inside parse(), the valuesAddedinThisFrame HashSet and the
Stack<Set<String>> stack variables are only used to aid in the clean-up after
outputting a record.

yes,

there is a testcase, sameForEachAndXpath(), which uses the same forEach and
field xpath. So something strange is going on here.

 DIH: Enhance XPathRecordReader to deal with //tagname and other improvements.
 

 Key: SOLR-1437
 URL: https://issues.apache.org/jira/browse/SOLR-1437
 Project: Solr
  Issue Type: Improvement
  Components: contrib - DataImportHandler
Affects Versions: 1.4
Reporter: Fergus McMenemie
Assignee: Noble Paul
Priority: Minor
 Fix For: 1.5

 Attachments: SOLR-1437.patch, SOLR-1437.patch

   Original Estimate: 672h
  Remaining Estimate: 672h

 As per 
 http://www.nabble.com/Re%3A-Extract-info-from-parent-node-during-data-import-%28redirect%3A%29-td25471162.html
  it would be nice to be able to use expressions such as //tagname when 
 parsing XML documents.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: Solr nightly build failure

2009-09-23 Thread Yonik Seeley
The exception that caused the failure is shown below...
Is this an exception that should be handled internally by LBHttpSolrServer?

-Yonik
http://www.lucidimagination.com

  <testcase classname="org.apache.solr.client.solrj.TestLBHttpSolrServer"
name="testSimple" time="22.99"/>
  <testcase classname="org.apache.solr.client.solrj.TestLBHttpSolrServer"
name="testTwoServers" time="7.81">
    <error message="java.lang.IllegalStateException: Connection is not open"
type="org.apache.solr.client.solrj.SolrServerException">org.apache.solr.client.solrj.SolrServerException:
java.lang.IllegalStateException: Connection is not open
        at org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:217)
        at org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:89)
        at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:118)
        at org.apache.solr.client.solrj.TestLBHttpSolrServer.testTwoServers(TestLBHttpSolrServer.java:130)
Caused by: java.lang.IllegalStateException: Connection is not open
        at org.apache.commons.httpclient.HttpConnection.assertOpen(HttpConnection.java:1277)
        at org.apache.commons.httpclient.HttpConnection.getResponseInputStream(HttpConnection.java:858)
        at org.apache.commons.httpclient.HttpMethodBase.readResponseHeaders(HttpMethodBase.java:1935)
        at org.apache.commons.httpclient.HttpMethodBase.readResponse(HttpMethodBase.java:1737)
        at org.apache.commons.httpclient.HttpMethodBase.execute(HttpMethodBase.java:1098)
        at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:398)
        at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171)
        at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
        at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:323)
        at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:415)
        at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:242)
        at org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:205)
    </error>
  </testcase>





Re: Solr nightly build failure

2009-09-23 Thread Noble Paul നോബിള്‍ नोब्ळ्
I guess yes. But in this case it should have retried. And probably the
next attempt would have succeeded.




[jira] Commented: (SOLR-844) A SolrServer impl to front-end multiple urls

2009-09-23 Thread Jason Falk (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12758841#action_12758841
 ] 

Jason Falk commented on SOLR-844:
-

There seems to be an issue here with the check interval.  The documentation and 
even the default value assume that the check interval is measured in 
milliseconds, when in fact the code below has the TimeUnit as seconds.

getAliveCheckRunner(new WeakReference<LBHttpSolrServer>(this)), this.interval,
this.interval, TimeUnit.SECONDS);

So basically right now the check interval is 60,000 seconds...yikes.
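A minimal sketch of the suggested fix, assuming the call above is the
alive-check scheduling (the executor and runner names here are illustrative):

{code}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class AliveCheckIntervalSketch {
    public static void main(String[] args) {
        final long interval = 60 * 1000L; // documented default: milliseconds
        ScheduledExecutorService aliveCheckExecutor =
                Executors.newSingleThreadScheduledExecutor();
        // Hypothetical stand-in for getAliveCheckRunner(...)
        Runnable aliveCheckRunner = new Runnable() {
            public void run() {
                System.out.println("checking zombie servers every " + interval + " ms");
            }
        };
        // The fix: interpret the interval as milliseconds, not seconds.
        aliveCheckExecutor.scheduleAtFixedRate(
                aliveCheckRunner, interval, interval, TimeUnit.MILLISECONDS);
    }
}
{code}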

 A SolrServer impl to front-end multiple urls
 

 Key: SOLR-844
 URL: https://issues.apache.org/jira/browse/SOLR-844
 Project: Solr
  Issue Type: New Feature
  Components: clients - java
Affects Versions: 1.3
Reporter: Noble Paul
Assignee: Shalin Shekhar Mangar
 Fix For: 1.4

 Attachments: SOLR-844.patch, SOLR-844.patch, SOLR-844.patch, 
 SOLR-844.patch, SOLR-844.patch, SOLR-844.patch, SOLR-844.patch, SOLR-844.patch


 Currently a {{CommonsHttpSolrServer}} can talk to only one server. This
 demands that the user have a load balancer or do the round-robin on their own.
 We must have a {{LBHttpSolrServer}} which automatically does the
 load balancing between multiple hosts. This can be backed by the
 {{CommonsHttpSolrServer}}.
 This can have the following other features:
 * Automatic failover
 * Optionally take in a file/url containing the urls of servers, so that
 the server list can be automatically updated by periodically loading the
 config
 * Support for adding/removing servers at runtime
 * Pluggable load-balancing mechanism (round-robin, weighted round-robin,
 random, etc.)
 * Pluggable failover mechanisms

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-1449) solrconfig.xml syntax to add classpath elements from outside of instanceDir

2009-09-23 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12758842#action_12758842
 ] 

Yonik Seeley commented on SOLR-1449:


bq. replaces the ClassLoader with a new one where the parent is fixed on the
previous classloader

I'm not a classloading expert... but couldn't this cause things to work 
differently by changing the order of the lib statements?
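A toy sketch of the chaining being described (the lib URLs are hypothetical):
because each new loader delegates to its parent first, a class found in an
earlier lib shadows a later one, so reordering lib statements could indeed
change which class wins.

{code}
import java.net.URL;
import java.net.URLClassLoader;
import java.util.Arrays;
import java.util.List;

public class LibChainSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical lib entries, in solrconfig.xml order.
        List<URL> libUrls = Arrays.asList(
                new URL("file:///opt/solr/extra/a.jar"),
                new URL("file:///opt/solr/extra/b.jar"));
        ClassLoader loader = LibChainSketch.class.getClassLoader();
        // Each lib wraps the previous loader as its parent, so with the JDK's
        // default parent-first delegation, earlier libs win on duplicate classes.
        for (URL libUrl : libUrls) {
            loader = new URLClassLoader(new URL[] { libUrl }, loader);
        }
        System.out.println(loader);
    }
}
{code}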


 solrconfig.xml syntax to add classpath elements from outside of instanceDir
 ---

 Key: SOLR-1449
 URL: https://issues.apache.org/jira/browse/SOLR-1449
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man
 Fix For: 1.4

 Attachments: SOLR-1449.patch


 the idea has been discussed numerous times that it would be nice if there was
 a way to configure a core to load plugins from specific jars (or "classes"
 style directories) by path, w/o needing to copy them to the ./lib dir in
 the instanceDir.
 The current workaround is symlinks, but that doesn't really help the
 situation of the Solr release artifacts, where we wind up making numerous
 copies of jars to support multiple example directories (you can't have
 reliable symlinks in zip files).

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-1314) Upgrade Carrot2 to version 3.1.0

2009-09-23 Thread Stanislaw Osinski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12758843#action_12758843
 ] 

Stanislaw Osinski commented on SOLR-1314:
-

Hi Grant,

I've made Carrot2's dependency on Smart Chinese Analyzer optional, so no 
exceptions should be thrown when the big JAR is not in the classpath. As usual, 
download from here:

http://download.carrot2.org/maven2/org/carrot2/carrot2-mini/3.1-dev/

S.

 Upgrade Carrot2 to version 3.1.0
 

 Key: SOLR-1314
 URL: https://issues.apache.org/jira/browse/SOLR-1314
 Project: Solr
  Issue Type: Task
Reporter: Stanislaw Osinski
Assignee: Grant Ingersoll
 Fix For: 1.4


 As soon as Lucene 2.9 is released, Carrot2 3.1.0 will come out with bug fixes
 in clustering algorithms and improved clustering in Chinese. The upgrade
 should be a matter of upgrading {{carrot2-mini.jar}} and
 {{google-collections.jar}}.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-1437) DIH: Enhance XPathRecordReader to deal with //tagname and other improvements.

2009-09-23 Thread Fergus McMenemie (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12758860#action_12758860
 ] 

Fergus McMenemie commented on SOLR-1437:


Apologies! My testcase, which uses the same xpath for the forEach and the
field value, works once I removed the typo!



 DIH: Enhance XPathRecordReader to deal with //tagname and other improvements.
 

 Key: SOLR-1437
 URL: https://issues.apache.org/jira/browse/SOLR-1437
 Project: Solr
  Issue Type: Improvement
  Components: contrib - DataImportHandler
Affects Versions: 1.4
Reporter: Fergus McMenemie
Assignee: Noble Paul
Priority: Minor
 Fix For: 1.5

 Attachments: SOLR-1437.patch, SOLR-1437.patch

   Original Estimate: 672h
  Remaining Estimate: 672h

 As per 
 http://www.nabble.com/Re%3A-Extract-info-from-parent-node-during-data-import-%28redirect%3A%29-td25471162.html
  it would be nice to be able to use expressions such as //tagname when 
 parsing XML documents.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



large OR-boolean query

2009-09-23 Thread Luo, Jeff
Hi,

We are experimenting with a parallel approach to issuing a large OR-boolean
query, e.g., keywords:(1 OR 2 OR 3 OR ... OR 102400), against several
Solr shards.

The way we are trying is to break the large query into smaller ones;
e.g., the example above can be broken into 100 small queries: keywords:(1 OR 2
OR 3 OR ... OR 1024), keywords:(1025 OR 1026 OR 1027 OR ... OR 2048),
etc.

Now each shard will get 100 requests, and the master will merge the
results coming back from each shard, similar to the regular distributed
search.

There are a few issues with this approach, though:

Issue #1: sorting:
ShardFieldSortedHitQueue.lessThan() checks whether two documents are from the
same shard; if yes, it will just compare the orderInShard. However, in
our approach, two documents from the same shard could be results of two
different small queries, and in that case we can't just compare the
orderInShard. There is a simple solution to this issue: just change the
if statement to:

if (docA.shard == docB.shard && docA.sortFieldValues ==
docB.sortFieldValues)

Can someone make this change for Solr 1.4 so that we don't have to
customize it? It does NOT impact the normal case.

Issue #2: number of matching documents found:
Since multiple responses from the same shard could contain duplicate
documents, the sum of the numbers of documents found in each response will
be greater than the real number of documents found.

Unless we ask each shard to return all the matching records for every
small query and then check duplicates, we can't get the accurate number
of documents found.

Issue #3: faceting counts
Due to the same limitation as in issue #2, the faceting count is always
greater than the correct one.

I don't have any idea how this issue can be resolved.
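For reference, a hedged sketch of the splitting step described at the top of
this mail; SolrQuery is SolrJ's query class, but the field name and chunk size
are illustrative:

{code}
import java.util.ArrayList;
import java.util.List;
import org.apache.solr.client.solrj.SolrQuery;

public class LargeOrQuerySplitter {
    /** Breaks keywords:(t1 OR t2 OR ...) into OR queries of 'chunk' terms each. */
    public static List<SolrQuery> split(List<String> terms, int chunk) {
        List<SolrQuery> queries = new ArrayList<SolrQuery>();
        for (int i = 0; i < terms.size(); i += chunk) {
            StringBuilder q = new StringBuilder("keywords:(");
            for (int j = i; j < Math.min(i + chunk, terms.size()); j++) {
                if (j > i) {
                    q.append(" OR ");
                }
                q.append(terms.get(j));
            }
            queries.add(new SolrQuery(q.append(')').toString()));
        }
        return queries;
    }
}
{code}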


[jira] Commented: (SOLR-844) A SolrServer impl to front-end multiple urls

2009-09-23 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12758864#action_12758864
 ] 

Yonik Seeley commented on SOLR-844:
---

Thanks Jason, I just committed a fix for this that changed it to milliseconds.

 A SolrServer impl to front-end multiple urls
 

 Key: SOLR-844
 URL: https://issues.apache.org/jira/browse/SOLR-844
 Project: Solr
  Issue Type: New Feature
  Components: clients - java
Affects Versions: 1.3
Reporter: Noble Paul
Assignee: Shalin Shekhar Mangar
 Fix For: 1.4

 Attachments: SOLR-844.patch, SOLR-844.patch, SOLR-844.patch, 
 SOLR-844.patch, SOLR-844.patch, SOLR-844.patch, SOLR-844.patch, SOLR-844.patch


 Currently a {{CommonsHttpSolrServer}} can talk to only one server. This
 demands that the user have a load balancer or do the round-robin on their own.
 We must have a {{LBHttpSolrServer}} which automatically does the
 load balancing between multiple hosts. This can be backed by the
 {{CommonsHttpSolrServer}}.
 This can have the following other features:
 * Automatic failover
 * Optionally take in a file/url containing the urls of servers, so that
 the server list can be automatically updated by periodically loading the
 config
 * Support for adding/removing servers at runtime
 * Pluggable load-balancing mechanism (round-robin, weighted round-robin,
 random, etc.)
 * Pluggable failover mechanisms

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-1321) Support for efficient leading wildcards search

2009-09-23 Thread Ravi Kiran Bhaskar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12758868#action_12758868
 ] 

Ravi Kiran Bhaskar commented on SOLR-1321:
--

While using ReversedWildcardFilterFactory with KeywordTokenizerFactory, I get
the following error for this fieldType:

<fieldType name="keywordText" class="solr.TextField" sortMissingLast="true"
    omitNorms="true" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.TrimFilterFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true"
        words="stopwords.txt" enablePositionIncrements="true"/>
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
        ignoreCase="true" expand="true"/>
    <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.TrimFilterFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true"
        words="stopwords.txt" enablePositionIncrements="true"/>
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
        ignoreCase="true" expand="true"/>
    <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
  </analyzer>
</fieldType>

ERROR

HTTP Status 500 - org.apache.solr.analysis.WhitespaceTokenizerFactory.create(Ljava/io/Reader;)Lorg/apache/lucene/analysis/Tokenizer;

java.lang.AbstractMethodError: org.apache.solr.analysis.WhitespaceTokenizerFactory.create(Ljava/io/Reader;)Lorg/apache/lucene/analysis/Tokenizer;
    at org.apache.solr.analysis.TokenizerChain.getStream(TokenizerChain.java:69)
    at org.apache.solr.analysis.SolrAnalyzer.reusableTokenStream(SolrAnalyzer.java:74)
    at org.apache.solr.schema.IndexSchema$SolrIndexAnalyzer.reusableTokenStream(IndexSchema.java:364)
    at org.apache.lucene.queryParser.QueryParser.getFieldQuery(QueryParser.java:543)
    at org.apache.solr.search.SolrQueryParser.getFieldQuery(SolrQueryParser.java:153)
    at org.apache.solr.util.SolrPluginUtils$DisjunctionMaxQueryParser.getFieldQuery(SolrPluginUtils.java:807)
    at org.apache.solr.util.SolrPluginUtils$DisjunctionMaxQueryParser.getFieldQuery(SolrPluginUtils.java:794)
    at org.apache.lucene.queryParser.QueryParser.Term(QueryParser.java:1425)
    at org.apache.lucene.queryParser.QueryParser.Clause(QueryParser.java:1313)
    at org.apache.lucene.queryParser.QueryParser.Query(QueryParser.java:1241)
    at org.apache.lucene.queryParser.QueryParser.TopLevelQuery(QueryParser.java:1230)
    at org.apache.lucene.queryParser.QueryParser.parse(QueryParser.java:176)
    at org.apache.solr.search.DisMaxQParser.getUserQuery(DisMaxQParser.java:195)
    at org.apache.solr.search.DisMaxQParser.addMainQuery(DisMaxQParser.java:158)
    at org.apache.solr.search.DisMaxQParser.parse(DisMaxQParser.java:74)
    at org.apache.solr.search.QParser.getQuery(QParser.java:131)
    at org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:89)
    at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:174)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
    at org.apache.solr.core.SolrCore.execute(SolrCore.java:1313)
    at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:338)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:241)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:230)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:198)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:297)
    at org.apache.catalina.core.StandardContextValve.invokeInternal(StandardContextValve.java:271)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:202)
    at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:632)
    at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:577)
    at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:94)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:206)
    at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:632)
    at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:577)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:571)
    at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:1080)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:150)
    at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:632)
    at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:577)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:571)
    at

Re: large OR-boolean query

2009-09-23 Thread Yonik Seeley
On Wed, Sep 23, 2009 at 4:26 PM, Luo, Jeff j...@cas.org wrote:
 We are experimenting with a parallel approach to issuing a large OR-boolean
 query, e.g., keywords:(1 OR 2 OR 3 OR ... OR 102400), against several
 Solr shards.

 The way we are trying is to break the large query into smaller ones;
 e.g., the example above can be broken into 100 small queries: keywords:(1 OR 2
 OR 3 OR ... OR 1024), keywords:(1025 OR 1026 OR 1027 OR ... OR 2048),
 etc.

 Now each shard will get 100 requests, and the master will merge the
 results coming back from each shard, similar to the regular distributed
 search.

You're going to end up with a lot of custom code, I think.
Where's the bottleneck... searching or faceting?

If faceting is the bottleneck, making an implementation that utilizes
multiple threads would be one of the best ways.
If searching, you could develop a custom query type (QParserPlugin)
that handles your type of queries and splits them across multiple
threads.
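A bare skeleton of that QParserPlugin route, assuming the 1.4-era
QParserPlugin/QParser signatures; the parse() body is a placeholder, not
working parallel-search code:

{code}
import org.apache.lucene.queryParser.ParseException;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;
import org.apache.solr.common.params.SolrParams;
import org.apache.solr.common.util.NamedList;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.search.QParser;
import org.apache.solr.search.QParserPlugin;

public class LargeOrQParserPlugin extends QParserPlugin {
    public void init(NamedList args) {}

    public QParser createParser(String qstr, SolrParams localParams,
                                SolrParams params, SolrQueryRequest req) {
        return new QParser(qstr, localParams, params, req) {
            public Query parse() throws ParseException {
                // Placeholder: split qstr into term chunks here and build the
                // sub-queries, possibly evaluating them on multiple threads
                // before combining the results.
                return new BooleanQuery();
            }
        };
    }
}
{code}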

-Yonik
http://www.lucidimagination.com


[jira] Commented: (SOLR-1321) Support for efficient leading wildcards search

2009-09-23 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12758873#action_12758873
 ] 

Yonik Seeley commented on SOLR-1321:


Works for me in the latest trunk - perhaps you have some old class files lying
around somewhere? That's normally the case with something like
AbstractMethodError.

 Support for efficient leading wildcards search
 --

 Key: SOLR-1321
 URL: https://issues.apache.org/jira/browse/SOLR-1321
 Project: Solr
  Issue Type: Improvement
  Components: Analysis
Affects Versions: 1.4
Reporter: Andrzej Bialecki 
Assignee: Grant Ingersoll
 Fix For: 1.4

 Attachments: SOLR-1321.patch, SOLR-1321.patch, SOLR-1321.patch, 
 wildcards-2.patch, wildcards-3.patch, wildcards.patch


 This patch is an implementation of the reversed tokens strategy for 
 efficient leading wildcards queries.
 ReversedWildcardsTokenFilter reverses tokens and returns both the original 
 token (optional) and the reversed token (with positionIncrement == 0). 
 Reversed tokens are prepended with a marker character to avoid collisions
 between legitimate tokens and the reversed tokens - e.g. "DNA" would become
 "and", thus colliding with the regular term "and", but with the marker
 character it becomes "\u0001and".
 This TokenFilter can be added to the analyzer chain that it used during 
 indexing.
 SolrQueryParser has been modified to detect the presence of such fields in 
 the current schema, and treat them in a special way. First, SolrQueryParser 
 examines the schema and collects a map of fields where these reversed tokens 
 are indexed. If there is at least one such field, it also sets 
 QueryParser.setAllowLeadingWildcards(true). When building a wildcard query 
 (in getWildcardQuery) the term text may be optionally reversed to put 
 wildcards further along the term text. This happens when the field uses the 
 reversing filter during indexing (as detected above), AND if the wildcard 
 characters are either at 0-th or 1-st position in the term. Otherwise the 
 term text is processed as before, i.e. turned into a regular wildcard query.
 Unit tests are provided to test the TokenFilter and the query parsing.
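An illustrative fragment of the reverse-plus-marker idea described above (this
is just the core string transformation, not the patch's TokenFilter):

{code}
public class ReversedTokenSketch {
    static final char MARKER = '\u0001';

    // "DNA" (lowercased to "dna" by the analysis chain) reverses to "and";
    // the marker prefix keeps it from colliding with the real token "and".
    static String reverseWithMarker(String term) {
        return MARKER + new StringBuilder(term).reverse().toString();
    }

    public static void main(String[] args) {
        String reversed = reverseWithMarker("dna");
        System.out.println((int) reversed.charAt(0)); // 1 (the marker)
        System.out.println(reversed.substring(1));    // and
    }
}
{code}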

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (SOLR-1454) Error during auto-warming OutOfMemoryError

2009-09-23 Thread kareem shahin (JIRA)
Error during auto-warming OutOfMemoryError
--

 Key: SOLR-1454
 URL: https://issues.apache.org/jira/browse/SOLR-1454
 Project: Solr
  Issue Type: Bug
Affects Versions: 1.3
 Environment: CentOS 5.2
Reporter: kareem shahin


I'm getting the following

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Closed: (SOLR-1454) Error during auto-warming OutOfMemoryError

2009-09-23 Thread kareem shahin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kareem shahin closed SOLR-1454.
---

Resolution: Incomplete

 Error during auto-warming OutOfMemoryError
 --

 Key: SOLR-1454
 URL: https://issues.apache.org/jira/browse/SOLR-1454
 Project: Solr
  Issue Type: Bug
Affects Versions: 1.3
 Environment: CentOS 5.2
Reporter: kareem shahin

 I'm getting the following

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (SOLR-1455) Error during auto-warming OutOfMemoryError

2009-09-23 Thread kareem shahin (JIRA)
Error during auto-warming OutOfMemoryError
--

 Key: SOLR-1455
 URL: https://issues.apache.org/jira/browse/SOLR-1455
 Project: Solr
  Issue Type: Bug
Affects Versions: 1.3
 Environment: CentOS 5.2,  8gb ram
Reporter: kareem shahin


Lately Solr has been locking up for me.

I keep getting:

WARNING: No lockType configured for /home/solr/indices/lunar/data/index/ 
assuming 'simple'
Sep 23, 2009 4:47:34 AM org.apache.coyote.http11.Http11Processor process
SEVERE: Error processing request
java.lang.OutOfMemoryError: GC overhead limit exceeded
Sep 23, 2009 4:48:57 AM org.apache.solr.common.SolrException log
SEVERE: java.lang.OutOfMemoryError: GC overhead limit exceeded

Sep 23, 2009 5:20:59 AM org.apache.solr.common.SolrException log
SEVERE: Error during auto-warming of 
key:org.apache.solr.search.queryresult...@cb3e3ef6:java.lang.OutOfMemoryError: 
GC overhead limit exceeded

Sep 23, 2009 5:20:59 AM org.apache.solr.common.SolrException log
SEVERE: java.lang.OutOfMemoryError: GC overhead limit exceeded

Sep 23, 2009 5:20:59 AM org.apache.solr.search.SolrIndexSearcher warm
INFO: autowarming result for searc...@2d59e0ae main

queryResultCache{lookups=0,hits=0,hitratio=0.00,inserts=49,evictions=0,size=49,warmupTime=49100153,cumulative_lookups=60,cumulative_hits=5,cumulative_hitratio=0.08,cumulative_inserts=55,cumulative_evictions=0}
Sep 23, 2009 5:20:59 AM org.apache.solr.common.SolrException log
SEVERE: java.lang.RuntimeException: java.lang.OutOfMemoryError: GC overhead 
limit exceeded
at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:960)
at 
org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:368)
at 
org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:77)
at 
org.apache.solr.handler.XmlUpdateRequestHandler.processUpdate(XmlUpdateRequestHandler.java:226)
at 
org.apache.solr.handler.XmlUpdateRequestHandler.handleRequestBody(XmlUpdateRequestHandler.java:123)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1204)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:303)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:232)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at 
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at 
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
at 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
at 
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at 
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:263)
at 
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:844)
at 
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:584)
at 
org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded


and after a while the lock is never removed:
SEVERE: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed 
out: 
SimpleFSLock@/home/solr/indices/lunar/data/index/lucene-1cf222e5ad584eae35889657453815d8-write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:85)
at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:1140)
at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:938)
at 
org.apache.solr.update.SolrIndexWriter.init(SolrIndexWriter.java:116)
at 
org.apache.solr.update.UpdateHandler.createMainIndexWriter(UpdateHandler.java:122)
at 
org.apache.solr.update.DirectUpdateHandler2.openWriter(DirectUpdateHandler2.java:167)
at 
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:221)
at 
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:59)
at 
org.apache.solr.handler.XmlUpdateRequestHandler.processUpdate(XmlUpdateRequestHandler.java:196)
at 
org.apache.solr.handler.XmlUpdateRequestHandler.handleRequestBody(XmlUpdateRequestHandler.java:123)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1204)
at 

Re: [Solr Wiki] Update of LocalParams by naomidushay

2009-09-23 Thread Yonik Seeley
On Wed, Sep 23, 2009 at 5:54 PM, Apache Wiki wikidi...@apache.org wrote:
 +  `q={!type=dismax qf=myfield yourfield}solr rocks`
 + will give an error because yourfield is expected to be a key=value pair.

This doesn't give an error for the reason you think it does :-)
A single word like this is shorthand for type=yourfield... so
q={!type=dismax qf=myfield lucene}solr rocks
would not throw an error and parse to a lucene query.
This shorthand notation is what allows
{!dismax} to mean the same thing as {!type=dismax}
It's undocumented that it actually works if it's not the first
param... perhaps we should limit that in the future.

 + example of backslash escaping:
 +  `q={!type=dismax qf=title}2 \+ 2`

This isn't an example of local params backslash escaping.
There is no escaping at all in the value (that's considered a feature).

-Yonik
http://www.lucidimagination.com


  === Query type short-form ===
 - If a localparam value appears without a name, it is given the implicit name 
 of type.  This allows short-form representation for the type of query 
 parser to use when parsing a query string.  Thus
 + If a LocalParams value appears without a name, it is given the implicit 
 name of type.  This allows short-form representation for the type of query 
 parser to use when parsing a query string.  Thus
 - {{{q={!dismax qf=myfield}solr rocks
 +  `q={!dismax qf=myfield}solr rocks`
 - }}} is equivalent to
 + is equivalent to
 - {{{q={!type=dismax qf=myfield}solr rocks
 +  `q={!type=dismax qf=myfield}solr rocks`
 - }}}

  === Parameter value ===
  A special key of v within local parameters is an alternate way to specify 
 the value of that parameter.
 - {{{q={!dismax qf=myfield}solr rocks
 +  `q={!dismax qf=myfield}solr rocks`
 - }}} is equivalent to
 + is equivalent to
 - {{{q={!type=dismax qf=myfield v='solr rocks'}
 +  `q={!type=dismax qf=myfield v='solr rocks'}`
 - }}}

  === Parameter dereferencing ===
  Parameter dereferencing or indirection allows one to use the value of 
 another argument rather than specifying it directly.  This can be used to 
 simplify queries, decouple user input from query parameters, or decouple 
 front-end GUI parameters from defaults set in solrconfig.xml.
 - {{{q={!dismax qf=myfield}solr rocks
 +  `q={!dismax qf=myfield}solr rocks`
 - }}} is equivalent to
 + is equivalent to
 - {{{q={!type=dismax qf=myfield v=$qq}qq=solr rocks
 +  `q={!type=dismax qf=myfield v=$qq}qq=solr rocks`
 - }}}




[jira] Created: (SOLR-1456) DIH - Provide periodic autoupdate capability. Allow ability to specify an interval for autoupdate in DIH config.

2009-09-23 Thread Manish Kalbande (JIRA)
DIH - Provide periodic autoupdate capability. Allow ability to specify an 
interval for autoupdate in DIH config.


 Key: SOLR-1456
 URL: https://issues.apache.org/jira/browse/SOLR-1456
 Project: Solr
  Issue Type: Improvement
  Components: contrib - DataImportHandler
Reporter: Manish Kalbande
 Fix For: 1.4


Currently DIH supports full import and delta import.

If I want to create an index from data in a DB and keep my index up to date,
DIH does not provide any ability to periodically update the index by
running the delta query automatically.
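Until something like this exists, a common workaround is to trigger the
delta-import command on a timer from outside Solr; a hedged sketch, with host,
port, and handler path as placeholders:

{code}
import java.io.InputStream;
import java.net.URL;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DeltaImportTimer {
    public static void main(String[] args) {
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        timer.scheduleAtFixedRate(new Runnable() {
            public void run() {
                try {
                    // Placeholder URL: adjust host, port, and handler path.
                    URL url = new URL(
                        "http://localhost:8983/solr/dataimport?command=delta-import");
                    InputStream in = url.openStream(); // fire the request
                    in.close();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }, 0, 10, TimeUnit.MINUTES);
    }
}
{code}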



-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (SOLR-1457) Deploy shards from HDFS into local cores

2009-09-23 Thread Jason Rutherglen (JIRA)
Deploy shards from HDFS into local cores


 Key: SOLR-1457
 URL: https://issues.apache.org/jira/browse/SOLR-1457
 Project: Solr
  Issue Type: Improvement
Affects Versions: 1.4
Reporter: Jason Rutherglen
Priority: Minor
 Fix For: 1.5


This will be an interim utility (until Katta integration
SOLR-1395 becomes more functional) that allows deployment of
multiple sharded indexes in HDFS onto a local Solr server. To
make it easy, I'd run it remotely via SSH so that one doesn't
have to manually execute it per machine.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (SOLR-1458) Java Replication error: NullPointerException SEVERE: SnapPull failed on 2009-09-22 nightly

2009-09-23 Thread Artem Russakovskii (JIRA)
Java Replication error: NullPointerException SEVERE: SnapPull failed on 
2009-09-22 nightly
--

 Key: SOLR-1458
 URL: https://issues.apache.org/jira/browse/SOLR-1458
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Affects Versions: 1.4
 Environment: CentOS x64
8GB RAM
Tomcat, running with 7G max memory; memory usage is 2GB, so it's not the 
problem


Host a: master
Host b: slave

Multiple single core Solr instances, using JNDI.

Java replication
Reporter: Artem Russakovskii


After finally figuring out the new Java based replication, we have started both 
the slave and the master and issued optimize to all master Solr instances. This 
triggered some replication to go through just fine, but it looks like some of 
it is failing.

Here's what I'm getting in the slave logs, repeatedly for each shard:

{code} 
SEVERE: SnapPull failed 
java.lang.NullPointerException
at 
org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:271)
at 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:258)
at org.apache.solr.handler.SnapPuller$1.run(SnapPuller.java:159)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at 
java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
{code} 

If I issue an optimize again on the master to one of the shards, it then 
triggers a replication and replicates OK. I have a feeling that these SnapPull 
failures appear later on but right now I don't have enough to form a pattern.

Here's replication.properties on one of the failed slave instances.
{code}
cat data/replication.properties 
#Replication details
#Wed Sep 23 19:35:30 PDT 2009
replicationFailedAtList=1253759730020,1253759700018,1253759670019,1253759640018,1253759610018,1253759580022,1253759550019,1253759520016,1253759490026,1253759460016
previousCycleTimeInSeconds=0
timesFailed=113
indexReplicatedAtList=1253759730020,1253759700018,1253759670019,1253759640018,1253759610018,1253759580022,1253759550019,1253759520016,1253759490026,1253759460016
indexReplicatedAt=1253759730020
replicationFailedAt=1253759730020
lastCycleBytesDownloaded=0
timesIndexReplicated=113
{code}

and another
{code}
cat data/replication.properties 
#Replication details
#Wed Sep 23 18:42:01 PDT 2009
replicationFailedAtList=1253756490034,1253756460169
previousCycleTimeInSeconds=1
timesFailed=2
indexReplicatedAtList=1253756521284,1253756490034,1253756460169
indexReplicatedAt=1253756521284
replicationFailedAt=1253756490034
lastCycleBytesDownloaded=22932293
timesIndexReplicated=3
{code}


Some relevant configs:
In solrconfig.xml:
{code}
<!-- For docs see http://wiki.apache.org/solr/SolrReplication -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="enable">${enable.master:false}</str>
    <str name="replicateAfter">optimize</str>
    <str name="backupAfter">optimize</str>
    <str name="commitReserveDuration">00:00:20</str>
  </lst>
  <lst name="slave">
    <str name="enable">${enable.slave:false}</str>

    <!-- url of master, from properties file -->
    <str name="masterUrl">${master.url}</str>

    <!-- how often to check master -->
    <str name="pollInterval">00:00:30</str>
  </lst>
</requestHandler>
{code}

The slave then has this in solrcore.properties:
{code}
enable.slave=true
master.url=URLOFMASTER/replication
{code}

and the master has
{code}
enable.master=true
{code}

I'd be glad to provide more details but I'm not sure what else I can do.  
SOLR-926 may be relevant.

Thanks.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-1335) load core properties from a properties file

2009-09-23 Thread Artem Russakovskii (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12758985#action_12758985
 ] 

Artem Russakovskii commented on SOLR-1335:
--

Paul, thank you. How funny - I was using the nightly build from 8/25/09, and it 
looks like the final commit was made on 8/26/09. Doh!

So I verified $solr_home/conf/solrcore.properties as working, and it's good 
enough for us, but it'd be ideal if this location could be specified in 
solrconfig.xml, for example set to '$solr_home/..'. However,

a) I'm not sure this is supported right now
b) I think we came to the conclusion in another ticket that there's no 
$solr_home value that one can refer to from within solrconfig.xml 
(http://www.nabble.com/Solr,-JNDI-config,-dataDir,-and-solr-home-problem-td25286277.html
 and SOLR-1414), although it looks like you may have fixed it - is it 
accessible by ${solr.core.instanceDir} now?
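
For reference, here's a minimal sketch of the setup I verified; the property 
name and path below are made up purely for illustration:

{code}
# $solr_home/conf/solrcore.properties (verified working)
data.dir=/var/data/solr/core0
{code}

and then solrconfig.xml can reference the property with the usual substitution 
syntax:

{code}
<!-- data.dir comes from solrcore.properties; the value above is made up -->
<dataDir>${data.dir}</dataDir>
{code}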

 load core properties from a properties file
 ---

 Key: SOLR-1335
 URL: https://issues.apache.org/jira/browse/SOLR-1335
 Project: Solr
  Issue Type: New Feature
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 1.4

 Attachments: SOLR-1335.patch, SOLR-1335.patch, SOLR-1335.patch, 
 SOLR-1335.patch


 There are a few ways of loading properties at runtime:
 # using a system property on the command line
 # if you use multicore, dropping it into solr.xml
 If not, the only way is to keep a separate solrconfig.xml for each instance.
 #1 is error prone if the user fails to start with the correct system property.
 In our case we have four different configurations for the same deployment, and 
 we have to disable replication of solrconfig.xml.
 It would be nice if I could distribute four properties files so that our ops 
 could drop in the right one and start Solr. It is also possible for operations 
 to edit a properties file, but it is risky to edit solrconfig.xml if they do 
 not understand Solr.
 I propose a properties file in the instanceDir named solrcore.properties. If 
 present, it would be loaded and its entries added as core-specific properties.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-1414) implicit core properties are not set for single core

2009-09-23 Thread Artem Russakovskii (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12758986#action_12758986
 ] 

Artem Russakovskii commented on SOLR-1414:
--

So can one access the solr.home property by using ${solr.core.instanceDir}? 
Can you clarify, please? Perhaps some wiki pages should be updated to reflect 
this as well?

 implicit core properties are not set for single core
 

 Key: SOLR-1414
 URL: https://issues.apache.org/jira/browse/SOLR-1414
 Project: Solr
  Issue Type: Bug
Affects Versions: 1.3
Reporter: Noble Paul
Assignee: Noble Paul
Priority: Minor
 Fix For: 1.4

 Attachments: SOLR-1414.patch


 implicit core properties such as solr.core.instanceDir, solr.core.configName 
 are not set for single core

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (SOLR-1301) Solr + Hadoop

2009-09-23 Thread Jason Rutherglen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Rutherglen updated SOLR-1301:
---

Attachment: commons-logging-api-1.0.4.jar
commons-logging-1.0.4.jar
SOLR-1301.patch

Here's a new patch, with log jar dependencies.  

The heartbeat is improved, and there's a queuing mechanism that can result in 
faster execution times.

 Solr + Hadoop
 -

 Key: SOLR-1301
 URL: https://issues.apache.org/jira/browse/SOLR-1301
 Project: Solr
  Issue Type: Improvement
Affects Versions: 1.4
Reporter: Andrzej Bialecki 
 Attachments: commons-logging-1.0.4.jar, 
 commons-logging-api-1.0.4.jar, hadoop-0.19.1-core.jar, hadoop.patch, 
 README.txt, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, 
 SOLR-1301.patch, SolrRecordWriter.java


 This patch contains  a contrib module that provides distributed indexing 
 (using Hadoop) to Solr EmbeddedSolrServer. The idea behind this module is 
 twofold:
 * provide an API that is familiar to Hadoop developers, i.e. that of 
 OutputFormat
 * avoid unnecessary export and (de)serialization of data maintained on HDFS. 
 SolrOutputFormat consumes data produced by reduce tasks directly, without 
 storing it in intermediate files. Furthermore, by using an 
 EmbeddedSolrServer, the indexing task is split into as many parts as there 
 are reducers, and the data to be indexed is not sent over the network.
 Design
 --
 Key/value pairs produced by reduce tasks are passed to SolrOutputFormat, 
 which in turn uses SolrRecordWriter to write this data. SolrRecordWriter 
 instantiates an EmbeddedSolrServer, and it also instantiates an 
 implementation of SolrDocumentConverter, which is responsible for turning 
 Hadoop (key, value) into a SolrInputDocument. This data is then added to a 
 batch, which is periodically submitted to the EmbeddedSolrServer. When the reduce 
 task completes and the OutputFormat is closed, SolrRecordWriter calls 
 commit() and optimize() on the EmbeddedSolrServer.
 The API provides facilities to specify an arbitrary existing solr.home 
 directory, from which the conf/ and lib/ files will be taken.
 This process results in the creation of as many partial Solr home directories 
 as there were reduce tasks. The output shards are placed in the output 
 directory on the default filesystem (e.g. HDFS). Such part-N directories 
 can be used to run N shard servers. Additionally, users can specify the 
 number of reduce tasks, in particular 1 reduce task, in which case the output 
 will consist of a single shard.
 An example application is provided that processes large CSV files and uses 
 this API. It uses custom CSV processing to avoid (de)serialization overhead.
 This patch relies on hadoop-core-0.19.1.jar - I attached the jar to this 
 issue, you should put it in contrib/hadoop/lib.
 Note: the development of this patch was sponsored by an anonymous contributor 
 and approved for release under Apache License.
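
To make the converter role concrete, here is a sketch of what an implementation 
might look like. The exact SolrDocumentConverter contract is defined in the 
patch; the generic signature and convert() method below are assumptions for 
illustration only:

{code}
import java.util.Collection;
import java.util.Collections;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.solr.common.SolrInputDocument;

// Turns one Hadoop (key, value) record into the Solr document(s) to index.
public class CsvDocumentConverter extends SolrDocumentConverter<LongWritable, Text> {
  @Override
  public Collection<SolrInputDocument> convert(LongWritable key, Text value) {
    String[] cols = value.toString().split(",");  // naive CSV split, illustration only
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", cols[0]);
    doc.addField("text", cols[1]);
    return Collections.singletonList(doc);
  }
}
{code}

SolrRecordWriter would batch the returned documents and feed them to its 
EmbeddedSolrServer.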

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-1335) load core properties from a properties file

2009-09-23 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12758993#action_12758993
 ] 

Noble Paul commented on SOLR-1335:
--

solr.home is not technically the same as solr.core.instanceDir; it can be 
different in a multicore setup.
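
For example (an illustrative multicore layout; the paths are made up):

{code}
solr.home             = /opt/solr/           <- contains solr.xml
solr.core.instanceDir = /opt/solr/core0/     <- per-core conf/ and data/
                        /opt/solr/core1/
{code}

In a single-core setup the two typically point at the same directory.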

 load core properties from a properties file
 ---

 Key: SOLR-1335
 URL: https://issues.apache.org/jira/browse/SOLR-1335
 Project: Solr
  Issue Type: New Feature
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 1.4

 Attachments: SOLR-1335.patch, SOLR-1335.patch, SOLR-1335.patch, 
 SOLR-1335.patch


 There are a few ways of loading properties at runtime:
 # using a system property on the command line
 # if you use multicore, dropping it into solr.xml
 If not, the only way is to keep a separate solrconfig.xml for each instance.
 #1 is error prone if the user fails to start with the correct system property.
 In our case we have four different configurations for the same deployment, and 
 we have to disable replication of solrconfig.xml.
 It would be nice if I could distribute four properties files so that our ops 
 could drop in the right one and start Solr. It is also possible for operations 
 to edit a properties file, but it is risky to edit solrconfig.xml if they do 
 not understand Solr.
 I propose a properties file in the instanceDir named solrcore.properties. If 
 present, it would be loaded and its entries added as core-specific properties.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (SOLR-1456) DIH - Provide periodic autoupdate capability. Allow ability to specify an interval for autoupdate in DIH config.

2009-09-23 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-1456:
-

Fix Version/s: (was: 1.4)
   1.5

No new feature requests for Solr 1.4; we are close to a release.

 DIH - Provide periodic autoupdate capability. Allow ability to specify an 
 interval for autoupdate in DIH config.
 

 Key: SOLR-1456
 URL: https://issues.apache.org/jira/browse/SOLR-1456
 Project: Solr
  Issue Type: Improvement
  Components: contrib - DataImportHandler
Reporter: Manish Kalbande
 Fix For: 1.5


 Currently DIH supports full import and delta import.
 If I want to create an index from data in a DB and keep that index up to date, 
 DIH does not provide any ability to periodically update the index by running 
 the delta query automatically.
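
 (The usual workaround today is to schedule the delta import externally, e.g. 
 a cron job that periodically requests

 {code}
 http://host:port/solr/dataimport?command=delta-import
 {code}

 so the request here is essentially to move that scheduling into the DIH config 
 itself.)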

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Assigned: (SOLR-1458) Java Replication error: NullPointerException SEVERE: SnapPull failed on 2009-09-22 nightly

2009-09-23 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul reassigned SOLR-1458:


Assignee: Noble Paul

 Java Replication error: NullPointerException SEVERE: SnapPull failed on 
 2009-09-22 nightly
 --

 Key: SOLR-1458
 URL: https://issues.apache.org/jira/browse/SOLR-1458
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Affects Versions: 1.4
 Environment: CentOS x64
 8GB RAM
 Tomcat, running with 7G max memory; memory usage is 2GB, so it's not the 
 problem
 Host a: master
 Host b: slave
 Multiple single core Solr instances, using JNDI.
 Java replication
Reporter: Artem Russakovskii
Assignee: Noble Paul

 After finally figuring out the new Java based replication, we have started 
 both the slave and the master and issued optimize to all master Solr 
 instances. This triggered some replication to go through just fine, but it 
 looks like some of it is failing.
 Here's what I'm getting in the slave logs, repeatedly for each shard:
 {code} 
 SEVERE: SnapPull failed 
 java.lang.NullPointerException
 at org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:271)
 at org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:258)
 at org.apache.solr.handler.SnapPuller$1.run(SnapPuller.java:159)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
 at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
 at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
 at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:619)
 {code} 
 If I issue an optimize again on the master to one of the shards, it then 
 triggers a replication and replicates OK. I have a feeling that these 
 SnapPull failures appear later on but right now I don't have enough to form a 
 pattern.
 Here's replication.properties on one of the failed slave instances.
 {code}
 cat data/replication.properties 
 #Replication details
 #Wed Sep 23 19:35:30 PDT 2009
 replicationFailedAtList=1253759730020,1253759700018,1253759670019,1253759640018,1253759610018,1253759580022,1253759550019,1253759520016,1253759490026,1253759460016
 previousCycleTimeInSeconds=0
 timesFailed=113
 indexReplicatedAtList=1253759730020,1253759700018,1253759670019,1253759640018,1253759610018,1253759580022,1253759550019,1253759520016,1253759490026,1253759460016
 indexReplicatedAt=1253759730020
 replicationFailedAt=1253759730020
 lastCycleBytesDownloaded=0
 timesIndexReplicated=113
 {code}
 and another
 {code}
 cat data/replication.properties 
 #Replication details
 #Wed Sep 23 18:42:01 PDT 2009
 replicationFailedAtList=1253756490034,1253756460169
 previousCycleTimeInSeconds=1
 timesFailed=2
 indexReplicatedAtList=1253756521284,1253756490034,1253756460169
 indexReplicatedAt=1253756521284
 replicationFailedAt=1253756490034
 lastCycleBytesDownloaded=22932293
 timesIndexReplicated=3
 {code}
 Some relevant configs:
 In solrconfig.xml:
 {code}
 <!-- For docs see http://wiki.apache.org/solr/SolrReplication -->
 <requestHandler name="/replication" class="solr.ReplicationHandler">
   <lst name="master">
     <str name="enable">${enable.master:false}</str>
     <str name="replicateAfter">optimize</str>
     <str name="backupAfter">optimize</str>
     <str name="commitReserveDuration">00:00:20</str>
   </lst>
   <lst name="slave">
     <str name="enable">${enable.slave:false}</str>
     <!-- url of master, from properties file -->
     <str name="masterUrl">${master.url}</str>
     <!-- how often to check master -->
     <str name="pollInterval">00:00:30</str>
   </lst>
 </requestHandler>
 {code}
 The slave then has this in solrcore.properties:
 {code}
 enable.slave=true
 master.url=URLOFMASTER/replication
 {code}
 and the master has
 {code}
 enable.master=true
 {code}
 I'd be glad to provide more details but I'm not sure what else I can do.  
 SOLR-926 may be relevant.
 Thanks.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-1458) Java Replication error: NullPointerException SEVERE: SnapPull failed on 2009-09-22 nightly

2009-09-23 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12758998#action_12758998
 ] 

Noble Paul commented on SOLR-1458:
--

Can you hit the master with the filelist command and check the output?

http://wiki.apache.org/solr/SolrReplication#line-155
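
That is, first fetch the current version, then list its files (host and port 
are placeholders; indexversion must be the value returned by the first command):

{code}
http://master_host:port/solr/replication?command=indexversion
http://master_host:port/solr/replication?command=filelist&indexversion=<version-from-above>
{code}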

 Java Replication error: NullPointerException SEVERE: SnapPull failed on 
 2009-09-22 nightly
 --

 Key: SOLR-1458
 URL: https://issues.apache.org/jira/browse/SOLR-1458
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Affects Versions: 1.4
 Environment: CentOS x64
 8GB RAM
 Tomcat, running with 7G max memory; memory usage is 2GB, so it's not the 
 problem
 Host a: master
 Host b: slave
 Multiple single core Solr instances, using JNDI.
 Java replication
Reporter: Artem Russakovskii
Assignee: Noble Paul

 After finally figuring out the new Java based replication, we have started 
 both the slave and the master and issued optimize to all master Solr 
 instances. This triggered some replication to go through just fine, but it 
 looks like some of it is failing.
 Here's what I'm getting in the slave logs, repeatedly for each shard:
 {code} 
 SEVERE: SnapPull failed 
 java.lang.NullPointerException
 at org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:271)
 at org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:258)
 at org.apache.solr.handler.SnapPuller$1.run(SnapPuller.java:159)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
 at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
 at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
 at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:619)
 {code} 
 If I issue an optimize again on the master to one of the shards, it then 
 triggers a replication and replicates OK. I have a feeling that these 
 SnapPull failures appear later on but right now I don't have enough to form a 
 pattern.
 Here's replication.properties on one of the failed slave instances.
 {code}
 cat data/replication.properties 
 #Replication details
 #Wed Sep 23 19:35:30 PDT 2009
 replicationFailedAtList=1253759730020,1253759700018,1253759670019,1253759640018,1253759610018,1253759580022,1253759550019,1253759520016,1253759490026,1253759460016
 previousCycleTimeInSeconds=0
 timesFailed=113
 indexReplicatedAtList=1253759730020,1253759700018,1253759670019,1253759640018,1253759610018,1253759580022,1253759550019,1253759520016,1253759490026,1253759460016
 indexReplicatedAt=1253759730020
 replicationFailedAt=1253759730020
 lastCycleBytesDownloaded=0
 timesIndexReplicated=113
 {code}
 and another
 {code}
 cat data/replication.properties 
 #Replication details
 #Wed Sep 23 18:42:01 PDT 2009
 replicationFailedAtList=1253756490034,1253756460169
 previousCycleTimeInSeconds=1
 timesFailed=2
 indexReplicatedAtList=1253756521284,1253756490034,1253756460169
 indexReplicatedAt=1253756521284
 replicationFailedAt=1253756490034
 lastCycleBytesDownloaded=22932293
 timesIndexReplicated=3
 {code}
 Some relevant configs:
 In solrconfig.xml:
 {code}
 <!-- For docs see http://wiki.apache.org/solr/SolrReplication -->
 <requestHandler name="/replication" class="solr.ReplicationHandler">
   <lst name="master">
     <str name="enable">${enable.master:false}</str>
     <str name="replicateAfter">optimize</str>
     <str name="backupAfter">optimize</str>
     <str name="commitReserveDuration">00:00:20</str>
   </lst>
   <lst name="slave">
     <str name="enable">${enable.slave:false}</str>
     <!-- url of master, from properties file -->
     <str name="masterUrl">${master.url}</str>
     <!-- how often to check master -->
     <str name="pollInterval">00:00:30</str>
   </lst>
 </requestHandler>
 {code}
 The slave then has this in solrcore.properties:
 {code}
 enable.slave=true
 master.url=URLOFMASTER/replication
 {code}
 and the master has
 {code}
 enable.master=true
 {code}
 I'd be glad to provide more details but I'm not sure what else I can do.  
 SOLR-926 may be relevant.
 Thanks.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

[jira] Updated: (SOLR-1458) Java Replication error: NullPointerException SEVERE: SnapPull failed on 2009-09-22 nightly

2009-09-23 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-1458:
-

Attachment: SOLR-1458.patch

Log the error if no file list is available.

 Java Replication error: NullPointerException SEVERE: SnapPull failed on 
 2009-09-22 nightly
 --

 Key: SOLR-1458
 URL: https://issues.apache.org/jira/browse/SOLR-1458
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Affects Versions: 1.4
 Environment: CentOS x64
 8GB RAM
 Tomcat, running with 7G max memory; memory usage is 2GB, so it's not the 
 problem
 Host a: master
 Host b: slave
 Multiple single core Solr instances, using JNDI.
 Java replication
Reporter: Artem Russakovskii
Assignee: Noble Paul
 Attachments: SOLR-1458.patch


 After finally figuring out the new Java based replication, we have started 
 both the slave and the master and issued optimize to all master Solr 
 instances. This triggered some replication to go through just fine, but it 
 looks like some of it is failing.
 Here's what I'm getting in the slave logs, repeatedly for each shard:
 {code} 
 SEVERE: SnapPull failed 
 java.lang.NullPointerException
 at org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:271)
 at org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:258)
 at org.apache.solr.handler.SnapPuller$1.run(SnapPuller.java:159)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
 at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
 at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
 at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:619)
 {code} 
 If I issue an optimize again on the master to one of the shards, it then 
 triggers a replication and replicates OK. I have a feeling that these 
 SnapPull failures appear later on but right now I don't have enough to form a 
 pattern.
 Here's replication.properties on one of the failed slave instances.
 {code}
 cat data/replication.properties 
 #Replication details
 #Wed Sep 23 19:35:30 PDT 2009
 replicationFailedAtList=1253759730020,1253759700018,1253759670019,1253759640018,1253759610018,1253759580022,1253759550019,1253759520016,1253759490026,1253759460016
 previousCycleTimeInSeconds=0
 timesFailed=113
 indexReplicatedAtList=1253759730020,1253759700018,1253759670019,1253759640018,1253759610018,1253759580022,1253759550019,1253759520016,1253759490026,1253759460016
 indexReplicatedAt=1253759730020
 replicationFailedAt=1253759730020
 lastCycleBytesDownloaded=0
 timesIndexReplicated=113
 {code}
 and another
 {code}
 cat data/replication.properties 
 #Replication details
 #Wed Sep 23 18:42:01 PDT 2009
 replicationFailedAtList=1253756490034,1253756460169
 previousCycleTimeInSeconds=1
 timesFailed=2
 indexReplicatedAtList=1253756521284,1253756490034,1253756460169
 indexReplicatedAt=1253756521284
 replicationFailedAt=1253756490034
 lastCycleBytesDownloaded=22932293
 timesIndexReplicated=3
 {code}
 Some relevant configs:
 In solrconfig.xml:
 {code}
 <!-- For docs see http://wiki.apache.org/solr/SolrReplication -->
 <requestHandler name="/replication" class="solr.ReplicationHandler">
   <lst name="master">
     <str name="enable">${enable.master:false}</str>
     <str name="replicateAfter">optimize</str>
     <str name="backupAfter">optimize</str>
     <str name="commitReserveDuration">00:00:20</str>
   </lst>
   <lst name="slave">
     <str name="enable">${enable.slave:false}</str>
     <!-- url of master, from properties file -->
     <str name="masterUrl">${master.url}</str>
     <!-- how often to check master -->
     <str name="pollInterval">00:00:30</str>
   </lst>
 </requestHandler>
 {code}
 The slave then has this in solrcore.properties:
 {code}
 enable.slave=true
 master.url=URLOFMASTER/replication
 {code}
 and the master has
 {code}
 enable.master=true
 {code}
 I'd be glad to provide more details but I'm not sure what else I can do.  
 SOLR-926 may be relevant.
 Thanks.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-1458) Java Replication error: NullPointerException SEVERE: SnapPull failed on 2009-09-22 nightly

2009-09-23 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12759001#action_12759001
 ] 

Yonik Seeley commented on SOLR-1458:


Why would there not be a filelist? Any idea what the underlying error is?

 Java Replication error: NullPointerException SEVERE: SnapPull failed on 
 2009-09-22 nightly
 --

 Key: SOLR-1458
 URL: https://issues.apache.org/jira/browse/SOLR-1458
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Affects Versions: 1.4
 Environment: CentOS x64
 8GB RAM
 Tomcat, running with 7G max memory; memory usage is 2GB, so it's not the 
 problem
 Host a: master
 Host b: slave
 Multiple single core Solr instances, using JNDI.
 Java replication
Reporter: Artem Russakovskii
Assignee: Noble Paul
 Attachments: SOLR-1458.patch


 After finally figuring out the new Java based replication, we have started 
 both the slave and the master and issued optimize to all master Solr 
 instances. This triggered some replication to go through just fine, but it 
 looks like some of it is failing.
 Here's what I'm getting in the slave logs, repeatedly for each shard:
 {code} 
 SEVERE: SnapPull failed 
 java.lang.NullPointerException
 at org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:271)
 at org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:258)
 at org.apache.solr.handler.SnapPuller$1.run(SnapPuller.java:159)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
 at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
 at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
 at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:619)
 {code} 
 If I issue an optimize again on the master to one of the shards, it then 
 triggers a replication and replicates OK. I have a feeling that these 
 SnapPull failures appear later on but right now I don't have enough to form a 
 pattern.
 Here's replication.properties on one of the failed slave instances.
 {code}
 cat data/replication.properties 
 #Replication details
 #Wed Sep 23 19:35:30 PDT 2009
 replicationFailedAtList=1253759730020,1253759700018,1253759670019,1253759640018,1253759610018,1253759580022,1253759550019,1253759520016,1253759490026,1253759460016
 previousCycleTimeInSeconds=0
 timesFailed=113
 indexReplicatedAtList=1253759730020,1253759700018,1253759670019,1253759640018,1253759610018,1253759580022,1253759550019,1253759520016,1253759490026,1253759460016
 indexReplicatedAt=1253759730020
 replicationFailedAt=1253759730020
 lastCycleBytesDownloaded=0
 timesIndexReplicated=113
 {code}
 and another
 {code}
 cat data/replication.properties 
 #Replication details
 #Wed Sep 23 18:42:01 PDT 2009
 replicationFailedAtList=1253756490034,1253756460169
 previousCycleTimeInSeconds=1
 timesFailed=2
 indexReplicatedAtList=1253756521284,1253756490034,1253756460169
 indexReplicatedAt=1253756521284
 replicationFailedAt=1253756490034
 lastCycleBytesDownloaded=22932293
 timesIndexReplicated=3
 {code}
 Some relevant configs:
 In solrconfig.xml:
 {code}
 <!-- For docs see http://wiki.apache.org/solr/SolrReplication -->
 <requestHandler name="/replication" class="solr.ReplicationHandler">
   <lst name="master">
     <str name="enable">${enable.master:false}</str>
     <str name="replicateAfter">optimize</str>
     <str name="backupAfter">optimize</str>
     <str name="commitReserveDuration">00:00:20</str>
   </lst>
   <lst name="slave">
     <str name="enable">${enable.slave:false}</str>
     <!-- url of master, from properties file -->
     <str name="masterUrl">${master.url}</str>
     <!-- how often to check master -->
     <str name="pollInterval">00:00:30</str>
   </lst>
 </requestHandler>
 {code}
 The slave then has this in solrcore.properties:
 {code}
 enable.slave=true
 master.url=URLOFMASTER/replication
 {code}
 and the master has
 {code}
 enable.master=true
 {code}
 I'd be glad to provide more details but I'm not sure what else I can do.  
 SOLR-926 may be relevant.
 Thanks.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

[jira] Commented: (SOLR-1458) Java Replication error: NullPointerException SEVERE: SnapPull failed on 2009-09-22 nightly

2009-09-23 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12759003#action_12759003
 ] 

Noble Paul commented on SOLR-1458:
--

Isn't it possible that, by the time filelist is invoked, the index commit for 
that version is already gone? In that case no files would be available.

 Java Replication error: NullPointerException SEVERE: SnapPull failed on 
 2009-09-22 nightly
 --

 Key: SOLR-1458
 URL: https://issues.apache.org/jira/browse/SOLR-1458
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Affects Versions: 1.4
 Environment: CentOS x64
 8GB RAM
 Tomcat, running with 7G max memory; memory usage is 2GB, so it's not the 
 problem
 Host a: master
 Host b: slave
 Multiple single core Solr instances, using JNDI.
 Java replication
Reporter: Artem Russakovskii
Assignee: Noble Paul
 Attachments: SOLR-1458.patch


 After finally figuring out the new Java based replication, we have started 
 both the slave and the master and issued optimize to all master Solr 
 instances. This triggered some replication to go through just fine, but it 
 looks like some of it is failing.
 Here's what I'm getting in the slave logs, repeatedly for each shard:
 {code} 
 SEVERE: SnapPull failed 
 java.lang.NullPointerException
 at org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:271)
 at org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:258)
 at org.apache.solr.handler.SnapPuller$1.run(SnapPuller.java:159)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
 at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
 at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
 at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:619)
 {code} 
 If I issue an optimize again on the master to one of the shards, it then 
 triggers a replication and replicates OK. I have a feeling that these 
 SnapPull failures appear later on but right now I don't have enough to form a 
 pattern.
 Here's replication.properties on one of the failed slave instances.
 {code}
 cat data/replication.properties 
 #Replication details
 #Wed Sep 23 19:35:30 PDT 2009
 replicationFailedAtList=1253759730020,1253759700018,1253759670019,1253759640018,1253759610018,1253759580022,1253759550019,1253759520016,1253759490026,1253759460016
 previousCycleTimeInSeconds=0
 timesFailed=113
 indexReplicatedAtList=1253759730020,1253759700018,1253759670019,1253759640018,1253759610018,1253759580022,1253759550019,1253759520016,1253759490026,1253759460016
 indexReplicatedAt=1253759730020
 replicationFailedAt=1253759730020
 lastCycleBytesDownloaded=0
 timesIndexReplicated=113
 {code}
 and another
 {code}
 cat data/replication.properties 
 #Replication details
 #Wed Sep 23 18:42:01 PDT 2009
 replicationFailedAtList=1253756490034,1253756460169
 previousCycleTimeInSeconds=1
 timesFailed=2
 indexReplicatedAtList=1253756521284,1253756490034,1253756460169
 indexReplicatedAt=1253756521284
 replicationFailedAt=1253756490034
 lastCycleBytesDownloaded=22932293
 timesIndexReplicated=3
 {code}
 Some relevant configs:
 In solrconfig.xml:
 {code}
 <!-- For docs see http://wiki.apache.org/solr/SolrReplication -->
 <requestHandler name="/replication" class="solr.ReplicationHandler">
   <lst name="master">
     <str name="enable">${enable.master:false}</str>
     <str name="replicateAfter">optimize</str>
     <str name="backupAfter">optimize</str>
     <str name="commitReserveDuration">00:00:20</str>
   </lst>
   <lst name="slave">
     <str name="enable">${enable.slave:false}</str>
     <!-- url of master, from properties file -->
     <str name="masterUrl">${master.url}</str>
     <!-- how often to check master -->
     <str name="pollInterval">00:00:30</str>
   </lst>
 </requestHandler>
 {code}
 The slave then has this in solrcore.properties:
 {code}
 enable.slave=true
 master.url=URLOFMASTER/replication
 {code}
 and the master has
 {code}
 enable.master=true
 {code}
 I'd be glad to provide more details but I'm not sure what else I can do.  
 SOLR-926 may be relevant.
 Thanks.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

[jira] Commented: (SOLR-1458) Java Replication error: NullPointerException SEVERE: SnapPull failed on 2009-09-22 nightly

2009-09-23 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12759004#action_12759004
 ] 

Noble Paul commented on SOLR-1458:
--

We don't reserve the commit after an indexversion command. Should we reserve 
the commit point after an indexversion command?
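
Roughly, in ReplicationHandler's handling of command=indexversion, something 
like this (a sketch only, assuming the existing IndexDeletionPolicyWrapper API):

{code}
// Sketch: when serving command=indexversion, briefly reserve the returned
// commit point so that a following filelist call still finds its files.
IndexCommit commitPoint = core.getDeletionPolicy().getLatestCommit();
if (commitPoint != null) {
  // reserveCommitDuration is the handler's configured commitReserveDuration
  core.getDeletionPolicy().setReserveDuration(commitPoint.getVersion(),
                                              reserveCommitDuration);
}
{code}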

 Java Replication error: NullPointerException SEVERE: SnapPull failed on 
 2009-09-22 nightly
 --

 Key: SOLR-1458
 URL: https://issues.apache.org/jira/browse/SOLR-1458
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Affects Versions: 1.4
 Environment: CentOS x64
 8GB RAM
 Tomcat, running with 7G max memory; memory usage is 2GB, so it's not the 
 problem
 Host a: master
 Host b: slave
 Multiple single core Solr instances, using JNDI.
 Java replication
Reporter: Artem Russakovskii
Assignee: Noble Paul
 Attachments: SOLR-1458.patch


 After finally figuring out the new Java based replication, we have started 
 both the slave and the master and issued optimize to all master Solr 
 instances. This triggered some replication to go through just fine, but it 
 looks like some of it is failing.
 Here's what I'm getting in the slave logs, repeatedly for each shard:
 {code} 
 SEVERE: SnapPull failed 
 java.lang.NullPointerException
 at org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:271)
 at org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:258)
 at org.apache.solr.handler.SnapPuller$1.run(SnapPuller.java:159)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
 at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
 at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
 at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:619)
 {code} 
 If I issue an optimize again on the master to one of the shards, it then 
 triggers a replication and replicates OK. I have a feeling that these 
 SnapPull failures appear later on but right now I don't have enough to form a 
 pattern.
 Here's replication.properties on one of the failed slave instances.
 {code}
 cat data/replication.properties 
 #Replication details
 #Wed Sep 23 19:35:30 PDT 2009
 replicationFailedAtList=1253759730020,1253759700018,1253759670019,1253759640018,1253759610018,1253759580022,1253759550019,1253759520016,1253759490026,1253759460016
 previousCycleTimeInSeconds=0
 timesFailed=113
 indexReplicatedAtList=1253759730020,1253759700018,1253759670019,1253759640018,1253759610018,1253759580022,1253759550019,1253759520016,1253759490026,1253759460016
 indexReplicatedAt=1253759730020
 replicationFailedAt=1253759730020
 lastCycleBytesDownloaded=0
 timesIndexReplicated=113
 {code}
 and another
 {code}
 cat data/replication.properties 
 #Replication details
 #Wed Sep 23 18:42:01 PDT 2009
 replicationFailedAtList=1253756490034,1253756460169
 previousCycleTimeInSeconds=1
 timesFailed=2
 indexReplicatedAtList=1253756521284,1253756490034,1253756460169
 indexReplicatedAt=1253756521284
 replicationFailedAt=1253756490034
 lastCycleBytesDownloaded=22932293
 timesIndexReplicated=3
 {code}
 Some relevant configs:
 In solrconfig.xml:
 {code}
 <!-- For docs see http://wiki.apache.org/solr/SolrReplication -->
 <requestHandler name="/replication" class="solr.ReplicationHandler">
   <lst name="master">
     <str name="enable">${enable.master:false}</str>
     <str name="replicateAfter">optimize</str>
     <str name="backupAfter">optimize</str>
     <str name="commitReserveDuration">00:00:20</str>
   </lst>
   <lst name="slave">
     <str name="enable">${enable.slave:false}</str>
     <!-- url of master, from properties file -->
     <str name="masterUrl">${master.url}</str>
     <!-- how often to check master -->
     <str name="pollInterval">00:00:30</str>
   </lst>
 </requestHandler>
 {code}
 The slave then has this in solrcore.properties:
 {code}
 enable.slave=true
 master.url=URLOFMASTER/replication
 {code}
 and the master has
 {code}
 enable.master=true
 {code}
 I'd be glad to provide more details but I'm not sure what else I can do.  
 SOLR-926 may be relevant.
 Thanks.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

[jira] Updated: (SOLR-1458) Java Replication error: NullPointerException SEVERE: SnapPull failed on 2009-09-22 nightly

2009-09-23 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-1458:
-

Attachment: SOLR-1458.patch

Reserve commits in the indexversion command.

 Java Replication error: NullPointerException SEVERE: SnapPull failed on 
 2009-09-22 nightly
 --

 Key: SOLR-1458
 URL: https://issues.apache.org/jira/browse/SOLR-1458
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Affects Versions: 1.4
 Environment: CentOS x64
 8GB RAM
 Tomcat, running with 7G max memory; memory usage is 2GB, so it's not the 
 problem
 Host a: master
 Host b: slave
 Multiple single core Solr instances, using JNDI.
 Java replication
Reporter: Artem Russakovskii
Assignee: Noble Paul
 Attachments: SOLR-1458.patch, SOLR-1458.patch


 After finally figuring out the new Java based replication, we have started 
 both the slave and the master and issued optimize to all master Solr 
 instances. This triggered some replication to go through just fine, but it 
 looks like some of it is failing.
 Here's what I'm getting in the slave logs, repeatedly for each shard:
 {code} 
 SEVERE: SnapPull failed 
 java.lang.NullPointerException
 at org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:271)
 at org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:258)
 at org.apache.solr.handler.SnapPuller$1.run(SnapPuller.java:159)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
 at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
 at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
 at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:619)
 {code} 
 If I issue an optimize again on the master to one of the shards, it then 
 triggers a replication and replicates OK. I have a feeling that these 
 SnapPull failures appear later on but right now I don't have enough to form a 
 pattern.
 Here's replication.properties on one of the failed slave instances.
 {code}
 cat data/replication.properties 
 #Replication details
 #Wed Sep 23 19:35:30 PDT 2009
 replicationFailedAtList=1253759730020,1253759700018,1253759670019,1253759640018,1253759610018,1253759580022,1253759550019,1253759520016,1253759490026,1253759460016
 previousCycleTimeInSeconds=0
 timesFailed=113
 indexReplicatedAtList=1253759730020,1253759700018,1253759670019,1253759640018,1253759610018,1253759580022,1253759550019,1253759520016,1253759490026,1253759460016
 indexReplicatedAt=1253759730020
 replicationFailedAt=1253759730020
 lastCycleBytesDownloaded=0
 timesIndexReplicated=113
 {code}
 and another
 {code}
 cat data/replication.properties 
 #Replication details
 #Wed Sep 23 18:42:01 PDT 2009
 replicationFailedAtList=1253756490034,1253756460169
 previousCycleTimeInSeconds=1
 timesFailed=2
 indexReplicatedAtList=1253756521284,1253756490034,1253756460169
 indexReplicatedAt=1253756521284
 replicationFailedAt=1253756490034
 lastCycleBytesDownloaded=22932293
 timesIndexReplicated=3
 {code}
 Some relevant configs:
 In solrconfig.xml:
 {code}
 <!-- For docs see http://wiki.apache.org/solr/SolrReplication -->
 <requestHandler name="/replication" class="solr.ReplicationHandler">
   <lst name="master">
     <str name="enable">${enable.master:false}</str>
     <str name="replicateAfter">optimize</str>
     <str name="backupAfter">optimize</str>
     <str name="commitReserveDuration">00:00:20</str>
   </lst>
   <lst name="slave">
     <str name="enable">${enable.slave:false}</str>
     <!-- url of master, from properties file -->
     <str name="masterUrl">${master.url}</str>
     <!-- how often to check master -->
     <str name="pollInterval">00:00:30</str>
   </lst>
 </requestHandler>
 {code}
 The slave then has this in solrcore.properties:
 {code}
 enable.slave=true
 master.url=URLOFMASTER/replication
 {code}
 and the master has
 {code}
 enable.master=true
 {code}
 I'd be glad to provide more details but I'm not sure what else I can do.  
 SOLR-926 may be relevant.
 Thanks.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.