Hudson build is back to normal : Lucene-Solr-tests-only-trunk #180
See https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/180/ - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Continuous Integration builds for branches
I bring this to the list from a quick discussion on IRC with Uwe. Since we have these very valuable tests for Solr/Lucene trunk, I would like to propose the same thing for the big feature branches like realtime and docvalues. We usually move features to trunk to let them bake in a bit and let the random tests run to catch problems early. I think it would make a lot of sense to use free Hudson slots to run random tests for those features too. According to Uwe, this should only require a different SVN URL in Hudson and a copy of the job. I am not sure whether we should run the Solr tests too, since they are frequently failing, but the Lucene ones would make a lot of sense to me. simon
Re: [jira] Commented: (SOLR-1311) pseudo-field-collapsing
On Sat, Oct 16, 2010 at 11:31 PM, Lance Norskog goks...@gmail.com wrote: The Field Collapsing patch is dead. Search Grouping is a different suite of techniques that the committers are willing to commit. Note that the Field Collapsing issue has been open for 3+ years and nothing was ever committed: the Solr committers who care all hate it. Lance, what you are saying might be true or it might not, but in either case it's no way to judge a design and/or work done over 3+ years. Folks have made their proposals with certain requirements in mind and in good faith. If the issue has major problems or downsides compared to the other approach, point them out and give folks an idea of the differences instead of judging work people have done without giving any good reasons. I am sure that you didn't mean to insult anybody, but IMO your phrasing was very unfortunate in this case. Let's keep things constructive here! simon 8G is not a big index. 450G is a big index. 1.5 billion docs is a big index. The greybeards won't touch a structural change that doesn't work for the wide range of use cases. The Field Collapsing patches never scaled. On Fri, Oct 15, 2010 at 5:42 AM, Marc Sturlese (JIRA) j...@apache.org wrote: [ https://issues.apache.org/jira/browse/SOLR-1311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12921328#action_12921328 ] Marc Sturlese commented on SOLR-1311: - Well, I said it cannot be integrated as a plugin because it hacks DocListAndSetNC and DocListNC. These two methods can only be altered by modifying the SolrIndexSearcher.java class. The pseudo-field-collapse sort is not included in the current field collapsing, but current field collapsing seems to perform much better than it used to (I don't think as well as this patch, but the current feature is much more complete than my patch). I suppose I can close it.
pseudo-field-collapsing --- Key: SOLR-1311 URL: https://issues.apache.org/jira/browse/SOLR-1311 Project: Solr Issue Type: New Feature Components: search Affects Versions: 1.4 Reporter: Marc Sturlese Fix For: Next Attachments: SOLR-1311-pseudo-field-collapsing.patch I am trying to develop a new way of doing field collapsing based on the adjacent field collapsing algorithm. I started developing it because I was experiencing performance problems with the field collapsing patch on a big index (8G). The algorithm does adjacent pseudo-field collapsing: it collapses only the first X documents. Instead of making the collapsed docs disappear, the algorithm sends them to a given position of the relevance results list. The reason I only collapse the first X documents is that if I have, for example, 60 results and I am showing 10 results per page, I really don't need to do collapsing on page 3, let alone page 3000. Doing this I am noticing dramatically better performance. The problem is that I couldn't find a way to plug the algorithm in as a component and keep good performance; I had to hack a few classes in SolrIndexSearcher.java. This patch is just experimental and for testing purposes. In case someone finds it interesting, it would be good to find a way to integrate it better than it is at the moment. Advice is more than welcome. Functionality: in solrconfig.xml we specify the pseudo-collapsing parameters: <str name="plus.considerMoreDocs">true</str> <str name="plus.considerHowMany">3000</str> <str name="plus.considerField">name</str> (at the moment there is no threshold or other parameters that exist in the current collapse-field patch). plus.considerMoreDocs enables pseudo-collapsing, plus.considerHowMany sets the number of result documents to which we want to apply the algorithm, and plus.considerField is the field to pseudo-collapse on. If the number of results is lower than plus.considerHowMany, the algorithm will be applied to all the results.
Let's say there is a query with 60 results and we've set considerHowMany to 3000 (and we already have the docs sorted by relevance). What adjacent pseudo-collapse does is: if the 2nd doc has to be collapsed, it is sent to position 2999 of the relevance results array. If the 3rd has to be collapsed too, it goes to position 2998, and so on. The algorithm is not applied when a sort spec is set or when plus.considerMoreDocs is set to false. Neither is it applied when using MoreLikeThisRequestHandler. Example with a query of 9 results, sorted by relevance without the pseudo-collapse algorithm:
doc1 - collapse_field_value 3
doc2 - collapse_field_value 3
doc3 - collapse_field_value 4
doc4 - collapse_field_value 7
doc5 - collapse_field_value 6
doc6 - collapse_field_value 6
doc7 - collapse_field_value 5
doc8 -
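The demotion step described above can be sketched in isolation. This is a hypothetical reconstruction from the prose, not the SOLR-1311 patch code: "Doc" data is reduced to a list of collapse-field values, and adjacent duplicates inside the considered window are pushed to the tail of that window (position 2999, then 2998, ...), leaving everything beyond the window untouched.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Hedged sketch of adjacent pseudo-collapsing as described in SOLR-1311.
// Input: collapse-field values in relevance order. Output: same values with
// adjacent duplicates (within the first `considerHowMany` docs) demoted to
// the end of that window, first demoted doc landing last.
public class PseudoCollapse {

    public static List<String> collapse(List<String> collapseValues, int considerHowMany) {
        int window = Math.min(considerHowMany, collapseValues.size());
        List<String> kept = new ArrayList<>();
        Deque<String> demoted = new ArrayDeque<>(); // stack: first demoted pops out last
        String prev = null;
        for (int i = 0; i < window; i++) {
            String v = collapseValues.get(i);
            if (v.equals(prev)) {
                demoted.push(v); // collapsed: goes toward pos 2999, 2998, ...
            } else {
                kept.add(v);
                prev = v;
            }
        }
        // fill the tail of the window; popping the stack puts the first
        // demoted doc at the very end, matching the 2999/2998 example
        while (!demoted.isEmpty()) {
            kept.add(demoted.pop());
        }
        // docs beyond the considered window are left as-is
        for (int i = window; i < collapseValues.size(); i++) {
            kept.add(collapseValues.get(i));
        }
        return kept;
    }
}
```

On the 9-result example above, doc2 (value 3) and doc6 (value 6) are demoted, so the unique values keep their relevance order and the duplicates reappear at the tail in reverse demotion order.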
Build failed in Hudson: Solr-trunk #1284
See https://hudson.apache.org/hudson/job/Solr-trunk/1284/changes Changes: [uschindler] fix compile failure in Ryan's commit (LUCENE-2671) [ryan] LUCENE-2671 -- bind DocTermsCreator and DocTermsIndexCreator to the raw class [yonik] tests: fix resource leak [yonik] tests: fix resource leak [uschindler] Reenable scripting test, only hudson uses OpenJDK which has no Rhino engine, so add assume. [yonik] tests: use solr's log to skip logging expected exceptions [yonik] SOLR-2010: fix resource leak in spellcheck collator [uschindler] LUCENE-2708: when a test Assume fails, display information, improved one [yonik] formatting/indenting changes only [rmuir] revert r1023266: maybe openjdk doesnt have scripts this needs... perhaps use Assume later in case you have the support [rmuir] remove @Ignores for these tests because trunk is on java 6 [rmuir] add reason why this test is @Ignore [rmuir] add reasons why the tests are @Ignored so you see them from the console [rmuir] revert accidental commit [rmuir] LUCENE-2710: reproduce-with on test failure isnt right if you manually override things [rmuir] add @Ignore to broken tests for now [rmuir] add assume for known turkish bug [rmuir] hack the hack, for hudson since it has no ipv6: protocol not supported immediate failure -- [...truncated 7615 lines...] 
[junit] - --- [junit] Testsuite: org.apache.solr.handler.component.TermVectorComponentTest [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.657 sec [junit] [junit] Testsuite: org.apache.solr.handler.component.TermsComponentTest [junit] Tests run: 13, Failures: 0, Errors: 0, Time elapsed: 0.743 sec [junit] [junit] Testsuite: org.apache.solr.highlight.FastVectorHighlighterTest [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.493 sec [junit] [junit] Testsuite: org.apache.solr.highlight.HighlighterConfigTest [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.607 sec [junit] [junit] Testsuite: org.apache.solr.highlight.HighlighterTest [junit] Tests run: 23, Failures: 0, Errors: 0, Time elapsed: 2.389 sec [junit] [junit] - Standard Error - [junit] 17.10.2010 04:10:36 org.apache.solr.handler.SnapPuller fetchLatestIndex [junit] SCHWERWIEGEND: Master at: http://localhost:41162/solr/replication is not available. Index fetch failed. Exception: The host did not accept the connection within timeout of 5000 ms [junit] - --- [junit] Testsuite: org.apache.solr.request.JSONWriterTest [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 0.673 sec [junit] [junit] Testsuite: org.apache.solr.request.SimpleFacetsTest [junit] Tests run: 22, Failures: 0, Errors: 0, Time elapsed: 5.776 sec [junit] [junit] - Standard Error - [junit] 17/10/2010 08:10:41 AM org.apache.solr.handler.SnapPuller fetchLatestIndex [junit] GRAVE: Master at: http://localhost:41162/solr/replication is not available. Index fetch failed. 
Exception: The host did not accept the connection within timeout of 5000 ms [junit] - --- [junit] Testsuite: org.apache.solr.request.TestBinaryResponseWriter [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.649 sec [junit] [junit] Testsuite: org.apache.solr.request.TestFaceting [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.498 sec [junit] [junit] - Standard Error - [junit] 2010-10-16 22:10:46 org.apache.solr.handler.SnapPuller fetchLatestIndex [junit] SEVERE: Master at: http://localhost:41162/solr/replication is not available. Index fetch failed. Exception: The host did not accept the connection within timeout of 5000 ms [junit] - --- [junit] Testsuite: org.apache.solr.request.TestWriterPerf [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.785 sec [junit] [junit] Testsuite: org.apache.solr.response.TestCSVResponseWriter [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.656 sec [junit] [junit] Testsuite: org.apache.solr.schema.BadIndexSchemaTest [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 0.404 sec [junit] [junit] Testsuite: org.apache.solr.schema.CopyFieldTest [junit] Tests run: 6, Failures: 0, Errors: 0, Time elapsed: 0.484 sec [junit] [junit] Testsuite: org.apache.solr.schema.DateFieldTest [junit] Tests run: 6, Failures: 0, Errors: 0, Time elapsed: 0.018 sec [junit] [junit] - Standard Error - [junit] 2010-10-17 04.10.51 org.apache.solr.handler.SnapPuller fetchLatestIndex [junit] SEVERE: Master at: http://localhost:41162/solr/replication is not
Build failed in Hudson: Lucene-Solr-tests-only-trunk #185
See https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/185/ -- [...truncated 8544 lines...] [junit] Tests run: 6, Failures: 0, Errors: 0, Time elapsed: 0.653 sec [junit] [junit] Testsuite: org.apache.solr.update.processor.SignatureUpdateProcessorFactoryTest [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 3.572 sec [junit] [junit] Testsuite: org.apache.solr.util.TestUtils [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 0.013 sec [junit] [junit] Testsuite: org.apache.solr.schema.NumericFieldsTest [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.634 sec [junit] [junit] Testsuite: org.apache.solr.search.TestQueryUtils [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.985 sec [junit] [junit] Testsuite: org.apache.solr.cloud.ZkControllerTest [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 2.706 sec [junit] [junit] Testsuite: org.apache.solr.common.params.ModifiableSolrParamsTest [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 0.016 sec [junit] [junit] Testsuite: org.apache.solr.common.util.TestNamedListCodec [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.298 sec [junit] [junit] Testsuite: org.apache.solr.search.function.distance.DistanceFunctionTest [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 0.791 sec [junit] [junit] Testsuite: org.apache.solr.servlet.DirectSolrConnectionTest [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.351 sec [junit] [junit] Testsuite: org.apache.solr.spelling.SpellPossibilityIteratorTest [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.009 sec [junit] [junit] Testsuite: org.apache.solr.core.TestQuerySenderListener [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.466 sec [junit] [junit] Testsuite: org.apache.solr.core.TestSolrDeletionPolicy2 [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.709 sec [junit] [junit] Testsuite: org.apache.solr.request.TestFaceting [junit] Tests run: 
3, Failures: 0, Errors: 0, Time elapsed: 5.522 sec [junit] [junit] Testsuite: org.apache.solr.schema.TestBinaryField [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.34 sec [junit] [junit] Testsuite: org.apache.solr.handler.TestCSVLoader [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 1.608 sec [junit] [junit] Testsuite: org.apache.solr.search.TestDocSet [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.46 sec [junit] [junit] Testsuite: org.apache.solr.update.DirectUpdateHandlerTest [junit] Tests run: 7, Failures: 0, Errors: 0, Time elapsed: 2.5 sec [junit] [junit] Testsuite: org.apache.solr.handler.XmlUpdateRequestHandlerTest [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.482 sec [junit] [junit] Testsuite: org.apache.solr.search.TestSolrQueryParser [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.755 sec [junit] [junit] Testsuite: org.apache.solr.velocity.VelocityResponseWriterTest [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.93 sec [junit] [junit] Testsuite: org.apache.solr.handler.component.DebugComponentTest [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.641 sec [junit] [junit] Testsuite: org.apache.solr.spelling.FileBasedSpellCheckerTest [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 0.88 sec [junit] [junit] Testsuite: org.apache.solr.schema.IndexSchemaTest [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 0.676 sec [junit] [junit] Testsuite: org.apache.solr.schema.PolyFieldTest [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.703 sec [junit] [junit] Testsuite: org.apache.solr.search.TestFastLRUCache [junit] Tests run: 6, Failures: 0, Errors: 0, Time elapsed: 0.021 sec [junit] [junit] Testsuite: org.apache.solr.update.TestIndexingPerformance [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.808 sec [junit] [junit] Testsuite: org.apache.solr.util.DateMathParserTest [junit] Tests run: 7, Failures: 0, Errors: 0, Time 
elapsed: 0.056 sec [junit] [junit] Testsuite: org.apache.solr.search.function.TestFunctionQuery [junit] Tests run: 7, Failures: 0, Errors: 0, Time elapsed: 1.461 sec [junit] [junit] Testsuite: org.apache.solr.servlet.SolrRequestParserTest [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.53 sec [junit] [junit] Testsuite: org.apache.solr.spelling.suggest.SuggesterTest [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.153 sec [junit] [junit] Testsuite: org.apache.solr.update.DocumentBuilderTest [junit] Tests run: 3, Failures:
Hudson build is back to normal : Lucene-Solr-tests-only-trunk #186
See https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/186/
Build failed in Hudson: Lucene-Solr-tests-only-3.x #178
See https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-3.x/178/ -- [...truncated 9979 lines...] [junit] at org.apache.solr.handler.dataimport.XPathEntityProcessor$2.run(XPathEntityProcessor.java:403) [junit] Caused by: com.ctc.wstx.exc.WstxParsingException: Unexpected close tag /id; expected /node. [junit] at [row,col {unknown-source}]: [11,18] [junit] at com.ctc.wstx.sr.StreamScanner.constructWfcException(StreamScanner.java:630) [junit] at com.ctc.wstx.sr.StreamScanner.throwParseError(StreamScanner.java:461) [junit] at com.ctc.wstx.sr.BasicStreamReader.reportWrongEndElem(BasicStreamReader.java:3258) [junit] at com.ctc.wstx.sr.BasicStreamReader.readEndElem(BasicStreamReader.java:3200) [junit] at com.ctc.wstx.sr.BasicStreamReader.nextFromTree(BasicStreamReader.java:2832) [junit] at com.ctc.wstx.sr.BasicStreamReader.next(BasicStreamReader.java:1019) [junit] at org.apache.solr.handler.dataimport.XPathRecordReader$Node.parse(XPathRecordReader.java:275) [junit] at org.apache.solr.handler.dataimport.XPathRecordReader$Node.handleStartElement(XPathRecordReader.java:340) [junit] at org.apache.solr.handler.dataimport.XPathRecordReader$Node.parse(XPathRecordReader.java:304) [junit] at org.apache.solr.handler.dataimport.XPathRecordReader$Node.handleStartElement(XPathRecordReader.java:340) [junit] at org.apache.solr.handler.dataimport.XPathRecordReader$Node.parse(XPathRecordReader.java:304) [junit] at org.apache.solr.handler.dataimport.XPathRecordReader$Node.access$200(XPathRecordReader.java:196) [junit] at org.apache.solr.handler.dataimport.XPathRecordReader.streamRecords(XPathRecordReader.java:178) [junit] ... 
1 more [junit] 10 17, 10 10:15:38 AM org.apache.solr.handler.dataimport.DocBuilder buildDocument [junit] SEVERE: Exception while processing: node document : SolrInputDocument[{}] [junit] org.apache.solr.handler.dataimport.DataImportHandlerException: Parsing failed for xml, url:test rows processed:2 last row: {id=2, desc=[test2], $forEach=/root/node} Processing Document # 1 [junit] at org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:72) [junit] at org.apache.solr.handler.dataimport.XPathEntityProcessor.initQuery(XPathEntityProcessor.java:305) [junit] at org.apache.solr.handler.dataimport.XPathEntityProcessor.fetchNextRow(XPathEntityProcessor.java:201) [junit] at org.apache.solr.handler.dataimport.XPathEntityProcessor.nextRow(XPathEntityProcessor.java:181) [junit] at org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:233) [junit] at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:579) [junit] at org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:260) [junit] at org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:184) [junit] at org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:334) [junit] at org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:392) [junit] at org.apache.solr.handler.dataimport.DataImportHandler.handleRequestBody(DataImportHandler.java:203) [junit] at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131) [junit] at org.apache.solr.core.SolrCore.execute(SolrCore.java:1322) [junit] at org.apache.solr.util.TestHarness.queryAndResponse(TestHarness.java:339) [junit] at org.apache.solr.util.TestHarness.query(TestHarness.java:322) [junit] at org.apache.solr.handler.dataimport.AbstractDataImportHandlerTestCase.runFullImport(AbstractDataImportHandlerTestCase.java:81) [junit] at 
org.apache.solr.handler.dataimport.TestErrorHandling.testAbortOnError(TestErrorHandling.java:65) [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [junit] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) [junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [junit] at java.lang.reflect.Method.invoke(Method.java:616) [junit] at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) [junit] at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) [junit] at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) [junit] at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) [junit] at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:48) [junit] at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
Hudson build is back to normal : Lucene-Solr-tests-only-3.x #179
See https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-3.x/179/
Build failed in Hudson: Lucene-Solr-tests-only-trunk #190
See https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/190/ -- [...truncated 9112 lines...] [junit] at org.apache.solr.handler.dataimport.XPathEntityProcessor$2.run(XPathEntityProcessor.java:403) [junit] Caused by: com.ctc.wstx.exc.WstxParsingException: Unexpected close tag /id; expected /node. [junit] at [row,col {unknown-source}]: [11,18] [junit] at com.ctc.wstx.sr.StreamScanner.constructWfcException(StreamScanner.java:630) [junit] at com.ctc.wstx.sr.StreamScanner.throwParseError(StreamScanner.java:461) [junit] at com.ctc.wstx.sr.BasicStreamReader.reportWrongEndElem(BasicStreamReader.java:3258) [junit] at com.ctc.wstx.sr.BasicStreamReader.readEndElem(BasicStreamReader.java:3200) [junit] at com.ctc.wstx.sr.BasicStreamReader.nextFromTree(BasicStreamReader.java:2832) [junit] at com.ctc.wstx.sr.BasicStreamReader.next(BasicStreamReader.java:1019) [junit] at org.apache.solr.handler.dataimport.XPathRecordReader$Node.parse(XPathRecordReader.java:275) [junit] at org.apache.solr.handler.dataimport.XPathRecordReader$Node.handleStartElement(XPathRecordReader.java:340) [junit] at org.apache.solr.handler.dataimport.XPathRecordReader$Node.parse(XPathRecordReader.java:304) [junit] at org.apache.solr.handler.dataimport.XPathRecordReader$Node.handleStartElement(XPathRecordReader.java:340) [junit] at org.apache.solr.handler.dataimport.XPathRecordReader$Node.parse(XPathRecordReader.java:304) [junit] at org.apache.solr.handler.dataimport.XPathRecordReader$Node.access$200(XPathRecordReader.java:196) [junit] at org.apache.solr.handler.dataimport.XPathRecordReader.streamRecords(XPathRecordReader.java:178) [junit] ... 
1 more [junit] 10-17-2010 10:01:00 PM org.apache.solr.handler.dataimport.DocBuilder buildDocument [junit] GRAVE: Exception while processing: node document : SolrInputDocument[{}] [junit] org.apache.solr.handler.dataimport.DataImportHandlerException: Parsing failed for xml, url:test rows processed:2 last row: {id=2, desc=[test2], $forEach=/root/node} Processing Document # 1 [junit] at org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:72) [junit] at org.apache.solr.handler.dataimport.XPathEntityProcessor.initQuery(XPathEntityProcessor.java:305) [junit] at org.apache.solr.handler.dataimport.XPathEntityProcessor.fetchNextRow(XPathEntityProcessor.java:201) [junit] at org.apache.solr.handler.dataimport.XPathEntityProcessor.nextRow(XPathEntityProcessor.java:181) [junit] at org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:233) [junit] at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:579) [junit] at org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:260) [junit] at org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:184) [junit] at org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:334) [junit] at org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:392) [junit] at org.apache.solr.handler.dataimport.DataImportHandler.handleRequestBody(DataImportHandler.java:203) [junit] at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131) [junit] at org.apache.solr.core.SolrCore.execute(SolrCore.java:1325) [junit] at org.apache.solr.util.TestHarness.queryAndResponse(TestHarness.java:339) [junit] at org.apache.solr.util.TestHarness.query(TestHarness.java:322) [junit] at org.apache.solr.handler.dataimport.AbstractDataImportHandlerTestCase.runFullImport(AbstractDataImportHandlerTestCase.java:81) [junit] at 
org.apache.solr.handler.dataimport.TestErrorHandling.testAbortOnError(TestErrorHandling.java:65) [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [junit] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) [junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [junit] at java.lang.reflect.Method.invoke(Method.java:616) [junit] at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) [junit] at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) [junit] at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) [junit] at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) [junit] at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:48) [junit] at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
Hudson build is back to normal : Lucene-Solr-tests-only-trunk #191
See https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/191/
Build failed in Hudson: Lucene-Solr-tests-only-3.x #183
See https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-3.x/183/ -- [...truncated 3839 lines...] [junit] [junit] Testsuite: org.apache.lucene.search.TestDateFilter [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.02 sec [junit] [junit] Testsuite: org.apache.lucene.search.spans.TestSpanFirstQuery [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.009 sec [junit] [junit] Testsuite: org.apache.lucene.search.TestFieldCacheTermsFilter [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.015 sec [junit] [junit] Testsuite: org.apache.lucene.store.TestDirectory [junit] Tests run: 7, Failures: 0, Errors: 0, Time elapsed: 0.025 sec [junit] [junit] Testsuite: org.apache.lucene.search.TestFilteredSearch [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.016 sec [junit] [junit] Testsuite: org.apache.lucene.util.TestArrayUtil [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.021 sec [junit] [junit] Testsuite: org.apache.lucene.util.TestRamUsageEstimator [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.009 sec [junit] [junit] Testsuite: org.apache.lucene.search.TestBoolean2 [junit] Tests run: 11, Failures: 0, Errors: 0, Time elapsed: 9.587 sec [junit] [junit] Testsuite: org.apache.lucene.search.TestBooleanScorer [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.022 sec [junit] [junit] Testsuite: org.apache.lucene.store.TestBufferedIndexInput [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 3.164 sec [junit] [junit] Testsuite: org.apache.lucene.store.TestLock [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.016 sec [junit] [junit] Testsuite: org.apache.lucene.util.TestAttributeSource [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 0.02 sec [junit] [junit] Testsuite: org.apache.lucene.util.TestVirtualMethod [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.012 sec [junit] [junit] Testsuite: org.apache.lucene.search.TestCustomSearcherSort [junit] 
Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 1.188 sec [junit] [junit] Testsuite: org.apache.lucene.search.function.TestCustomScoreQuery [junit] Tests run: 6, Failures: 0, Errors: 0, Time elapsed: 6.491 sec [junit] [junit] Testsuite: org.apache.lucene.search.TestMultiValuedNumericRangeQuery [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 11.013 sec [junit] [junit] Testsuite: org.apache.lucene.search.TestParallelMultiSearcher [junit] Tests run: 6, Failures: 0, Errors: 0, Time elapsed: 0.14 sec [junit] [junit] Testsuite: org.apache.lucene.search.TestQueryTermVector [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.01 sec [junit] [junit] Testsuite: org.apache.lucene.search.TestSloppyPhraseQuery [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 0.492 sec [junit] [junit] Testsuite: org.apache.lucene.search.TestTermScorer [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 0.011 sec [junit] [junit] Testsuite: org.apache.lucene.search.function.TestOrdValues [junit] Tests run: 6, Failures: 0, Errors: 0, Time elapsed: 0.185 sec [junit] [junit] Testsuite: org.apache.lucene.search.spans.TestSpans [junit] Tests run: 25, Failures: 0, Errors: 0, Time elapsed: 0.771 sec [junit] [junit] Testsuite: org.apache.lucene.search.spans.TestBasics [junit] Tests run: 23, Failures: 0, Errors: 0, Time elapsed: 8.413 sec [junit] [junit] Testsuite: org.apache.lucene.store.TestRAMDirectory [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.191 sec [junit] [junit] Testsuite: org.apache.lucene.search.spans.TestSpanExplanationsOfNonMatches [junit] Tests run: 31, Failures: 0, Errors: 0, Time elapsed: 0.086 sec [junit] [junit] Testsuite: org.apache.lucene.search.TestNumericRangeQuery64 [junit] Tests run: 33, Failures: 0, Errors: 0, Time elapsed: 33.497 sec [junit] [junit] Testsuite: org.apache.lucene.search.TestPositionIncrement [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.052 sec [junit] [junit] Testsuite: 
org.apache.lucene.store.TestLockFactory [junit] Tests run: 11, Failures: 0, Errors: 0, Time elapsed: 4.061 sec [junit] [junit] Testsuite: org.apache.lucene.util.TestFieldCacheSanityChecker [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.39 sec [junit] [junit] Testsuite: org.apache.lucene.util.TestSetOnce [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.018 sec [junit] [junit] Testsuite: org.apache.lucene.index.TestIndexWriter [junit] Tests run: 116, Failures: 0, Errors: 0, Time elapsed: 43.494 sec [junit] [junit] Testsuite:
[jira] Commented: (SOLR-2168) Velocity facet output for facet missing
[ https://issues.apache.org/jira/browse/SOLR-2168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12921831#action_12921831 ] Erik Hatcher commented on SOLR-2168: I've taken the patch idea and given it a slight modification. In my effort to learn git, I've committed it to my facet_missing branch here: http://github.com/erikhatcher/lucene-solr/commit/70197cca24129141081d83ea422515dc69287e73 (I've still got to figure out how to push this change to Solr's trunk through git.) I changed the code so that the #set wasn't needed, and also changed the label shown for missing facets to "No $field.name" with angle brackets, to avoid it looking like a regular value. Are these changes ok with you? (I also cleaned up the formatting of that file slightly.) Out of curiosity, what's annoying about Velocity syntax? (For example, show me how it'd look in your template language of choice.) Velocity facet output for facet missing --- Key: SOLR-2168 URL: https://issues.apache.org/jira/browse/SOLR-2168 Project: Solr Issue Type: Bug Components: Response Writers Affects Versions: 3.1 Reporter: Peter Wolanin Priority: Minor Attachments: SOLR-2168.patch If I add facet.missing to the facet params for a field, the Velocity output has in the facet list: $facet.name (9220) -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (SOLR-2168) Velocity facet output for facet missing
[ https://issues.apache.org/jira/browse/SOLR-2168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12921833#action_12921833 ] Erik Hatcher commented on SOLR-2168: Also, in the patch I linked to in my previous comment, I made it so that the missing facet isn't shown if the count is zero. Velocity facet output for facet missing --- Key: SOLR-2168 URL: https://issues.apache.org/jira/browse/SOLR-2168 Project: Solr Issue Type: Bug Components: Response Writers Affects Versions: 3.1 Reporter: Peter Wolanin Priority: Minor Attachments: SOLR-2168.patch If I add facet.missing to the facet params for a field, the Velocity output has in the facet list: $facet.name (9220)
Hudson build is back to normal : Lucene-Solr-tests-only-3.x #184
See https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-3.x/184/
Re: Continuous Integration builds for branches
On Sun, Oct 17, 2010 at 3:28 AM, Simon Willnauer simon.willna...@googlemail.com wrote: I bring this on the list from a quick discussion on IRC with uwe. Since we have these very valuable tests for Solr/Lucene trunk I would like to propose the same thing for the big feature branches like realtime and docvalues. We usually move features to trunk to let them bake in a bit and let the random tests run to catch problems early. I have a few concerns: * If a branch is stable enough that you are worried about running 24/7 random tests to find little things, I think that means it's time to merge it to trunk? * In a lot of cases, I think a branch is going to be a bit too unstable for this kind of thing; people are going to be ripping things apart, that's why they have a branch. Even trunk seems a bit too unstable with the tests at the moment (though it's improving) * In a lot of cases a branch is a single person, maybe two, working on it. If tests start bombing, maybe they are on vacation for two weeks and that's a lot of spam.
[jira] Updated: (LUCENE-2671) Add sort missing first/last ability to SortField and ValueComparator
[ https://issues.apache.org/jira/browse/LUCENE-2671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ryan McKinley updated LUCENE-2671: -- Attachment: LUCENE-2671-caches.patch Here is a new patch that applies cleanly. I could go either way on this: keep it as it is, vs. drop the first-layer generic map. Add sort missing first/last ability to SortField and ValueComparator Key: LUCENE-2671 URL: https://issues.apache.org/jira/browse/LUCENE-2671 Project: Lucene - Java Issue Type: New Feature Components: Search Reporter: Ryan McKinley Assignee: Ryan McKinley Fix For: 4.0 Attachments: LUCENE-2671-caches.patch, LUCENE-2671-caches.patch, LUCENE-2671-SortMissingLast.patch, LUCENE-2671-suppress-unchecked.patch When SortField and ValueComparator use EntryCreators (from LUCENE-2649) they use a special sort value when the field is missing. This enables Lucene to implement 'sort missing last' or 'sort missing first' for numeric values from the FieldCache.
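The 'sort missing last' semantics described in the issue can be illustrated with a plain Java comparator — a minimal sketch where null stands in for a document that has no value in the FieldCache; the class and method names here are hypothetical illustrations, not Lucene's SortField API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class SortMissingDemo {
    // Sort ascending, placing "missing" (null) values after all present ones,
    // which is what 'sort missing last' means for a numeric sort field.
    static List<Integer> sortMissingLast(List<Integer> values) {
        List<Integer> copy = new ArrayList<>(values);
        copy.sort(Comparator.nullsLast(Comparator.<Integer>naturalOrder()));
        return copy;
    }

    public static void main(String[] args) {
        // null marks docs with no value for the field
        System.out.println(sortMissingLast(Arrays.asList(5, null, 1, null, 3)));
        // prints [1, 3, 5, null, null]
    }
}
```

Swapping nullsLast for nullsFirst gives the 'sort missing first' behavior.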
[jira] Commented: (LUCENE-2118) Intermittent failure in TestIndexWriterMergePolicy.testMaxBufferedDocsChange
[ https://issues.apache.org/jira/browse/LUCENE-2118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12921839#action_12921839 ] Robert Muir commented on LUCENE-2118: - Hudson hit this last night (brancH_3x) {noformat} [junit] Testsuite: org.apache.lucene.index.TestIndexWriterMergePolicy [junit] Testcase: testMaxBufferedDocsChange(org.apache.lucene.index.TestIndexWriterMergePolicy): FAILED [junit] maxMergeDocs=2147483647; numSegments=11; upperBound=10; mergeFactor=10; segs=_65:c5950 _5t:c10-_32 _5u:c10-_32 _5v:c10-_32 _5w:c10-_32 _5x:c10-_32 _5y:c10-_32 _5z:c10-_32 _60:c10-_32 _61:c10-_32 _62:c5-_32 _63:c5-_62 [junit] junit.framework.AssertionFailedError: maxMergeDocs=2147483647; numSegments=11; upperBound=10; mergeFactor=10; segs=_65:c5950 _5t:c10-_32 _5u:c10-_32 _5v:c10-_32 _5w:c10-_32 _5x:c10-_32 _5y:c10-_32 _5z:c10-_32 _60:c10-_32 _61:c10-_32 _62:c5-_32 _63:c5-_62 [junit] at org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:770) [junit] at org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:737) [junit] at org.apache.lucene.index.TestIndexWriterMergePolicy.checkInvariants(TestIndexWriterMergePolicy.java:243) [junit] at org.apache.lucene.index.TestIndexWriterMergePolicy.testMaxBufferedDocsChange(TestIndexWriterMergePolicy.java:169) [junit] [junit] [junit] Tests run: 6, Failures: 1, Errors: 0, Time elapsed: 2.31 sec [junit] [junit] - Standard Output --- [junit] NOTE: reproduce with: ant test -Dtestcase=TestIndexWriterMergePolicy -Dtestmethod=testMaxBufferedDocsChange -Dtests.seed=5199282654207860248:-4379090235199517829 -Dtests.multiplier=3 [junit] NOTE: test params are: locale=is_IS, timezone=Africa/Porto-Novo [junit] - --- [junit] TEST org.apache.lucene.index.TestIndexWriterMergePolicy FAILED {noformat} Intermittent failure in TestIndexWriterMergePolicy.testMaxBufferedDocsChange Key: LUCENE-2118 URL: https://issues.apache.org/jira/browse/LUCENE-2118 
Project: Lucene - Java Issue Type: Bug Components: Index Affects Versions: 3.1 Reporter: Michael McCandless Priority: Minor Fix For: 3.1, 4.0 Last night's build failed from it: http://hudson.zones.apache.org/hudson/job/Lucene-trunk/1019/changes Here's the exc: {code} [junit] Testcase: testMaxBufferedDocsChange(org.apache.lucene.index.TestIndexWriterMergePolicy): FAILED [junit] maxMergeDocs=2147483647; numSegments=11; upperBound=10; mergeFactor=10 [junit] junit.framework.AssertionFailedError: maxMergeDocs=2147483647; numSegments=11; upperBound=10; mergeFactor=10 [junit] at org.apache.lucene.index.TestIndexWriterMergePolicy.checkInvariants(TestIndexWriterMergePolicy.java:234) [junit] at org.apache.lucene.index.TestIndexWriterMergePolicy.testMaxBufferedDocsChange(TestIndexWriterMergePolicy.java:164) [junit] at org.apache.lucene.util.LuceneTestCase.runBare(LuceneTestCase.java:208) {code} The test doesn't fail if I run it on OpenSolaris or OS X machines...
[jira] Updated: (SOLR-2162) TestLBHttpSolrServer.testSimple test failure
[ https://issues.apache.org/jira/browse/SOLR-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Muir updated SOLR-2162: -- Attachment: SOLR-2162_test.patch I think this might be a real bug in the retry handling of the CommonsHttpSolrServer: I suspect a reuse bug where, in the finally block in the loop, releaseConnection() is called but the same method object is later reused (a new Method is not created). Attached is a patch to reproduce the failure. The test fails most of the time, but to be sure run it like this: ant test-core -Dtestcase=TestLBHttpSolrServer -Dtestmethod=testReliability -Dtests.iter=10 It also creates other interesting, wonderful exceptions such as: {noformat} [junit] Caused by: org.apache.solr.client.solrj.SolrServerException: org.apache.commons.httpclient.ProtocolException : Unable to parse header: vr et(..2 {noformat} TestLBHttpSolrServer.testSimple test failure Key: SOLR-2162 URL: https://issues.apache.org/jira/browse/SOLR-2162 Project: Solr Issue Type: Bug Components: Build Affects Versions: 3.1, 4.0 Environment: Hudson Reporter: Robert Muir Fix For: 3.1, 4.0 Attachments: SOLR-2162_test.patch TestLBHttpSolrServer failed, with this error: java.lang.IllegalStateException: Connection is not open Here is the stacktrace: {noformat} [junit] Testsuite: org.apache.solr.client.solrj.TestLBHttpSolrServer [junit] Testcase: testSimple(org.apache.solr.client.solrj.TestLBHttpSolrServer): Caused an ERROR [junit] java.lang.IllegalStateException: Connection is not open [junit] org.apache.solr.client.solrj.SolrServerException: java.lang.IllegalStateException: Connection is not open [junit] at org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:409) [junit] at org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:89) [junit] at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:119) [junit] at org.apache.solr.client.solrj.TestLBHttpSolrServer.testSimple(TestLBHttpSolrServer.java:120)
[junit] at org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:795) [junit] at org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:768) [junit] Caused by: java.lang.IllegalStateException: Connection is not open [junit] at org.apache.commons.httpclient.HttpConnection.assertOpen(HttpConnection.java:1277) [junit] at org.apache.commons.httpclient.HttpConnection.readLine(HttpConnection.java:1115) [junit] at org.apache.commons.httpclient.HttpMethodBase.readStatusLine(HttpMethodBase.java:1973) [junit] at org.apache.commons.httpclient.HttpMethodBase.readResponse(HttpMethodBase.java:1735) [junit] at org.apache.commons.httpclient.HttpMethodBase.execute(HttpMethodBase.java:1098) [junit] at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:398) [junit] at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171) [junit] at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397) [junit] at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:323) [junit] at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:427) [junit] at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:244) [junit] at org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:395) [junit] [junit] [junit] Tests run: 2, Failures: 0, Errors: 1, Time elapsed: 10.319 sec [junit] [junit] - Standard Output --- [junit] NOTE: reproduce with: ant test -Dtestcase=TestLBHttpSolrServer -Dtestmethod=testSimple -Dtests.seed=-4677869093300417770:-570157895683071678 [junit] NOTE: test params are: codec=SimpleText, locale=hr_HR, timezone=Asia/Kathmandu [junit] - --- [junit] TEST org.apache.solr.client.solrj.TestLBHttpSolrServer FAILED {noformat} -- This message is automatically generated by JIRA. 
- You can reply to this email to add a comment to the issue online. - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
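The reuse bug suspected in SOLR-2162 — releaseConnection() called in a finally block while the same method object is reused on the next retry — can be modeled in a few lines of plain Java. Everything here (FlakyServer, the simplified Method class) is a hypothetical stand-in for illustration, not the commons-httpclient API:

```java
public class RetryReuseDemo {
    // A server whose first request fails transiently, so a retry is needed.
    static class FlakyServer {
        int requests = 0;
        int handle() {
            if (++requests == 1) throw new RuntimeException("transient I/O error");
            return 200;
        }
    }

    // Once its connection is released, a Method may not be executed again.
    static class Method {
        private boolean released = false;
        int execute(FlakyServer s) {
            if (released) throw new IllegalStateException("Connection is not open");
            return s.handle();
        }
        void releaseConnection() { released = true; }
    }

    // Buggy pattern: one Method created outside the loop and reused per retry.
    static int buggyRetry(FlakyServer s) {
        Method m = new Method();
        for (int i = 0; i < 2; i++) {
            try {
                return m.execute(s); // 2nd attempt hits "Connection is not open"
            } catch (IllegalStateException e) {
                throw e;             // the failure seen in the test
            } catch (RuntimeException e) {
                // transient error: fall through and retry
            } finally {
                m.releaseConnection();
            }
        }
        throw new RuntimeException("retries exhausted");
    }

    // Fixed pattern: a fresh Method per attempt survives the retry.
    static int fixedRetry(FlakyServer s) {
        for (int i = 0; i < 2; i++) {
            Method m = new Method();
            try {
                return m.execute(s);
            } catch (IllegalStateException e) {
                throw e;
            } catch (RuntimeException e) {
                // transient error: retry with a new Method
            } finally {
                m.releaseConnection();
            }
        }
        throw new RuntimeException("retries exhausted");
    }
}
```

With this model, buggyRetry throws IllegalStateException on the second attempt while fixedRetry recovers, which matches the suggested fix of constructing a new method object per attempt.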
Build failed in Hudson: Lucene-Solr-tests-only-trunk #195
See https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/195/changes Changes: [rmuir] add another for now, since it appears to have TZ problems :( -- [...truncated 8052 lines...] [junit] [junit] Testsuite: org.apache.solr.search.TestDocSet [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.609 sec [junit] [junit] Testsuite: org.apache.solr.search.TestSolrQueryParser [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.55 sec [junit] [junit] Testsuite: org.apache.solr.spelling.FileBasedSpellCheckerTest [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 0.546 sec [junit] [junit] Testsuite: org.apache.solr.update.TestIndexingPerformance [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.745 sec [junit] [junit] Testsuite: org.apache.solr.util.DateMathParserTest [junit] Tests run: 7, Failures: 0, Errors: 0, Time elapsed: 0.055 sec [junit] [junit] Testsuite: org.apache.solr.update.AutoCommitTest [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 4.334 sec [junit] [junit] Testsuite: org.apache.solr.util.SolrPluginUtilsTest [junit] Tests run: 7, Failures: 0, Errors: 0, Time elapsed: 0.53 sec [junit] [junit] Testsuite: org.apache.solr.handler.TestReplicationHandler [junit] Tests run: 7, Failures: 0, Errors: 0, Time elapsed: 27.086 sec [junit] [junit] - Standard Error - [junit] 2010-10-17 14:30:44 org.apache.solr.handler.SnapPuller fetchLatestIndex [junit] GRAVE: Master at: http://localhost:44203/solr/replication is not available. Index fetch failed. 
Exception: The host did not accept the connection within timeout of 5000 ms [junit] - --- [junit] Testsuite: org.apache.solr.handler.component.QueryElevationComponentTest [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 0.862 sec [junit] [junit] Testsuite: org.apache.solr.request.TestWriterPerf [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.707 sec [junit] [junit] Testsuite: org.apache.solr.schema.BadIndexSchemaTest [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 0.471 sec [junit] [junit] Testsuite: org.apache.solr.schema.LegacyDateFieldTest [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.014 sec [junit] [junit] Testsuite: org.apache.solr.search.TestRangeQuery [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 11.532 sec [junit] [junit] - Standard Error - [junit] 17/10/2010 14:30:48 org.apache.solr.handler.SnapPuller fetchLatestIndex [junit] SEVERE: Master at: http://localhost:44213/solr/replication is not available. Index fetch failed. Exception: The host did not accept the connection within timeout of 5000 ms [junit] 17/10/2010 14:30:49 org.apache.solr.handler.SnapPuller fetchLatestIndex [junit] SEVERE: Master at: http://localhost:44203/solr/replication is not available. Index fetch failed. Exception: The host did not accept the connection within timeout of 5000 ms [junit] 17/10/2010 14:30:54 org.apache.solr.handler.SnapPuller fetchLatestIndex [junit] SEVERE: Master at: http://localhost:44203/solr/replication is not available. Index fetch failed. 
Exception: The host did not accept the connection within timeout of 5000 ms [junit] - --- [junit] Testsuite: org.apache.solr.search.function.SortByFunctionTest [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 1.096 sec [junit] [junit] Testsuite: org.apache.solr.servlet.CacheHeaderTest [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 0.894 sec [junit] [junit] - Standard Error - [junit] 18-ott-2010 03:30:59 org.apache.solr.handler.SnapPuller fetchLatestIndex [junit] GRAVE: Master at: http://localhost:44203/solr/replication is not available. Index fetch failed. Exception: The host did not accept the connection within timeout of 5000 ms [junit] - --- [junit] Testsuite: org.apache.solr.spelling.SpellingQueryConverterTest [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.023 sec [junit] [junit] Testsuite: org.apache.solr.update.processor.UpdateRequestProcessorFactoryTest [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.583 sec [junit] [junit] Testsuite: org.apache.solr.util.PrimUtilsTest [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.032 sec [junit] [junit] Testsuite: org.apache.solr.TestDistributedSearch [junit] Testcase: testDistribSearch(org.apache.solr.TestDistributedSearch): FAILED [junit] Some threads threw uncaught exceptions! [junit]
Hudson build is back to normal : Lucene-Solr-tests-only-trunk #196
See https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/196/
[jira] Resolved: (LUCENE-2703) multitermquery scoring differences between 3x and trunk
[ https://issues.apache.org/jira/browse/LUCENE-2703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Muir resolved LUCENE-2703. - Resolution: Fixed Uwe's assert in LUCENE-2690 is better than any test I could add here... it even found a bug in another TermEnum impl. multitermquery scoring differences between 3x and trunk --- Key: LUCENE-2703 URL: https://issues.apache.org/jira/browse/LUCENE-2703 Project: Lucene - Java Issue Type: Bug Affects Versions: 4.0 Reporter: Robert Muir Fix For: 4.0 Attachments: LUCENE-2703.patch, LUCENE-2703_test.patch Try this patch with a test that applies cleanly to both 3x and trunk, but fails on trunk. If you modify the test-data generator to use TopTerms*BoostOnly* rewrite, then it acts like TestFuzzyQuery2 and passes. So the problem is in TopTermsScoringBooleanRewrite, or BooleanQuery, or somewhere else.
[jira] Commented: (LUCENE-2703) multitermquery scoring differences between 3x and trunk
[ https://issues.apache.org/jira/browse/LUCENE-2703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12921851#action_12921851 ] Uwe Schindler commented on LUCENE-2703: --- Thanks! :-)
[jira] Issue Comment Edited: (SOLR-2168) Velocity facet output for facet missing
[ https://issues.apache.org/jira/browse/SOLR-2168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12921852#action_12921852 ] Peter Wolanin edited comment on SOLR-2168 at 10/17/10 10:25 AM: Those all sound like good changes. In terms of templating, I'd find something like ERB, or PHP, or JSP much easier, and I imagine many more people are familiar with those. So far, I feel like it's hard to understand in Velocity how variables and control structures are distinguished from the output, and it's not clear that it's a real template in terms of the way e.g. white space is handled or not. This is especially true in the case of macro output, where it seems like e.g. the carriage returns and spaces I'd naturally include in control structures to make them readable become part of the output. The variable handling is also weird: I need to use #set() for actual assignment? In terms of readability, look, for example, at this bit: {code} <li><a href="#url_for_home#lens&fq=$esc.url( {code} the fq= is output in the middle of a series of macro and function calls but nothing visually distinguishes them. Can I define new functions instead of macros? If a macro call could be written as #{url_for_home} it would provide more visual separation. I notice in the patch you have: {code} -${field.name}:[* TO *] {code} Looks like the function call can be written like this? {code} ${esc.url(-${field.name}:[* TO *])} {code}
[jira] Commented: (SOLR-2168) Velocity facet output for facet missing
[ https://issues.apache.org/jira/browse/SOLR-2168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12921854#action_12921854 ] Yonik Seeley commented on SOLR-2168: bq. In my effort to learn git, I've committed it to my facet_missing branch here Hmmm, I'm liking this already... Nice and easy review of patches (including commenting), and you can get just the patch, or the complete file, or the complete tree.
[jira] Commented: (SOLR-2168) Velocity facet output for facet missing
[ https://issues.apache.org/jira/browse/SOLR-2168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12921855#action_12921855 ] Peter Wolanin commented on SOLR-2168: - If you want to start using git more widely for development (assuming people still post the final patches as attachments here) you might want to set up a canonical mirror some place on github so that everyone uses the same initial tree. We have this for Drupal: http://github.com/drupal/drupal and mirroring out of svn is probably even easier if someone has a server and can just run a script on cron every ~15 min.
RE: possible to filter the output to commits@ list????
Hi Daniel, I already proposed something similar to our developers list and also explained, why it is a bad idea to merge single files (see attached mail). Still our/my opinion: It may be better, to simply provide optionally plain diffs without metadata for commit mails (on request of a project, e.g. Grant Ingersoll could configure that for the Lucene project in the SVN config file). Uwe - Uwe Schindler uschind...@apache.org Apache Lucene PMC Member / Committer Bremen, Germany http://lucene.apache.org/ -Original Message- From: Daniel Shahaf [mailto:d...@daniel.shahaf.name] Sent: Sunday, October 17, 2010 5:17 PM To: Uwe Schindler Cc: dev@lucene.apache.org; 'Apache Infrastructure' Subject: Re: possible to filter the output to commits@ list Or you could try to see if some of the existing mergeinfo can be removed, and try to have less subtree mergeinfo in the first place; these are common topics on us...@subversion. Uwe Schindler wrote on Sun, Oct 17, 2010 at 17:06:44 +0200: Hi all, There are configuration options in SVN to let the commit mails pipe through a filter. Other svn hosting providers like Sourceforge provide some filters to be applied. Ideally, you would use the patchutils package (it's called like this in Debian/Ubuntu) and use svn diff | filterdiff --clean and pipe that into an eMail. Maybe we should open an issue at infra? Uwe - Uwe Schindler uschind...@apache.org Apache Lucene PMC Member / Committer Bremen, Germany http://lucene.apache.org/ -Original Message- From: Robert Muir [mailto:rcm...@gmail.com] Sent: Sunday, October 17, 2010 4:58 PM To: dev@lucene.apache.org Subject: possible to filter the output to commits@ list Lets say i change a single line of code, and merge it back to the 3x_branch. Currently we get 6 or 7 emails of mergeproperty changes to the commits@ list... this is making it difficult or impossible for backports to be reviewed at all... I think this is terrible. 
How can we get this fixed so that these mergeprop-changes only are filtered in the emails? Someone could always click the link to viewvc or whatever if they are interested in this... - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org ---BeginMessage--- I cc'ed my previous eMail also to the in...@ao list. In general, if we cannot filter, we could think about the following: - Remove all mergeprops recursively for trunk, but we would lose history for merges. - Alternatively, merge all mergeprops up to the root folder and remove them from the separate files/subfolders. This would lose some changes, but if all are merged up, that's fine (only some files may appear as merged from a different branch, although they aren't). In general, when merging, to reduce the amount of mergeprops: Don't merge single files or subdirectories! Always merge the top folder! Excluded from this is trunk/modules, as this needs a separate merging step between trunk/3x. So a merge would then only add mergeprops to the root folder and modules/contrib (in branch). This is the explanation for the behavior: SVN needs to add the mergeprops to all single files that *already* contain mergeprops on each merge (SVN is not able to store only diffs in per-file props). This is the reason why we have so many mergeprops at single files that get updated every time: once a file has a merge property, it no longer inherits it from the parent folder. Whenever you merge something, the new rev numbers get added to the parent folder, as expected; but as this single file also has mergeprops, and those are not differential/separate from the parent's, the merge prop of this single file also needs to be updated. This leads to the large modification email. We cannot change that anymore, so by reducing the number of single-file merges, we can stop this behavior from getting worse. To get rid of it completely and start fresh, see above.
Uwe - Uwe Schindler H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de eMail: u...@thetaphi.de -Original Message- From: Robert Muir [mailto:rcm...@gmail.com] Sent: Sunday, October 17, 2010 4:58 PM To: dev@lucene.apache.org Subject: possible to filter the output to commits@ list Lets say i change a single line of code, and merge it back to the 3x_branch. Currently we get 6 or 7 emails of mergeproperty changes to the commits@ list... this is making it difficult or impossible for backports to be reviewed at all... I think this is terrible. How can we get this fixed so that these mergeprop-changes only are filtered in the emails? Someone could always click the link to viewvc or whatever if they are interested in this... - To unsubscribe, e-mail:
Re: svn commit: r1023520 - /lucene/dev/trunk/solr/src/java/org/apache/solr/handler/XmlUpdateRequestHandler.java
FYI, rather than spamming the list with backports, I'm going to start batching my backports to the 3x branch. All the svn:mergeinfo updates for even the smallest merge are a real nightmare. Merges to the 3x branch are practically unreadable since the real update is lost in the noise. -Yonik http://www.lucidimagination.com On Sun, Oct 17, 2010 at 12:28 PM, yo...@apache.org wrote: Author: yonik Date: Sun Oct 17 16:28:37 2010 New Revision: 1023520 URL: http://svn.apache.org/viewvc?rev=1023520&view=rev Log: close the request we created Modified: lucene/dev/trunk/solr/src/java/org/apache/solr/handler/XmlUpdateRequestHandler.java Modified: lucene/dev/trunk/solr/src/java/org/apache/solr/handler/XmlUpdateRequestHandler.java URL: http://svn.apache.org/viewvc/lucene/dev/trunk/solr/src/java/org/apache/solr/handler/XmlUpdateRequestHandler.java?rev=1023520&r1=1023519&r2=1023520&view=diff == --- lucene/dev/trunk/solr/src/java/org/apache/solr/handler/XmlUpdateRequestHandler.java (original) +++ lucene/dev/trunk/solr/src/java/org/apache/solr/handler/XmlUpdateRequestHandler.java Sun Oct 17 16:28:37 2010 @@ -23,6 +23,7 @@ import org.apache.solr.common.params.Sol import org.apache.solr.common.util.NamedList; import org.apache.solr.common.util.XML; import org.apache.solr.core.SolrCore; +import org.apache.solr.request.LocalSolrQueryRequest; import org.apache.solr.request.SolrQueryRequest; import org.apache.solr.request.SolrQueryRequestBase; import org.apache.solr.response.SolrQueryResponse; @@ -117,15 +118,13 @@ public class XmlUpdateRequestHandler ext */ @Deprecated public void doLegacyUpdate(Reader input, Writer output) { - try { - SolrCore core = SolrCore.getSolrCore(); + SolrCore core = SolrCore.getSolrCore(); + SolrQueryRequest req = new LocalSolrQueryRequest(core, new HashMap<String,String[]>()); + try { // Old style requests do not choose a custom handler UpdateRequestProcessorChain processorFactory = core.getUpdateProcessingChain(null); - SolrParams params = new MapSolrParams(new HashMap<String, String>()); - SolrQueryRequestBase req = new SolrQueryRequestBase(core, params) { - }; SolrQueryResponse rsp = new SolrQueryResponse(); // ignored XMLStreamReader parser = inputFactory.createXMLStreamReader(input); UpdateRequestProcessor processor = processorFactory.createProcessor(req, rsp); @@ -142,6 +141,9 @@ public class XmlUpdateRequestHandler ext log.error("Error writing to output stream: " + ee); } } + finally { + req.close(); + } } // SolrInfoMBeans methods // - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
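The commit quoted above moves creation of the request before the try block so that a finally clause can always close it, even when the handler throws. The same pattern in miniature, with a hypothetical Resource class standing in for SolrQueryRequest:

```java
public class CloseInFinallyDemo {
    static class Resource {
        boolean closed = false;
        void use(boolean fail) {
            if (fail) throw new RuntimeException("handler error");
        }
        void close() { closed = true; }
    }

    static Resource process(boolean fail) {
        Resource req = new Resource(); // created before the try, as in the patch
        try {
            req.use(fail);
        } catch (RuntimeException e) {
            // log and continue, as doLegacyUpdate does with its errors
        } finally {
            req.close();               // always released, success or failure
        }
        return req;
    }

    public static void main(String[] args) {
        System.out.println(process(true).closed);  // prints true: closed despite the error
        System.out.println(process(false).closed); // prints true
    }
}
```

Creating the resource outside the try (rather than inside it) is what lets a single finally block own the cleanup unconditionally.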
Re: svn commit: r1023520 - /lucene/dev/trunk/solr/src/java/org/apache/solr/handler/XmlUpdateRequestHandler.java
On Sun, Oct 17, 2010 at 12:33 PM, Yonik Seeley yo...@lucidimagination.com wrote: FYI, rather than spamming the list with backports, I'm going to start batching my backports to the 3x branch. All the svn:mergeinfo updates for even the smallest merge are a real nightmare. Merges to the 3x branch are practically unreadable since the real update is lost in the noise. And it's only going to get worse; I mean, it's pretty likely we might do more refactoring between Lucene/Solr, moving files around and requiring more mergeprops. If we don't do something to filter the diffs, I can easily see a single-line merge being 20 emails soon.
[jira] Created: (LUCENE-2713) TestPhraseQuery.testRandomPhrases takes minutes to run with SimpleText
TestPhraseQuery.testRandomPhrases takes minutes to run with SimpleText -- Key: LUCENE-2713 URL: https://issues.apache.org/jira/browse/LUCENE-2713 Project: Lucene - Java Issue Type: Bug Components: Tests Affects Versions: 4.0 Reporter: Robert Muir Fix For: 4.0 This test takes a few minutes to run if it gets the SimpleText codec. On Hudson, it took 15 minutes! I added an assumeFalse(simpleText) as a temporary workaround, but we should see if there is something we can improve so we can remove this hack.
Re: Continuous Integration builds for branches
On Sun, Oct 17, 2010 at 2:22 PM, Robert Muir rcm...@gmail.com wrote: On Sun, Oct 17, 2010 at 3:28 AM, Simon Willnauer simon.willna...@googlemail.com wrote: I bring this up on the list from a quick discussion on IRC with Uwe. Since we have these very valuable tests for Solr/Lucene trunk, I would like to propose the same thing for the big feature branches like realtime and docvalues. We usually move features to trunk to let them bake in a bit and let the random tests run to catch problems early. I have a few concerns: * If a branch is stable enough that you are worried about running 24/7 random tests to find little things, I think that means it's time to merge it to trunk? True, but that still helps to find those little things that come out if you run on trunk which can be fixed earlier. I don't think that shooting those build mails to the dev list makes any sense. It should go to individuals working on that branch. We have the infrastructure, why not run such a build on commit and once or twice a day... I don't see any reason why we shouldn't. * In a lot of cases, I think a branch is going to be a bit too unstable for this kind of thing; people are going to be ripping things apart, that's why they have a branch. Even trunk seems a bit too unstable with the tests at the moment (though it's improving). See my comment about the mails above. * In a lot of cases a branch is a single person, maybe two, working on it. If tests start bombing, maybe they are on vacation for two weeks and that's a lot of spam. Again, above.
Build failed in Hudson: Lucene-Solr-tests-only-3.x #197
See https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-3.x/197/changes Changes: [uschindler] update ANT to 1.7.1 -- Started by user uschindler Building remotely on lucene Reverting http://svn.apache.org/repos/asf/lucene/dev/branches/branch_3x Updating http://svn.apache.org/repos/asf/lucene/dev/branches/branch_3x At revision 1023542 Reverting http://svn.apache.org/repos/asf/lucene/dev/nightly Updating http://svn.apache.org/repos/asf/lucene/dev/nightly U hudson-settings.sh At revision 1023542 no change for http://svn.apache.org/repos/asf/lucene/dev/branches/branch_3x since the previous build [Lucene-Solr-tests-only-3.x] $ /bin/bash -xe /var/tmp/hudson6997416937194771347.sh + sh https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-3.x/ws/nightly/hudson-lusolr-tests-3.x.sh + . https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-3.x/ws/nightly/hudson-settings.sh + ANT_HOME=/home/hudson/tools/ant/latest1.7 + SVNVERSION_EXE=svnversion + SVN_EXE=svn + CLOVER=/home/hudson/tools/clover/clover2latest + JAVA_HOME_15=/home/hudson/tools/java/latest1.5 + JAVA_HOME_16=/home/hudson/tools/java/latest1.6 + TESTS_MULTIPLIER=3 + ROOT_DIR=checkout + CORE_DIR=checkout/lucene + MODULES_DIR=checkout/modules + SOLR_DIR=checkout/solr + ARTIFACTS=https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-3.x/ws/artifacts + MAVEN_ARTIFACTS=https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-3.x/ws/maven_artifacts + JAVADOCS_ARTIFACTS=https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-3.x/ws/javadocs + cd https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-3.x/ws/checkout + JAVA_HOME=/home/hudson/tools/java/latest1.5 /home/hudson/tools/ant/latest1.7/bin/ant clean Buildfile: build.xml clean: clean: [delete] Deleting directory /usrhttps://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-3.x/ws/checkout/lucene/build BUILD FAILED /usrhttps://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-3.x/ws/checkout/build.xml:56: The following error 
occurred while executing this line: /usrhttps://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-3.x/ws/checkout/lucene/common-build.xml:174: Unable to delete directory /usrhttps://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-3.x/ws/checkout/lucene/build Total time: 2 seconds Recording test results
Build failed in Hudson: Lucene-Solr-tests-only-3.x #198
See https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-3.x/198/ -- [...truncated 4039 lines...] [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.021 sec [junit] [junit] Testsuite: org.apache.lucene.util.TestRamUsageEstimator [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.014 sec [junit] [junit] Testsuite: org.apache.lucene.index.TestStressIndexing [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 6.207 sec [junit] [junit] Testsuite: org.apache.lucene.index.TestTerm [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.008 sec [junit] [junit] Testsuite: org.apache.lucene.index.TestTermVectorsReader [junit] Tests run: 6, Failures: 0, Errors: 0, Time elapsed: 0.054 sec [junit] [junit] Testsuite: org.apache.lucene.queryParser.TestMultiAnalyzer [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 0.04 sec [junit] [junit] Testsuite: org.apache.lucene.search.TestDateFilter [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.027 sec [junit] [junit] Testsuite: org.apache.lucene.search.TestFieldCacheTermsFilter [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.022 sec [junit] [junit] Testsuite: org.apache.lucene.search.TestFilteredSearch [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.025 sec [junit] [junit] Testsuite: org.apache.lucene.search.TestBoolean2 [junit] Tests run: 11, Failures: 0, Errors: 0, Time elapsed: 9.955 sec [junit] [junit] Testsuite: org.apache.lucene.search.TestBooleanScorer [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.016 sec [junit] [junit] Testsuite: org.apache.lucene.search.function.TestCustomScoreQuery [junit] Tests run: 6, Failures: 0, Errors: 0, Time elapsed: 7.025 sec [junit] [junit] Testsuite: org.apache.lucene.search.TestCustomSearcherSort [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 1.148 sec [junit] [junit] Testsuite: org.apache.lucene.search.spans.TestBasics [junit] Tests run: 23, Failures: 0, Errors: 0, Time 
elapsed: 5.718 sec [junit] [junit] Testsuite: org.apache.lucene.search.spans.TestSpanExplanationsOfNonMatches [junit] Tests run: 31, Failures: 0, Errors: 0, Time elapsed: 0.062 sec [junit] [junit] Testsuite: org.apache.lucene.store.TestLockFactory [junit] Tests run: 11, Failures: 0, Errors: 0, Time elapsed: 4.06 sec [junit] [junit] Testsuite: org.apache.lucene.util.TestFieldCacheSanityChecker [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.295 sec [junit] [junit] Testsuite: org.apache.lucene.util.TestSetOnce [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.016 sec [junit] [junit] Testsuite: org.apache.lucene.search.TestMultiValuedNumericRangeQuery [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 14.635 sec [junit] [junit] Testsuite: org.apache.lucene.search.TestParallelMultiSearcher [junit] Tests run: 6, Failures: 0, Errors: 0, Time elapsed: 0.127 sec [junit] [junit] Testsuite: org.apache.lucene.search.TestQueryTermVector [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.006 sec [junit] [junit] Testsuite: org.apache.lucene.search.TestSloppyPhraseQuery [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 0.347 sec [junit] [junit] Testsuite: org.apache.lucene.search.TestTermScorer [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 0.027 sec [junit] [junit] Testsuite: org.apache.lucene.search.function.TestOrdValues [junit] Tests run: 6, Failures: 0, Errors: 0, Time elapsed: 0.177 sec [junit] [junit] Testsuite: org.apache.lucene.search.spans.TestSpans [junit] Tests run: 25, Failures: 0, Errors: 0, Time elapsed: 0.644 sec [junit] [junit] Testsuite: org.apache.lucene.store.TestRAMDirectory [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.114 sec [junit] [junit] Testsuite: org.apache.lucene.index.TestIndexWriter [junit] Tests run: 116, Failures: 0, Errors: 0, Time elapsed: 45.867 sec [junit] [junit] Testsuite: org.apache.lucene.index.TestIndexWriterConfig [junit] Tests run: 7, Failures: 0, 
Errors: 0, Time elapsed: 0.011 sec [junit] [junit] Testsuite: org.apache.lucene.index.TestNewestSegment [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.023 sec [junit] [junit] Testsuite: org.apache.lucene.index.TestPersistentSnapshotDeletionPolicy [junit] Tests run: 14, Failures: 0, Errors: 0, Time elapsed: 3.478 sec [junit] [junit] Testsuite: org.apache.lucene.index.TestRollback [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.009 sec [junit] [junit] Testsuite: org.apache.lucene.search.TestNumericRangeQuery64 [junit] Tests
Re: [jira] Commented: (SOLR-1311) pseudo-field-collapsing
On Sun, Oct 17, 2010 at 3:43 AM, Simon Willnauer simon.willna...@googlemail.com wrote: Let's keep things constructive here! +1 We should strive to think and talk in terms of improvements. As it relates to field collapsing, some of the existing patches were simply too big for me to get my arms around. I personally needed to start smaller to really understand it all, and then build from there. There were a lot of great ideas over the years, and hopefully many of those will find their way into trunk eventually. I did try to give credit (in CHANGES.txt) to everyone who contributed patches - it's all valuable. -Yonik http://www.lucidimagination.com
Re: Continuous Integration builds for branches
On Sun, Oct 17, 2010 at 1:41 PM, Simon Willnauer simon.willna...@googlemail.com wrote: True, but that still helps to find those little things that come out if you run on trunk which can be fixed earlier. I don't think that shooting those build mails to the dev list makes any sense. It should go to individuals working on that branch. I don't think it should send private emails. We have the infrastructure, why not run such a build on commit and once or twice a day... I don't see any reason why we shouldn't. Once or twice a day is different than 24/7. Then again, building on commit for a branch with one guy working on it (especially since I don't see a particularly large amount of commits going to any branches) seems overkill: that person could just run ant test themselves. And at the moment, I have trouble understanding how this would be really helpful since trunk/branch-3x themselves are not even yet stable with regards to tests...
RE: Continuous Integration builds for branches
The job is already there, but not yet enabled: https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-docvalues-branch/ I just want to know: - When to run - Send emails to which address It uses the trunk test script from svn/nightly. It just checks out another branch to workspace/checkout. Uwe - Uwe Schindler H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de eMail: u...@thetaphi.de -Original Message- From: Simon Willnauer [mailto:simon.willna...@googlemail.com] Sent: Sunday, October 17, 2010 8:53 PM To: dev@lucene.apache.org Subject: Re: Continuous Integration builds for branches On Sun, Oct 17, 2010 at 8:46 PM, Robert Muir rcm...@gmail.com wrote: On Sun, Oct 17, 2010 at 1:41 PM, Simon Willnauer simon.willna...@googlemail.com wrote: True, but that still helps to find those little things that come out if you run on trunk which can be fixed earlier. I don't think that shooting those build mails to the dev list makes any sense. It should go to individuals working on that branch. I don't think it should send private emails. We have the infrastructure, why not run such a build on commit and once or twice a day... I don't see any reason why we shouldn't. Once or twice a day is different than 24/7. Then again, building on commit for a branch with one guy working on it (especially since I don't see a particularly large amount of commits going to any branches) seems overkill: that person could just run ant test themselves. I don't think hitting copy job and changing the URL is anywhere near overkill, but I can simply start my hudson boxes again to run the branch tests. And at the moment, I have trouble understanding how this would be really helpful since trunk/branch-3x themselves are not even yet stable with regards to tests... as helpful as it is for branch-3x IMO, but I don't have too strong feelings about it.
simon
RE: Continuous Integration builds for branches
: - Send emails to which address There's no requirement that Hudson builds send email to anyone at all. People who are interested in specific branches can always subscribe to the RSS feed for that branch. There are also options to only have Hudson send emails to the specific individuals who made commits that were included in the changeset for a build. I say for the non-major branches, remove the d...@lucene notifier, add the "Send separate e-mails to individuals who broke the build" option, and let people who care (but are not actively committing) subscribe to the RSS feed. -Hoss
RE: SolrInfoMBeanTest
Haha, maybe we found the func issue :-) Really bad test. Why does it need to instantiate? Is loading and inspecting the class not enough? Or does it init static props? - Uwe Schindler H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de eMail: u...@thetaphi.de -Original Message- From: ysee...@gmail.com [mailto:ysee...@gmail.com] On Behalf Of Yonik Seeley Sent: Sunday, October 17, 2010 9:35 PM To: dev@lucene.apache.org Subject: Re: SolrInfoMBeanTest Ha - going through and instantiating a ton of random classes does sound like a good way to screw things up! +1 to disable. -Yonik http://www.lucidimagination.com On Sun, Oct 17, 2010 at 3:22 PM, Robert Muir rcm...@gmail.com wrote: Can we disable this test? I don't understand the purpose of it, and it seems to only cause other tests to fail. For example: [junit] NOTE: all tests run in this JVM: [junit] [SolrInfoMBeanTest, TestGroupingSearch] [junit] - --- [junit] TEST org.apache.solr.TestGroupingSearch FAILED I already had to hack this test once to prevent TestReplicationHandler from always failing: // FIXME: Find the static/sysprop/file leakage here. // If we call Class.forName(ReplicationHandler) here, its test will later fail // when run inside the same JVM (-Dtests.threadspercpu=0), so something is wrong. if (file.contains("ReplicationHandler")) continue; The test seems really silly: it loads up everything in its classpath and assertNotNull's against toString-type things, with the description of "A simple test used to increase code coverage for some standard things..."
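Uwe's question hinges on the difference between loading a class and initializing it: the one-argument Class.forName runs static initializers (and so can leak sysprops, files, etc.), while the three-argument form with initialize=false only loads the class. A small self-contained demonstration (hypothetical Victim class, not one of Solr's):

```java
// Shows why "just inspecting" a class can still fire static initializers:
// Class.forName(name) initializes the class; forName(name, false, loader) does not.
public class StaticInitDemo {
    public static final StringBuilder LOG = new StringBuilder();

    public static class Victim {
        static { LOG.append("static-init;"); } // side effect, like leaking static state
    }

    public static void main(String[] args) throws Exception {
        ClassLoader cl = StaticInitDemo.class.getClassLoader();
        String name = StaticInitDemo.class.getName() + "$Victim";

        Class.forName(name, false, cl);   // load only: no side effect yet
        System.out.println("after load: [" + LOG + "]");   // prints: after load: []

        Class.forName(name);              // load + initialize: side effect fires
        System.out.println("after init: [" + LOG + "]");   // prints: after init: [static-init;]
    }
}
```

So a test that wants to inspect every class on the classpath without triggering this kind of leakage could use the initialize=false form (or reflection on an already-loaded Class object) instead of instantiating anything.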
[jira] Commented: (SOLR-2124) SEVERE exceptions are being logged for expected PingRequestHandler SERVICE_UNAVAILABLE exceptions
[ https://issues.apache.org/jira/browse/SOLR-2124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12921895#action_12921895 ] Shawn Heisey commented on SOLR-2124: I spent a while trying to work out the changes required to use logOnce and have it work as expected, got overwhelmed and lost. Then I started with a fresh source tarball and tried to figure out how to just get it to log an error instead of throwing an exception. I lack the necessary Java skills to do something so basic. SEVERE exceptions are being logged for expected PingRequestHandler SERVICE_UNAVAILABLE exceptions - Key: SOLR-2124 URL: https://issues.apache.org/jira/browse/SOLR-2124 Project: Solr Issue Type: Bug Reporter: Hoss Man Fix For: 3.1, 4.0 As reported by a user, if you use the PingRequestHandler and the corresponding healthcheck file doesn't exist (an expected situation when a server is out of rotation), Solr logs a SEVERE error...
{noformat}
SEVERE: org.apache.solr.common.SolrException: Service disabled
 at org.apache.solr.handler.PingRequestHandler.handleRequestBody(PingRequestHandler.java:48)
 at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1324)
 at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:337)
 at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:240)
 at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
 at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:388)
 at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
 at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
 at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765)
 at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:418)
 at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
 at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
 at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
 at org.mortbay.jetty.Server.handle(Server.java:326)
 at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
 at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:923)
 at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:547)
 at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
 at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
 at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
 at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
{noformat}
This is in spite of the fact that PingRequestHandler explicitly sets the alreadyLogged boolean to true in the SolrException constructor. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (SOLR-1924) Solr's updateRequestHandler does not have a fast way of guaranteeing document delivery
[ https://issues.apache.org/jira/browse/SOLR-1924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12921919#action_12921919 ] Jan Høydahl commented on SOLR-1924: --- In a multi-node environment, it would also be useful to maintain state as to whether a batch has been replicated to the slaves. This is because in case of a disaster crash on a master, the feeding client may have gotten a callback that a batch is secured, but it was not yet replicated, i.e. the only copy was on the now-crashed master. The master should be able to keep track of whether at least one replica has fetched a certain version of the index through the ReplicationHandler. In this way, a client could choose to act on the replication status instead of the persisted status. The STATUS operation would now return an additional state: <replicated count="1">fooBar</replicated> <persisted count="2">fooBar0001 fooBar0002</persisted> <pending count="1">fooBar0003</pending> Solr's updateRequestHandler does not have a fast way of guaranteeing document delivery -- Key: SOLR-1924 URL: https://issues.apache.org/jira/browse/SOLR-1924 Project: Solr Issue Type: Bug Affects Versions: 1.4 Reporter: Karl Wright It is currently not possible, without performing a commit on every document, to use updateRequestHandler to guarantee delivery into the index of any document. The reason is that whenever Solr is restarted, some or all documents that have not been committed yet are dropped on the floor, and there is no way for a client of updateRequestHandler to know which ones this happened to. I believe it is not even possible to write a middleware-style layer that stores documents and performs periodic commits on its own, because the update request handler never ACKs individual documents on a commit, but merely everything it has seen since the last time Solr bounced.
So you have this potential scenario:
- middleware layer receives document 1, saves it
- middleware layer receives document 2, saves it
Now it's time for the commit, so:
- middleware layer sends document 1 to updateRequestHandler
- solr is restarted, dropping all uncommitted documents on the floor
- middleware layer sends document 2 to updateRequestHandler
- middleware layer sends COMMIT to updateRequestHandler, but solr adds only document 2 to the index
- middleware believes incorrectly that it has successfully committed both documents
An ideal solution would be for Solr to separate the semantics of commit (the index-building variety) from the semantics of commit (the 'I got the document' variety). Perhaps this will involve a persistent document queue that will persist over a Solr restart. An alternative mechanism might be for updateRequestHandler to acknowledge specifically committed documents in its response to an explicit commit. But this would make it difficult or impossible to use autocommit usefully in such situations. The only other alternative is to require clients that need guaranteed delivery to commit on every document, with a considerable performance penalty. This ticket is related to LCF in that LCF is one of the clients that really needs some kind of guaranteed delivery mechanism. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
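Karl's scenario is easy to reproduce in miniature: if the middleware treats a successful COMMIT as an ACK for everything it sent since the last commit, a server restart mid-batch silently loses a document with no record anywhere. A hedged sketch of that failure mode (hypothetical Server/queue classes, not Solr's actual update API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Reproduces the SOLR-1924 scenario: a server that drops uncommitted docs
// on restart, and a middleware layer that wrongly assumes COMMIT acked
// everything it sent since the last commit.
public class LostUpdateDemo {
    static class Server {
        final List<String> committed = new ArrayList<>();
        final List<String> pending = new ArrayList<>();
        void add(String doc) { pending.add(doc); }
        void commit() { committed.addAll(pending); pending.clear(); }
        void restart() { pending.clear(); } // uncommitted docs dropped on the floor
    }

    public static void main(String[] args) {
        Server solr = new Server();
        List<String> saved = new ArrayList<>(Arrays.asList("doc1", "doc2")); // middleware queue

        solr.add(saved.get(0));   // middleware sends doc1
        solr.restart();           // solr bounces: doc1 is silently lost
        solr.add(saved.get(1));   // middleware sends doc2
        solr.commit();            // COMMIT succeeds...
        saved.clear();            // ...so middleware clears its queue, believing both are safe

        // committed contains only doc2; doc1 is gone and no copy remains
        System.out.println("committed=" + solr.committed + " savedQueue=" + saved);
    }
}
```

A per-document ACK (the ticket's proposed fix) would let the middleware remove only acknowledged documents from its queue, so doc1 would survive in the queue and could be resent.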
Build failed in Hudson: Lucene-Solr-tests-only-3.x #217
See https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-3.x/217/ -- [...truncated 9516 lines...] [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 1.871 sec [junit] [junit] Testsuite: org.apache.solr.search.function.SortByFunctionTest [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 1.573 sec [junit] [junit] Testsuite: org.apache.solr.update.DocumentBuilderTest [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 1.33 sec [junit] [junit] Testsuite: org.apache.solr.servlet.DirectSolrConnectionTest [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.765 sec [junit] [junit] - Standard Error - [junit] 10/18/2010 07:04:53 AM org.apache.solr.schema.IndexSchema readSchema [junit] GRAVE: uniqueKey should not be multivalued [junit] 10/18/2010 07:04:53 AM org.apache.solr.schema.IndexSchema readSchema [junit] GRAVE: uniqueKey should not be multivalued [junit] - --- [junit] Testsuite: org.apache.solr.TestDistributedSearch [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 47.25 sec [junit] [junit] Testsuite: org.apache.solr.analysis.CommonGramsFilterFactoryTest [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.053 sec [junit] [junit] Testsuite: org.apache.solr.analysis.DoubleMetaphoneFilterFactoryTest [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 0.019 sec [junit] [junit] Testsuite: org.apache.solr.util.SolrPluginUtilsTest [junit] Tests run: 7, Failures: 0, Errors: 0, Time elapsed: 0.857 sec [junit] [junit] Testsuite: org.apache.solr.analysis.EnglishPorterFilterFactoryTest [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.04 sec [junit] [junit] Testsuite: org.apache.solr.analysis.TestGermanStemFilterFactory [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.019 sec [junit] [junit] Testsuite: org.apache.solr.analysis.TestGreekLowerCaseFilterFactory [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.027 sec [junit] [junit] Testsuite: 
org.apache.solr.analysis.TestLuceneMatchVersion [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.304 sec [junit] [junit] Testsuite: org.apache.solr.analysis.TestNGramFilters [junit] Tests run: 10, Failures: 0, Errors: 0, Time elapsed: 0.095 sec [junit] [junit] Testsuite: org.apache.solr.analysis.TestSynonymMap [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 0.037 sec [junit] [junit] Testsuite: org.apache.solr.analysis.TestWikipediaTokenizerFactory [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.047 sec [junit] [junit] Testsuite: org.apache.solr.client.solrj.TestBatchUpdate [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 1.974 sec [junit] [junit] Testsuite: org.apache.solr.update.AutoCommitTest [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.867 sec [junit] [junit] Testsuite: org.apache.solr.util.ArraysUtilsTest [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.061 sec [junit] [junit] Testsuite: org.apache.solr.client.solrj.beans.TestDocumentObjectBinder [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.194 sec [junit] [junit] Testsuite: org.apache.solr.handler.component.DistributedTermsComponentTest [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 13.215 sec [junit] [junit] Testsuite: org.apache.solr.client.solrj.embedded.MergeIndexesEmbeddedTest [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.541 sec [junit] [junit] Testsuite: org.apache.solr.common.params.ModifiableSolrParamsTest [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 0.016 sec [junit] [junit] Testsuite: org.apache.solr.response.TestCSVResponseWriter [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.631 sec [junit] [junit] Testsuite: org.apache.solr.common.util.TestJavaBinCodec [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.202 sec [junit] [junit] Testsuite: org.apache.solr.schema.NotRequiredUniqueKeyTest [junit] Tests run: 1, Failures: 0, Errors: 0, 
Time elapsed: 0.163 sec [junit] [junit] - Standard Error - [junit] 18-ott-2010 13:04:58 org.apache.solr.schema.IndexSchema readSchema [junit] GRAVE: uniqueKey should not be multivalued [junit] - --- [junit] Testsuite: org.apache.solr.core.RAMDirectoryFactoryTest [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.006 sec [junit] [junit] Testsuite: org.apache.solr.search.TestExtendedDismaxParser [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed:
Hudson build is back to normal : Lucene-Solr-tests-only-trunk #224
See https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/224/