[jira] [Commented] (LUCENENET-498) BrazilianAnalyzer
[ https://issues.apache.org/jira/browse/LUCENENET-498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13406105#comment-13406105 ]

Prescott Nasser commented on LUCENENET-498:
-------------------------------------------

Google Translate: "I have started some tests with Lucene.NET, where I intend to dig deeper into the components and contribute to the community. During some tests, I could not get the BrazilianAnalyzer to work in a query using a BooleanQuery with TermQuery clauses. I tested with lowercase, with uppercase, and with the same case as at indexing time. I verified that the analyzer was correct. The columns that returned no results were the ANALYZED ones. I am just starting my contribution to this project and am still somewhat speculative. If this has already been solved, my apologies; I will do some more research in the history. The filters on the NOT_ANALYZED fields worked, which is why I think the problem is in the BrazilianAnalyzer. The same query works through the query parser, and switching to the StandardAnalyzer also works. I checked the index using LUKE-ALL and there are no problems in the index."

BrazilianAnalyzer
-----------------
                Key: LUCENENET-498
                URL: https://issues.apache.org/jira/browse/LUCENENET-498
            Project: Lucene.Net
         Issue Type: Bug
         Components: Lucene.Net Contrib
   Affects Versions: Lucene.Net 2.9.2
        Environment: Windows 7 Ultimate 32bits, Visual Studio 2010, Framework 3.5
           Reporter: Marcelo Vieira das Neves
            Fix For: Lucene.Net 2.9.2

I started some tests with Lucene.NET, where I intend to go deeper into the components and contribute to the community. During some tests, I could not get the BrazilianAnalyzer to work in a query using a BooleanQuery with TermQuery clauses. I tested with lowercase, with uppercase, and with the same case as the indexing. I checked whether the analyzer was correct. The columns that returned no results were the ANALYZED ones. I am starting my contribution to this project and am still somewhat speculative. If this has already been solved, I apologize; I will do some more research in the history.

The filters on the NOT_ANALYZED fields worked, so I think the problem is in the BrazilianAnalyzer. The same query works in the query parser. Switching to the StandardAnalyzer also works. I checked the index using LUKE-ALL and there are no problems in the index.

--
This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
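A likely explanation for the symptom above (a raw TermQuery failing against ANALYZED fields while the query parser succeeds) is that TermQuery terms are not passed through the analyzer, so they must exactly match the lowercased, stemmed tokens that the BrazilianAnalyzer wrote to the index. The sketch below is a toy model of that mismatch; the "analyzer" here is a hypothetical stand-in (lowercase plus crude plural stripping), not the real BrazilianStemFilter:

```java
class AnalyzedMismatch {
    // Hypothetical stand-in for the BrazilianAnalyzer chain: lowercase
    // plus a crude "stemmer" that strips a trailing plural 's'. The real
    // analyzer uses BrazilianStemFilter, but the mismatch mechanics are
    // the same.
    static String analyze(String term) {
        String t = term.toLowerCase();
        return t.endsWith("s") ? t.substring(0, t.length() - 1) : t;
    }

    // A TermQuery compares the query term verbatim against the indexed
    // token; it does NOT run the analyzer first.
    static boolean termQueryMatches(String indexedToken, String queryTerm) {
        return indexedToken.equals(queryTerm);
    }

    public static void main(String[] args) {
        String indexed = analyze("Pedras"); // token stored for an ANALYZED field
        System.out.println(termQueryMatches(indexed, "Pedras"));          // false
        System.out.println(termQueryMatches(indexed, analyze("Pedras"))); // true
    }
}
```

In this model the query parser "works" simply because it runs the same analysis chain over the query text before comparing terms, which matches the reported behavior.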
[jira] [Commented] (LUCENENET-498) BrazilianAnalyzer
[ https://issues.apache.org/jira/browse/LUCENENET-498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13406106#comment-13406106 ]

Prescott Nasser commented on LUCENENET-498:
-------------------------------------------

Marcelo - have you tried upgrading to 2.9.4? Also, if you are interested in contributing, I would suggest you work with what we have in trunk, so that all the changes we have been making are incorporated.
[jira] [Commented] (LUCENENET-498) BrazilianAnalyzer
[ https://issues.apache.org/jira/browse/LUCENENET-498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13406136#comment-13406136 ]

Marcelo Vieira das Neves commented on LUCENENET-498:
----------------------------------------------------

The solution in which I was testing the Lucene.Net integration is a product that uses Framework 3.5. I could not use version 2.9.4, which complained that the target framework was different, so for now I need the solution on 2.9.2; after that I will move to the new version. I will try to find the fixes from the new version and apply them to 2.9.2 for my immediate use.
[jira] [Updated] (LUCENE-4099) Remove generics from SpatialStrategy and remove SpatialFieldInfo
[ https://issues.apache.org/jira/browse/LUCENE-4099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

David Smiley updated LUCENE-4099:
---------------------------------
    Attachment: LUCENE-4099_remove_SpatialFieldInfo_and_put_fieldName_into_Strategy_instead_of_methods.patch

Here is a patch. This is a major change to the API. SpatialFieldInfo and its subclasses are all gone, as are the generics declarations on SpatialStrategy and SpatialStrategyTestCase. Essentially, the related functionality was inlined into each strategy. There is a little duplication between TwoDoublesStrategy and BBoxStrategy to create a double and hold a configurable precisionStep. I could have simply put the String field name into each method argument position that took a fieldInfo, but instead I thought it nicer to put the field name into the Strategy instance itself. It is initialized as the 2nd constructor argument. This does tie a Strategy instance to a field, but I think that's fine, and I'm quite happy with the overall effect.

Remove generics from SpatialStrategy and remove SpatialFieldInfo
----------------------------------------------------------------
                Key: LUCENE-4099
                URL: https://issues.apache.org/jira/browse/LUCENE-4099
            Project: Lucene - Java
         Issue Type: Improvement
         Components: modules/spatial
           Reporter: Chris Male
           Assignee: David Smiley
           Priority: Minor
        Attachments: LUCENE-4099_remove_SpatialFieldInfo_and_put_fieldName_into_Strategy_instead_of_methods.patch

Some time ago I added SpatialFieldInfo as a way for SpatialStrategys to declare what information they needed per request. This meant that a Strategy could be used across multiple requests. However, it doesn't really need to be that way any more: Strategies are light to instantiate, and the generics are just clumsy and annoying. Instead, Strategies should just define what they need in their constructor.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
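The shape of the API change discussed in LUCENE-4099 can be sketched in isolation. This is an illustrative reconstruction, not the actual patch: the class names echo the ones under discussion, but the bodies are reduced to the structural point (the field name becomes the strategy's second constructor argument instead of arriving per call via a SpatialFieldInfo parameter):

```java
class StrategySketch {
    // After the change: a strategy instance is tied to one field.
    abstract static class SpatialStrategy {
        protected final String ctx;       // stand-in for a SpatialContext
        protected final String fieldName; // 2nd constructor argument

        SpatialStrategy(String ctx, String fieldName) {
            this.ctx = ctx;
            this.fieldName = fieldName;
        }

        String getFieldName() { return fieldName; }

        // No per-call fieldInfo parameter; the strategy knows its field.
        abstract String makeQuery(String shape);
    }

    static class BBoxStrategy extends SpatialStrategy {
        BBoxStrategy(String ctx, String fieldName) { super(ctx, fieldName); }

        @Override String makeQuery(String shape) {
            return fieldName + ":" + shape; // toy query representation
        }
    }

    public static void main(String[] args) {
        SpatialStrategy strategy = new BBoxStrategy("geo", "location");
        System.out.println(strategy.makeQuery("bbox(1,2,3,4)")); // location:bbox(1,2,3,4)
    }
}
```

The trade-off named in the comment is visible here: instantiating per field is cheap, and the generics that previously threaded a SpatialFieldInfo type parameter through every signature disappear.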
Re: buildbot failure in ASF Buildbot on lucene-site-staging
thanks for trying so hard robert!! I am sorry that it is such a pain! simon On Tue, Jul 3, 2012 at 12:27 AM, Robert Muir rcm...@gmail.com wrote: I'm throwing in the towel. I've been trying to publish javadocs to the website for almost 10 hours now. On Mon, Jul 2, 2012 at 6:26 PM, build...@apache.org wrote: The Buildbot has detected a new failure on builder lucene-site-staging while building ASF Buildbot. Full details are available at: http://ci.apache.org/builders/lucene-site-staging/builds/209 Buildbot URL: http://ci.apache.org/ Buildslave for this Build: bb-cms-slave Build Reason: scheduler Build Source Stamp: [branch lucene/cms] 1356506 Blamelist: rmuir BUILD FAILED: failed shell sincerely, -The Buildbot -- lucidimagination.com - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
RE: buildbot failure in ASF Buildbot on lucene-site-staging
FYI, I've managed to commit the 4.0.0-ALPHA javadocs, with help from Joe Schaefer and Gavin McDonald on #asfinfra. I'm writing it up on the ReleaseTodo wiki page now. Steve -Original Message- From: Simon Willnauer [mailto:simon.willna...@gmail.com] Sent: Tuesday, July 03, 2012 3:44 AM To: dev@lucene.apache.org Subject: Re: buildbot failure in ASF Buildbot on lucene-site-staging thanks for trying so hard robert!! I am sorry that it is such a pain! simon On Tue, Jul 3, 2012 at 12:27 AM, Robert Muir rcm...@gmail.com wrote: I'm throwing in the towel. I've been trying to publish javadocs to the website for almost 10 hours now. On Mon, Jul 2, 2012 at 6:26 PM, build...@apache.org wrote: The Buildbot has detected a new failure on builder lucene-site-staging while building ASF Buildbot. Full details are available at: http://ci.apache.org/builders/lucene-site-staging/builds/209 Buildbot URL: http://ci.apache.org/ Buildslave for this Build: bb-cms-slave Build Reason: scheduler Build Source Stamp: [branch lucene/cms] 1356506 Blamelist: rmuir BUILD FAILED: failed shell sincerely, -The Buildbot -- lucidimagination.com - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
RE: buildbot failure in ASF Buildbot on lucene-site-staging
Here's the changed section of the ReleaseTodo wiki page: http://wiki.apache.org/lucene-java/ReleaseTodo#Pushing_website_changes_.26_Announcing. -Original Message- From: Steven A Rowe [mailto:sar...@syr.edu] Sent: Tuesday, July 03, 2012 3:49 AM To: dev@lucene.apache.org; simon.willna...@gmail.com Subject: RE: buildbot failure in ASF Buildbot on lucene-site-staging FYI, I've managed to commit the 4.0.0-ALPHA javadocs, with help from Joe Schaefer and Gavin McDonald on #asfinfra. I'm writing it up on the ReleaseTodo wiki page now. Steve -Original Message- From: Simon Willnauer [mailto:simon.willna...@gmail.com] Sent: Tuesday, July 03, 2012 3:44 AM To: dev@lucene.apache.org Subject: Re: buildbot failure in ASF Buildbot on lucene-site-staging thanks for trying so hard robert!! I am sorry that it is such a pain! simon On Tue, Jul 3, 2012 at 12:27 AM, Robert Muir rcm...@gmail.com wrote: I'm throwing in the towel. I've been trying to publish javadocs to the website for almost 10 hours now. On Mon, Jul 2, 2012 at 6:26 PM, build...@apache.org wrote: The Buildbot has detected a new failure on builder lucene-site-staging while building ASF Buildbot. Full details are available at: http://ci.apache.org/builders/lucene-site-staging/builds/209 Buildbot URL: http://ci.apache.org/ Buildslave for this Build: bb-cms-slave Build Reason: scheduler Build Source Stamp: [branch lucene/cms] 1356506 Blamelist: rmuir BUILD FAILED: failed shell sincerely, -The Buildbot -- lucidimagination.com - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (LUCENE-4183) Simplify CompoundFileDirectory opening in 4.x
[ https://issues.apache.org/jira/browse/LUCENE-4183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Uwe Schindler resolved LUCENE-4183.
-----------------------------------
       Resolution: Fixed
    Fix Version/s: 5.0

Committed the new pattern to 4.x, revision 1356608. I merged the cleanup to trunk, but without backwards, just the pattern: revision 1356614.

Simplify CompoundFileDirectory opening in 4.x
---------------------------------------------
                Key: LUCENE-4183
                URL: https://issues.apache.org/jira/browse/LUCENE-4183
            Project: Lucene - Java
         Issue Type: Improvement
   Affects Versions: 4.0
           Reporter: Uwe Schindler
           Assignee: Uwe Schindler
            Fix For: 4.0, 5.0
        Attachments: LUCENE-4183-priorE-pattern.patch, LUCENE-4183.patch, LUCENE-4183.patch

The compiler bug in JDK 8 EA made me look at the code again. I opened a bug report with a simple test case at Oracle, but the code on our side is still too complicated to understand. The attached patch for 4.x removes the nested try-finally block and simplifies the success=true handling (which was in fact broken). It uses a more try-with-resources-like approach with only one finally block.

--
This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
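The "one finally block" pattern the issue describes can be sketched generically. This is a minimal reconstruction of the idea, not the committed CompoundFileDirectory code: a success flag is set only after all resources are open, and a single finally block closes whatever was opened so far on failure while letting the original exception propagate.

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

class OpenPattern {
    static final List<String> closed = new ArrayList<>();

    // Stand-in for opening one index input; fails on demand.
    static Closeable open(String name, boolean fail) throws IOException {
        if (fail) throw new IOException("cannot open " + name);
        return () -> closed.add(name); // close() records the name
    }

    // try-with-resources-like shape with only one finally block.
    static List<Closeable> openAll(boolean failSecond) throws IOException {
        List<Closeable> opened = new ArrayList<>();
        boolean success = false;
        try {
            opened.add(open("entriesIndex", false));
            opened.add(open("dataIndex", failSecond));
            success = true;
            return opened;
        } finally {
            if (!success) {
                for (Closeable c : opened) {
                    // Suppress close() failures so the original exception wins.
                    try { c.close(); } catch (IOException suppressed) { }
                }
            }
        }
    }

    public static void main(String[] args) throws IOException {
        try {
            openAll(true);
        } catch (IOException expected) {
            System.out.println("cleaned up: " + closed); // cleaned up: [entriesIndex]
        }
    }
}
```

The broken nested try-finally variant this replaces typically either leaked the already-opened resources or masked the first exception with one thrown during cleanup; the single flag-guarded finally avoids both.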
[JENKINS] Lucene-Solr-tests-only-trunk-java7 - Build # 2858 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-tests-only-trunk-java7/2858/ 2 tests failed. REGRESSION: org.apache.solr.handler.dataimport.TestSqlEntityProcessorDelta3.testCompositePk_DeltaImport_add Error Message: Exception during query Stack Trace: java.lang.RuntimeException: Exception during query at __randomizedtesting.SeedInfo.seed([9DBB2698507F6FCC:4F5DF3C2EA55BC87]:0) at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:461) at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:428) at org.apache.solr.handler.dataimport.TestSqlEntityProcessorDelta3.testCompositePk_DeltaImport_add(TestSqlEntityProcessorDelta3.java:210) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1969) at com.carrotsearch.randomizedtesting.RandomizedRunner.access$1100(RandomizedRunner.java:132) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:814) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:875) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:889) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:32) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at 
org.apache.lucene.util.TestRuleReportUncaughtExceptions$1.evaluate(TestRuleReportUncaughtExceptions.java:68) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:821) at com.carrotsearch.randomizedtesting.RandomizedRunner.access$700(RandomizedRunner.java:132) at com.carrotsearch.randomizedtesting.RandomizedRunner$3$1.run(RandomizedRunner.java:669) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:695) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:734) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:745) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleReportUncaughtExceptions$1.evaluate(TestRuleReportUncaughtExceptions.java:68) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38) at org.apache.lucene.util.TestRuleIcuHack$1.evaluate(TestRuleIcuHack.java:51) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleNoInstanceHooksOverrides$1.evaluate(TestRuleNoInstanceHooksOverrides.java:53) at org.apache.lucene.util.TestRuleNoStaticHooksShadowing$1.evaluate(TestRuleNoStaticHooksShadowing.java:52) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:36) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:605) at com.carrotsearch.randomizedtesting.RandomizedRunner.access$400(RandomizedRunner.java:132) at com.carrotsearch.randomizedtesting.RandomizedRunner$2.run(RandomizedRunner.java:551) Caused by: java.lang.RuntimeException: REQUEST FAILED: xpath=//*[@numFound='2'] xml response was: <?xml version="1.0" encoding="UTF-8"?> <response> <lst name="responseHeader"><int name="status">0</int><int
[JENKINS] Lucene-Solr-4.x-Linux-Java8-64 - Build # 2 - Still Failing!
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-4.x-Linux-Java8-64/2/ 1 tests failed. FAILED: org.apache.solr.handler.component.SpellCheckComponentTest.testPerDictionary Error Message: mismatch: '0'!='2' @ spellcheck/suggestions/bar/startOffset Stack Trace: java.lang.RuntimeException: mismatch: '0'!='2' @ spellcheck/suggestions/bar/startOffset at __randomizedtesting.SeedInfo.seed([1FA56F87F03032D0:D8AB9B2FF85C2798]:0) at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:547) at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:495) at org.apache.solr.handler.component.SpellCheckComponentTest.testPerDictionary(SpellCheckComponentTest.java:102) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:474) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1969) at com.carrotsearch.randomizedtesting.RandomizedRunner.access$1100(RandomizedRunner.java:132) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:814) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:875) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:889) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:32) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at 
org.apache.lucene.util.TestRuleReportUncaughtExceptions$1.evaluate(TestRuleReportUncaughtExceptions.java:68) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:821) at com.carrotsearch.randomizedtesting.RandomizedRunner.access$700(RandomizedRunner.java:132) at com.carrotsearch.randomizedtesting.RandomizedRunner$3$1.run(RandomizedRunner.java:669) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:695) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:734) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:745) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleReportUncaughtExceptions$1.evaluate(TestRuleReportUncaughtExceptions.java:68) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38) at org.apache.lucene.util.TestRuleIcuHack$1.evaluate(TestRuleIcuHack.java:51) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleNoInstanceHooksOverrides$1.evaluate(TestRuleNoInstanceHooksOverrides.java:53) at org.apache.lucene.util.TestRuleNoStaticHooksShadowing$1.evaluate(TestRuleNoStaticHooksShadowing.java:52) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:36) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:605) at com.carrotsearch.randomizedtesting.RandomizedRunner.access$400(RandomizedRunner.java:132) at com.carrotsearch.randomizedtesting.RandomizedRunner$2.run(RandomizedRunner.java:551) Build Log: [...truncated 8111 lines...] [junit4] Suite: org.apache.solr.handler.component.SpellCheckComponentTest [junit4] (@BeforeClass output) [junit4] 2 0
Re: buildbot failure in ASF Buildbot on lucene-site-staging
this is really good to hear. i didn't think it was possible. Thank you! On Tue, Jul 3, 2012 at 3:49 AM, Steven A Rowe sar...@syr.edu wrote: FYI, I've managed to commit the 4.0.0-ALPHA javadocs, with help from Joe Schaefer and Gavin McDonald on #asfinfra. I'm writing it up on the ReleaseTodo wiki page now. Steve -Original Message- From: Simon Willnauer [mailto:simon.willna...@gmail.com] Sent: Tuesday, July 03, 2012 3:44 AM To: dev@lucene.apache.org Subject: Re: buildbot failure in ASF Buildbot on lucene-site-staging thanks for trying so hard robert!! I am sorry that it is such a pain! simon On Tue, Jul 3, 2012 at 12:27 AM, Robert Muir rcm...@gmail.com wrote: I'm throwing in the towel. I've been trying to publish javadocs to the website for almost 10 hours now. On Mon, Jul 2, 2012 at 6:26 PM, build...@apache.org wrote: The Buildbot has detected a new failure on builder lucene-site-staging while building ASF Buildbot. Full details are available at: http://ci.apache.org/builders/lucene-site-staging/builds/209 Buildbot URL: http://ci.apache.org/ Buildslave for this Build: bb-cms-slave Build Reason: scheduler Build Source Stamp: [branch lucene/cms] 1356506 Blamelist: rmuir BUILD FAILED: failed shell sincerely, -The Buildbot -- lucidimagination.com - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org -- lucidimagination.com - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-4083) RateLimiter.pause() throws unchecked exception
[ https://issues.apache.org/jira/browse/LUCENE-4083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13405810#comment-13405810 ]

Dawid Weiss commented on LUCENE-4083:
-------------------------------------

Just a comment on the belated follow-up check that I did. If the main test thread times out, any forked threads will also be interrupted/terminated as a consequence. Any exceptions that happen after the main thread is terminated will be wrapped as here (note "after termination attempt"):
{code}
java.lang.RuntimeException: Thread threw an uncaught exception (after termination attempt), thread: Thread[spinning,5,]
	at com.carrotsearch.randomizedtesting.RunnerThreadGroup.processUncaught(RunnerThreadGroup.java:97)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:857)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.access$7(RandomizedRunner.java:804)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$3$1.run(RandomizedRunner.java:671)
	at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.RuntimeException: foobar
	at __randomizedtesting.SeedInfo.seed([54B0307E3EA5B518]:0)
	at com.carrotsearch.randomizedtesting.TestTestTimeoutAndRunawayThreadException$Nested$1.run(TestTestTimeoutAndRunawayThreadException.java:22)
{code}
So I don't think the above exception was the result of an after-the-timeout interruption; it was rather something else.

RateLimiter.pause() throws unchecked exception
----------------------------------------------
                Key: LUCENE-4083
                URL: https://issues.apache.org/jira/browse/LUCENE-4083
            Project: Lucene - Java
         Issue Type: Bug
         Components: core/store
   Affects Versions: 4.0
           Reporter: Andrzej Bialecki
            Fix For: 4.0

The while() loop in RateLimiter.pause() invokes Thread.sleep() with potentially large values, which occasionally results in InterruptedException being thrown from Thread.sleep().
This is wrapped in an unchecked ThreadInterruptedException and re-thrown, and results in high-level errors like this: {code} [junit] 2012-05-29 15:50:15,464 ERROR core.SolrCore - org.apache.lucene.util.ThreadInterruptedException: java.lang.InterruptedException: sleep interrupted [junit] at org.apache.lucene.store.RateLimiter.pause(RateLimiter.java:82) [junit] at org.apache.lucene.store.MockIndexOutputWrapper.writeBytes(MockIndexOutputWrapper.java:82) [junit] at org.apache.lucene.store.MockIndexOutputWrapper.writeByte(MockIndexOutputWrapper.java:73) [junit] at org.apache.lucene.store.DataOutput.writeVInt(DataOutput.java:191) [junit] at org.apache.lucene.codecs.lucene40.Lucene40PostingsWriter.addPosition(Lucene40PostingsWriter.java:237) [junit] at org.apache.lucene.index.FreqProxTermsWriterPerField.flush(FreqProxTermsWriterPerField.java:519) [junit] at org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:92) [junit] at org.apache.lucene.index.TermsHash.flush(TermsHash.java:117) [junit] at org.apache.lucene.index.DocInverter.flush(DocInverter.java:53) [junit] at org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:81) [junit] at org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:475) [junit] at org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:422) [junit] at org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:553) [junit] at org.apache.lucene.index.IndexWriter.prepareCommit(IndexWriter.java:2416) [junit] at org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2548) [junit] at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:2530) [junit] at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:414) [junit] at org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:82) {code} I believe this is a bug - the while() loop already ensures that the total time spent in 
pause() is correct even if InterruptedException-s are thrown, so they should not be re-thrown. The patch is trivial - simply don't re-throw. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
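The proposed fix can be modeled generically. The sketch below is not Lucene's actual RateLimiter code, just a minimal illustration of the argument above: the loop re-checks elapsed wall-clock time, so an InterruptedException only shortens one sleep and the total pause stays correct without re-throwing (note that swallowing also discards the thread's interrupt status, which is a trade-off of this approach).

```java
class PauseSketch {
    // Sleep until roughly targetMillis of wall-clock time has elapsed.
    // An InterruptedException only cuts one sleep short; the loop
    // re-checks elapsed time, so the total pause is still honored.
    static long pause(long targetMillis) {
        final long start = System.nanoTime();
        while (true) {
            long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
            long remaining = targetMillis - elapsedMillis;
            if (remaining <= 0) {
                return elapsedMillis;
            }
            try {
                Thread.sleep(remaining);
            } catch (InterruptedException ie) {
                // Swallowed: this also clears the thread's interrupt
                // status, the design point debated in the issue.
            }
        }
    }

    public static void main(String[] args) {
        final Thread sleeper = Thread.currentThread();
        // Hypothetical interrupter thread, just to exercise the catch path.
        new Thread(() -> {
            try { Thread.sleep(20); } catch (InterruptedException ignored) { }
            sleeper.interrupt();
        }).start();
        System.out.println(pause(100) >= 100); // true: full pause despite interrupt
    }
}
```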
[JENKINS] Lucene-trunk - Build # 1980 - Failure
Build: https://builds.apache.org/job/Lucene-trunk/1980/

All tests passed

Build Log: [...truncated 37950 lines...]

- To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (LUCENE-4184) Replace o.a.l.u.packed.Packed64SingleBlock with an inheritance-based impl
[ https://issues.apache.org/jira/browse/LUCENE-4184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Adrien Grand resolved LUCENE-4184.
----------------------------------
    Resolution: Fixed

Committed (r1356640 on trunk and r1356647 on branch 4.x).

Replace o.a.l.u.packed.Packed64SingleBlock with an inheritance-based impl
-------------------------------------------------------------------------
                Key: LUCENE-4184
                URL: https://issues.apache.org/jira/browse/LUCENE-4184
            Project: Lucene - Java
         Issue Type: Improvement
         Components: core/other
           Reporter: Adrien Grand
           Assignee: Adrien Grand
           Priority: Trivial
            Fix For: 4.0
        Attachments: LUCENE-4184.patch

According to tests that Toke Eskildsen and I performed in LUCENE-4062, this impl is consistently faster than the current one on various archs/JVMs.

--
This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
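For context on what a "single block" packed-ints implementation does (this is a generic sketch of the data structure, not Lucene's Packed64SingleBlock): when several values of bitsPerValue bits fit in one 64-bit word, each value lives entirely inside a single long, so get() is one shift-and-mask with no cross-word case. The issue replaces one generic implementation of this with per-bitsPerValue subclasses so the shift and mask can specialize; the generic form being specialized looks roughly like:

```java
class SingleBlockSketch {
    final long[] blocks;
    final int bitsPerValue;
    final int valuesPerBlock; // how many values share one 64-bit word
    final long mask;

    SingleBlockSketch(int valueCount, int bitsPerValue) {
        this.bitsPerValue = bitsPerValue;
        this.valuesPerBlock = 64 / bitsPerValue;
        this.mask = (1L << bitsPerValue) - 1;
        this.blocks = new long[(valueCount + valuesPerBlock - 1) / valuesPerBlock];
    }

    void set(int index, long value) {
        int b = index / valuesPerBlock;
        int shift = (index % valuesPerBlock) * bitsPerValue;
        // Clear the slot, then OR the new value into place.
        blocks[b] = (blocks[b] & ~(mask << shift)) | ((value & mask) << shift);
    }

    long get(int index) {
        int b = index / valuesPerBlock;
        int shift = (index % valuesPerBlock) * bitsPerValue;
        return (blocks[b] >>> shift) & mask;
    }

    public static void main(String[] args) {
        SingleBlockSketch p = new SingleBlockSketch(10, 21); // 3 values per long
        p.set(0, 5);
        p.set(1, 1_000_000);
        p.set(9, 42);
        System.out.println(p.get(1)); // 1000000
    }
}
```

An inheritance-based version would make bitsPerValue a compile-time constant in each subclass, letting the JIT fold the divisions and shifts, which plausibly explains the measured speedup.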
[jira] [Updated] (SOLR-3587) Reloading a core no longer updates the analysis process
[ https://issues.apache.org/jira/browse/SOLR-3587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3587: -- Attachment: SOLR-3587.patch Mentioned patch attached for posterity - I'm going to give a shot at just updating the analyzer now.

Reloading a core is no longer updates analysis process -- Key: SOLR-3587 URL: https://issues.apache.org/jira/browse/SOLR-3587 Project: Solr Issue Type: Bug Affects Versions: 4.0 Reporter: Alexey Serba Assignee: Mark Miller Priority: Blocker Fix For: 4.0, 5.0 Attachments: SOLR-3587.patch, SOLR-3587.patch

It's a usual practice in Solr to overwrite synonyms/stopwords files and issue a core reload command to force Solr to reload these files. We've noticed that this trick is no longer working on trunk.

I started debugging this problem in Eclipse and I can see that Solr actually re-reads the updated synonym file on core reload, but the indexing process still uses an old reference to the SynonymFilter / SynonymMap instance.

When I start Solr initially I can see that my Solr server has 2 instances of SynonymMap (the object that holds the actual synonyms data). This is expected, as I have 2 SynonymFilters defined in my schema (one field with 2 synonym filters - index + query time). After a core reload I see that Solr has 4 instances of this class. So I thought that there could be some leak, issued many (N) core reload commands, and expected to see 2*N instances. It didn't happen: I can only see 20 instances of this class. This is a pretty interesting number as well because, according to the thread dump/list, Solr/Jetty has 10 working threads in its thread pool (by default for the example configs). So I suspect that we cache something in thread-local storage and it hits us. I looked into the code and found that Lucene caches token streams in a ThreadLocal, but I don't know the code well enough to state that this is the problem.

So I took a different approach and found the commit that introduced this bug. I wrote a simple test (shell script) and used the _git bisect_ tool to chase this bug.

{quote}
ffd9c717448eca895d19be8ee9718bc399ac34a7 is the first bad commit
commit ffd9c717448eca895d19be8ee9718bc399ac34a7
Author: Mark Robert Miller markrmil...@apache.org
Date: Thu Jun 30 13:59:59 2011 +

SOLR-2193, SOLR-2565: The default Solr update handler has been improved so that it uses fewer locks, keeps the IndexWriter open rather than closing it on each commit (ie commits no longer wait for background merges to complete), works with SolrCore to provide faster 'soft' commits, and has an improved API that requires less instanceof special casing. You may now specify a 'soft' commit when committing. This will use Lucene's NRT feature to avoid guaranteeing documents are on stable storage in exchange for faster reopen times. There is also a new 'soft' autocommit tracker that can be configured. SolrCores now properly share IndexWriters across SolrCore reloads.

git-svn-id: https://svn.apache.org/repos/asf/lucene/dev/trunk@1141542 13f79535-47bb-0310-9956-ffa450edef68
{quote}

It looks like sharing IndexWriters across SolrCore reloads could be the root cause, right?
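The thread-local caching Alexey suspects can be illustrated with a minimal, self-contained analogy. This is not Solr or Lucene code; the names (`synonyms`, `cached`) are hypothetical stand-ins for the shared configuration and the per-thread token-stream cache: once a thread has populated its `ThreadLocal`, swapping the shared reference (the "core reload") no longer reaches that thread.

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical analogy of the suspected bug: each thread caches the first
// value it sees, so replacing the shared data ("reload") does not update
// threads that already populated their per-thread cache.
public class StaleCacheDemo {
    static final AtomicReference<String> synonyms =
            new AtomicReference<>("old-synonyms");
    static final ThreadLocal<String> cached =
            ThreadLocal.withInitial(synonyms::get);

    public static void main(String[] args) {
        String before = cached.get();  // populates this thread's cache
        synonyms.set("new-synonyms");  // "core reload": shared data replaced
        String after = cached.get();   // still the stale cached value
        System.out.println(before + " / " + after); // old-synonyms / old-synonyms
    }
}
```

This would also explain the observed cap of 20 instances: with 10 worker threads and 2 filters per thread, each thread pins its own pair of stale maps rather than leaking indefinitely.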
[ANNOUNCE] Apache Lucene 4.0-alpha released.
3 July 2012, Apache Lucene™ 4.0-alpha available

The Lucene PMC is pleased to announce the release of Apache Lucene 4.0-alpha.

Apache Lucene is a high-performance, full-featured text search engine library written entirely in Java. It is a technology suitable for nearly any application that requires full-text search, especially cross-platform.

This release contains numerous bug fixes, optimizations, and improvements, some of which are highlighted below. The release is available for immediate download at: http://lucene.apache.org/core/mirrors-core-latest-redir.html?ver=4.0a See the CHANGES.txt file included with the release for a full list of details.

Lucene 4.0-alpha Release Highlights:

* The index formats for terms, postings lists, stored fields, term vectors, etc. are pluggable via the Codec api. You can select from the provided implementations or customize the index format with your own Codec to meet your needs.
* Similarity has been decoupled from the vector space model (TF/IDF). Additional models such as BM25, Divergence from Randomness, Language Models, and Information-based models are provided (see http://www.lucidimagination.com/blog/2011/09/12/flexible-ranking-in-lucene-4).
* Added support for per-document values (DocValues). DocValues can be used for custom scoring factors (accessible via Similarity), for pre-sorted Sort values, and more.
* When indexing via multiple threads, each IndexWriter thread now flushes its own segment to disk concurrently, resulting in substantial performance improvements (see http://blog.mikemccandless.com/2011/05/265-indexing-speedup-with-lucenes.html).
* Per-document normalization factors (norms) are no longer limited to a single byte. Similarity implementations can use any DocValues type to store norms.
* Added index statistics such as the number of tokens for a term or field, the number of postings for a field, and the number of documents with a posting for a field: these support additional scoring models (see http://blog.mikemccandless.com/2012/03/new-index-statistics-in-lucene-40.html).
* Implemented a new default term dictionary/index (BlockTree) that indexes shared prefixes instead of every n'th term. This is not only more time- and space-efficient, but can also sometimes avoid going to disk at all for terms that do not exist. Alternative term dictionary implementations are provided and pluggable via the Codec api.
* Indexed terms are no longer UTF-16 char sequences; instead, terms can be any binary value encoded as byte arrays. By default, text terms are now encoded as UTF-8 bytes. Sort order of terms is now defined by their binary value, which is identical to UTF-8 sort order.
* Substantially faster performance when using a Filter during searching.
* File-system based directories can rate-limit the IO (MB/sec) of merge threads, to reduce IO contention between merging and searching threads.
* Added a number of alternative Codecs and components for different use-cases: Appending works with append-only filesystems (such as Hadoop DFS), Memory writes the entire terms+postings as an FST read into RAM (see http://blog.mikemccandless.com/2011/06/primary-key-lookups-are-28x-faster-with.html), Pulsing inlines the postings for low-frequency terms into the term dictionary (see http://blog.mikemccandless.com/2010/06/lucenes-pulsingcodec-on-primary-key.html), SimpleText writes all files in plain text for easy debugging/transparency (see http://blog.mikemccandless.com/2010/10/lucenes-simpletext-codec.html), among others.
* Term offsets can be optionally encoded into the postings lists and can be retrieved per-position.
* A new AutomatonQuery returns all documents containing any term matching a provided finite-state automaton (see http://www.slideshare.net/otisg/finite-state-queries-in-lucene).
* FuzzyQuery is 100-200 times faster than in past releases (see http://blog.mikemccandless.com/2011/03/lucenes-fuzzyquery-is-100-times-faster.html).
* A new spell checker, DirectSpellChecker, finds possible corrections directly against the main search index without requiring a separate index.
* Various in-memory data structures such as the term dictionary and FieldCache are represented more efficiently with less object overhead (see http://blog.mikemccandless.com/2010/07/lucenes-ram-usage-for-searching.html).
* All search logic is now required to work per segment; IndexReader was therefore refactored to differentiate between atomic and composite readers (see http://blog.thetaphi.de/2012/02/is-your-indexreader-atomic-major.html).
* Lucene 4.0 provides a modular API, consolidating components such as Analyzers and Queries that were previously scattered across Lucene core, contrib, and Solr. These modules also include additional functionality such as UIMA analyzer integration and a completely reworked spatial search implementation.

Please read
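The sort-order change noted above (terms sorted by binary value, identical to UTF-8 order) can be demonstrated with a small sketch. This is illustrative code, not Lucene API: it shows that unsigned lexicographic comparison of UTF-8 bytes agrees with Unicode code point order, while UTF-16 `String.compareTo` orders supplementary characters before U+FFFF.

```java
import java.nio.charset.StandardCharsets;

// Demonstrates that UTF-8 byte order equals code point order, while
// UTF-16 char order (String.compareTo) differs for supplementary chars.
public class SortOrderDemo {
    // Unsigned lexicographic byte comparison, spelled out for clarity.
    static int compareBytes(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xFF) - (b[i] & 0xFF);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    public static void main(String[] args) {
        String bmp = "\uFFFF";         // U+FFFF (last BMP code point)
        String supp = "\uD800\uDC00";  // U+10000 as a UTF-16 surrogate pair
        int utf16 = bmp.compareTo(supp); // > 0: U+FFFF sorts AFTER U+10000
        int utf8 = compareBytes(bmp.getBytes(StandardCharsets.UTF_8),
                                supp.getBytes(StandardCharsets.UTF_8));
        System.out.println(utf16 > 0); // true: UTF-16 order is not code point order
        System.out.println(utf8 < 0);  // true: UTF-8 order is code point order
    }
}
```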
[ANNOUNCE] Apache Solr 4.0-alpha released.
3 July 2012, Apache Solr™ 4.0-alpha available

The Lucene PMC is pleased to announce the release of Apache Solr 4.0-alpha.

Solr is the popular, blazing fast, open source NoSQL search platform from the Apache Lucene project. Its major features include powerful full-text search, hit highlighting, faceted search, dynamic clustering, database integration, rich document (e.g., Word, PDF) handling, and geospatial search. Solr is highly scalable, providing fault tolerant distributed search and indexing, and powers the search and navigation features of many of the world's largest internet sites.

Solr 4.0-alpha is available for immediate download at: http://lucene.apache.org/solr/mirrors-solr-latest-redir.html?ver=4.0a See the CHANGES.txt file included with the release for a full list of details.

Solr 4.0-alpha Release Highlights:

The largest set of features goes by the development code-name “Solr Cloud” and involves bringing easy scalability to Solr. See http://wiki.apache.org/solr/SolrCloud for more details.

* Distributed indexing designed from the ground up for near real-time (NRT) and NoSQL features such as realtime-get, optimistic locking, and durable updates.
* High availability with no single points of failure.
* Apache Zookeeper integration for distributed coordination and cluster metadata and configuration storage.
* Immunity to split-brain issues due to Zookeeper's Paxos distributed consensus protocols.
* Updates sent to any node in the cluster are automatically forwarded to the correct shard and replicated to multiple nodes for redundancy.
* Queries sent to any node automatically perform a full distributed search across the cluster with load balancing and fail-over.

Solr 4.0-alpha includes more NoSQL features for those using Solr as a primary data store:

* Update durability – A transaction log ensures that even uncommitted documents are never lost.
* Real-time Get – The ability to quickly retrieve the latest version of a document, without the need to commit or open a new searcher.
* Versioning and Optimistic Locking – Combined with real-time get, this allows read-update-write functionality that ensures no conflicting changes were made concurrently by other clients.
* Atomic updates – The ability to add, remove, change, and increment fields of an existing document without having to send in the complete document again.

There are many other features coming in Solr 4, such as:

* Pivot Faceting – Multi-level or hierarchical faceting where the top constraints for one field are found for each top constraint of a different field.
* Pseudo-fields – The ability to alias fields, or to add metadata along with returned documents, such as function query values and results of spatial distance calculations.
* A spell checker implementation that can work directly from the main index instead of creating a sidecar index.
* Pseudo-Join functionality – The ability to select a set of documents based on their relationship to a second set of documents.
* Function query enhancements including conditional function queries and relevancy functions.
* New update processors to facilitate modifying documents prior to indexing.
* A brand new web admin interface, including support for SolrCloud.

This is an alpha release for early adopters. The guarantee for this alpha release is that the index format will be the 4.0 index format, supported through the 5.x series of Lucene/Solr, unless there is a critical bug (e.g. one that would cause index corruption) that would prevent this.

Please report any feedback to the mailing lists (http://lucene.apache.org/solr/discussion.html).

Happy searching,

Lucene/Solr developers
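The versioning-plus-optimistic-locking idea above can be sketched generically. This is not SolrJ code; the names (`Doc`, `store`, `update`) are hypothetical. The loop reads the current document and version, applies an edit, and writes back only if the stored version is unchanged, retrying otherwise, which is the read-update-write guarantee the announcement describes.

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.UnaryOperator;

// Generic sketch of optimistic locking: write back only if the version
// we read is still current; otherwise retry against the fresh copy.
public class OptimisticUpdate {
    static final class Doc {
        final long version;
        final String body;
        Doc(long version, String body) { this.version = version; this.body = body; }
    }

    static final AtomicReference<Doc> store =
            new AtomicReference<>(new Doc(1, "hello"));

    static void update(UnaryOperator<String> edit) {
        while (true) {
            Doc current = store.get();  // the "real-time get"
            Doc updated = new Doc(current.version + 1, edit.apply(current.body));
            if (store.compareAndSet(current, updated)) return; // version matched
            // else: a concurrent writer won; loop and retry on the new version
        }
    }

    public static void main(String[] args) {
        update(s -> s + " world");
        System.out.println(store.get().body + " v" + store.get().version);
    }
}
```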
[jira] [Commented] (SOLR-1725) Script based UpdateRequestProcessorFactory
[ https://issues.apache.org/jira/browse/SOLR-1725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13405934#comment-13405934 ] Erik Hatcher commented on SOLR-1725: bq. I would rather people have to write a few no-op methods and get good errors than rip their hair out – or have data loss – because they have a subtle typo in a function name that they don't notice

No argument there. I think this just demands that we implement an add-only type of capability such that the entire script is implicitly inside a processAdd call. The most common use case here is processAdd, and we should streamline that to be easy and less error-prone (why even have the processAdd method wrapper at all if we're aiming for less hair pulling?) for the common needs. We can iterate on that after a commit of what's there, though; having a full update processor via script is great too, for sure.

Script based UpdateRequestProcessorFactory -- Key: SOLR-1725 URL: https://issues.apache.org/jira/browse/SOLR-1725 Project: Solr Issue Type: New Feature Components: update Affects Versions: 1.4 Reporter: Uri Boness Assignee: Erik Hatcher Labels: UpdateProcessor Fix For: 4.1 Attachments: SOLR-1725-rev1.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch

A script based UpdateRequestProcessorFactory (uses JDK6 script engine support). The main goal of this plugin is to be able to configure/write update processors without the need to write and package Java code. The update request processor factory enables writing update processors in scripts located in the {{solr.solr.home}} directory. The factory accepts one (mandatory) configuration parameter named {{scripts}} which accepts a comma-separated list of file names. It will look for these files under the {{conf}} directory in solr home.

When multiple scripts are defined, their execution order is defined by the lexicographical order of the script file names (so {{scriptA.js}} will be executed before {{scriptB.js}}). The script language is resolved based on the script file extension (that is, a *.js file will be treated as a JavaScript script); therefore an extension is mandatory.

Each script file is expected to have one or more methods with the same signature as the methods in the {{UpdateRequestProcessor}} interface. It is *not* required to define all methods, only those that are required by the processing logic. The following variables are defined as global variables for each script:

* {{req}} - The SolrQueryRequest
* {{rsp}} - The SolrQueryResponse
* {{logger}} - A logger that can be used for logging purposes in the script
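The "define only the methods you need" behavior described above can be sketched as a rough Java analogy (this is not the actual Solr factory code; `MyScript` and `invokeIfPresent` are hypothetical names): before dispatching a lifecycle event, check whether the handler actually declares that method, and silently skip the event if not.

```java
import java.lang.reflect.Method;

// Rough analogy of optional-method dispatch: invoke the named method
// if the "script" defines it, and treat a missing method as a no-op.
public class OptionalDispatch {
    public static class MyScript { // stands in for a user-supplied script
        public String processAdd(String doc) { return "added:" + doc; }
        // no processDelete defined: that event is simply skipped
    }

    // Invoke target.name(arg) if such a one-String-argument method exists.
    static Object invokeIfPresent(Object target, String name, String arg) {
        for (Method m : target.getClass().getMethods()) {
            if (m.getName().equals(name) && m.getParameterCount() == 1
                    && m.getParameterTypes()[0] == String.class) {
                try {
                    return m.invoke(target, arg);
                } catch (ReflectiveOperationException e) {
                    throw new RuntimeException(e);
                }
            }
        }
        return null; // method not defined by the "script": no-op
    }

    public static void main(String[] args) {
        MyScript script = new MyScript();
        System.out.println(invokeIfPresent(script, "processAdd", "doc1"));    // added:doc1
        System.out.println(invokeIfPresent(script, "processDelete", "doc1")); // null
    }
}
```

As the comment thread notes, the trade-off of this leniency is that a typo in a function name silently becomes a no-op, which motivates Erik's add-only proposal.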
[jira] [Commented] (SOLR-1725) Script based UpdateRequestProcessorFactory
[ https://issues.apache.org/jira/browse/SOLR-1725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13405936#comment-13405936 ] Erik Hatcher commented on SOLR-1725: Do we really need to support multiple scripts inside a definition of a (Stateless)ScriptUpdateProcessorFactory? It just seems like added looping; why not just define two different StatelessScriptUpdateProcessorFactory's, each with an individual script? (Or combine the logic of the scripts into a single script, if these were my scripts.)

Script based UpdateRequestProcessorFactory -- Key: SOLR-1725 URL: https://issues.apache.org/jira/browse/SOLR-1725 Project: Solr Issue Type: New Feature Components: update Affects Versions: 1.4 Reporter: Uri Boness Assignee: Erik Hatcher Labels: UpdateProcessor Fix For: 4.1
[jira] [Created] (SOLR-3592) After reloading a SolrCore, some IndexWriter 'settings' changes will not take effect. Instead, you must restart Solr.
Mark Miller created SOLR-3592: - Summary: After reloading a SolrCore, some IndexWriter 'settings' changes will not take effect. Instead, you must restart Solr. Key: SOLR-3592 URL: https://issues.apache.org/jira/browse/SOLR-3592 Project: Solr Issue Type: Bug Reporter: Mark Miller Priority: Minor Fix For: 5.0 This is a known tradeoff that was made to more properly support our SolrCore model - there is a TODO: in the code in SolrCore#reload about it.
[jira] [Updated] (SOLR-3587) After reloading a SolrCore, the original Analyzer is still used rather than a new one.
[ https://issues.apache.org/jira/browse/SOLR-3587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3587: -- Summary: After reloading a SolrCore, the original Analyzer is still used rather than a new one. (was: Reloading a core is no longer updates analysis process) After reloading a SolrCore, the original Analyzer is still used rather than a new one. -- Key: SOLR-3587 URL: https://issues.apache.org/jira/browse/SOLR-3587 Project: Solr Issue Type: Bug Affects Versions: 4.0 Reporter: Alexey Serba Assignee: Mark Miller Priority: Blocker Fix For: 4.0, 5.0 Attachments: SOLR-3587.patch, SOLR-3587.patch
[jira] [Updated] (SOLR-3592) After reloading a SolrCore, some IndexWriter 'settings' changes will not take effect. Instead, you must restart Solr.
[ https://issues.apache.org/jira/browse/SOLR-3592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3592: -- Fix Version/s: 4.0 For 4 we should document this somewhere. After reloading a SolrCore, some IndexWriter 'settings' changes will not take effect. Instead, you must restart Solr. - Key: SOLR-3592 URL: https://issues.apache.org/jira/browse/SOLR-3592 Project: Solr Issue Type: Bug Reporter: Mark Miller Priority: Minor Fix For: 4.0, 5.0
[jira] [Updated] (SOLR-3587) After reloading a SolrCore, the original Analyzer is still used rather than a new one.
[ https://issues.apache.org/jira/browse/SOLR-3587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3587: -- Attachment: SOLR-3587.patch Another simple patch with Alexey's test (now passing) that simply uses an updated analyzer. I made a new issue for the other IW settings. After reloading a SolrCore, the original Analyzer is still used rather than a new one. -- Key: SOLR-3587 URL: https://issues.apache.org/jira/browse/SOLR-3587 Project: Solr Issue Type: Bug Affects Versions: 4.0 Reporter: Alexey Serba Assignee: Mark Miller Priority: Blocker Fix For: 4.0, 5.0 Attachments: SOLR-3587.patch, SOLR-3587.patch, SOLR-3587.patch
[JENKINS] Lucene-4.x - Build # 26 - Failure
Build: https://builds.apache.org/job/Lucene-4.x/26/ All tests passed Build Log: [...truncated 38254 lines...]
[JENKINS] Solr-4.x - Build # 27 - Failure
Build: https://builds.apache.org/job/Solr-4.x/27/

1 tests failed.

REGRESSION: org.apache.solr.cloud.RecoveryZkTest.testDistribSearch

Error Message: There are still nodes recoverying

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying
at __randomizedtesting.SeedInfo.seed([55363C881408F96F:D4D0B29063579953]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.apache.solr.cloud.AbstractDistributedZkTestCase.waitForRecoveriesToFinish(AbstractDistributedZkTestCase.java:141)
at org.apache.solr.cloud.RecoveryZkTest.doTest(RecoveryZkTest.java:86)
at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:679)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1969)
at com.carrotsearch.randomizedtesting.RandomizedRunner.access$1100(RandomizedRunner.java:132)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:814)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:875)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:889)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:32)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at org.apache.lucene.util.TestRuleReportUncaughtExceptions$1.evaluate(TestRuleReportUncaughtExceptions.java:68)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:821)
at com.carrotsearch.randomizedtesting.RandomizedRunner.access$700(RandomizedRunner.java:132)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3$1.run(RandomizedRunner.java:669)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:695)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:734)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:745)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleReportUncaughtExceptions$1.evaluate(TestRuleReportUncaughtExceptions.java:68)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at org.apache.lucene.util.TestRuleIcuHack$1.evaluate(TestRuleIcuHack.java:51)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at org.apache.lucene.util.TestRuleNoInstanceHooksOverrides$1.evaluate(TestRuleNoInstanceHooksOverrides.java:53)
at org.apache.lucene.util.TestRuleNoStaticHooksShadowing$1.evaluate(TestRuleNoStaticHooksShadowing.java:52)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:36)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:605)
at com.carrotsearch.randomizedtesting.RandomizedRunner.access$400(RandomizedRunner.java:132)
at com.carrotsearch.randomizedtesting.RandomizedRunner$2.run(RandomizedRunner.java:551)

Build Log:
[...truncated 39510 lines...]
[junit4] 2at java.util.concurrent.FutureTask.run(FutureTask.java:166)
[junit4] 2at
RE: [JENKINS] Lucene-4.x - Build # 26 - Failure
Somehow, clover has a problem with one spatial class. BUILD FAILED /usr/home/hudson/hudson-slave/workspace/Lucene-4.x/checkout/lucene/build.xml:158: com.cenqua.clover.CloverException: java.io.FileNotFoundException: /usr/home/hudson/hudson-slave/workspace/Lucene-4.x/checkout/lucene/build/test/clover/reports/org/apache/lucene/spatial/PortedSolr3Test_testDistanceOrder__p0_recursive_geohash_p1_RecursivePrefixTreeStrategy_prefixGridScanLevel_8_SPG__GeohashPrefixTree_maxLevels_12_ctx_SimpleSpatialContext_units_KILOMETERS__calculator_Haversine__worldBounds_Rect_minX__180_0_maxX_180_0_minY__90_0_maxY_90_0___-1.html (File name too long) Can we rename the class, remove nesting, or shorten this too-long field/variable name? - Uwe Schindler H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de eMail: u...@thetaphi.de -Original Message- From: Apache Jenkins Server [mailto:jenk...@builds.apache.org] Sent: Tuesday, July 03, 2012 4:51 PM To: dev@lucene.apache.org Subject: [JENKINS] Lucene-4.x - Build # 26 - Failure Build: https://builds.apache.org/job/Lucene-4.x/26/ All tests passed Build Log: [...truncated 38254 lines...] - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
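For context on the "File name too long" error above: it is the length of a single path component, not the overall path, that trips the limit. A quick sanity check (Python, purely illustrative; the 255-byte NAME_MAX figure is the usual per-component limit on Unix filesystems, not something stated in the log):

```python
# Clover names the per-test HTML report after the fully parameterized
# test name; the one in the build log blows past the NAME_MAX limit
# that most Unix filesystems impose on a single path component.
name = (
    "PortedSolr3Test_testDistanceOrder__p0_recursive_geohash_"
    "p1_RecursivePrefixTreeStrategy_prefixGridScanLevel_8_SPG__"
    "GeohashPrefixTree_maxLevels_12_ctx_SimpleSpatialContext_"
    "units_KILOMETERS__calculator_Haversine__worldBounds_"
    "Rect_minX__180_0_maxX_180_0_minY__90_0_maxY_90_0___-1.html"
)

NAME_MAX = 255  # typical per-component limit (ext3/ext4, UFS, ...)
assert len(name) > NAME_MAX
```

So renaming the class or shortening the parameterized test description, as Uwe suggests, is the straightforward fix.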
[JENKINS] Lucene-Solr-tests-only-4.x - Build # 201 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-tests-only-4.x/201/ All tests passed Build Log: [...truncated 24340 lines...] [...truncated 24340 lines...] [...truncated 24340 lines...] [...truncated 24316 lines...] javadocs-lint: [exec] Traceback (most recent call last): [exec] File /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-4.x/checkout/lucene/.. [exec] Crawl/parse... [exec] /dev-tools/scripts/checkJavadocLinks.py, line 218, in module [exec] if checkAll(sys.argv[1]): [exec] File /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-4.x/checkout/lucene/../dev-tools/scripts/checkJavadocLinks.py, line 131, in checkAll [exec] allFiles[fullPath] = parse(fullPath, open('%s/%s' % (root, f)).read()) [exec] File /usr/local/lib/python3.2/encodings/ascii.py, line 26, in decode [exec] return codecs.ascii_decode(input, self.errors)[0] [exec] UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 3215: ordinal not in range(128) BUILD FAILED /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-4.x/checkout/lucene/build.xml:194: The following error occurred while executing this line: /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-4.x/checkout/lucene/common-build.xml:1641: exec returned: 1 Total time: 2 minutes 23 seconds Build step 'Execute shell' marked build as failure Archiving artifacts Recording test results Publishing Clover coverage report... No Clover report will be published due to a Build Failure Email was triggered for: Failure Sending email for trigger: Failure [...truncated 24340 lines...] [...truncated 24340 lines...] - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: [JENKINS] Lucene-Solr-tests-only-4.x - Build # 201 - Failure
Hmm I'll dig. Mike McCandless http://blog.mikemccandless.com On Tue, Jul 3, 2012 at 12:33 PM, Apache Jenkins Server jenk...@builds.apache.org wrote: Build: https://builds.apache.org/job/Lucene-Solr-tests-only-4.x/201/ [...]
- To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: [JENKINS] Lucene-Solr-tests-only-4.x - Build # 201 - Failure
OK I committed a fix... Mike McCandless http://blog.mikemccandless.com On Tue, Jul 3, 2012 at 12:40 PM, Michael McCandless luc...@mikemccandless.com wrote: Hmm I'll dig. Mike McCandless http://blog.mikemccandless.com On Tue, Jul 3, 2012 at 12:33 PM, Apache Jenkins Server jenk...@builds.apache.org wrote: Build: https://builds.apache.org/job/Lucene-Solr-tests-only-4.x/201/ [...]
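The committed fix itself isn't quoted in this thread, so the following is only a hedged sketch of the failure and the usual remedy. The traceback shows checkJavadocLinks.py reading javadoc files via open(...).read() with no encoding argument, which on that Jenkins box fell back to the ASCII codec:

```python
# Under Python 3, open() with no encoding argument uses the locale's
# default codec (ASCII on the failing Jenkins box).  Javadoc HTML
# containing a non-ASCII author name is enough to kill it; the name
# below is made up, but its first UTF-8 byte is 0xC5, as in the log.
raw = "Łukasz".encode("utf-8")

try:
    raw.decode("ascii")      # what the default-codec read amounted to
    raised = False
except UnicodeDecodeError:
    raised = True            # 'ascii' codec can't decode byte 0xc5 ...

assert raised

# The usual remedy (again, the actual commit isn't shown here) is an
# explicit encoding, e.g. open(path, encoding="utf-8"), or an explicit
# decode of the raw bytes:
assert raw.decode("utf-8") == "Łukasz"
```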
- To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-tests-only-trunk - Build # 14810 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-tests-only-trunk/14810/ All tests passed Build Log: [...truncated 24170 lines...] [...truncated 24170 lines...] [...truncated 24170 lines...] [...truncated 24146 lines...] javadocs-lint: [exec] Traceback (most recent call last): [exec] File /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/lucene/ [exec] Crawl/parse... [exec] ../dev-tools/scripts/checkJavadocLinks.py, line 218, in module [exec] if checkAll(sys.argv[1]): [exec] File /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/lucene/../dev-tools/scripts/checkJavadocLinks.py, line 131, in checkAll [exec] allFiles[fullPath] = parse(fullPath, open('%s/%s' % (root, f)).read()) [exec] File /usr/local/lib/python3.2/encodings/ascii.py, line 26, in decode [exec] return codecs.ascii_decode(input, self.errors)[0] [exec] UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 3215: ordinal not in range(128) BUILD FAILED /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/lucene/build.xml:194: The following error occurred while executing this line: /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/lucene/common-build.xml:1642: exec returned: 1 Total time: 2 minutes 27 seconds Build step 'Execute shell' marked build as failure Archiving artifacts Recording test results Publishing Clover coverage report... No Clover report will be published due to a Build Failure Email was triggered for: Failure Sending email for trigger: Failure [...truncated 24170 lines...] [...truncated 24170 lines...] - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-4.x-Linux-Java6-64 - Build # 365 - Failure!
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-4.x-Linux-Java6-64/365/ All tests passed Build Log: [...truncated 18028 lines...] javadocs-lint: BUILD FAILED /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux-Java6-64/checkout/lucene/build.xml:194: The following error occurred while executing this line: /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux-Java6-64/checkout/lucene/common-build.xml:1641: Execute failed: java.io.IOException: Cannot run program python3.2: java.io.IOException: error=2, No such file or directory at java.lang.ProcessBuilder.start(ProcessBuilder.java:460) at java.lang.Runtime.exec(Runtime.java:593) at org.apache.tools.ant.taskdefs.Execute$Java13CommandLauncher.exec(Execute.java:862) at org.apache.tools.ant.taskdefs.Execute.launch(Execute.java:481) at org.apache.tools.ant.taskdefs.Execute.execute(Execute.java:495) at org.apache.tools.ant.taskdefs.ExecTask.runExecute(ExecTask.java:631) at org.apache.tools.ant.taskdefs.ExecTask.runExec(ExecTask.java:672) at org.apache.tools.ant.taskdefs.ExecTask.execute(ExecTask.java:498) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.taskdefs.Sequential.execute(Sequential.java:68) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at 
org.apache.tools.ant.taskdefs.MacroInstance.execute(MacroInstance.java:398) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.taskdefs.Sequential.execute(Sequential.java:68) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.Target.execute(Target.java:390) at org.apache.tools.ant.Target.performTasks(Target.java:411) at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399) at org.apache.tools.ant.Project.executeTarget(Project.java:1368) at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41) at org.apache.tools.ant.Project.executeTargets(Project.java:1251) at org.apache.tools.ant.Main.runBuild(Main.java:809) at org.apache.tools.ant.Main.startAnt(Main.java:217) at org.apache.tools.ant.launch.Launcher.run(Launcher.java:280) at org.apache.tools.ant.launch.Launcher.main(Launcher.java:109) Caused by: java.io.IOException: java.io.IOException: error=2, No such file or directory at java.lang.UNIXProcess.init(UNIXProcess.java:148) at java.lang.ProcessImpl.start(ProcessImpl.java:65) at java.lang.ProcessBuilder.start(ProcessBuilder.java:453) ... 
44 more Total time: 1 minute 58 seconds Build step 'Execute shell' marked build as failure Archiving artifacts Recording test results Email was triggered for: Failure Sending email for trigger: Failure - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-3587) After reloading a SolrCore, the original Analyzer is still used rather than a new one.
[ https://issues.apache.org/jira/browse/SOLR-3587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller resolved SOLR-3587. --- Resolution: Fixed Committed - thanks all! After reloading a SolrCore, the original Analyzer is still used rather than a new one. -- Key: SOLR-3587 URL: https://issues.apache.org/jira/browse/SOLR-3587 Project: Solr Issue Type: Bug Affects Versions: 4.0 Reporter: Alexey Serba Assignee: Mark Miller Priority: Blocker Fix For: 4.0, 5.0 Attachments: SOLR-3587.patch, SOLR-3587.patch, SOLR-3587.patch It's common practice in Solr to overwrite synonyms/stopwords files and issue a core reload command to force Solr to reload these files. We've noticed that this trick is no longer working on trunk. I started debugging this problem in Eclipse and I can see that Solr actually re-reads the updated synonym file on core reload, but the indexing process still uses an old reference to the Synonym filter / SynonymMap instance. When I start Solr initially I can see that my Solr server has 2 instances of SynonymMap (the object that holds the actual synonyms data). This is expected as I have 2 SynonymFilters defined in my schema (one field with 2 synonym filters - index + query time). After a core reload I see that Solr has 4 instances of this class. So I thought that there could be some leak, issued many (N) core reload commands, and expected to see 2*N instances. It didn't happen: I can only see 20 instances of this class. This is a pretty interesting number, because according to the thread dump Solr/Jetty has 10 working threads in its thread pool (by default for the example configs). So I suspect that we cache something in thread-local storage and it hits us. I looked into the code and found that Lucene caches token streams in a ThreadLocal, but I don't know the code well enough to state that this is the problem. So I took a different approach and found the commit that introduced this bug.
I wrote a simple test (shell script) and used _git bisect_ tool to chase this bug. {quote} ffd9c717448eca895d19be8ee9718bc399ac34a7 is the first bad commit commit ffd9c717448eca895d19be8ee9718bc399ac34a7 Author: Mark Robert Miller markrmil...@apache.org Date: Thu Jun 30 13:59:59 2011 + SOLR-2193, SOLR-2565: The default Solr update handler has been improved so that it uses fewer locks, keeps the IndexWriter open rather than closing it on each commit (ie commits no longer wait for background merges to complete), works with SolrCore to provide faster 'soft' commits, and has an improved API that requires less instanceof special casing. You may now specify a 'soft' commit when committing. This will use Lucene's NRT feature to avoid guaranteeing documents are on stable storage in exchange for faster reopen times. There is also a new 'soft' autocommit tracker that can be configured. SolrCores now properly share IndexWriters across SolrCore reloads. git-svn-id: https://svn.apache.org/repos/asf/lucene/dev/trunk@1141542 13f79535-47bb-0310-9956-ffa450edef68 {quote} It looks like sharing IndexWriters across SolrCore reloads could be the root cause, right? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
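The reporter's thread-local theory (10 pooled threads, hence a cap of 20 cached SynonymMap instances) can be illustrated with a toy analog. This is Python standing in for Lucene's per-thread TokenStream caching, not actual Solr code, and every name in it is made up:

```python
import threading

# Toy analog of the reported failure mode: pooled worker threads cache
# an analyzer in thread-local storage, so swapping in a reloaded core
# never invalidates what each thread already holds.
_tls = threading.local()

class Core:
    def __init__(self, synonyms):
        self.synonyms = synonyms

current_core = Core({"tv": "television"})

def cached_analyzer():
    # First use on a thread caches the analyzer; later uses return the
    # cached copy even after current_core has been swapped out.
    if not hasattr(_tls, "analyzer"):
        _tls.analyzer = current_core.synonyms
    return _tls.analyzer

before = cached_analyzer()
current_core = Core({"tv": "television", "telly": "television"})  # "core reload"
after = cached_analyzer()

assert after is before       # stale reference survives the reload
assert "telly" not in after  # the updated synonyms are never seen
```

The fix resolved above would have to either clear these per-thread caches on reload or stop sharing the cached objects across core instances; the patch itself is not shown in this mail.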
[jira] [Commented] (SOLR-1725) Script based UpdateRequestProcessorFactory
[ https://issues.apache.org/jira/browse/SOLR-1725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13405962#comment-13405962 ] Lance Norskog commented on SOLR-1725: - What if I have a main file and a library file? How would that work? Script based UpdateRequestProcessorFactory -- Key: SOLR-1725 URL: https://issues.apache.org/jira/browse/SOLR-1725 Project: Solr Issue Type: New Feature Components: update Affects Versions: 1.4 Reporter: Uri Boness Assignee: Erik Hatcher Labels: UpdateProcessor Fix For: 4.1 Attachments: SOLR-1725-rev1.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch A script based UpdateRequestProcessorFactory (uses JDK6 script engine support). The main goal of this plugin is to be able to configure/write update processors without the need to write and package Java code. The update request processor factory enables writing update processors as scripts located in the {{solr.solr.home}} directory. The factory accepts one (mandatory) configuration parameter named {{scripts}}, which accepts a comma-separated list of file names. It will look for these files under the {{conf}} directory in solr home. When multiple scripts are defined, their execution order is defined by the lexicographical order of the script file names (so {{scriptA.js}} will be executed before {{scriptB.js}}). The script language is resolved based on the script file extension (that is, a *.js file will be treated as a JavaScript script), therefore an extension is mandatory. Each script file is expected to have one or more methods with the same signature as the methods in the {{UpdateRequestProcessor}} interface. It is *not* required to define all methods, only those that are required by the processing logic.
The following variables are defined as global variables for each script: * {{req}} - The SolrQueryRequest * {{rsp}} - The SolrQueryResponse * {{logger}} - A logger that can be used for logging purposes in the script -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-1725) Script based UpdateRequestProcessorFactory
[ https://issues.apache.org/jira/browse/SOLR-1725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13405974#comment-13405974 ] David Smiley commented on SOLR-1725: +1 to Erik's proposal of streamlining the common case of needing just processAdd() -- I shouldn't be required to even define a processAdd() method in my script for this common case. I'd rather not have subclasses for this. Perhaps the configuration in solrconfig for this processor would have a processAddScript=myscript.rb and a processDeleteScript= ... etc. as an alternative to script=script.rb -- if you use the one-file script then you are required to define each method. And, how about if scriptMethods=processAdd,processCommit then you just need to define the ones you list. Like Erik, I too have been following the awesome progress here as of late -- thanks for your hard work Hoss (and Erik before him). It has come far. Script based UpdateRequestProcessorFactory -- Key: SOLR-1725 URL: https://issues.apache.org/jira/browse/SOLR-1725 Project: Solr Issue Type: New Feature Components: update Affects Versions: 1.4 Reporter: Uri Boness Assignee: Erik Hatcher Labels: UpdateProcessor Fix For: 4.1 Attachments: SOLR-1725-rev1.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
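The two mechanical rules from the issue description, lexicographic execution order of the script files and hook methods being optional, can be sketched as follows. This is an illustrative Python stand-in, not the factory's actual JDK6 ScriptEngine machinery, and every name in it is hypothetical:

```python
# Hedged sketch of the two behaviors the description specifies:
# scripts run in lexicographic filename order, and a script need only
# define the hook methods its logic requires.
scripts = {
    "scriptB.js": {"processAdd": lambda doc: doc.setdefault("source", "B")},
    "scriptA.js": {"processAdd": lambda doc: doc.setdefault("source", "A")},
}

def run_hook(name, doc):
    for filename in sorted(scripts):          # scriptA.js before scriptB.js
        hook = scripts[filename].get(name)    # undefined hooks are skipped
        if hook:
            hook(doc)
    return doc

doc = run_hook("processAdd", {"id": "1"})
assert doc["source"] == "A"                   # scriptA.js ran first

doc2 = run_hook("processDelete", {"id": "2"}) # no script defines it: no-op
assert doc2 == {"id": "2"}
```

Under Erik's and David's proposals above, the dispatcher would additionally fall back to a default for undefined-but-configured methods rather than requiring each script to define them all.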
Opportunity at O'Reilly Open Source Convention (OSCON)
Apache will be hosting a series of events in the two days prior to OSCON, the 16th and 17th of July. These events are a unique opportunity to engage new participants and build awareness for Lucene and Solr. The goal is to provide an environment where newcomers and veteran Apache contributors alike can learn, collaborate and engage with the various Apache communities. To support this, the Conference Committee (ConCom) is planning to use a mixture of bar camp, hackathon and other 'unconference' style tools to facilitate engagement at all levels, from hands-on coding to high-level overview talks. As the events are exactly two weeks away, we are asking that any of our community members who will be at OSCON and are interested in helping grow Solr and Lucene, work with us and ConCom to organize our presence during the two days. ConCom will be preparing a section of their site and wiki to help advertise and coordinate the event. They have asked all interested projects to prepare a patch for their site with details of the project and what the project will be doing during the events (overview talks, hacking sessions, etc). It would be great to highlight new features available in just-released Lucene and Solr 4.0-alpha, including but not limited to: SolrCloud; near real-time search; flexible indexing and scoring; and per-document values (DocValues). Currently, they have organized the two days into the following four blocks of related projects: Monday: Social Widgets, Big Data Content; Tuesday: Cloud Web Infrastructure, Programming Productivity. Please reply to this e-mail if you are interested in helping coordinate Solr and Lucene's presence at OSCON. We will work with ConCom to coordinate the next steps. Thank you in advance, The Lucene PMC - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (LUCENE-4187) remove nightly/ directory under svn
Robert Muir created LUCENE-4187: --- Summary: remove nightly/ directory under svn Key: LUCENE-4187 URL: https://issues.apache.org/jira/browse/LUCENE-4187 Project: Lucene - Java Issue Type: Task Components: general/build Reporter: Robert Muir I setup a jenkins while i was on vacation, it was really easy to just tell it to run an ant task. I don't like this second set of shell scripts we have: its not portable to windows and harder to setup. We should make it dead easy to setup a jenkins if someone feels inclined to run our tests. I think we should add, say two jenkins tasks 'ant jenkins' and 'ant jenkins-nightly' or something like that. For this to work we need to port some of the stuff the scripts are doing, such as nocommit checks, to ant. Any other parameters should be exposed as ordinary ant properties so they can be overridden with -D's -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-3572) Make Schema-Browser show custom similarities
[ https://issues.apache.org/jira/browse/SOLR-3572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13405987#comment-13405987 ] Greg Bowyer commented on SOLR-3572: --- If no one objects I will make this my first commit :D Make Schema-Browser show custom similarities Key: SOLR-3572 URL: https://issues.apache.org/jira/browse/SOLR-3572 Project: Solr Issue Type: Bug Affects Versions: 4.0 Reporter: Greg Bowyer Assignee: Greg Bowyer Attachments: SOLR-3572-similarity-schemabrowser.patch When a custom similarity is defined in the solr schema it is helpful to have the schema browser show the custom similarity -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: svn commit: r1356901 - /lucene/cms/trunk/content/.htaccess
On Tue, Jul 3, 2012 at 4:03 PM, hoss...@apache.org wrote: +# oh shit, the 3.6.0 javadocs seem to have been deleted? +# temporarily redirect all old links to the new 4.0 docs +# really we should bring back the 3.6.0 docs, at an explicit path, and +# have /api redirect to them until 4.0 final comes out. +RedirectMatch temp /solr/api(.*) /solr/api-4_0_0-ALPHA$1 + + yes: i deleted these: since 4.0 alpha is the latest release, i renamed it and fixed the links on the website. if you want older javadocs, please add them as api-X; we don't need an ambiguous 'api' link that someone must figure out wtf to do with. -- lucidimagination.com - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Solr 3.6.0 javadocs are missing from the site
just brought this up on #lucene-dev .. the Solr 3.6.0 javadocs are gone from the site, and all javadoc URLs were broken as a result. Solr javadocs used to be unversioned and were here... http://lucene.staging.apache.org/solr/api ...until 5 minutes ago; all links to those docs were broken until i added a quick .htaccess rule to redirect them to the new 4.0-ALPHA docs. past discussion when we switched to the cms and then released 3.6 was that we would start versioning the solr javadocs just like the lucene-core javadocs, and have both the latest 3.6.X and 4.X versions on the site at distinct urls -- the goal being that http://lucene.staging.apache.org/solr/api could always redirect to the latest. I'm happy to do the htaccess work and tweak the MarkDown page content to make all this more clear -- but the first thing we need to do is get those 3.6.0 javadocs back up on the production site in some way, so we can start linking/redirecting back to them for existing 3.6 users. does anyone know how to do that, given the way the extpaths.txt stuff works? -Hoss - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: Solr 3.6.0 javadocs are missing from the site
On Tue, Jul 3, 2012 at 4:16 PM, Chris Hostetter hossman_luc...@fucit.org wrote: just brought this up on #lucene-dev .. the Solr 3.6.0 javadocs are gone from the site, and all javadoc URLs were broken as a result they aren't broken: i fixed all these api/ links across the website. where is a broken link? past discussion when we switched to the cms and then released 3.6 was that we would start versioning the solr javadocs just like the lucene-core -1 to the Release Manager doing refactoring on the website. RM does releasing. If people want to refactor things from unversioned to versioned, do that separately please. RM already has enough to do. javadocs, and have both the latest 3.6.X and 4.X versions on the site at distinct urls -- the goal being that http://lucene.staging.apache.org/solr/api could always redirect to the latest. I don't like that: lucene doesn't need this. why does solr? -- lucidimagination.com - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-4187) remove nightly/ directory under svn
[ https://issues.apache.org/jira/browse/LUCENE-4187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13406002#comment-13406002 ] Dawid Weiss commented on LUCENE-4187: - Since we're using ivy anyway lots of scripting opportunities open up (may be much much simpler to implement something in jython/ ruby/ beanshell/ groovy/ whatever than as an ANT task). remove nightly/ directory under svn --- Key: LUCENE-4187 URL: https://issues.apache.org/jira/browse/LUCENE-4187 Project: Lucene - Java Issue Type: Task Components: general/build Reporter: Robert Muir -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-4187) remove nightly/ directory under svn
[ https://issues.apache.org/jira/browse/LUCENE-4187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13406025#comment-13406025 ] Uwe Schindler commented on LUCENE-4187:

Robert: Thanks for opening the issue. The Windows builds are already pure ant builds as well. I will start with the half-hourly builds. The nightlies are more complicated because they also copy artifacts outside the checkout tree, so they also require reconfiguring the Jenkins javadocs and artifact archival.
[jira] [Updated] (SOLR-2592) Pluggable shard lookup mechanism for SolrCloud
[ https://issues.apache.org/jira/browse/SOLR-2592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Garski updated SOLR-2592:

Attachment: SOLR-2592_rev_2.patch

Updated patch (SOLR-2592_rev_2.patch) to remove custom similarities from the test configs, as they were causing the test to hang.

Pluggable shard lookup mechanism for SolrCloud
Key: SOLR-2592
URL: https://issues.apache.org/jira/browse/SOLR-2592
Project: Solr
Issue Type: New Feature
Components: SolrCloud
Affects Versions: 4.0
Reporter: Noble Paul
Attachments: SOLR-2592.patch, SOLR-2592_rev_2.patch, dbq_fix.patch, pluggable_sharding.patch, pluggable_sharding_V2.patch

If the data in a cloud can be partitioned on some criteria (say range, hash, attribute value, etc.), it will be easy to narrow the search down to a smaller subset of shards and in effect achieve more efficient search.
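The hash-partitioning idea the issue describes can be sketched roughly as follows. This is an illustrative toy, not Solr's actual routing code; the class and method names (HashShardRouter, selectShard) are assumptions made up for the example.

```java
// Hypothetical sketch of hash-based shard routing, the kind of pluggable
// lookup SOLR-2592 proposes: a document id deterministically maps to one
// of N shards, so a search for that id only needs to hit one shard.
public class HashShardRouter {

    // Map a document id to one of numShards buckets using its hash.
    public static int selectShard(String docId, int numShards) {
        // Mask the sign bit instead of Math.abs, which overflows
        // for Integer.MIN_VALUE.
        return (docId.hashCode() & 0x7fffffff) % numShards;
    }

    public static void main(String[] args) {
        int shard = selectShard("doc-42", 4);
        System.out.println("doc-42 -> shard " + shard);
        // The same id always routes to the same shard:
        assert shard == selectShard("doc-42", 4);
    }
}
```

The point of making the lookup pluggable is exactly that this function could be swapped for a range- or attribute-based one without changing the distributed search machinery.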
[JENKINS] Lucene-Solr-4.x-Windows-Java6-64 - Build # 225 - Failure!
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-4.x-Windows-Java6-64/225/

1 tests failed.

REGRESSION: org.apache.solr.spelling.suggest.SuggesterTSTTest.testSuggestions

Error Message: Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
	at __randomizedtesting.SeedInfo.seed([73D37EB26457640F:D50F4747AA6333F1]:0)
	at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:461)
	at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:428)
	at org.apache.solr.spelling.suggest.SuggesterTest.testSuggestions(SuggesterTest.java:71)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1969)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.access$1100(RandomizedRunner.java:132)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:814)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:875)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:889)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
	at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:32)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
	at org.apache.lucene.util.TestRuleReportUncaughtExceptions$1.evaluate(TestRuleReportUncaughtExceptions.java:68)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:821)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.access$700(RandomizedRunner.java:132)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$3$1.run(RandomizedRunner.java:669)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:695)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:734)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:745)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleReportUncaughtExceptions$1.evaluate(TestRuleReportUncaughtExceptions.java:68)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
	at org.apache.lucene.util.TestRuleIcuHack$1.evaluate(TestRuleIcuHack.java:51)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
	at org.apache.lucene.util.TestRuleNoInstanceHooksOverrides$1.evaluate(TestRuleNoInstanceHooksOverrides.java:53)
	at org.apache.lucene.util.TestRuleNoStaticHooksShadowing$1.evaluate(TestRuleNoStaticHooksShadowing.java:52)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:36)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:605)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.access$400(RandomizedRunner.java:132)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$2.run(RandomizedRunner.java:551)
Caused by: java.lang.RuntimeException: REQUEST FAILED: xpath=//lst[@name='spellcheck']/lst[@name='suggestions']/lst[@name='ac']/int[@name='numFound'][.='2']
xml response was:
<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">0</int><int
[jira] [Commented] (LUCENE-4187) remove nightly/ directory under svn
[ https://issues.apache.org/jira/browse/LUCENE-4187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13406036#comment-13406036 ] Dawid Weiss commented on LUCENE-4187:

In general, +10 for the idea -- this is important not only for Jenkins but for any build system (I myself use Atlassian Bamboo).
[jira] [Commented] (LUCENE-4187) remove nightly/ directory under svn
[ https://issues.apache.org/jira/browse/LUCENE-4187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13406039#comment-13406039 ] Robert Muir commented on LUCENE-4187:

+1, let's get those fixed first. Then anyone can easily set up a Jenkins to run tests + checks.
[jira] [Commented] (LUCENE-4187) remove nightly/ directory under svn
[ https://issues.apache.org/jira/browse/LUCENE-4187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13406041#comment-13406041 ] Robert Muir commented on LUCENE-4187:

And you could also use 'ant jenkins' locally when you want to run these checks yourself. This is not really easy today (I run "ant clean test javadocs-lint rat-sources" or something like that).
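The aggregate target Robert describes could look roughly like the following Ant fragment. This is a sketch only: the target name comes from the issue, but the dependency list simply mirrors the checks he mentions running by hand, and none of this is the actual Lucene build file.

```xml
<!-- Hypothetical aggregate target for the 'ant jenkins' idea: run the
     same clean/test/lint/license checks locally that CI would run.
     Dependency target names are assumptions, not the real build. -->
<target name="jenkins"
        depends="clean, test, javadocs-lint, rat-sources"
        description="Run the full set of CI checks locally"/>
```

With such a target, a fresh Jenkins (or Bamboo) job reduces to invoking `ant jenkins`, and any extra knobs stay ordinary ant properties overridable with -D.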
RE: Solr 3.6.0 javadocs are missing from the site
I restored the Solr 3.6.0 javadocs at http://lucene.apache.org/solr/api-3_6_0/:

- Added the line solr/api-3_6_0 to extpaths.txt on the CMS source tree.
- Restored and renamed the javadocs directory:

  svn copy -m "bring Solr 3.6.0 javadocs back to life" \
    https://svn.apache.org/repos/infra/websites/production/lucene/content/solr/api@824226 \
    https://svn.apache.org/repos/infra/websites/production/lucene/content/solr/api-3_6_0

Steve

-----Original Message-----
From: Robert Muir [mailto:rcm...@gmail.com]
Sent: Tuesday, July 03, 2012 4:22 PM
To: dev@lucene.apache.org
Subject: Re: Solr 3.6.0 javadocs are missing from the site

On Tue, Jul 3, 2012 at 4:16 PM, Chris Hostetter hossman_luc...@fucit.org wrote:
> just brought this up on #lucene-dev .. the Solr 3.6.0 javadocs are gone
> from the site, and all javadoc URLs were broken as a result

they aren't broken: i fixed all these api/ links across the website. where is a broken link?

> past discussion when we switched to the cms and then released 3.6 was that
> we would start versioning the solr javadocs just like the lucene-core

-1 to the Release Manager doing refactoring on the website. RM does releasing. If people want to refactor things from unversioned to versioned, do that separately please. RM already has enough to do.

> javadocs, and have both the latest 3.6.X and 4.X versions on the site at
> distinct urls -- the goal being that http://lucene.staging.apache.org/solr/api
> could always redirect to the latest.

I don't like that: lucene doesnt need this. why does solr?

--
lucidimagination.com
[jira] [Updated] (SOLR-3585) processing updates in multiple threads
[ https://issues.apache.org/jira/browse/SOLR-3585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-3585:

Attachment: SOLR-3585.patch

Folks, why are you so skeptical? Don't you need to utilize your boxes to get indexing to happen faster? The other potential usage I see is bringing multithreading back to DIH: keep the old core single-threaded, then put this multithreaded consumer behind it and push into it from improved datasources, which will have a buffer and separate threads.

OK, here is the javadoc from the recent patch. The most recent update has a correct and comprehensive halting implementation.

/**
 * Forks incoming adds into multiple threads.
 * Instead of regular chaining, it chains on its own with the chain specified in
 * the "backing.chain" update parameter.
 * Supports two solrconfig.xml parameters:
 * <ul>
 *   <li>bufferSize - length of the buffer for add commands</li>
 *   <li>pipesNumber - number of parallel threads for handling updates</li>
 * </ul>
 * It spawns its own cached thread pool and shuts it down on core closing.
 * AddUpdate commands are queued for background execution by spawned processors.
 * All other commands wait until all queued tasks are completed, and then are
 * handled in the calling thread (request).
 * If one of the update processor threads (aka pipes) catches an exception, all
 * other pipes are stopped synchronously (but not at the same moment), and the
 * root cause is propagated to the request thread.
 * For queuing, a spin-delay pattern is used to prevent deadlock when the buffer
 * is full and the queuing thread is blocked, but the buffer can't be processed
 * due to dead pipes.
 */

processing updates in multiple threads
Key: SOLR-3585
URL: https://issues.apache.org/jira/browse/SOLR-3585
Project: Solr
Issue Type: Improvement
Components: update
Affects Versions: 4.0
Reporter: Mikhail Khludnev
Priority: Minor
Attachments: SOLR-3585.patch, multithreadupd.patch

Hello, I'd like to contribute an update processor which forks many threads that concurrently process the stream of commands. It may be beneficial for users who stream many docs through a single request.
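The buffer-plus-pipes design in that javadoc can be sketched with standard java.util.concurrent primitives. This is an illustrative toy, not the patch's code: the class and method names (ForkingUpdateSketch, processAdds) are invented, only the parameter names bufferSize and pipesNumber come from the javadoc, and a simple poison-pill shutdown stands in for the patch's spin-delay halting logic.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of SOLR-3585's fan-out: add commands go into a bounded
// buffer and are drained by pipesNumber worker threads ("pipes").
public class ForkingUpdateSketch {
    public static int processAdds(List<String> docs, int bufferSize, int pipesNumber)
            throws InterruptedException {
        BlockingQueue<String> buffer = new ArrayBlockingQueue<>(bufferSize);
        ExecutorService pipes = Executors.newFixedThreadPool(pipesNumber);
        AtomicInteger processed = new AtomicInteger();
        String poison = "\u0000EOF"; // sentinel telling a pipe to exit
        for (int i = 0; i < pipesNumber; i++) {
            pipes.execute(() -> {
                try {
                    String doc;
                    while (!(doc = buffer.take()).equals(poison)) {
                        processed.incrementAndGet(); // stand-in for the real add
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        for (String doc : docs) {
            buffer.put(doc); // blocks when the buffer is full (backpressure)
        }
        for (int i = 0; i < pipesNumber; i++) {
            buffer.put(poison); // one sentinel per pipe
        }
        pipes.shutdown();
        pipes.awaitTermination(10, TimeUnit.SECONDS);
        return processed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        List<String> docs = new ArrayList<>();
        for (int i = 0; i < 100; i++) docs.add("doc-" + i);
        System.out.println(processAdds(docs, 8, 4)); // prints 100
    }
}
```

The blocking `put` on a bounded queue is what gives the backpressure the javadoc describes; the patch's extra complexity lies in propagating a pipe's failure back to the request thread, which this sketch omits.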
[jira] [Commented] (SOLR-3572) Make Schema-Browser show custom similarities
[ https://issues.apache.org/jira/browse/SOLR-3572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13406074#comment-13406074 ] Stefan Matheis (steffkes) commented on SOLR-3572:

Hey Greg, just a short note: we have {{String.prototype.esc}} for escaping strings in the UI -- just to be sure we have no injections. If you could change that, I'm +1 for the UI part.

Make Schema-Browser show custom similarities
Key: SOLR-3572
URL: https://issues.apache.org/jira/browse/SOLR-3572
Project: Solr
Issue Type: Bug
Affects Versions: 4.0
Reporter: Greg Bowyer
Assignee: Greg Bowyer
Attachments: SOLR-3572-similarity-schemabrowser.patch

When a custom similarity is defined in the Solr schema, it is helpful to have the schema browser show the custom similarity.
[jira] [Commented] (LUCENE-4099) Remove generics from SpatialStrategy and remove SpatialFieldInfo
[ https://issues.apache.org/jira/browse/LUCENE-4099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13406089#comment-13406089 ] Ryan McKinley commented on LUCENE-4099:

This looks fine; it looks like the one impact is that the Strategy would be instantiated once for each field rather than once for each fieldType (in Solr terms).

Remove generics from SpatialStrategy and remove SpatialFieldInfo
Key: LUCENE-4099
URL: https://issues.apache.org/jira/browse/LUCENE-4099
Project: Lucene - Java
Issue Type: Improvement
Components: modules/spatial
Reporter: Chris Male
Assignee: David Smiley
Priority: Minor
Attachments: LUCENE-4099_remove_SpatialFieldInfo_and_put_fieldName_into_Strategy_instead_of_methods.patch

Some time ago I added SpatialFieldInfo as a way for SpatialStrategys to declare what information they needed per request. This meant that a Strategy could be used across multiple requests. However, it doesn't really need to be that way any more; Strategies are light to instantiate, and the generics are just clumsy and annoying. Instead, Strategies should just define what they need in their constructor.
Re: Solr 3.6.0 javadocs are missing from the site
: just brought this up on #lucene-dev .. the Solr 3.6.0 javadocs are gone from
: the site, and all javadoc URLs were broken as a result
:
: they aren't broken: i fixed all these api/ links across the website.
: where is a broken link?

Until I added the redirect in .htaccess (so they would start pointing to the 4.0-alpha docs), javadoc URLs like this one (from Google, the wiki, bookmarks, blogs, etc.) were all broken...

http://lucene.apache.org/solr/api/org/apache/solr/schema/ExternalFileField.html

: past discussion when we switched to the cms and then released 3.6 was that
: we would start versioning the solr javadocs just like the lucene-core
:
: -1 to the Release Manager doing refactoring on the website. RM does
: releasing. If people want to refactor things from unversioned to
: versioned, do that separately please. RM already has enough to do.

Who the fuck said anything about expecting the RM to refactor the website? I didn't expect you to do anything special with the javadocs, just upload the new ones (which I know was a big-ass fucking pain, and I appreciate how much time/effort you put into dealing with it), and then other people (like me) could volunteer to add markdown pages, fix links, etc.

If it really bothered you that the 3.6.0 javadocs were using the URL /solr/api and you wanted them to be something else, that's fine: you could have done an svn mv at any time -- that didn't have to be something you took upon yourself to do as the release manager.

My concerns are/were simply:

a) the Solr 3.6 javadocs should still be online at some path for the foreseeable future -- just like the lucene-core 3.6 javadocs are.

b) we shouldn't break existing URLs -- since the /solr/api/... URLs have been around for a long time and are linked to from lots of places, we should do what we can to make sure they always redirect to somewhere useful.

I'm happy to deal with all of this; I'm just asking folks not to delete valid and useful docs from the website. If you don't like the URL/path, then move them, but please don't delete them.

: javadocs, and have both the latest 3.6.X and 4.X versions on the site at
: distinct urls -- the goal being that
: http://lucene.staging.apache.org/solr/api could always redirect to the
: latest.
:
: I don't like that: lucene doesnt need this. why does solr?

Because the URL has always existed and people use it. It needs to point/redirect somewhere, and it's trivial (by editing .htaccess) for it to point to the most recent version, so we might as well do that.

-Hoss
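The .htaccess redirect being discussed could be as small as a single mod_alias rule. This is a sketch, not the actual site configuration: the versioned directory name (api-4_0_0) is an assumption about the naming scheme, and a temporary (302) redirect is chosen so the target can keep moving to the latest release.

```apache
# Hypothetical .htaccess rule: send legacy unversioned /solr/api links
# to the current versioned javadocs. Directory name is assumed.
Redirect temp /solr/api http://lucene.apache.org/solr/api-4_0_0
```

Using a temporary rather than permanent redirect matters here: browsers and crawlers won't cache the old target, so when 4.1 javadocs land only this one line needs updating.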
Re: Solr 3.6.0 javadocs are missing from the site
My problem is the unversioned URL: with it, links break, but in subtle ways. Another horrible example of this is old news blurbs for previous releases, because they use an unversioned redirect URL, so they go to the wrong place. That's why I removed old news blurbs. These unversioned links should die, die, die.

But can we compromise here? Instead, can we have api-3 and api-4, etc.? We realize some external links could still break, but given our back-compat policy it should be rare...

On Jul 3, 2012 5:58 PM, Chris Hostetter hossman_luc...@fucit.org wrote:

: just brought this up on #lucene-dev .. the Solr 3.6.0 javadocs are gone from
: the site, and all javadoc URLs were broken as a result
:
: they aren't broken: i fixed all these api/ links across the website.
: where is a broken link?

Until I added the redirect in .htaccess (so they would start pointing to the 4.0-alpha docs), javadoc URLs like this one (from Google, the wiki, bookmarks, blogs, etc.) were all broken...

http://lucene.apache.org/solr/api/org/apache/solr/schema/ExternalFileField.html

: past discussion when we switched to the cms and then released 3.6 was that
: we would start versioning the solr javadocs just like the lucene-core
:
: -1 to the Release Manager doing refactoring on the website. RM does
: releasing. If people want to refactor things from unversioned to
: versioned, do that separately please. RM already has enough to do.

Who the fuck said anything about expecting the RM to refactor the website? I didn't expect you to do anything special with the javadocs, just upload the new ones (which I know was a big-ass fucking pain, and I appreciate how much time/effort you put into dealing with it), and then other people (like me) could volunteer to add markdown pages, fix links, etc.

If it really bothered you that the 3.6.0 javadocs were using the URL /solr/api and you wanted them to be something else, that's fine: you could have done an svn mv at any time -- that didn't have to be something you took upon yourself to do as the release manager.

My concerns are/were simply:

a) the Solr 3.6 javadocs should still be online at some path for the foreseeable future -- just like the lucene-core 3.6 javadocs are.

b) we shouldn't break existing URLs -- since the /solr/api/... URLs have been around for a long time and are linked to from lots of places, we should do what we can to make sure they always redirect to somewhere useful.

I'm happy to deal with all of this; I'm just asking folks not to delete valid and useful docs from the website. If you don't like the URL/path, then move them, but please don't delete them.

: javadocs, and have both the latest 3.6.X and 4.X versions on the site at
: distinct urls -- the goal being that
: http://lucene.staging.apache.org/solr/api could always redirect to the
: latest.
:
: I don't like that: lucene doesnt need this. why does solr?

Because the URL has always existed and people use it. It needs to point/redirect somewhere, and it's trivial (by editing .htaccess) for it to point to the most recent version, so we might as well do that.

-Hoss
Re: Solr 3.6.0 javadocs are missing from the site
: these unversioned links should die, die, die. but can we compromise here?
: instead can we have api-3 and api-4, etc. we realize some external links
: could still break, but given our back compat policy it should be rare...

That's exactly what I'm saying .. sarowe already put the 3.x javadocs back on the site using api-3_...; I'm just fixing up a few places to link to them where appropriate. But anyone who has old bookmarked links to /solr/api/... should have those links go somewhere as long as the classes still exist (also already fixed).

-Hoss
[jira] [Created] (SOLR-3593) add /solr/api/index.html
Hoss Man created SOLR-3593:

Summary: add /solr/api/index.html
Key: SOLR-3593
URL: https://issues.apache.org/jira/browse/SOLR-3593
Project: Solr
Issue Type: Bug
Reporter: Hoss Man
Assignee: Hoss Man

Solr historically only had one version of the javadocs on the site at a time. Particularly now that we have 3.6.X and 4.X concurrently, this needs to change. Both sets of javadoc are already on the site, and /solr/tutorial.html already links to both versions appropriately, but there are still some improvements that should be made...

* add a /solr/api/index.html file that mirrors the type of info listed on /core/documentation.html
** we could use the same documentation.html name, but since historically lots of people have bookmarked/linked to /solr/api, reusing that path as the landing page for finding docs about multiple versions seems better
** making this visible will probably mean needing to dial in the existing /solr/api redirect more
* update the Javadocs link in the right nav to link to this page
[jira] [Commented] (SOLR-3593) add /solr/api/index.html
[ https://issues.apache.org/jira/browse/SOLR-3593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13406115#comment-13406115 ] Hoss Man commented on SOLR-3593:

FWIW: I'm happy to do this; I only created the issue because I don't have time to do it today and I didn't want to forget ... but if anybody else wants to jump on it, knock yourself out.
Re: Solr 3.6.0 javadocs are missing from the site
That's not what I mean: api/ should die. But I'm OK with api-3 pointing to 3.6.0 and so on, as the links should never disappear within a major release (except heavy breaks).

On Jul 3, 2012 6:19 PM, Chris Hostetter hossman_luc...@fucit.org wrote:

: these unversioned links should die, die, die. but can we compromise here?
: instead can we have api-3 and api-4, etc. we realize some external links
: could still break, but given our back compat policy it should be rare...

That's exactly what I'm saying .. sarowe already put the 3.x javadocs back on the site using api-3_...; I'm just fixing up a few places to link to them where appropriate. But anyone who has old bookmarked links to /solr/api/... should have those links go somewhere as long as the classes still exist (also already fixed).

-Hoss
[JENKINS] Lucene-Solr-trunk-Windows-Java6-64 - Build # 681 - Failure!
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-trunk-Windows-Java6-64/681/

1 tests failed.

REGRESSION: org.apache.solr.spelling.suggest.SuggesterFSTTest.testSuggestions

Error Message: Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
    at __randomizedtesting.SeedInfo.seed([288E520B2217BF89:8E526BFEEC23E877]:0)
    at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:461)
    at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:428)
    at org.apache.solr.spelling.suggest.SuggesterTest.testSuggestions(SuggesterTest.java:71)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1969)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.access$1100(RandomizedRunner.java:132)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:814)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:875)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:889)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
    at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:32)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at org.apache.lucene.util.TestRuleReportUncaughtExceptions$1.evaluate(TestRuleReportUncaughtExceptions.java:68)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:821)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.access$700(RandomizedRunner.java:132)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$3$1.run(RandomizedRunner.java:669)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:695)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:734)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:745)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleReportUncaughtExceptions$1.evaluate(TestRuleReportUncaughtExceptions.java:68)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
    at org.apache.lucene.util.TestRuleIcuHack$1.evaluate(TestRuleIcuHack.java:51)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at org.apache.lucene.util.TestRuleNoInstanceHooksOverrides$1.evaluate(TestRuleNoInstanceHooksOverrides.java:53)
    at org.apache.lucene.util.TestRuleNoStaticHooksShadowing$1.evaluate(TestRuleNoStaticHooksShadowing.java:52)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:36)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:605)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.access$400(RandomizedRunner.java:132)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$2.run(RandomizedRunner.java:551)
Caused by: java.lang.NullPointerException
    at org.apache.lucene.search.suggest.fst.FSTCompletionLookup.lookup(FSTCompletionLookup.java:239)
    at org.apache.solr.spelling.suggest.Suggester.getSuggestions(Suggester.java:188)
    at
RE: Solr 3.6.0 javadocs are missing from the site
I would make a permanent redirect (301) from api -> api-3. api-3 -> api 3.6.0 would be a 302 redirect, as it is official. This way browsers will update their bookmarks and the Google bot will forget about the dying api, too.

- Uwe Schindler H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de/ eMail: u...@thetaphi.de

From: Robert Muir [mailto:rcm...@gmail.com] Sent: Wednesday, July 04, 2012 12:23 AM To: dev@lucene.apache.org Subject: Re: Solr 3.6.0 javadocs are missing from the site

that's not what I mean. api/ should die. but I'm ok with api-3 that points to 3.6.0 and so on, as the links should never disappear within a major release (except heavy breaks)

On Jul 3, 2012 6:19 PM, Chris Hostetter hossman_luc...@fucit.org wrote:
: these unversioned links should die, die, die. but can we compromise here?
: instead can we have api-3 and api-4, etc. we realize some external links
: could still break, but given our back compat policy it should be rare...

that's exactly what i'm saying .. sarowe already put the 3.x javadocs back on the site using api-3_... i'm just fixing up a few places to link to them where appropriate. but anyone who has old bookmarked links to /solr/api/... should have those links go somewhere as long as the classes still exist (also already fixed)

-Hoss

- To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
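Uwe's 301/302 scheme maps onto two mod_alias rules. A sketch for Apache httpd — the concrete paths here are purely illustrative (the thread only says the 3.x docs sit under api-3_...), so they would need adjusting to the site's real layout:

```apache
# 301: the unversioned tree is gone for good; browsers and crawlers drop it
Redirect permanent /solr/api /solr/api-3

# 302: api-3 currently resolves to the 3.6.0 docs but will move with each 3.x release
Redirect temp /solr/api-3 /solr/api-3-6-0
```

The permanent/temporary split is what makes bookmarks self-heal while keeping api-3 free to track the newest 3.x release.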
[jira] [Updated] (LUCENE-4161) Make PackedInts usable by codecs
[ https://issues.apache.org/jira/browse/LUCENE-4161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adrien Grand updated LUCENE-4161: - Attachment: LUCENE-4161.patch

New patch. I renamed 'n' to 'iterations', fixed the style issues and improved documentation. All core tests pass, including the backward-compatibility tests I added in r1356228. I think it is a good idea to work on int[] encoding/decoding in a separate issue, given how big this patch already is.

Make PackedInts usable by codecs Key: LUCENE-4161 URL: https://issues.apache.org/jira/browse/LUCENE-4161 Project: Lucene - Java Issue Type: Improvement Components: core/store Reporter: Adrien Grand Assignee: Adrien Grand Priority: Minor Attachments: LUCENE-4161.patch, LUCENE-4161.patch

Some codecs might be interested in using PackedInts.{Writer,Reader,ReaderIterator} to read and write fixed-size values efficiently. The problem is that the serialization format is self-contained, and always writes the name of the codec, its version, its number of bits per value and its format. For example, if you want to use packed ints to store your postings list, this is a lot of overhead (at least ~60 bytes per term, in case you only use one Writer per term, more otherwise). Users should be able to externalize the storage of metadata to save space. For example, to use PackedInts to store a postings list, one should be able to store the codec name, its version and the number of bits per doc in the header of the terms+postings list instead of having to write it once (or more!) per term.

-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
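The overhead being discussed is concrete: a self-contained stream repeats codec name, version, bits per value and format in every header, while the packed payload itself is only ceil(n * bitsPerValue / 8) bytes. A stdlib-only sketch (deliberately not Lucene's actual PackedInts API) of what "externalized metadata" means — the caller carries bitsPerValue out of band, so the stream has no header at all:

```java
import java.util.Arrays;

public class PackedDemo {
    // Pack values using exactly bitsPerValue bits each, MSB first.
    // The caller must remember bitsPerValue (the externalized metadata).
    static byte[] pack(long[] values, int bitsPerValue) {
        byte[] out = new byte[(values.length * bitsPerValue + 7) / 8];
        int bitPos = 0;
        for (long v : values) {
            for (int b = bitsPerValue - 1; b >= 0; b--, bitPos++) {
                if (((v >>> b) & 1L) != 0) {
                    out[bitPos >> 3] |= (byte) (0x80 >>> (bitPos & 7));
                }
            }
        }
        return out;
    }

    static long[] unpack(byte[] packed, int count, int bitsPerValue) {
        long[] out = new long[count];
        int bitPos = 0;
        for (int i = 0; i < count; i++) {
            long v = 0;
            for (int b = 0; b < bitsPerValue; b++, bitPos++) {
                v = (v << 1) | ((packed[bitPos >> 3] >>> (7 - (bitPos & 7))) & 1L);
            }
            out[i] = v;
        }
        return out;
    }

    public static void main(String[] args) {
        long[] docDeltas = {3, 1, 4, 1, 5, 9, 2, 6};
        int bitsPerValue = 4; // enough for values < 16, stored once in the terms dict
        byte[] packed = pack(docDeltas, bitsPerValue);
        System.out.println(packed.length); // 4 payload bytes, zero header bytes
        System.out.println(Arrays.equals(docDeltas, unpack(packed, docDeltas.length, bitsPerValue)));
    }
}
```

Eight 4-bit deltas cost 4 payload bytes; a self-contained ~60-byte header per term would dwarf that for short postings lists, which is exactly the issue's motivation.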
[jira] [Commented] (LUCENE-4080) SegmentReader.numDeletedDocs() sometimes gives an incorrect numDeletedDocs
[ https://issues.apache.org/jira/browse/LUCENE-4080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13406168#comment-13406168 ] Michael McCandless commented on LUCENE-4080:

New patch looks great! I like this solution: much simpler. Maybe just add a comment explaining why we must sometimes make a new reader? (Ie, that deletes could have snuck in after we pulled the merge reader but before the sync block where we get the live docs)? It's nice to simply pass around the reader and not the pair of liveDocs + reader... We can remove the liveDocs and delCount args to SegmentMerger.add now? +1, thanks Adrien!

SegmentReader.numDeletedDocs() sometimes gives an incorrect numDeletedDocs -- Key: LUCENE-4080 URL: https://issues.apache.org/jira/browse/LUCENE-4080 Project: Lucene - Java Issue Type: Bug Components: core/index Affects Versions: 4.0, 4.1 Reporter: Adrien Grand Priority: Trivial Fix For: 4.1 Attachments: LUCENE-4080.patch, LUCENE-4080.patch, LUCENE-4080.patch

At merge time, SegmentReader sometimes gives an incorrect value for numDeletedDocs. From LUCENE-2357: bq. As far as I know, [SegmentReader.numDeletedDocs() is] only unreliable in this context (SegmentReader passed to SegmentMerger for merging); this is because we allow newly marked deleted docs to happen concurrently up until the moment we need to pass the SR instance to the merger (search for // Must sync to ensure BufferedDeletesStream in IndexWriter.java) ... but it would be nice to fix that, so I think open a new issue (it won't block this one)? We should be able to make a new SR instance, sharing the same core as the current one but using the correct delCount... bq. It would be cleaner (but I think hairier) to create a new SR for merging that holds the correct delCount, but let's do that under the separate issue. bq. it would be best if the SegmentReader's numDeletedDocs were always correct, but, fixing that in IndexWriter is somewhat tricky.
Ie, the fix could be hairy but the end result (SegmentReader.numDeletedDocs can always be trusted) would be cleaner...
[jira] [Commented] (LUCENE-4161) Make PackedInts usable by codecs
[ https://issues.apache.org/jira/browse/LUCENE-4161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13406190#comment-13406190 ] Michael McCandless commented on LUCENE-4161: +1, patch looks great!
[jira] [Commented] (SOLR-1725) Script based UpdateRequestProcessorFactory
[ https://issues.apache.org/jira/browse/SOLR-1725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13406247#comment-13406247 ] Hoss Man commented on SOLR-1725:

bq. I think this just demands that we implement an add-only type of capability such that the entire script is implicitly inside a processAdd call. ...
bq. Perhaps the configuration in solrconfig for this processor would have a processAddScript=myscript.rb and a processDeleteScript= ... etc. as an alternative to script=script.rb

That's all fine and good and i have no objection to any of it -- but as i tried to explain before, those alternative ideas still have a raft of questions related to what the lifecycle of the scripts should be, what the bindings should be for the relevant objects (SolrQueryRequest, AddDocCmd, etc...) and how they should be evaluated (CompiledScript vs script that is evaled on every processAdd, etc...). Hence my point that i think we should commit this as StatelessScriptUpdateProcessorFactory, where the script processing mirrors the lifecycle of a native java UpdateProcessor, and iterate with other approaches, using other factories, in other jira issues -- if we can refactor common stuff out then great, but we shouldn't try to overthink generalizing the internals of this implementation in anticipation of a hypothetical future class that will likely be just as easy to write independently.

bq. Do we really need to support multiple scripts inside a definition of a (Stateless)ScriptUpdateProcessorFactory? It just seems added looping when why not just define two different StatelessScriptUpdateProcessorFactory's each with an individual script? (or, combine the logic of the scripts into a single script if these were my scripts)

good question.

That looping code was already there when i started looking at the patch -- i left it in mainly because:
* it was already written and didn't overly complicate things
* it seemed like it would probably be easier/simpler for a lot of users to just add a {{<str name="script">foo.js</str>}} when they wanted to add a script than to add an entire new {{<processor>...</processor>}}
* we use a single ScriptEngineManager per request, per UpdateProcessor instance. _In theory_ it will be more efficient for some languages to generate ScriptEngines for each script from the same ScriptEngineManager than from distinct ScriptEngineManagers (ie: imagine if your scripting language was Java: configuring two scripts in a single {{processor}} means you spin up one JVM per request; if you put each script in its own {{processor}} you spin up 2 JVMs per request)
* according to the javax.script javadocs, because we use a single ScriptEngineManager per request, _in theory_ any variable in global scope will be common across all the script files (for that request). (In my JVM, this doesn't work for multiple javascript scripts that try to refer to the same global vars; no idea if other javax.script implementations support it)

bq. What if I have a main file and a library file? How would that work?

No freaking clue ...
* The javax.script APIs provide no mechanism for Java code to specify that modules should be loaded before evaluating a script, or any way to configure where the engine should look for modules if a script attempts to load them using its own native syntax
* javascript doesn't even have a language mechanism (that i know of) for a script file to specify that it wants to import another file/script, so i don't even know of a decent way to test what happens if you try in a language that does ... (ie: will it try to use the ScriptEngineManager's classloader? will it try to read from the system default path for that language's libs?)
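The manager-level global bindings Hoss mentions can be poked at with a few lines of plain javax.script (in the JDK since 6). This sketch only exercises the manager's own Bindings; note that getEngineByName may return null on JVMs without a bundled JavaScript engine, so it guards for that rather than assuming one exists:

```java
import javax.script.Bindings;
import javax.script.ScriptContext;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class SharedGlobals {
    public static void main(String[] args) {
        ScriptEngineManager manager = new ScriptEngineManager();
        // Values put on the manager land in its global Bindings, which every
        // engine the manager creates sees as GLOBAL_SCOPE -- "in theory", as the
        // thread notes, since not all implementations expose them to script code.
        manager.put("shared", 42);
        System.out.println(manager.getBindings().get("shared"));

        ScriptEngine engine = manager.getEngineByName("javascript");
        if (engine != null) {
            // The same value is visible through the engine's GLOBAL_SCOPE bindings.
            Bindings global = engine.getBindings(ScriptContext.GLOBAL_SCOPE);
            System.out.println(global.get("shared"));
        }
    }
}
```

Two engines created by one manager share that GLOBAL_SCOPE map, which is the efficiency (and cross-script variable sharing) argument for one ScriptEngineManager per request.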
Script based UpdateRequestProcessorFactory -- Key: SOLR-1725 URL: https://issues.apache.org/jira/browse/SOLR-1725 Project: Solr Issue Type: New Feature Components: update Affects Versions: 1.4 Reporter: Uri Boness Assignee: Erik Hatcher Labels: UpdateProcessor Fix For: 4.1 Attachments: SOLR-1725-rev1.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch

A script based UpdateRequestProcessorFactory (uses JDK6 script engine support). The main goal of this plugin is to be able to configure/write update processors without the need to write and package Java code. The update request processor factory enables writing update processors in scripts located in the {{solr.solr.home}} directory. The factory accepts one (mandatory) configuration parameter named {{scripts}} which accepts a
Re: svn commit: r1357027 - /lucene/dev/trunk/solr/core/src/test/org/apache/solr/search/TestRealTimeGet.java
Yonik: would it make sense to at least trigger this in the nightly?...

final int deleteByQueryPercent = TEST_NIGHTLY ? (1+random().nextInt(7)) : 0;

: tests: disable stress DBQ reorder
:
: Modified:
:     lucene/dev/trunk/solr/core/src/test/org/apache/solr/search/TestRealTimeGet.java
:
: Modified: lucene/dev/trunk/solr/core/src/test/org/apache/solr/search/TestRealTimeGet.java
: URL: http://svn.apache.org/viewvc/lucene/dev/trunk/solr/core/src/test/org/apache/solr/search/TestRealTimeGet.java?rev=1357027&r1=1357026&r2=1357027&view=diff
: ==============================================================================
: --- lucene/dev/trunk/solr/core/src/test/org/apache/solr/search/TestRealTimeGet.java (original)
: +++ lucene/dev/trunk/solr/core/src/test/org/apache/solr/search/TestRealTimeGet.java Wed Jul 4 00:54:53 2012
: @@ -1014,7 +1014,7 @@ public class TestRealTimeGet extends Sol
:      final int commitPercent = 5 + random().nextInt(20);
:      final int softCommitPercent = 30+random().nextInt(75); // what percent of the commits are soft
:      final int deletePercent = 4+random().nextInt(25);
: -    final int deleteByQueryPercent = 1+random().nextInt(7);
: +    final int deleteByQueryPercent = 0; // 1+random().nextInt(7);
:      final int ndocs = 5 + (random().nextBoolean() ? random().nextInt(25) : random().nextInt(200));
:      int nWriteThreads = 5 + random().nextInt(25);

-Hoss
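Hoss's suggestion amounts to gating the stress knob on the nightly flag rather than zeroing it out. A minimal stdlib-only sketch of that pattern — the tests.nightly property name is an assumption here, standing in for however TEST_NIGHTLY is actually wired up in the test framework:

```java
import java.util.Random;

public class NightlyGate {
    public static void main(String[] args) {
        // Assumption: nightly runs pass -Dtests.nightly=true; normal runs don't.
        boolean TEST_NIGHTLY = Boolean.getBoolean("tests.nightly");
        Random random = new Random(); // the test framework would supply a seeded Random
        // Keep delete-by-query stress off in normal runs, 1..7 percent in nightlies.
        int deleteByQueryPercent = TEST_NIGHTLY ? (1 + random.nextInt(7)) : 0;
        System.out.println(deleteByQueryPercent);
    }
}
```

Run without the property this prints 0; with -Dtests.nightly=true it picks a value between 1 and 7, so the reorder stress still gets exercised somewhere.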
[jira] [Commented] (LUCENE-4099) Remove generics from SpatialStrategy and remove SpatialFieldInfo
[ https://issues.apache.org/jira/browse/LUCENE-4099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13406250#comment-13406250 ] David Smiley commented on LUCENE-4099:

It occurred to me that if Strategy is going to contain the field name, then it might as well contain isStored() and isIndexed() as well. On the Solr side, I imagine the field type would do a lookup in a small HashMap of fieldName -> Strategy. I'm not sure what the concurrency model is but I figure it needs to be thread-safe.

Remove generics from SpatialStrategy and remove SpatialFieldInfo Key: LUCENE-4099 URL: https://issues.apache.org/jira/browse/LUCENE-4099 Project: Lucene - Java Issue Type: Improvement Components: modules/spatial Reporter: Chris Male Assignee: David Smiley Priority: Minor Attachments: LUCENE-4099_remove_SpatialFieldInfo_and_put_fieldName_into_Strategy_instead_of_methods.patch

Some time ago I added SpatialFieldInfo as a way for SpatialStrategys to declare what information they needed per request. This meant that a Strategy could be used across multiple requests. However it doesn't really need to be that way any more; Strategies are light to instantiate and the generics are just clumsy and annoying. Instead Strategies should just define what they need in their constructor.
[jira] [Commented] (LUCENE-4099) Remove generics from SpatialStrategy and remove SpatialFieldInfo
[ https://issues.apache.org/jira/browse/LUCENE-4099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13406251#comment-13406251 ] Chris Male commented on LUCENE-4099: I think we need to reconsider storing as part of the Strategys, the logic doesn't really make sense. I'll open an issue.
[jira] [Commented] (LUCENE-4099) Remove generics from SpatialStrategy and remove SpatialFieldInfo
[ https://issues.apache.org/jira/browse/LUCENE-4099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13406253#comment-13406253 ] Chris Male commented on LUCENE-4099: LUCENE-4188
[jira] [Created] (LUCENE-4188) Storing Shapes shouldn't be Strategy dependent
Chris Male created LUCENE-4188: -- Summary: Storing Shapes shouldn't be Strategy dependent Key: LUCENE-4188 URL: https://issues.apache.org/jira/browse/LUCENE-4188 Project: Lucene - Java Issue Type: Bug Components: modules/spatial Reporter: Chris Male

The logic for storing Shape representations seems to be different for each Strategy. The PrefixTreeStrategy impls store the Shape in WKT, which is nice if you're using WKT but not much help if you're not. BBoxStrategy doesn't actually store the Shape itself, but a representation of the bounding box. TwoDoubles seems to follow the PrefixTreeStrategy approach, which is surprising since it only indexes Points and they could be stored without using WKT. I think we need to consider what storing a Shape means. If we want to store the Shape itself, then that logic should be standardised and done outside of the Strategys since it is not really related to them. If we want to store the terms being used by the Strategys to make Shapes queryable, then we need to change the logic in the Strategys to actually do this.
[jira] [Commented] (LUCENE-4188) Storing Shapes shouldn't be Strategy dependent
[ https://issues.apache.org/jira/browse/LUCENE-4188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13406270#comment-13406270 ] David Smiley commented on LUCENE-4188:

I think storing a shape should universally be SpatialContext.toString(shape). This is just a simple method call for the strategy to invoke. If/when Lucene separates out stored fields from indexed ones better, then the Strategy need not bother.
[jira] [Commented] (LUCENE-4188) Storing Shapes shouldn't be Strategy dependent
[ https://issues.apache.org/jira/browse/LUCENE-4188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13406278#comment-13406278 ] Chris Male commented on LUCENE-4188:

I think it is bizarre to first parse the String to a Shape, pass the Shape into the Strategy, and then have the Strategy ask for the String again. I think we should let the user decide what they want to store and how it should be stored. Even if we do decide to use SpatialContext.toString as the default behavior, it doesn't belong inside the Strategys. Solr can create a separate Field instance for the stored value, and so can other users of the module.