[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.6.0_35) - Build # 1127 - Failure!
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-trunk-Linux/1127/ Java: 32bit/jdk1.6.0_35 -client -XX:+UseConcMarkSweepGC All tests passed Build Log: [...truncated 20518 lines...] -jenkins-javadocs-lint: javadocs-lint: [...truncated 1642 lines...] javadocs-lint: [exec] [exec] Crawl/parse... [exec] [exec] Verify... [...truncated 400 lines...] [javadoc] Generating Javadoc [javadoc] Javadoc execution [javadoc] Loading source files for package org.apache.solr... [javadoc] Loading source files for package org.apache.solr.analysis... [javadoc] Loading source files for package org.apache.solr.client.solrj.embedded... [javadoc] Loading source files for package org.apache.solr.cloud... [javadoc] Loading source files for package org.apache.solr.common... [javadoc] Loading source files for package org.apache.solr.core... [javadoc] Loading source files for package org.apache.solr.handler... [javadoc] Loading source files for package org.apache.solr.handler.admin... [javadoc] Loading source files for package org.apache.solr.handler.component... [javadoc] Loading source files for package org.apache.solr.handler.loader... [javadoc] Loading source files for package org.apache.solr.highlight... [javadoc] Loading source files for package org.apache.solr.internal.csv... [javadoc] Loading source files for package org.apache.solr.internal.csv.writer... [javadoc] Loading source files for package org.apache.solr.logging... [javadoc] Loading source files for package org.apache.solr.logging.jul... [javadoc] Loading source files for package org.apache.solr.request... [javadoc] Loading source files for package org.apache.solr.response... [javadoc] Loading source files for package org.apache.solr.response.transform... [javadoc] Loading source files for package org.apache.solr.schema... [javadoc] Loading source files for package org.apache.solr.search... [javadoc] Loading source files for package org.apache.solr.search.function... 
[javadoc] Loading source files for package org.apache.solr.search.function.distance... [javadoc] Loading source files for package org.apache.solr.search.grouping... [javadoc] Loading source files for package org.apache.solr.search.grouping.collector... [javadoc] Loading source files for package org.apache.solr.search.grouping.distributed... [javadoc] Loading source files for package org.apache.solr.search.grouping.distributed.command... [javadoc] Loading source files for package org.apache.solr.search.grouping.distributed.requestfactory... [javadoc] Loading source files for package org.apache.solr.search.grouping.distributed.responseprocessor... [javadoc] Loading source files for package org.apache.solr.search.grouping.distributed.shardresultserializer... [javadoc] Loading source files for package org.apache.solr.search.grouping.endresulttransformer... [javadoc] Loading source files for package org.apache.solr.search.similarities... [javadoc] Loading source files for package org.apache.solr.servlet... [javadoc] Loading source files for package org.apache.solr.servlet.cache... [javadoc] Loading source files for package org.apache.solr.spelling... [javadoc] Loading source files for package org.apache.solr.spelling.suggest... [javadoc] Loading source files for package org.apache.solr.spelling.suggest.fst... [javadoc] Loading source files for package org.apache.solr.spelling.suggest.jaspell... [javadoc] Loading source files for package org.apache.solr.spelling.suggest.tst... [javadoc] Loading source files for package org.apache.solr.update... [javadoc] Loading source files for package org.apache.solr.update.processor... [javadoc] Loading source files for package org.apache.solr.util... [javadoc] Loading source files for package org.apache.solr.util.plugin... [javadoc] Loading source files for package org.apache.solr.util.xslt... [javadoc] Loading source files for package org.apache.noggit... [javadoc] Loading source files for package org.apache.solr.client.solrj... 
[javadoc] Loading source files for package org.apache.solr.client.solrj.beans... [javadoc] Loading source files for package org.apache.solr.client.solrj.impl... [javadoc] Loading source files for package org.apache.solr.client.solrj.request... [javadoc] Loading source files for package org.apache.solr.client.solrj.response... [javadoc] Loading source files for package org.apache.solr.client.solrj.util... [javadoc] Loading source files for package org.apache.solr.common.cloud... [javadoc] Loading source files for package org.apache.solr.common.luke... [javadoc] Loading source files for package org.apache.solr.common.params... [javadoc] Loading source files for package org.apache.solr.common.util... [javadoc] Loading source files for package org.apache.zookeeper... [javadoc] Loading source files for package org.apache.solr.handler.clustering... [javadoc] Loading source files for
[jira] [Created] (LUCENE-4391) Lucene40Codec methods should be final
Adrien Grand created LUCENE-4391: Summary: Lucene40Codec methods should be final Key: LUCENE-4391 URL: https://issues.apache.org/jira/browse/LUCENE-4391 Project: Lucene - Core Issue Type: Bug Reporter: Adrien Grand Fix For: 4.0 I think all methods but {{getPostingsFormatForField}} should be made final so that users can't create a Codec that redefines any of the formats of Lucene40 by subclassing (since the codec name can't be overridden by subclassing, Lucene will fail at loading segments that use such codecs). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
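The proposal above can be sketched with simplified stand-ins for Lucene 4.0's Codec/PostingsFormat classes: every format accessor on Lucene40Codec becomes final except getPostingsFormatForField, the one intended extension point. The class and method names mirror Lucene 4.0, but this is an illustrative model, not the real API.

```java
// Minimal stand-in for a named postings format.
abstract class PostingsFormat {
    final String name;
    protected PostingsFormat(String name) { this.name = name; }
}

class Lucene40PostingsFormat extends PostingsFormat {
    Lucene40PostingsFormat() { super("Lucene40"); }
}

abstract class Codec {
    private final String name;
    protected Codec(String name) { this.name = name; }
    // The codec name is fixed at construction: a subclass of Lucene40Codec
    // still writes segments labeled "Lucene40", which is why letting it swap
    // formats would produce segments Lucene cannot load back.
    public final String getName() { return name; }
}

class Lucene40Codec extends Codec {
    private final PostingsFormat defaultFormat = new Lucene40PostingsFormat();

    public Lucene40Codec() { super("Lucene40"); }

    // Proposal: 'final', so subclasses cannot redefine the on-disk format.
    public final PostingsFormat postingsFormat() {
        return getPostingsFormatForField(null);
    }

    // Deliberately non-final: per-field format selection is the supported hook.
    public PostingsFormat getPostingsFormatForField(String field) {
        return defaultFormat;
    }
}
```

In this shape a subclass can still route different fields to different (registered) formats, but can no longer silently replace the formats that the "Lucene40" name promises.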
[jira] [Created] (LUCENE-4392) Remove SortedVIntList
Adrien Grand created LUCENE-4392: Summary: Remove SortedVIntList Key: LUCENE-4392 URL: https://issues.apache.org/jira/browse/LUCENE-4392 Project: Lucene - Core Issue Type: Task Reporter: Adrien Grand Priority: Minor Fix For: 4.0 It looks like SortedVIntList is only referenced by its test case; maybe we should just remove it?
[jira] [Created] (LUCENE-4393) Move RollingCharBuffer to lucene-analysis-common
Adrien Grand created LUCENE-4393: Summary: Move RollingCharBuffer to lucene-analysis-common Key: LUCENE-4393 URL: https://issues.apache.org/jira/browse/LUCENE-4393 Project: Lucene - Core Issue Type: Task Reporter: Adrien Grand Priority: Minor It looks like RollingCharBuffer is only used by analyzers. Maybe it would make sense to move it to lucene-analysis-common?
[jira] [Created] (LUCENE-4394) Make BloomFilteringPostingsFormat wrap Lucene40PostingsFormat by default
Adrien Grand created LUCENE-4394: Summary: Make BloomFilteringPostingsFormat wrap Lucene40PostingsFormat by default Key: LUCENE-4394 URL: https://issues.apache.org/jira/browse/LUCENE-4394 Project: Lucene - Core Issue Type: Task Reporter: Adrien Grand Priority: Trivial When loading BloomFilteringPostingsFormat from the SPI registry, it will have a null delegate by default. I think it would be more comfortable to make it wrap Lucene40PostingsFormat by default so that it can be used against lucene-core tests (ant -Dtests.postingsFormat=BloomFilter, currently fails with java.lang.UnsupportedOperationException: Error - org.apache.lucene.codecs.bloom.BloomFilteringPostingsFormat has been constructed without a choice of PostingsFormat) or with Solr field types with minimal configuration (fieldType ... postingsFormat=BloomFilter /).
[jira] [Commented] (LUCENE-4394) Make BloomFilteringPostingsFormat wrap Lucene40PostingsFormat by default
[ https://issues.apache.org/jira/browse/LUCENE-4394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13456362#comment-13456362 ] Uwe Schindler commented on LUCENE-4394: --- How does Pulsing handle this? Maybe we should make this similar for both codecs?
[jira] [Commented] (LUCENE-4391) Lucene40Codec methods should be final
[ https://issues.apache.org/jira/browse/LUCENE-4391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13456361#comment-13456361 ] Uwe Schindler commented on LUCENE-4391: --- +1, I was thinking about a similar thing, too! If somebody wants to create another codec, reusing *some* Lucene40 stuff, he should subclass Codec directly and delegate the needed things to Codec.forName("Lucene40").fooBar(). The same should be done for the Lucene40 PostingsFormat!
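The delegation pattern suggested above can be sketched with simplified stand-ins: a custom codec extends Codec directly and forwards selected pieces to Codec.forName("Lucene40") instead of subclassing Lucene40Codec. The names mirror Lucene 4.0, but this is an illustrative model, not the real API (Lucene's registry is backed by java.util.ServiceLoader, and the format accessors here are hypothetical simplifications).

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for Codec with a name-based registry.
abstract class Codec {
    private static final Map<String, Codec> REGISTRY = new HashMap<String, Codec>();
    private final String name;

    protected Codec(String name) { this.name = name; }

    public final String getName() { return name; }

    public static Codec forName(String name) {
        Codec c = REGISTRY.get(name);
        if (c == null) throw new IllegalArgumentException("unknown codec: " + name);
        return c;
    }

    static void register(Codec c) { REGISTRY.put(c.getName(), c); }

    // Hypothetical format accessors, returning names for brevity.
    public abstract String storedFieldsFormat();
    public abstract String postingsFormat();
}

class Lucene40Codec extends Codec {
    Lucene40Codec() { super("Lucene40"); }
    public String storedFieldsFormat() { return "Lucene40StoredFields"; }
    public String postingsFormat() { return "Lucene40Postings"; }
}

// The recommended shape: a codec with its OWN name that reuses *some*
// Lucene40 pieces by delegation, so segments are never mislabeled "Lucene40".
class MyCodec extends Codec {
    MyCodec() { super("MyCodec"); }
    public String storedFieldsFormat() {
        return Codec.forName("Lucene40").storedFieldsFormat(); // reused piece
    }
    public String postingsFormat() { return "MyPostings); ".trim().isEmpty() ? "" : "MyPostings"; }
}
```

Because MyCodec registers under its own name, an index written with it can only be opened when MyCodec is available, instead of failing confusingly under the "Lucene40" name.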
[JENKINS] Lucene-Solr-trunk-Linux (32bit/jrockit-jdk1.6.0_33-R28.2.4-4.1.0) - Build # 1131 - Failure!
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-trunk-Linux/1131/ Java: 32bit/jrockit-jdk1.6.0_33-R28.2.4-4.1.0 -XnoOpt All tests passed Build Log: [...truncated 20517 lines...] -jenkins-javadocs-lint: javadocs-lint: [...truncated 1642 lines...] javadocs-lint: [exec] [exec] Crawl/parse... [exec] [exec] Verify... [...truncated 400 lines...]
[jira] [Commented] (LUCENE-4394) Make BloomFilteringPostingsFormat wrap Lucene40PostingsFormat by default
[ https://issues.apache.org/jira/browse/LUCENE-4394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13456367#comment-13456367 ] Adrien Grand commented on LUCENE-4394: -- Pulsing has a non-registered impl whose constructor forces callers to specify the delegate postings format, and a registered Pulsing40PostingsFormat that delegates to Lucene40PostingsFormat. So we could unregister BloomFilteringPostingsFormat, remove its default constructor, and have a BloomFilter40PostingsFormat subclass that would delegate to Lucene40PostingsFormat?
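The Pulsing-style pattern described above can be sketched with simplified stand-ins: the base postings format is not SPI-registered and has no default constructor, so it cannot be instantiated with a null delegate, while a registered subclass wires in Lucene40 as the default. Names mirror Lucene 4.0 (BloomFilter40PostingsFormat is the hypothetical subclass the comment proposes), but this is an illustrative model, not the real API.

```java
// Minimal stand-in for a named postings format.
abstract class PostingsFormat {
    final String name;
    protected PostingsFormat(String name) { this.name = name; }
}

class Lucene40PostingsFormat extends PostingsFormat {
    Lucene40PostingsFormat() { super("Lucene40"); }
}

// Base class: no no-arg constructor, so an SPI loader cannot create it with
// a null delegate -- the failure mode the issue complains about.
class BloomFilteringPostingsFormat extends PostingsFormat {
    final PostingsFormat delegate;

    BloomFilteringPostingsFormat(String name, PostingsFormat delegate) {
        super(name);
        if (delegate == null) {
            throw new UnsupportedOperationException(
                "constructed without a choice of PostingsFormat");
        }
        this.delegate = delegate;
    }
}

// Registered subclass, analogous to Pulsing40PostingsFormat: its no-arg
// constructor supplies Lucene40, so SPI loading and
// -Dtests.postingsFormat=BloomFilter40 would just work.
class BloomFilter40PostingsFormat extends BloomFilteringPostingsFormat {
    BloomFilter40PostingsFormat() {
        super("BloomFilter40", new Lucene40PostingsFormat());
    }
}
```
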
[jira] [Created] (SOLR-3843) Add lucene-codecs to Solr libs?
Adrien Grand created SOLR-3843: -- Summary: Add lucene-codecs to Solr libs? Key: SOLR-3843 URL: https://issues.apache.org/jira/browse/SOLR-3843 Project: Solr Issue Type: Wish Reporter: Adrien Grand Priority: Minor Solr lets its users select the postings format to use on a per-field basis, but only Lucene40PostingsFormat is available by default (unless users add lucene-codecs to the Solr lib directory). Maybe we should add lucene-codecs to Solr libs (I mean in the WAR file) so that people can try our non-default postings formats with minimum effort?
[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.6.0_35) - Build # 1132 - Still Failing!
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-trunk-Linux/1132/ Java: 32bit/jdk1.6.0_35 -server -XX:+UseParallelGC All tests passed Build Log: [...truncated 20517 lines...] -jenkins-javadocs-lint: javadocs-lint: [...truncated 1642 lines...] javadocs-lint: [exec] [exec] Crawl/parse... [exec] [exec] Verify... [...truncated 400 lines...]
[JENKINS] Lucene-Solr-Tests-4.x-Java6 - Build # 644 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java6/644/ All tests passed Build Log: [...truncated 20366 lines...] -jenkins-javadocs-lint: javadocs-lint: [...truncated 1608 lines...] javadocs-lint: [exec] [exec] Crawl/parse... [exec] [exec] Verify... [...truncated 401 lines...]
[jira] [Commented] (LUCENE-4393) Move RollingCharBuffer to lucene-analysis-common
[ https://issues.apache.org/jira/browse/LUCENE-4393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13456381#comment-13456381 ] Michael McCandless commented on LUCENE-4393: +1
[jira] [Commented] (LUCENE-4392) Remove SortedVIntList
[ https://issues.apache.org/jira/browse/LUCENE-4392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13456380#comment-13456380 ] Michael McCandless commented on LUCENE-4392: +1
[jira] [Commented] (LUCENE-4391) Lucene40Codec methods should be final
[ https://issues.apache.org/jira/browse/LUCENE-4391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13456383#comment-13456383 ] Michael McCandless commented on LUCENE-4391: +1
[jira] [Resolved] (LUCENE-4384) add top-level 'ant precommit'
[ https://issues.apache.org/jira/browse/LUCENE-4384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Muir resolved LUCENE-4384. - Resolution: Fixed Fix Version/s: 4.0 5.0 add top-level 'ant precommit' - Key: LUCENE-4384 URL: https://issues.apache.org/jira/browse/LUCENE-4384 Project: Lucene - Core Issue Type: Task Components: general/build Reporter: Robert Muir Fix For: 5.0, 4.0 Attachments: LUCENE-4384.patch, LUCENE-4384.patch We have a lot more checks in the build now. If we added 'ant precommit' it would make it easier to run all the (reasonably fast) checks beforehand... and save some typing. So I think we can just add precommit which is: * check-svn-working-copy (looks for un-added files, wrong eol-style) * validate (nocommit/author checks, license headers, 3rd party licenses, forbidden apis) * javadocs-lint (javadocs + missing/broken links checks) * test
[jira] [Resolved] (LUCENE-4390) fail the build on outdated .sha1 files
[ https://issues.apache.org/jira/browse/LUCENE-4390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Muir resolved LUCENE-4390. - Resolution: Fixed Fix Version/s: 4.0 5.0 fail the build on outdated .sha1 files -- Key: LUCENE-4390 URL: https://issues.apache.org/jira/browse/LUCENE-4390 Project: Lucene - Core Issue Type: Bug Components: general/build Reporter: Robert Muir Fix For: 5.0, 4.0 Attachments: LUCENE-4390.patch branch_4x (only) currently has the following outdated/leftover .sha1s from obsolete versions: {noformat} $ svn status ! solr/licenses/junit4-ant-2.0.0.rc5.jar.sha1 ! solr/licenses/randomizedtesting-runner-2.0.0.rc5.jar.sha1 {noformat} This is easy enough to implement: just run 'ant jar-checksums' before our svn status check. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-3843) Add lucene-codecs to Solr libs?
[ https://issues.apache.org/jira/browse/SOLR-3843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13456403#comment-13456403 ] Uwe Schindler commented on SOLR-3843: - -1, they should simply put them into $solr_home/lib where all other plugins are. We don't want to bloat the WAR file. Solr has support for Lucene's SPI loaded from SolrResourceLoader. Add lucene-codecs to Solr libs? --- Key: SOLR-3843 URL: https://issues.apache.org/jira/browse/SOLR-3843 Project: Solr Issue Type: Wish Reporter: Adrien Grand Priority: Minor Solr gives the ability to its users to select the postings format to use on a per-field basis but only Lucene40PostingsFormat is available by default (unless users add lucene-codecs to the Solr lib directory). Maybe we should add lucene-codecs to Solr libs (I mean in the WAR file) so that people can try our non-default postings formats with minimum effort? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-3843) Add lucene-codecs to Solr libs?
[ https://issues.apache.org/jira/browse/SOLR-3843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13456404#comment-13456404 ] Uwe Schindler commented on SOLR-3843: - Just to add: if somebody wants to try out codecs, they will certainly be able to add the JAR file to their solr_home. Maybe we should just document this on a wiki page. Add lucene-codecs to Solr libs? --- Key: SOLR-3843 URL: https://issues.apache.org/jira/browse/SOLR-3843 Project: Solr Issue Type: Wish Reporter: Adrien Grand Priority: Minor Solr gives the ability to its users to select the postings format to use on a per-field basis but only Lucene40PostingsFormat is available by default (unless users add lucene-codecs to the Solr lib directory). Maybe we should add lucene-codecs to Solr libs (I mean in the WAR file) so that people can try our non-default postings formats with minimum effort? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-4389) Fix TwoDoubles dateline support
[ https://issues.apache.org/jira/browse/LUCENE-4389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13456405#comment-13456405 ] Chris Male commented on LUCENE-4389: I have faith in your knowledge on this and there seems to be adequate testing, so let's go ahead and commit that. Fix TwoDoubles dateline support --- Key: LUCENE-4389 URL: https://issues.apache.org/jira/browse/LUCENE-4389 Project: Lucene - Core Issue Type: Improvement Components: modules/spatial Reporter: David Smiley Assignee: David Smiley Fix For: 4.0 Attachments: LUCENE-4389_Support_dateline_and_circles_for_TwoDoubles.patch, LUCENE-4389 Support dateline for TwoDoubles.patch The dateline support can easily be fixed. After this, the TwoDoublesStrategy might not be particularly useful but at least it won't be buggy if you stay with Rectangle query shapes. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-3843) Add lucene-codecs to Solr libs?
[ https://issues.apache.org/jira/browse/SOLR-3843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13456406#comment-13456406 ] Robert Muir commented on SOLR-3843: --- Also I had to turn off per-field codec support by default anyway because Solr keeps the IndexWriter open across core reloads (SOLR-3610). Someone must turn it on explicitly by setting their codec factory to SchemaCodecFactory in solrconfig.xml (realizing there are tradeoffs). Same thing goes with Similarity. Analyzer was fixed by changing Solr to always pass the newest Analyzer as a param to add/updateDocument (so it's not really set in the IWConfig), but the general problem still exists. Add lucene-codecs to Solr libs? --- Key: SOLR-3843 URL: https://issues.apache.org/jira/browse/SOLR-3843 Project: Solr Issue Type: Wish Reporter: Adrien Grand Priority: Minor Solr gives the ability to its users to select the postings format to use on a per-field basis but only Lucene40PostingsFormat is available by default (unless users add lucene-codecs to the Solr lib directory). Maybe we should add lucene-codecs to Solr libs (I mean in the WAR file) so that people can try our non-default postings formats with minimum effort? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
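For reference, opting back in to per-field codec support would be a one-line change in solrconfig.xml along these lines (a sketch based on the class named in the comment above; check your Solr version's documentation for the exact syntax):

```xml
<!-- Enable per-field codec selection from the schema (has tradeoffs,
     see SOLR-3610 for why it is off by default). -->
<codecFactory class="solr.SchemaCodecFactory"/>
```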
[jira] [Assigned] (LUCENE-3720) OOM in TestBeiderMorseFilter.testRandom
[ https://issues.apache.org/jira/browse/LUCENE-3720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Muir reassigned LUCENE-3720: --- Assignee: Robert Muir OOM in TestBeiderMorseFilter.testRandom --- Key: LUCENE-3720 URL: https://issues.apache.org/jira/browse/LUCENE-3720 Project: Lucene - Core Issue Type: Test Components: modules/analysis Affects Versions: 3.6, 4.0-ALPHA Reporter: Robert Muir Assignee: Robert Muir Fix For: 5.0, 4.0 This has been OOM'ing a lot... we should see why; it's likely a real bug. ant test -Dtestcase=TestBeiderMorseFilter -Dtestmethod=testRandom -Dtests.seed=2e18f456e714be89:310bba5e8404100d:-3bd11277c22f4591 -Dtests.multiplier=3 -Dargs=-Dfile.encoding=ISO8859-1 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-3720) OOM in TestBeiderMorseFilter.testRandom
[ https://issues.apache.org/jira/browse/LUCENE-3720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Muir updated LUCENE-3720: Fix Version/s: 4.0 5.0 Commons-codec 1.7 has been released with fixes. I'm testing them out now. OOM in TestBeiderMorseFilter.testRandom --- Key: LUCENE-3720 URL: https://issues.apache.org/jira/browse/LUCENE-3720 Project: Lucene - Core Issue Type: Test Components: modules/analysis Affects Versions: 3.6, 4.0-ALPHA Reporter: Robert Muir Assignee: Robert Muir Fix For: 5.0, 4.0 This has been OOM'ing a lot... we should see why; it's likely a real bug. ant test -Dtestcase=TestBeiderMorseFilter -Dtestmethod=testRandom -Dtests.seed=2e18f456e714be89:310bba5e8404100d:-3bd11277c22f4591 -Dtests.multiplier=3 -Dargs=-Dfile.encoding=ISO8859-1 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-3720) OOM in TestBeiderMorseFilter.testRandom
[ https://issues.apache.org/jira/browse/LUCENE-3720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Muir updated LUCENE-3720: Attachment: LUCENE-3720.patch Patch upgrading the jar, re-enabling the test, and removing the warnings. I ran the following 100 times from a shell script with no failures: ant test -Dtestcase=TestBeiderMorseFilter -Dtests.multiplier=3 -Dtests.nightly=true -Dtestmethod=testRandom OOM in TestBeiderMorseFilter.testRandom --- Key: LUCENE-3720 URL: https://issues.apache.org/jira/browse/LUCENE-3720 Project: Lucene - Core Issue Type: Test Components: modules/analysis Affects Versions: 3.6, 4.0-ALPHA Reporter: Robert Muir Assignee: Robert Muir Fix For: 5.0, 4.0 Attachments: LUCENE-3720.patch This has been OOM'ing a lot... we should see why; it's likely a real bug. ant test -Dtestcase=TestBeiderMorseFilter -Dtestmethod=testRandom -Dtests.seed=2e18f456e714be89:310bba5e8404100d:-3bd11277c22f4591 -Dtests.multiplier=3 -Dargs=-Dfile.encoding=ISO8859-1 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-3720) OOM in TestBeiderMorseFilter.testRandom
[ https://issues.apache.org/jira/browse/LUCENE-3720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13456411#comment-13456411 ] Robert Muir commented on LUCENE-3720: - I looked; it seems they are still in the release process (it's just that 1.7 is starting to appear on mirrors). We should wait until the official release announcement for the upgrade. OOM in TestBeiderMorseFilter.testRandom --- Key: LUCENE-3720 URL: https://issues.apache.org/jira/browse/LUCENE-3720 Project: Lucene - Core Issue Type: Test Components: modules/analysis Affects Versions: 3.6, 4.0-ALPHA Reporter: Robert Muir Assignee: Robert Muir Fix For: 5.0, 4.0 Attachments: LUCENE-3720.patch This has been OOM'ing a lot... we should see why; it's likely a real bug. ant test -Dtestcase=TestBeiderMorseFilter -Dtestmethod=testRandom -Dtests.seed=2e18f456e714be89:310bba5e8404100d:-3bd11277c22f4591 -Dtests.multiplier=3 -Dargs=-Dfile.encoding=ISO8859-1 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-4345) Create a Classification module
[ https://issues.apache.org/jira/browse/LUCENE-4345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13456415#comment-13456415 ] Robert Muir commented on LUCENE-4345: - I don't think this should be using payloads to pull POS tags: the purpose of payloads is when you need something stored in the actual index (and should be limited to e.g. a single byte); payloads are not type-safe, just application-specific. Instead such taggers should expose a type-safe PartOfSpeechAttribute as suggested in the o.a.l.analysis package javadocs. If they want to put POS into the index for e.g. payload-based queries, that's a separate concern: they should have a separate tokenfilter that encodes the POS attribute into the payload so this is optional (as it has tradeoffs in the index). See TypeAsPayloadFilter etc. as an example of what I mean. But for this module we don't need anything in the index. If we think it's useful for classifiers to limit the analysis to certain POS categories, then instead we should factor out a *minimal* POSAttribute sub-interface with something very generic like isNominal()/isVerbal() that can actually be implemented by different taggers with different tag sets across different languages. Then things like kuromoji's POSAttribute, openNLP's POSAttribute, or even your custom home-grown one, or some commercial one could extend this sub-interface and plug into it. 
At least I think this is possible with our attributes API :) Create a Classification module -- Key: LUCENE-4345 URL: https://issues.apache.org/jira/browse/LUCENE-4345 Project: Lucene - Core Issue Type: New Feature Reporter: Tommaso Teofili Assignee: Tommaso Teofili Priority: Minor Attachments: LUCENE-4345_2.patch, LUCENE-4345.patch, SOLR-3700_2.patch, SOLR-3700.patch Lucene/Solr can host huge sets of documents containing lots of information in fields so that these can be used as training examples (w/ features) in order to very quickly create classifier algorithms to use on new documents and / or to provide an additional service. So the idea is to create a contrib module (called 'classification') to host a ClassificationComponent that will use already seen data (the indexed documents / fields) to classify new documents / text fragments. The first version will contain a (simplistic) Lucene-based Naive Bayes classifier but more implementations should be added in the future. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
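A minimal sketch of the sub-interface proposed above. Only the isNominal()/isVerbal() names come from the comment; the implementing class and its tag strings are invented here to show how a tagger-specific attribute could satisfy the generic contract:

```java
// Hypothetical minimal, tagger-independent POS attribute (a sketch,
// not a real Lucene interface).
interface MinimalPosAttribute {
    boolean isNominal(); // noun-like in the tagger's native tag set
    boolean isVerbal();  // verb-like
}

// A rich tagger-specific attribute keeps its native tags but still
// satisfies the minimal contract, so language-agnostic code can use it.
class KuromojiLikePosAttribute implements MinimalPosAttribute {
    private final String nativeTag; // e.g. "名詞-一般" (common noun)
    KuromojiLikePosAttribute(String nativeTag) { this.nativeTag = nativeTag; }
    public boolean isNominal() { return nativeTag.startsWith("名詞"); }
    public boolean isVerbal()  { return nativeTag.startsWith("動詞"); }
}
```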
[jira] [Commented] (LUCENE-4345) Create a Classification module
[ https://issues.apache.org/jira/browse/LUCENE-4345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13456418#comment-13456418 ] Robert Muir commented on LUCENE-4345: - Another, simpler idea: you just handle this yourself in the Analyzer you pass to the thing. This is currently how Kuromoji works; it has a POS-based stopfilter. These are trivial to write. Create a Classification module -- Key: LUCENE-4345 URL: https://issues.apache.org/jira/browse/LUCENE-4345 Project: Lucene - Core Issue Type: New Feature Reporter: Tommaso Teofili Assignee: Tommaso Teofili Priority: Minor Attachments: LUCENE-4345_2.patch, LUCENE-4345.patch, SOLR-3700_2.patch, SOLR-3700.patch Lucene/Solr can host huge sets of documents containing lots of information in fields so that these can be used as training examples (w/ features) in order to very quickly create classifier algorithms to use on new documents and / or to provide an additional service. So the idea is to create a contrib module (called 'classification') to host a ClassificationComponent that will use already seen data (the indexed documents / fields) to classify new documents / text fragments. The first version will contain a (simplistic) Lucene-based Naive Bayes classifier but more implementations should be added in the future. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
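A self-contained toy version of such a POS-based stop filter. The real Kuromoji filter operates on a TokenStream and its attributes; this sketch (all names invented) just shows the idea of dropping tokens whose tag is in a stop set:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Toy POS-based stop filter: drop tokens whose part-of-speech tag is in
// a configurable stop set.
class PosStopFilter {
    /** tagged: pairs of {token, posTag}; returns the surviving tokens. */
    static List<String> filter(List<String[]> tagged, Set<String> stopTags) {
        List<String> out = new ArrayList<>();
        for (String[] tokenAndTag : tagged) {
            if (!stopTags.contains(tokenAndTag[1])) {
                out.add(tokenAndTag[0]);
            }
        }
        return out;
    }
}
```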
[jira] [Updated] (LUCENE-4366) Small speedups for BooleanScorer
[ https://issues.apache.org/jira/browse/LUCENE-4366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael McCandless updated LUCENE-4366: --- Attachment: LUCENE-4366.patch New patch, removing the specialization entirely and instead just trying to make things simpler (no more FakeScorer, no more BucketTable, int[] instead of linked list). I tried a number of variations but they all seem to be slower than trunk ... so for now I don't plan on pursuing this anymore ... just putting the patch up for reference. I also added a test case to verify passing max != Integer.MAX_VALUE works (the code is otherwise untested today) ... I'll commit that part. Small speedups for BooleanScorer Key: LUCENE-4366 URL: https://issues.apache.org/jira/browse/LUCENE-4366 Project: Lucene - Core Issue Type: Improvement Reporter: Michael McCandless Assignee: Michael McCandless Attachments: LUCENE-4366.patch, LUCENE-4366.patch, LUCENE-4366.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (LUCENE-4395) BooleanScorer/2 should not call .score() on MUST_NOT clauses
Michael McCandless created LUCENE-4395: -- Summary: BooleanScorer/2 should not call .score() on MUST_NOT clauses Key: LUCENE-4395 URL: https://issues.apache.org/jira/browse/LUCENE-4395 Project: Lucene - Core Issue Type: Improvement Reporter: Michael McCandless In working on LUCENE-4366 I realized we are doing this today (and then Robert reminded me BS2 is also doing it), which is really quite silly. Seems like a fast fix would be to wrap such clauses in ConstantScoreQuery ... but further improvements are possible with BooleanScorer: * Don't add the bucket to the valid list * If the current clause is not prohibited, but this document was already marked prohibited from a previous clause, then do not call score (BS2 could do this as well) I don't think I'll have near-term time to dig on this so feel free to take it if you are inspired! -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
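A toy illustration of the waste described above (a sketch, not Lucene's scorer code): a MUST_NOT clause only needs to say which docs match, so score() should never be called on its behalf, and docs already marked prohibited can skip scoring entirely:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Sketch: a prohibited (MUST_NOT) clause only tells us which docs to
// reject, so calling score() on its matches is pure waste.
class ProhibitedClauseDemo {
    static int scoreCalls = 0;

    static float score(int doc) { scoreCalls++; return 1.0f; }

    /** Collect SHOULD matches, never scoring docs a MUST_NOT clause hit. */
    static List<Integer> collect(int[] shouldDocs, Set<Integer> prohibited) {
        List<Integer> hits = new ArrayList<>();
        for (int doc : shouldDocs) {
            if (prohibited.contains(doc)) continue; // rejected: no score() call
            score(doc);
            hits.add(doc);
        }
        return hits;
    }
}
```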
[jira] [Created] (LUCENE-4396) BooleanScorer should sometimes be used for MUST clauses
Michael McCandless created LUCENE-4396: -- Summary: BooleanScorer should sometimes be used for MUST clauses Key: LUCENE-4396 URL: https://issues.apache.org/jira/browse/LUCENE-4396 Project: Lucene - Core Issue Type: Improvement Reporter: Michael McCandless Today we only use BooleanScorer if the query consists of SHOULD and MUST_NOT. If there are one or more MUST clauses we always use BooleanScorer2. But I suspect that unless the MUST clauses have a very low hit count compared to the other clauses, BooleanScorer would perform better than BooleanScorer2. BooleanScorer still has some vestiges from when it used to handle MUST so it shouldn't be hard to bring back this capability ... I think the challenging part might be the heuristics on when to use which (likely we would have to use firstDocID as proxy for total hit count). Likely we should also have BooleanScorer sometimes use .advance() on the subs in this case, eg if suddenly the MUST clause skips 100 docs then you want to .advance() all the SHOULD clauses. I won't have near term time to work on this so feel free to take it if you are inspired! -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.7.0_07) - Build # 785 - Failure!
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-trunk-Windows/785/ Java: 32bit/jdk1.7.0_07 -client -XX:+UseParallelGC All tests passed Build Log: [...truncated 11909 lines...] BUILD FAILED C:\Jenkins\workspace\Lucene-Solr-trunk-Windows\build.xml:87: The following files contain @author tags or nocommits: * lucene/spatial/src/test/org/apache/lucene/spatial/DistanceStrategyTest.java Total time: 46 minutes 35 seconds Build step 'Invoke Ant' marked build as failure Recording test results Description set: Java: 32bit/jdk1.7.0_07 -client -XX:+UseParallelGC Email was triggered for: Failure Sending email for trigger: Failure - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (LUCENE-4208) Spatial distance relevancy should use score of 1/distance
[ https://issues.apache.org/jira/browse/LUCENE-4208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Smiley resolved LUCENE-4208. -- Resolution: Fixed Chris thinks it's good and I made the commit. trunk: r1385074 + 89, and 4x: r1385122 Spatial distance relevancy should use score of 1/distance - Key: LUCENE-4208 URL: https://issues.apache.org/jira/browse/LUCENE-4208 Project: Lucene - Core Issue Type: New Feature Components: modules/spatial Reporter: David Smiley Assignee: David Smiley Fix For: 4.0 Attachments: LUCENE-4208_makeQuery_return_ConstantScoreQuery_and_remake_TwoDoublesStrategy.patch, LUCENE-4208_makeQuery_return_ConstantScoreQuery,_standardize_makeDistanceValueSource_behav.patch, LUCENE-4208_makeQuery_return_ConstantScoreQuery,_standardize_makeDistanceValueSource_behav.patch The SpatialStrategy.makeQuery() at the moment uses the distance as the score (although some strategies -- TwoDoubles, if I recall -- might not do anything, which would be a bug). The distance is a poor value to use as the score because the score should be related to relevancy, and the distance itself is inversely related to that. A score of 1/distance would be nice. Another alternative is earthCircumference/2 - distance, although I like 1/distance better. Maybe use a different constant than 1. Credit: this is Chris Male's idea. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
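The reciprocal-scoring idea fits in a couple of lines. A sketch only: the issue leaves the exact constant open, and the +1 in the denominator below is an assumption on my part to keep the score finite at distance 0:

```java
// Reciprocal distance relevancy: nearer documents score higher.
class ReciprocalDistanceScore {
    static double score(double distance) {
        return 1.0 / (1.0 + distance); // 1.0 at the query point, decaying with distance
    }
}
```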
[jira] [Updated] (LUCENE-3842) Analyzing Suggester
[ https://issues.apache.org/jira/browse/LUCENE-3842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael McCandless updated LUCENE-3842: --- Attachment: LUCENE-3842.patch Sudarshan those changes look great! You now also record the input for every Path coming back from intersectPrefixPaths, and pass that to addStartPaths. Sorry it took so long to have a look! I started from your patch, got up to trunk again (there was one compilation error I think), added some comments, downgraded a nocommit. I think we are close here ... I'll make a branch so we can iterate. Analyzing Suggester --- Key: LUCENE-3842 URL: https://issues.apache.org/jira/browse/LUCENE-3842 Project: Lucene - Core Issue Type: New Feature Components: modules/spellchecker Affects Versions: 3.6, 4.0-ALPHA Reporter: Robert Muir Attachments: LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842-TokenStream_to_Automaton.patch Since we added shortest-path wFSA search in LUCENE-3714, and generified the comparator in LUCENE-3801, I think we should look at implementing suggesters that have more capabilities than just basic prefix matching. In particular I think the most flexible approach is to integrate with Analyzer at both build and query time, such that we build a wFST with: input: analyzed text such as ghost0christmas0past -- byte 0 here is an optional token separator output: surface form such as the ghost of christmas past weight: the weight of the suggestion we make an FST with PairOutputsweight,output, but only do the shortest path operation on the weight side (like the test in LUCENE-3801), at the same time accumulating the output (surface form), which will be the actual suggestion. This allows a lot of flexibility: * Using even standardanalyzer means you can offer suggestions that ignore stopwords, e.g. 
if you type in ghost of chr..., it will suggest the ghost of christmas past * we can add support for synonyms/wdf/etc at both index and query time (there are tradeoffs here, and this is not implemented!) * this is a basis for more complicated suggesters such as Japanese suggesters, where the analyzed form is in fact the reading, so we would add a TokenFilter that copies ReadingAttribute into term text to support that... * other general things like offering suggestions that are more fuzzy like using a plural stemmer or ignoring accents or whatever. According to my benchmarks, suggestions are still very fast with the prototype (e.g. ~ 100,000 QPS), and the FST size does not explode (its short of twice that of a regular wFST, but this is still far smaller than TST or JaSpell, etc). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
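The key/value scheme described above can be modeled with a toy: keys are analyzed text joined with a \u0000 token separator, values are the surface form. The real implementation is a weighted FST with shortest-path search; here a sorted map stands in and weights are omitted, so "best" simply means the first completion in key order:

```java
import java.util.SortedMap;

// Toy model of the analyzing suggester's lookup: find the first key that
// extends the analyzed prefix and return its stored surface form.
class AnalyzedSuggestDemo {
    static String suggest(SortedMap<String, String> dict, String analyzedPrefix) {
        SortedMap<String, String> tail = dict.tailMap(analyzedPrefix);
        if (!tail.isEmpty() && tail.firstKey().startsWith(analyzedPrefix)) {
            return tail.get(tail.firstKey()); // first completion in key order
        }
        return null;
    }
}
```

This shows why typing the analyzed prefix "ghost\u0000chr" can recover the full surface form "the ghost of christmas past" even though stopwords were never in the key.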
[jira] [Commented] (LUCENE-3842) Analyzing Suggester
[ https://issues.apache.org/jira/browse/LUCENE-3842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13456443#comment-13456443 ] Michael McCandless commented on LUCENE-3842: OK I created branch: https://svn.apache.org/repos/asf/lucene/dev/branches/lucene3842 Analyzing Suggester --- Key: LUCENE-3842 URL: https://issues.apache.org/jira/browse/LUCENE-3842 Project: Lucene - Core Issue Type: New Feature Components: modules/spellchecker Affects Versions: 3.6, 4.0-ALPHA Reporter: Robert Muir Attachments: LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842-TokenStream_to_Automaton.patch Since we added shortest-path wFSA search in LUCENE-3714, and generified the comparator in LUCENE-3801, I think we should look at implementing suggesters that have more capabilities than just basic prefix matching. In particular I think the most flexible approach is to integrate with Analyzer at both build and query time, such that we build a wFST with: input: analyzed text such as ghost0christmas0past -- byte 0 here is an optional token separator output: surface form such as the ghost of christmas past weight: the weight of the suggestion we make an FST with PairOutputsweight,output, but only do the shortest path operation on the weight side (like the test in LUCENE-3801), at the same time accumulating the output (surface form), which will be the actual suggestion. This allows a lot of flexibility: * Using even standardanalyzer means you can offer suggestions that ignore stopwords, e.g. if you type in ghost of chr..., it will suggest the ghost of christmas past * we can add support for synonyms/wdf/etc at both index and query time (there are tradeoffs here, and this is not implemented!) 
* this is a basis for more complicated suggesters such as Japanese suggesters, where the analyzed form is in fact the reading, so we would add a TokenFilter that copies ReadingAttribute into term text to support that... * other general things like offering suggestions that are more fuzzy like using a plural stemmer or ignoring accents or whatever. According to my benchmarks, suggestions are still very fast with the prototype (e.g. ~ 100,000 QPS), and the FST size does not explode (its short of twice that of a regular wFST, but this is still far smaller than TST or JaSpell, etc). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (LUCENE-4389) Fix TwoDoubles dateline support
[ https://issues.apache.org/jira/browse/LUCENE-4389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Smiley resolved LUCENE-4389. -- Resolution: Fixed Committed to trunk r1385130 and 4x r1385131 Fix TwoDoubles dateline support --- Key: LUCENE-4389 URL: https://issues.apache.org/jira/browse/LUCENE-4389 Project: Lucene - Core Issue Type: Improvement Components: modules/spatial Reporter: David Smiley Assignee: David Smiley Fix For: 4.0 Attachments: LUCENE-4389_Support_dateline_and_circles_for_TwoDoubles.patch, LUCENE-4389 Support dateline for TwoDoubles.patch The dateline support can easily be fixed. After this, the TwoDoublesStrategy might not be particularly useful but at least it won't be buggy if you stay with Rectangle query shapes. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-3304) Add Solr support for the new Lucene spatial module
[ https://issues.apache.org/jira/browse/SOLR-3304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13456448#comment-13456448 ] David Smiley commented on SOLR-3304: The two issues this depends on are finally closed. I plan on committing this one tomorrow to allow more time for feedback. Add Solr support for the new Lucene spatial module -- Key: SOLR-3304 URL: https://issues.apache.org/jira/browse/SOLR-3304 Project: Solr Issue Type: New Feature Affects Versions: 4.0-ALPHA Reporter: Bill Bell Assignee: David Smiley Priority: Critical Labels: spatial Fix For: 4.0 Attachments: SOLR-3304_Solr_fields_for_Lucene_spatial_module (fieldName in Strategy) - indexableFields.patch, SOLR-3304_Solr_fields_for_Lucene_spatial_module (fieldName in Strategy).patch, SOLR-3304_Solr_fields_for_Lucene_spatial_module.patch, SOLR-3304_Solr_fields_for_Lucene_spatial_module.patch, SOLR-3304_Solr_fields_for_Lucene_spatial_module.patch, SOLR-3304_Solr_fields_for_Lucene_spatial_module.patch, SOLR-3304-strategy-getter-fixed.patch Get the Solr spatial module integrated with the lucene spatial module. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-3844) SolrCore reload can fail because it tries to remove the index write lock while already holding it.
Mark Miller created SOLR-3844: - Summary: SolrCore reload can fail because it tries to remove the index write lock while already holding it. Key: SOLR-3844 URL: https://issues.apache.org/jira/browse/SOLR-3844 Project: Solr Issue Type: Bug Reporter: Mark Miller Assignee: Mark Miller Priority: Critical Fix For: 4.0, 5.0 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-4384) add top-level 'ant precommit'
[ https://issues.apache.org/jira/browse/LUCENE-4384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13456449#comment-13456449 ] David Smiley commented on LUCENE-4384: -- Awesome; thanks guys! This should reduce some failures in my commits lately :-P add top-level 'ant precommit' - Key: LUCENE-4384 URL: https://issues.apache.org/jira/browse/LUCENE-4384 Project: Lucene - Core Issue Type: Task Components: general/build Reporter: Robert Muir Fix For: 5.0, 4.0 Attachments: LUCENE-4384.patch, LUCENE-4384.patch We have a lot more checks in the build: If we added 'ant precommit' it would make it easier to run all the (reasonably fast) checks beforehand... and save some typing. So I think we can just add precommit which is: * check-svn-working-copy (looks for un-added files, wrong eol-style) * validate (nocommit/author checks, license headers, 3rd party licenses, forbidden apis) * javadocs-lint (javadocs + missing/broken links checks) * test -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-3844) SolrCore reload can fail because it tries to remove the index write lock while already holding it.
[ https://issues.apache.org/jira/browse/SOLR-3844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller resolved SOLR-3844. --- Resolution: Fixed Committed a fix, and fixed the test that should have caught this. SolrCore reload can fail because it tries to remove the index write lock while already holding it. -- Key: SOLR-3844 URL: https://issues.apache.org/jira/browse/SOLR-3844 Project: Solr Issue Type: Bug Reporter: Mark Miller Assignee: Mark Miller Priority: Critical Fix For: 4.0, 5.0 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-3831) atomic updates do not distribute correctly to other nodes
[ https://issues.apache.org/jira/browse/SOLR-3831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller resolved SOLR-3831. --- Resolution: Fixed Fix committed - thanks Jim! atomic updates do not distribute correctly to other nodes - Key: SOLR-3831 URL: https://issues.apache.org/jira/browse/SOLR-3831 Project: Solr Issue Type: Bug Components: SolrCloud Affects Versions: 4.0-BETA Environment: linux Reporter: Jim Musil Assignee: Mark Miller Priority: Blocker Fix For: 4.0, 5.0 Attachments: SOLR-3831.patch After setting up two independent solr nodes using the SolrCloud tutorial, atomic updates to a field of type payloads gives an error when updating the destination node. The error is: SEVERE: java.lang.NumberFormatException: For input string: 100} The input sent to the first node is in the expected default format for a payload field (eg foo|100) and that update succeeds. I've found that the update always works for the first node, but never the second. I've tested each server running independently and found that this update works as expected. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-3154) SolrJ CloudServer should be leader and network aware when adding docs
[ https://issues.apache.org/jira/browse/SOLR-3154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3154: -- Fix Version/s: (was: 4.0) SolrJ CloudServer should be leader and network aware when adding docs - Key: SOLR-3154 URL: https://issues.apache.org/jira/browse/SOLR-3154 Project: Solr Issue Type: Improvement Components: SolrCloud Affects Versions: 4.0-ALPHA Reporter: Grant Ingersoll Assignee: Mark Miller Priority: Minor Fix For: 5.0 Attachments: SOLR-3154.patch It would be good when indexing if the SolrJ CloudServer was leader aware so that we could avoid doing an extra hop for the data. It would also be good if one could easily set things up based on data locality principles. This might mean that CloudServer is aware of where on the network it is and would pick leaders that are as close as possible (i.e. local, perhaps.) This would come in to play when working with tools like Hadoop or other grid computing frameworks. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-3645) /terms should become 4.x distrib compatible or default to distrib=false
[ https://issues.apache.org/jira/browse/SOLR-3645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13456457#comment-13456457 ] Mark Miller commented on SOLR-3645: --- For 4.0 I simply plan to add distrib=false as a default param in the /terms request handler. /terms should become 4.x distrib compatible or default to distrib=false --- Key: SOLR-3645 URL: https://issues.apache.org/jira/browse/SOLR-3645 Project: Solr Issue Type: Improvement Components: SearchComponents - other Affects Versions: 4.0-ALPHA Environment: SolrCloud, RHEL 5.4 Reporter: Nick Cotton Assignee: Mark Miller Priority: Minor Labels: feature Fix For: 4.0, 5.0 In a SolrCloud configuration, /terms does not return any terms when issued as follows: http://hostname:8983/solr/terms?terms.fl=name&terms=true&terms.limit=-1&isShard=true&terms.sort=index&terms.prefix=s but does return reasonable results when distrib is turned off like so http://hostname:8983/solr/terms?terms.fl=name&terms=true&distrib=false&terms.limit=-1&isShard=true&terms.sort=index&terms.prefix=s -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
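The default param Mark describes could presumably be declared in solrconfig.xml via the standard `defaults` mechanism, along these lines (a hedged sketch, not the committed change; the handler's other settings are omitted):

```xml
<!-- Hypothetical sketch: make /terms non-distributed unless the client
     explicitly passes distrib=true on the request. -->
<requestHandler name="/terms" class="solr.SearchHandler">
  <lst name="defaults">
    <bool name="terms">true</bool>
    <bool name="distrib">false</bool>
  </lst>
  <arr name="components">
    <str>terms</str>
  </arr>
</requestHandler>
```

Because it sits in `defaults` rather than `invariants`, a client can still opt back in to distributed behavior per request.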
[jira] [Commented] (SOLR-3465) Replication Causes Two Searcher Warmups
[ https://issues.apache.org/jira/browse/SOLR-3465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13456458#comment-13456458 ] Mark Miller commented on SOLR-3465: --- The commit should not need to open a new searcher - I'll change this. Replication Causes Two Searcher Warmups Key: SOLR-3465 URL: https://issues.apache.org/jira/browse/SOLR-3465 Project: Solr Issue Type: Bug Components: replication (java) Affects Versions: 4.0-ALPHA Reporter: Michael Garski Assignee: Mark Miller Fix For: 4.0, 5.0 I'm doing some testing with the current trunk, and am seeing that when a slave retrieves index updates from the master the warmup searcher registration is performed twice. Here is a snippet of the log that demonstrates this: May 16, 2012 6:02:02 PM org.apache.solr.handler.SnapPuller fetchLatestIndex INFO: Total time taken for download : 92 secs May 16, 2012 6:02:02 PM org.apache.solr.core.SolrDeletionPolicy onInit INFO: SolrDeletionPolicy.onInit: commits:num=2 commit{dir=/Users/mgarski/Code/indexes/solr2/geo/index,segFN=segments_1,generation=1,filenames=[segments_1] commit{dir=/Users/mgarski/Code/indexes/solr2/geo/index,segFN=segments_10,generation=36,filenames=[_45_0.tim, _45.fdt, segments_10, _45_0.tip, _45.fdx, _45.fnm, _45_0.frq, _45.per, _45_0.prx] May 16, 2012 6:02:02 PM org.apache.solr.core.SolrDeletionPolicy updateCommits INFO: newest commit = 36 May 16, 2012 6:02:02 PM org.apache.solr.search.SolrIndexSearcher init INFO: Opening Searcher@559fe5e6 main May 16, 2012 6:02:02 PM org.apache.solr.core.QuerySenderListener newSearcher INFO: QuerySenderListener sending requests to Searcher@559fe5e6 main{StandardDirectoryReader(segments_10:335:nrt _45(4.0):C1096375)} May 16, 2012 6:02:02 PM org.apache.solr.core.QuerySenderListener newSearcher INFO: QuerySenderListener done. 
May 16, 2012 6:02:02 PM org.apache.solr.core.SolrCore registerSearcher INFO: [geo] Registered new searcher Searcher@559fe5e6 main{StandardDirectoryReader(segments_10:335:nrt _45(4.0):C1096375)} May 16, 2012 6:02:02 PM org.apache.solr.update.DirectUpdateHandler2 commit INFO: start commit{flags=0,version=0,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false} May 16, 2012 6:02:02 PM org.apache.solr.search.SolrIndexSearcher init INFO: Opening Searcher@42101da9 main May 16, 2012 6:02:02 PM org.apache.solr.update.DirectUpdateHandler2 commit INFO: end_commit_flush May 16, 2012 6:02:02 PM org.apache.solr.core.QuerySenderListener newSearcher INFO: QuerySenderListener sending requests to Searcher@42101da9 main{StandardDirectoryReader(segments_10:335:nrt _45(4.0):C1096375)} May 16, 2012 6:02:02 PM org.apache.solr.core.QuerySenderListener newSearcher INFO: QuerySenderListener done. May 16, 2012 6:02:02 PM org.apache.solr.core.SolrCore registerSearcher INFO: [geo] Registered new searcher Searcher@42101da9 main{StandardDirectoryReader(segments_10:335:nrt _45(4.0):C1096375)} I am trying to determine the cause, does anyone have any idea where to start? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-3465) Replication Causes Two Searcher Warmups
[ https://issues.apache.org/jira/browse/SOLR-3465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller resolved SOLR-3465. --- Resolution: Fixed Fix committed. Replication Causes Two Searcher Warmups Key: SOLR-3465 URL: https://issues.apache.org/jira/browse/SOLR-3465 Project: Solr Issue Type: Bug Components: replication (java) Affects Versions: 4.0-ALPHA Reporter: Michael Garski Assignee: Mark Miller Fix For: 4.0, 5.0 I'm doing some testing with the current trunk, and am seeing that when a slave retrieves index updates from the master the warmup searcher registration is performed twice. Here is a snippet of the log that demonstrates this: May 16, 2012 6:02:02 PM org.apache.solr.handler.SnapPuller fetchLatestIndex INFO: Total time taken for download : 92 secs May 16, 2012 6:02:02 PM org.apache.solr.core.SolrDeletionPolicy onInit INFO: SolrDeletionPolicy.onInit: commits:num=2 commit{dir=/Users/mgarski/Code/indexes/solr2/geo/index,segFN=segments_1,generation=1,filenames=[segments_1] commit{dir=/Users/mgarski/Code/indexes/solr2/geo/index,segFN=segments_10,generation=36,filenames=[_45_0.tim, _45.fdt, segments_10, _45_0.tip, _45.fdx, _45.fnm, _45_0.frq, _45.per, _45_0.prx] May 16, 2012 6:02:02 PM org.apache.solr.core.SolrDeletionPolicy updateCommits INFO: newest commit = 36 May 16, 2012 6:02:02 PM org.apache.solr.search.SolrIndexSearcher init INFO: Opening Searcher@559fe5e6 main May 16, 2012 6:02:02 PM org.apache.solr.core.QuerySenderListener newSearcher INFO: QuerySenderListener sending requests to Searcher@559fe5e6 main{StandardDirectoryReader(segments_10:335:nrt _45(4.0):C1096375)} May 16, 2012 6:02:02 PM org.apache.solr.core.QuerySenderListener newSearcher INFO: QuerySenderListener done. 
May 16, 2012 6:02:02 PM org.apache.solr.core.SolrCore registerSearcher INFO: [geo] Registered new searcher Searcher@559fe5e6 main{StandardDirectoryReader(segments_10:335:nrt _45(4.0):C1096375)} May 16, 2012 6:02:02 PM org.apache.solr.update.DirectUpdateHandler2 commit INFO: start commit{flags=0,version=0,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false} May 16, 2012 6:02:02 PM org.apache.solr.search.SolrIndexSearcher init INFO: Opening Searcher@42101da9 main May 16, 2012 6:02:02 PM org.apache.solr.update.DirectUpdateHandler2 commit INFO: end_commit_flush May 16, 2012 6:02:02 PM org.apache.solr.core.QuerySenderListener newSearcher INFO: QuerySenderListener sending requests to Searcher@42101da9 main{StandardDirectoryReader(segments_10:335:nrt _45(4.0):C1096375)} May 16, 2012 6:02:02 PM org.apache.solr.core.QuerySenderListener newSearcher INFO: QuerySenderListener done. May 16, 2012 6:02:02 PM org.apache.solr.core.SolrCore registerSearcher INFO: [geo] Registered new searcher Searcher@42101da9 main{StandardDirectoryReader(segments_10:335:nrt _45(4.0):C1096375)} I am trying to determine the cause, does anyone have any idea where to start? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-3561) Error during deletion of shard/core
[ https://issues.apache.org/jira/browse/SOLR-3561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3561: -- Fix Version/s: (was: 4.0) 5.0 4.1 Error during deletion of shard/core --- Key: SOLR-3561 URL: https://issues.apache.org/jira/browse/SOLR-3561 Project: Solr Issue Type: Bug Components: multicore, replication (java), SolrCloud Affects Versions: 4.0-ALPHA Environment: Solr trunk (4.0-SNAPSHOT) from 29/2-2012 Reporter: Per Steffensen Assignee: Mark Miller Fix For: 4.1, 5.0 Running several Solr servers in a Cloud cluster (zkHost set on the Solr servers). Several collections with several slices and one replica for each slice (each slice has two shards). Basically we want to let our system delete an entire collection. We do this by trying to delete each and every shard under the collection. Each shard is deleted one by one, by doing CoreAdmin-UNLOAD-requests against the relevant Solr
{code}
CoreAdminRequest request = new CoreAdminRequest();
request.setAction(CoreAdminAction.UNLOAD);
request.setCoreName(shardName);
CoreAdminResponse resp = request.process(new CommonsHttpSolrServer(solrUrl));
{code}
The delete/unload succeeds, but in like 10% of the cases we get errors on the involved Solr servers, right around the time where shards/cores are deleted, and we end up in a situation where ZK still claims (forever) that the deleted shard is still present and active. From here the issue is more easily explained by a more concrete example: - 7 Solr servers involved - Several collections, among others one called collection_2012_04, consisting of 28 slices, 56 shards (remember 1 replica for each slice) named collection_2012_04_sliceX_shardY for all pairs in {X:1..28}x{Y:1,2} - Each Solr server running 8 shards, e.g. Solr server #1 is running shard collection_2012_04_slice1_shard1 and Solr server #7 is running shard collection_2012_04_slice1_shard2 belonging to the same slice slice1. 
When we decide to delete the collection collection_2012_04 we go through all 56 shards and delete/unload them one-by-one - including collection_2012_04_slice1_shard1 and collection_2012_04_slice1_shard2. At some point during or shortly after all this deletion we see the following exceptions in solr.log on Solr server #7
{code}
Aug 1, 2012 12:02:50 AM org.apache.solr.common.SolrException log
SEVERE: Error while trying to recover:org.apache.solr.common.SolrException: core not found:collection_2012_04_slice1_shard1
request: http://solr_server_1:8983/solr/admin/cores?action=PREPRECOVERY&core=collection_2012_04_slice1_shard1&nodeName=solr_server_7%3A8983_solr&coreNodeName=solr_server_7%3A8983_solr_collection_2012_04_slice1_shard2&state=recovering&checkLive=true&pauseFor=6000&wt=javabin&version=2
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
	at org.apache.solr.common.SolrExceptionPropagationHelper.decodeFromMsg(SolrExceptionPropagationHelper.java:29)
	at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:445)
	at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:264)
	at org.apache.solr.cloud.RecoveryStrategy.sendPrepRecoveryCmd(RecoveryStrategy.java:188)
	at org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:285)
	at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:206)
Aug 1, 2012 12:02:50 AM org.apache.solr.common.SolrException log
SEVERE: Recovery failed - trying again... 
Aug 1, 2012 12:02:51 AM org.apache.solr.cloud.LeaderElector$1 process
WARNING: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
	at java.util.ArrayList.RangeCheck(ArrayList.java:547)
	at java.util.ArrayList.get(ArrayList.java:322)
	at org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:96)
	at org.apache.solr.cloud.LeaderElector.access$000(LeaderElector.java:57)
	at org.apache.solr.cloud.LeaderElector$1.process(LeaderElector.java:121)
	at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:531)
	at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:507)
Aug 1, 2012 12:02:51 AM org.apache.solr.cloud.LeaderElector$1 process
{code}
I'm not sure exactly how to interpret this, but it seems to me that some recovery job tries to recover collection_2012_04_slice1_shard2 on Solr server #7 from collection_2012_04_slice1_shard1 on Solr server #1, but fails because Solr server #1
[jira] [Commented] (SOLR-2592) Pluggable shard lookup mechanism for SolrCloud
[ https://issues.apache.org/jira/browse/SOLR-2592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13456462#comment-13456462 ] Noble Paul commented on SOLR-2592: -- Why should we have a composite id? Why don't we just configure the shard.key as a different field from the uniqueKey and pass the value of the field (shard.key.val) for add/delete? Pluggable shard lookup mechanism for SolrCloud -- Key: SOLR-2592 URL: https://issues.apache.org/jira/browse/SOLR-2592 Project: Solr Issue Type: New Feature Components: SolrCloud Affects Versions: 4.0-ALPHA Reporter: Noble Paul Assignee: Mark Miller Attachments: dbq_fix.patch, pluggable_sharding.patch, pluggable_sharding_V2.patch, SOLR-2592.patch, SOLR-2592_r1373086.patch, SOLR-2592_r1384367.patch, SOLR-2592_rev_2.patch, SOLR_2592_solr_4_0_0_BETA_ShardPartitioner.patch If the data in a cloud can be partitioned on some criteria (say range, hash, attribute value etc), it will be easy to narrow down the search to a smaller subset of shards and in effect can achieve more efficient search. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
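The shard-key idea under discussion can be illustrated with a small, hypothetical routing sketch (class and method names are invented; this is not Solr's actual implementation): documents carrying the same shard key hash to the same shard, which is what lets a query scoped to that key be narrowed to a subset of shards.

```java
// Hypothetical sketch of hash-based shard routing on a designated shard key.
public class ShardKeyRouter {
    /** Pick a shard index in [0, numShards) from the shard key's hash. */
    public static int shardFor(String shardKey, int numShards) {
        // Mask the sign bit rather than using Math.abs, which fails
        // for Integer.MIN_VALUE.
        int h = shardKey.hashCode() & 0x7fffffff;
        return h % numShards;
    }

    public static void main(String[] args) {
        // All docs sharing a shard key land on the same shard, so a search
        // filtered by that key only needs to consult one shard.
        System.out.println("customerA -> shard " + shardFor("customerA", 4));
        System.out.println("customerB -> shard " + shardFor("customerB", 4));
    }
}
```

Whether the key travels inside a composite id or as a separate field (Noble's suggestion) only changes where the router reads it from; the routing function itself is the same.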
[jira] [Resolved] (SOLR-3645) /terms should become 4.x distrib compatible or default to distrib=false
[ https://issues.apache.org/jira/browse/SOLR-3645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller resolved SOLR-3645. --- Resolution: Fixed Committed the change - if we want /terms to be distrib compat, we should make a new issue. /terms should become 4.x distrib compatible or default to distrib=false --- Key: SOLR-3645 URL: https://issues.apache.org/jira/browse/SOLR-3645 Project: Solr Issue Type: Improvement Components: SearchComponents - other Affects Versions: 4.0-ALPHA Environment: SolrCloud, RHEL 5.4 Reporter: Nick Cotton Assignee: Mark Miller Priority: Minor Labels: feature Fix For: 4.0, 5.0 In a SolrCloud configuration, /terms does not return any terms when issued as follows: http://hostname:8983/solr/terms?terms.fl=name&terms=true&terms.limit=-1&isShard=true&terms.sort=index&terms.prefix=s but does return reasonable results when distrib is turned off like so http://hostname:8983/solr/terms?terms.fl=name&terms=true&distrib=false&terms.limit=-1&isShard=true&terms.sort=index&terms.prefix=s -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-3845) Rename numReplicas to replicationFactor in Collections API.
Mark Miller created SOLR-3845: - Summary: Rename numReplicas to replicationFactor in Collections API. Key: SOLR-3845 URL: https://issues.apache.org/jira/browse/SOLR-3845 Project: Solr Issue Type: Bug Components: SolrCloud Reporter: Mark Miller Assignee: Mark Miller Priority: Minor Fix For: 4.0, 5.0 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-3842) Analyzing Suggester
[ https://issues.apache.org/jira/browse/LUCENE-3842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13456468#comment-13456468 ] Robert Muir commented on LUCENE-3842: - Thanks for resurrecting this from the dead! I had forgotten just how fun this issue was :) Analyzing Suggester --- Key: LUCENE-3842 URL: https://issues.apache.org/jira/browse/LUCENE-3842 Project: Lucene - Core Issue Type: New Feature Components: modules/spellchecker Affects Versions: 3.6, 4.0-ALPHA Reporter: Robert Muir Attachments: LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842-TokenStream_to_Automaton.patch Since we added shortest-path wFSA search in LUCENE-3714, and generified the comparator in LUCENE-3801, I think we should look at implementing suggesters that have more capabilities than just basic prefix matching. In particular I think the most flexible approach is to integrate with Analyzer at both build and query time, such that we build a wFST with: input: analyzed text such as ghost0christmas0past -- byte 0 here is an optional token separator output: surface form such as the ghost of christmas past weight: the weight of the suggestion we make an FST with PairOutputs<weight,output>, but only do the shortest path operation on the weight side (like the test in LUCENE-3801), at the same time accumulating the output (surface form), which will be the actual suggestion. This allows a lot of flexibility: * Using even standardanalyzer means you can offer suggestions that ignore stopwords, e.g. if you type in ghost of chr..., it will suggest the ghost of christmas past * we can add support for synonyms/wdf/etc at both index and query time (there are tradeoffs here, and this is not implemented!) 
* this is a basis for more complicated suggesters such as Japanese suggesters, where the analyzed form is in fact the reading, so we would add a TokenFilter that copies ReadingAttribute into term text to support that... * other general things like offering suggestions that are more fuzzy like using a plural stemmer or ignoring accents or whatever. According to my benchmarks, suggestions are still very fast with the prototype (e.g. ~ 100,000 QPS), and the FST size does not explode (its short of twice that of a regular wFST, but this is still far smaller than TST or JaSpell, etc). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
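The analyzed-key/surface-form/weight scheme described in the issue can be sketched with a hypothetical toy (a sorted map standing in for the wFST; class and method names are invented, and this is far slower and larger than the real FST-based patch): analyzed keys map to (weight, surface form) pairs, and a prefix query returns the lowest-weight surface form, which is how typing "ghost of chr" can surface "the ghost of christmas past" even though analysis dropped the stopwords.

```java
import java.util.AbstractMap;
import java.util.Map;
import java.util.TreeMap;

// Toy stand-in for the weighted FST: analyzed form -> (weight, surface form).
public class ToySuggester {
    private final TreeMap<String, Map.Entry<Integer, String>> entries = new TreeMap<>();

    public void add(String analyzed, int weight, String surface) {
        entries.put(analyzed, new AbstractMap.SimpleEntry<>(weight, surface));
    }

    /** Lowest-weight surface form whose analyzed key starts with the given (analyzed) prefix. */
    public String suggest(String analyzedPrefix) {
        String best = null;
        int bestWeight = Integer.MAX_VALUE;
        // All keys in [prefix, prefix + '\uffff') share the prefix.
        for (Map.Entry<Integer, String> v
                : entries.subMap(analyzedPrefix, analyzedPrefix + '\uffff').values()) {
            if (v.getKey() < bestWeight) {
                bestWeight = v.getKey();
                best = v.getValue();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        ToySuggester s = new ToySuggester();
        // Analysis has stripped the stopwords, so "the ghost of christmas past"
        // is keyed by "ghost christmas past" (token separator shown as a space
        // here; the issue uses byte 0).
        s.add("ghost christmas past", 5, "the ghost of christmas past");
        s.add("ghost rider", 3, "ghost rider");
        System.out.println(s.suggest("ghost chr"));
    }
}
```

The real patch does the weight minimization as a shortest-path search over the FST instead of a linear scan, which is what keeps lookups at the ~100,000 QPS the benchmarks mention.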
[jira] [Updated] (LUCENE-3842) Analyzing Suggester
[ https://issues.apache.org/jira/browse/LUCENE-3842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael McCandless updated LUCENE-3842: --- Attachment: LUCENE-3842.patch Patch, addressing a few nocommits and getting PRESERVE_HOLES and PRESERVE_SEPS working. I did this by adding options to AnalyzingCompletionLookup, and then post-processing the returned automaton from TokenStreamToAutomaton. I also added a couple new nocommits Analyzing Suggester --- Key: LUCENE-3842 URL: https://issues.apache.org/jira/browse/LUCENE-3842 Project: Lucene - Core Issue Type: New Feature Components: modules/spellchecker Affects Versions: 3.6, 4.0-ALPHA Reporter: Robert Muir Attachments: LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842-TokenStream_to_Automaton.patch Since we added shortest-path wFSA search in LUCENE-3714, and generified the comparator in LUCENE-3801, I think we should look at implementing suggesters that have more capabilities than just basic prefix matching. In particular I think the most flexible approach is to integrate with Analyzer at both build and query time, such that we build a wFST with: input: analyzed text such as ghost0christmas0past -- byte 0 here is an optional token separator output: surface form such as the ghost of christmas past weight: the weight of the suggestion we make an FST with PairOutputs<weight,output>, but only do the shortest path operation on the weight side (like the test in LUCENE-3801), at the same time accumulating the output (surface form), which will be the actual suggestion. This allows a lot of flexibility: * Using even standardanalyzer means you can offer suggestions that ignore stopwords, e.g. 
if you type in ghost of chr..., it will suggest the ghost of christmas past * we can add support for synonyms/wdf/etc at both index and query time (there are tradeoffs here, and this is not implemented!) * this is a basis for more complicated suggesters such as Japanese suggesters, where the analyzed form is in fact the reading, so we would add a TokenFilter that copies ReadingAttribute into term text to support that... * other general things like offering suggestions that are more fuzzy like using a plural stemmer or ignoring accents or whatever. According to my benchmarks, suggestions are still very fast with the prototype (e.g. ~ 100,000 QPS), and the FST size does not explode (its short of twice that of a regular wFST, but this is still far smaller than TST or JaSpell, etc). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-3749) Default syncLevel cannot be configured by solrconfig.xml for updateLog(transaction log)
[ https://issues.apache.org/jira/browse/SOLR-3749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller resolved SOLR-3749. --- Resolution: Fixed Fix Version/s: 5.0 Committed - thanks Raintung Li! Default syncLevel cannot be configured by solrconfig.xml for updateLog(transaction log) --- Key: SOLR-3749 URL: https://issues.apache.org/jira/browse/SOLR-3749 Project: Solr Issue Type: Improvement Components: update Affects Versions: 4.0 Environment: Solr cloud Reporter: Raintung Li Assignee: Mark Miller Labels: log, syncLevel, transaction, updateLog Fix For: 4.0, 5.0 Attachments: configpatch, patch.txt Original Estimate: 24h Remaining Estimate: 24h In the Solr 4.0 environment, the transaction log has three defined sync levels: none/flush/fsync. The updateLog hard-codes the default sync level as SyncLevel.FLUSH. If a user wants to use one of the other levels, they have to rewrite RunUpdateProcess to set the level. Ideally, a user could set it in solrconfig.xml, where it is easy to control and use. BTW, the transaction log is very important for SolrCloud; ideally, sync should be invoked to make sure kernel memory is committed to disk, to avoid corner cases that might damage the transaction log. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
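With the fix committed, the sync level should presumably be configurable alongside the updateLog definition, roughly like the sketch below (the `syncLevel` element name is an assumption based on the issue's terminology; check the committed patch for the exact syntax):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <updateLog>
    <str name="dir">${solr.ulog.dir:}</str>
    <!-- one of: none | flush | fsync; fsync asks the kernel to push
         buffered tlog writes to disk, trading throughput for durability -->
    <str name="syncLevel">fsync</str>
  </updateLog>
</updateHandler>
```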
[jira] [Updated] (SOLR-3721) Multiple concurrent recoveries of same shard?
[ https://issues.apache.org/jira/browse/SOLR-3721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3721: -- Affects Version/s: (was: 4.0) Fix Version/s: (was: 4.0) 5.0 4.1 Multiple concurrent recoveries of same shard? - Key: SOLR-3721 URL: https://issues.apache.org/jira/browse/SOLR-3721 Project: Solr Issue Type: Bug Components: multicore, SolrCloud Environment: Using our own Solr release based on Apache revision 1355667 from 4.x branch. Our changes to the Solr version is our solutions to TLT-3178 etc., and should have no effect on this issue. Reporter: Per Steffensen Assignee: Mark Miller Labels: concurrency, multicore, recovery, solrcloud Fix For: 4.1, 5.0 Attachments: recovery_in_progress.png, recovery_start_finish.log We run a performance/endurance test on a 7 Solr instance SolrCloud setup and eventually Solrs lose ZK connections and go into recovery. BTW the recovery often does not ever succeed, but we are looking into that. While doing that I noticed that, according to logs, multiple recoveries are in progress at the same time for the same shard. That cannot be intended and I can certainly imagine that it will cause some problems. It is just the logs that are wrong, did I make some mistake, or is this a real bug? See attached grep from log, grepping only on Finished recovery and Starting recovery logs. {code} grep -B 1 Finished recovery\|Starting recovery solr9.log solr8.log solr7.log solr6.log solr5.log solr4.log solr3.log solr2.log solr1.log solr0.log recovery_start_finish.log {code} It can be hard to get an overview of the log, but I have generated a graph showing (based alone on Started recovery and Finished recovery logs) how many recoveries are in progress at any time for the different shards. See attached recovery_in_progress.png. 
The graph is also a little hard to get an overview of (due to the many shards) but it is clear that for several shards there are multiple recoveries going on at the same time, and that several recoveries never succeed. Regards, Per Steffensen -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-3845) Rename numReplicas to replicationFactor in Collections API.
[ https://issues.apache.org/jira/browse/SOLR-3845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller resolved SOLR-3845. --- Resolution: Fixed Committed. Rename numReplicas to replicationFactor in Collections API. --- Key: SOLR-3845 URL: https://issues.apache.org/jira/browse/SOLR-3845 Project: Solr Issue Type: Bug Components: SolrCloud Reporter: Mark Miller Assignee: Mark Miller Priority: Minor Fix For: 4.0, 5.0 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-3173) Database semantics - insert and update
[ https://issues.apache.org/jira/browse/SOLR-3173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3173: -- Fix Version/s: (was: 4.0) 5.0 4.1 Database semantics - insert and update -- Key: SOLR-3173 URL: https://issues.apache.org/jira/browse/SOLR-3173 Project: Solr Issue Type: New Feature Components: update Affects Versions: 3.5 Environment: All Reporter: Per Steffensen Assignee: Per Steffensen Labels: RDBMS, insert, nosql, uniqueKey, update Fix For: 4.1, 5.0 Original Estimate: 168h Remaining Estimate: 168h In order to increase the ability of Solr to be used as a NoSQL database (lots of concurrent inserts, updates, deletes and queries over the entire lifetime of the index) instead of just a search index (first: everything indexed (in one thread); after: only queries), I would like Solr to support the following features, inspired by RDBMSs and other NoSQL databases. * Given a solr-core with a schema containing a uniqueKey-field uniqueField and a document Dold, when trying to INSERT a new document Dnew where Dold.uniqueField is equal to Dnew.uniqueField, then I want a DocumentAlreadyExists error. If no such document Dold exists, I want Dnew indexed into the solr-core. * Given a solr-core with a schema containing a uniqueKey-field uniqueField and a document Dold, when trying to UPDATE a document Dnew where Dold.uniqueField is equal to Dnew.uniqueField, I want Dold deleted from and Dnew added to the index (just as it is today). If no such document Dold exists, I want nothing to happen (Dnew is not added to the index). The essence of this issue is to be able to state your intent (insert or update) and get slightly different semantics (from each other and from the existing update) depending on your intent. The functionality provided by this issue is only really meaningful when you run with updateLog activated. 
This issue might be solved more or less at the same time as SOLR-3178, and only one single SVN patch might be given to cover both issues. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
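The requested semantics can be summarized in a few lines. This is a hypothetical sketch over a plain map, not Solr API: the class and method names (IntentSemantics, insert, update) are illustrative, and only the decision logic from the two bullet points above is modeled.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the issue's requested semantics (names illustrative,
// not Solr API): INSERT fails if a document with the same uniqueKey exists;
// UPDATE replaces an existing document and is a no-op otherwise.
public class IntentSemantics {

    /** Models the DocumentAlreadyExists error the issue asks for. */
    static class DocumentAlreadyExists extends RuntimeException {
    }

    // Stand-in for the index, keyed by the uniqueKey field.
    private final Map<String, String> index = new HashMap<>();

    /** INSERT: Dnew is indexed only if no Dold shares its uniqueKey. */
    public void insert(String uniqueKey, String doc) {
        if (index.containsKey(uniqueKey)) {
            throw new DocumentAlreadyExists();
        }
        index.put(uniqueKey, doc);
    }

    /** UPDATE: Dold is replaced by Dnew; if no Dold exists, nothing happens. */
    public boolean update(String uniqueKey, String doc) {
        if (!index.containsKey(uniqueKey)) {
            return false; // no-op, Dnew is not added
        }
        index.put(uniqueKey, doc);
        return true;
    }
}
```

The point of the sketch is only that the two verbs diverge on the "document already exists" check, which is also why the issue notes the feature is only meaningful with the updateLog active.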
[jira] [Commented] (SOLR-3488) Create a Collections API for SolrCloud
[ https://issues.apache.org/jira/browse/SOLR-3488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13456473#comment-13456473 ] Mark Miller commented on SOLR-3488: --- SOLR-3845 : Rename numReplicas to replicationFactor in Collections API. Create a Collections API for SolrCloud -- Key: SOLR-3488 URL: https://issues.apache.org/jira/browse/SOLR-3488 Project: Solr Issue Type: New Feature Components: SolrCloud Reporter: Mark Miller Assignee: Mark Miller Fix For: 4.0 Attachments: SOLR-3488_2.patch, SOLR-3488.patch, SOLR-3488.patch, SOLR-3488.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-3488) Create a Collections API for SolrCloud
[ https://issues.apache.org/jira/browse/SOLR-3488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13456475#comment-13456475 ] Mark Miller commented on SOLR-3488: --- {quote} First, I am more in favor of the approach where all config and config changes are done against ZK. I do not like the idea of having to start a solr node in order to define a new collection or change various configs. All initial config as well as config changes should be possible to check in to source control and roll out to a cluster as files without starting and stopping live nodes (perhaps except ZK itself). {quote} I'm not sure I follow that...why would you need to start a solr node to deal with collections? The collection manager is designed the same way as core admin - you should not need a core... Create a Collections API for SolrCloud -- Key: SOLR-3488 URL: https://issues.apache.org/jira/browse/SOLR-3488 Project: Solr Issue Type: New Feature Components: SolrCloud Reporter: Mark Miller Assignee: Mark Miller Fix For: 4.0 Attachments: SOLR-3488_2.patch, SOLR-3488.patch, SOLR-3488.patch, SOLR-3488.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-3842) Analyzing Suggester
[ https://issues.apache.org/jira/browse/LUCENE-3842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael McCandless updated LUCENE-3842: --- Attachment: LUCENE-3842.patch New patch, removing preserve holes option from AnalyzingCompletionLookup: you can simply tell your StopFilter whether or not holes are meaningful. Analyzing Suggester --- Key: LUCENE-3842 URL: https://issues.apache.org/jira/browse/LUCENE-3842 Project: Lucene - Core Issue Type: New Feature Components: modules/spellchecker Affects Versions: 3.6, 4.0-ALPHA Reporter: Robert Muir Attachments: LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842-TokenStream_to_Automaton.patch Since we added shortest-path wFSA search in LUCENE-3714, and generified the comparator in LUCENE-3801, I think we should look at implementing suggesters that have more capabilities than just basic prefix matching. In particular I think the most flexible approach is to integrate with Analyzer at both build and query time, such that we build a wFST with: input: analyzed text such as ghost0christmas0past -- byte 0 here is an optional token separator output: surface form such as the ghost of christmas past weight: the weight of the suggestion we make an FST with PairOutputsweight,output, but only do the shortest path operation on the weight side (like the test in LUCENE-3801), at the same time accumulating the output (surface form), which will be the actual suggestion. This allows a lot of flexibility: * Using even standardanalyzer means you can offer suggestions that ignore stopwords, e.g. if you type in ghost of chr..., it will suggest the ghost of christmas past * we can add support for synonyms/wdf/etc at both index and query time (there are tradeoffs here, and this is not implemented!) 
* this is a basis for more complicated suggesters such as Japanese suggesters, where the analyzed form is in fact the reading, so we would add a TokenFilter that copies ReadingAttribute into term text to support that... * other general things like offering suggestions that are more fuzzy, like using a plural stemmer or ignoring accents or whatever. According to my benchmarks, suggestions are still very fast with the prototype (e.g. ~ 100,000 QPS), and the FST size does not explode (it's short of twice that of a regular wFST, but this is still far smaller than TST or JaSpell, etc). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
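The "analyzed text such as ghost0christmas0past" key layout above can be sketched concretely. This is an illustrative reconstruction, not the patch's code: the class and method names are made up, and it only shows how analyzed tokens would be joined with an optional byte-0 separator to form the FST input, while the surface form stays as the output.

```java
// Illustrative sketch (not the LUCENE-3842 patch): build the FST input key
// from analyzed tokens, joined with an optional 0x00 token separator, as in
// "ghost" + 0 + "christmas" + 0 + "past" for "the ghost of christmas past".
public class SuggestKey {

    public static String analyzedKey(String[] tokens, boolean preserveSeps) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < tokens.length; i++) {
            if (i > 0 && preserveSeps) {
                sb.append('\u0000'); // byte 0: the optional token separator
            }
            sb.append(tokens[i]);
        }
        return sb.toString();
    }
}
```

Dropping the separators (as the Japanese reading use case in the following comment prefers) collapses the key to the concatenated tokens, making lookup insensitive to exact tokenization boundaries.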
[jira] [Commented] (LUCENE-3842) Analyzing Suggester
[ https://issues.apache.org/jira/browse/LUCENE-3842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13456477#comment-13456477 ] Robert Muir commented on LUCENE-3842: - +1 for that. Let's keep this as simple as possible and leave the responsibility to the analyzer as much as possible. My main concern for the PRESERVE_SEPS was for the Japanese use case: we don't much care what the actual tokenization of the Japanese words was, only the concatenated reading string. If the tokenization is a little off but the concatenation of all the readings is still correct, then we are OK. So it makes it more robust against tokenization differences, especially considering it's partial inputs going into this thing (not whole words) Analyzing Suggester --- Key: LUCENE-3842 URL: https://issues.apache.org/jira/browse/LUCENE-3842 Project: Lucene - Core Issue Type: New Feature Components: modules/spellchecker Affects Versions: 3.6, 4.0-ALPHA Reporter: Robert Muir Attachments: LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842-TokenStream_to_Automaton.patch [...issue description identical to the LUCENE-3842 update above...] -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0-ea-b51) - Build # 1139 - Still Failing!
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-trunk-Linux/1139/ Java: 64bit/jdk1.8.0-ea-b51 -XX:+UseSerialGC All tests passed Build Log: [...truncated 21180 lines...] -check-forbidden-java-apis: [forbidden-apis] Reading API signatures: /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/forbiddenApis/commons-io.txt [forbidden-apis] Reading API signatures: /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/forbiddenApis/executors.txt [forbidden-apis] Reading API signatures: /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/forbiddenApis/jdk-deprecated.txt [forbidden-apis] Reading API signatures: /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/forbiddenApis/jdk.txt [forbidden-apis] Loading classes to check... [forbidden-apis] Scanning for API signatures and dependencies... [forbidden-apis] Forbidden method invocation: java.lang.String#toUpperCase() [forbidden-apis] in org.apache.solr.update.UpdateLog$SyncLevel (UpdateLog.java:71) [forbidden-apis] Scanned 1870 (and 615 related) class file(s) for forbidden API invocations (in 1.45s), 1 error(s). BUILD FAILED /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:67: The following error occurred while executing this line: /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build.xml:215: Check for forbidden API calls failed, see log. Total time: 30 minutes 43 seconds Build step 'Invoke Ant' marked build as failure Recording test results Description set: Java: 64bit/jdk1.8.0-ea-b51 -XX:+UseSerialGC Email was triggered for: Failure Sending email for trigger: Failure - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-2592) Pluggable shard lookup mechanism for SolrCloud
[ https://issues.apache.org/jira/browse/SOLR-2592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13456482#comment-13456482 ] Michael Garski commented on SOLR-2592: -- That was my initial thought as well, however if the value that is hashed on to determine shard membership cannot be extracted from the unique id, the realtime get handler would have to broadcast the query to all shards in the collection. Perhaps the shard.keys parameter could be added to the realtime get handler, but that seems to be counter to the handler's purpose. Pluggable shard lookup mechanism for SolrCloud -- Key: SOLR-2592 URL: https://issues.apache.org/jira/browse/SOLR-2592 Project: Solr Issue Type: New Feature Components: SolrCloud Affects Versions: 4.0-ALPHA Reporter: Noble Paul Assignee: Mark Miller Attachments: dbq_fix.patch, pluggable_sharding.patch, pluggable_sharding_V2.patch, SOLR-2592.patch, SOLR-2592_r1373086.patch, SOLR-2592_r1384367.patch, SOLR-2592_rev_2.patch, SOLR_2592_solr_4_0_0_BETA_ShardPartitioner.patch If the data in a cloud can be partitioned on some criteria (say range, hash, attribute value etc) It will be easy to narrow down the search to a smaller subset of shards and in effect can achieve more efficient search. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
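The routing problem the comment describes can be made concrete with a minimal sketch. This is an illustrative hash partitioner, not Solr's actual implementation or the attached patches: if shard membership is a pure function of the unique id, a realtime get can target one shard; if the hashed value cannot be recovered from the id, the request must be broadcast to all shards.

```java
// Illustrative sketch (not Solr's implementation): route a document to a
// shard purely from its unique id, so a realtime get for that id can be sent
// to a single shard instead of being broadcast to the whole collection.
public class HashShardLookup {

    /** Maps a unique id to a shard index in [0, numShards). */
    public static int shardFor(String uniqueId, int numShards) {
        // floorMod keeps the result non-negative even for negative hash codes.
        return Math.floorMod(uniqueId.hashCode(), numShards);
    }
}
```

A pluggable mechanism that hashes on some other field value breaks this property, which is exactly the trade-off raised for the realtime get handler above.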
[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0-ea-b51) - Build # 1126 - Failure!
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-4.x-Linux/1126/ Java: 32bit/jdk1.8.0-ea-b51 -client -XX:+UseG1GC All tests passed Build Log: [...truncated 21141 lines...] -check-forbidden-java-apis: [forbidden-apis] Reading API signatures: /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/tools/forbiddenApis/commons-io.txt [forbidden-apis] Reading API signatures: /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/tools/forbiddenApis/executors.txt [forbidden-apis] Reading API signatures: /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/tools/forbiddenApis/jdk-deprecated.txt [forbidden-apis] Reading API signatures: /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/tools/forbiddenApis/jdk.txt [forbidden-apis] Loading classes to check... [forbidden-apis] Scanning for API signatures and dependencies... [forbidden-apis] Forbidden method invocation: java.lang.String#toUpperCase() [forbidden-apis] in org.apache.solr.update.UpdateLog$SyncLevel (UpdateLog.java:71) [forbidden-apis] Scanned 1872 (and 615 related) class file(s) for forbidden API invocations (in 1.46s), 1 error(s). BUILD FAILED /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:67: The following error occurred while executing this line: /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/build.xml:215: Check for forbidden API calls failed, see log. Total time: 30 minutes 35 seconds Build step 'Invoke Ant' marked build as failure Recording test results Description set: Java: 32bit/jdk1.8.0-ea-b51 -client -XX:+UseG1GC Email was triggered for: Failure Sending email for trigger: Failure - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) - Build # 788 - Failure!
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-4.x-Windows/788/ Java: 32bit/jdk1.7.0_07 -server -XX:+UseG1GC All tests passed Build Log: [...truncated 21144 lines...] -check-forbidden-java-apis: [forbidden-apis] Reading API signatures: C:\Jenkins\workspace\Lucene-Solr-4.x-Windows\lucene\tools\forbiddenApis\commons-io.txt [forbidden-apis] Reading API signatures: C:\Jenkins\workspace\Lucene-Solr-4.x-Windows\lucene\tools\forbiddenApis\executors.txt [forbidden-apis] Reading API signatures: C:\Jenkins\workspace\Lucene-Solr-4.x-Windows\lucene\tools\forbiddenApis\jdk-deprecated.txt [forbidden-apis] Reading API signatures: C:\Jenkins\workspace\Lucene-Solr-4.x-Windows\lucene\tools\forbiddenApis\jdk.txt [forbidden-apis] Loading classes to check... [forbidden-apis] Scanning for API signatures and dependencies... [forbidden-apis] Forbidden method invocation: java.lang.String#toUpperCase() [forbidden-apis] in org.apache.solr.update.UpdateLog$SyncLevel (UpdateLog.java:71) [forbidden-apis] Scanned 1872 (and 614 related) class file(s) for forbidden API invocations (in 2.65s), 1 error(s). BUILD FAILED C:\Jenkins\workspace\Lucene-Solr-4.x-Windows\build.xml:67: The following error occurred while executing this line: C:\Jenkins\workspace\Lucene-Solr-4.x-Windows\solr\build.xml:215: Check for forbidden API calls failed, see log. Total time: 45 minutes 34 seconds Build step 'Invoke Ant' marked build as failure Recording test results Description set: Java: 32bit/jdk1.7.0_07 -server -XX:+UseG1GC Email was triggered for: Failure Sending email for trigger: Failure - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0-ea-b51) - Build # 1140 - Still Failing!
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-trunk-Linux/1140/ Java: 64bit/jdk1.8.0-ea-b51 -XX:+UseG1GC All tests passed Build Log: [...truncated 21233 lines...] -check-forbidden-java-apis: [forbidden-apis] Reading API signatures: /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/forbiddenApis/commons-io.txt [forbidden-apis] Reading API signatures: /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/forbiddenApis/executors.txt [forbidden-apis] Reading API signatures: /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/forbiddenApis/jdk-deprecated.txt [forbidden-apis] Reading API signatures: /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/forbiddenApis/jdk.txt [forbidden-apis] Loading classes to check... [forbidden-apis] Scanning for API signatures and dependencies... [forbidden-apis] Forbidden method invocation: java.lang.String#toUpperCase() [forbidden-apis] in org.apache.solr.update.UpdateLog$SyncLevel (UpdateLog.java:71) [forbidden-apis] Scanned 1870 (and 615 related) class file(s) for forbidden API invocations (in 1.18s), 1 error(s). BUILD FAILED /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:67: The following error occurred while executing this line: /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build.xml:215: Check for forbidden API calls failed, see log. Total time: 27 minutes 45 seconds Build step 'Invoke Ant' marked build as failure Recording test results Description set: Java: 64bit/jdk1.8.0-ea-b51 -XX:+UseG1GC Email was triggered for: Failure Sending email for trigger: Failure - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0-ea-b51) - Build # 1139 - Still Failing!
Forbidden API call - home in a bit and will fix. Sent from my iPhone On Sep 15, 2012, at 3:40 PM, Policeman Jenkins Server jenk...@sd-datasolutions.de wrote: Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-trunk-Linux/1139/ Java: 64bit/jdk1.8.0-ea-b51 -XX:+UseSerialGC All tests passed [...quoted build log truncated; identical to the Build # 1139 report above...] - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-4.x-Windows (64bit/jdk1.7.0_07) - Build # 789 - Still Failing!
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-4.x-Windows/789/ Java: 64bit/jdk1.7.0_07 -XX:+UseSerialGC All tests passed Build Log: [...truncated 21158 lines...] -jenkins-javadocs-lint: javadocs-lint: [...truncated 1614 lines...] javadocs-lint: [exec] [exec] Crawl/parse... [exec] [exec] Verify... [...truncated 400 lines...] [javadoc] Generating Javadoc [javadoc] Javadoc execution [javadoc] Loading source files for package org.apache.solr... [javadoc] Loading source files for package org.apache.solr.analysis... [javadoc] Loading source files for package org.apache.solr.client.solrj.embedded... [javadoc] Loading source files for package org.apache.solr.cloud... [javadoc] Loading source files for package org.apache.solr.common... [javadoc] Loading source files for package org.apache.solr.core... [javadoc] Loading source files for package org.apache.solr.handler... [javadoc] Loading source files for package org.apache.solr.handler.admin... [javadoc] Loading source files for package org.apache.solr.handler.component... [javadoc] Loading source files for package org.apache.solr.handler.loader... [javadoc] Loading source files for package org.apache.solr.highlight... [javadoc] Loading source files for package org.apache.solr.internal.csv... [javadoc] Loading source files for package org.apache.solr.internal.csv.writer... [javadoc] Loading source files for package org.apache.solr.logging... [javadoc] Loading source files for package org.apache.solr.logging.jul... [javadoc] Loading source files for package org.apache.solr.request... [javadoc] Loading source files for package org.apache.solr.response... [javadoc] Loading source files for package org.apache.solr.response.transform... [javadoc] Loading source files for package org.apache.solr.schema... [javadoc] Loading source files for package org.apache.solr.search... [javadoc] Loading source files for package org.apache.solr.search.function... 
[javadoc] Loading source files for package org.apache.solr.search.function.distance... [javadoc] Loading source files for package org.apache.solr.search.grouping... [javadoc] Loading source files for package org.apache.solr.search.grouping.collector... [javadoc] Loading source files for package org.apache.solr.search.grouping.distributed... [javadoc] Loading source files for package org.apache.solr.search.grouping.distributed.command... [javadoc] Loading source files for package org.apache.solr.search.grouping.distributed.requestfactory... [javadoc] Loading source files for package org.apache.solr.search.grouping.distributed.responseprocessor... [javadoc] Loading source files for package org.apache.solr.search.grouping.distributed.shardresultserializer... [javadoc] Loading source files for package org.apache.solr.search.grouping.endresulttransformer... [javadoc] Loading source files for package org.apache.solr.search.similarities... [javadoc] Loading source files for package org.apache.solr.servlet... [javadoc] Loading source files for package org.apache.solr.servlet.cache... [javadoc] Loading source files for package org.apache.solr.spelling... [javadoc] Loading source files for package org.apache.solr.spelling.suggest... [javadoc] Loading source files for package org.apache.solr.spelling.suggest.fst... [javadoc] Loading source files for package org.apache.solr.spelling.suggest.jaspell... [javadoc] Loading source files for package org.apache.solr.spelling.suggest.tst... [javadoc] Loading source files for package org.apache.solr.update... [javadoc] Loading source files for package org.apache.solr.update.processor... [javadoc] Loading source files for package org.apache.solr.util... [javadoc] Loading source files for package org.apache.solr.util.plugin... [javadoc] Loading source files for package org.apache.solr.util.xslt... [javadoc] Loading source files for package org.apache.noggit... [javadoc] Loading source files for package org.apache.solr.client.solrj... 
[javadoc] Loading source files for package org.apache.solr.client.solrj.beans... [javadoc] Loading source files for package org.apache.solr.client.solrj.impl... [javadoc] Loading source files for package org.apache.solr.client.solrj.request... [javadoc] Loading source files for package org.apache.solr.client.solrj.response... [javadoc] Loading source files for package org.apache.solr.client.solrj.util... [javadoc] Loading source files for package org.apache.solr.common.cloud... [javadoc] Loading source files for package org.apache.solr.common.luke... [javadoc] Loading source files for package org.apache.solr.common.params... [javadoc] Loading source files for package org.apache.solr.common.util... [javadoc] Loading source files for package org.apache.zookeeper... [javadoc] Loading source files for package org.apache.solr.handler.clustering... [javadoc] Loading source files for package
[jira] [Commented] (SOLR-3488) Create a Collections API for SolrCloud
[ https://issues.apache.org/jira/browse/SOLR-3488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13456498#comment-13456498 ] Jan Høydahl commented on SOLR-3488: --- bq. I'm not sure I follow that...why would you need to start a solr node to deal with collections? The collection manager is designed the same way as core admin - you should not need a core... So how to create a new collection offline? Push an updated solr.xml to ZK? Create a Collections API for SolrCloud -- Key: SOLR-3488 URL: https://issues.apache.org/jira/browse/SOLR-3488 Project: Solr Issue Type: New Feature Components: SolrCloud Reporter: Mark Miller Assignee: Mark Miller Fix For: 4.0 Attachments: SOLR-3488_2.patch, SOLR-3488.patch, SOLR-3488.patch, SOLR-3488.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_07) - Build # 1142 - Failure!
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-trunk-Linux/1142/ Java: 32bit/jdk1.7.0_07 -server -XX:+UseG1GC All tests passed Build Log: [...truncated 20864 lines...] -jenkins-javadocs-lint: javadocs-lint: [...truncated 1642 lines...] javadocs-lint: [exec] Crawl/parse... [exec] Verify... [...truncated 400 lines...] [javadoc] Generating Javadoc [javadoc] Javadoc execution [javadoc] Loading source files for package org.apache.solr... [...]
[jira] [Commented] (SOLR-3488) Create a Collections API for SolrCloud
[ https://issues.apache.org/jira/browse/SOLR-3488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13456508#comment-13456508 ]

Mark Miller commented on SOLR-3488:
---

Yes, if you want to predefine a collection you do it in solr.xml, the same way that collection1 is done. Otherwise, you can start Solr with no collections and create them with the API.

Create a Collections API for SolrCloud
--
Key: SOLR-3488
URL: https://issues.apache.org/jira/browse/SOLR-3488
Project: Solr
Issue Type: New Feature
Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
Fix For: 4.0
Attachments: SOLR-3488_2.patch, SOLR-3488.patch, SOLR-3488.patch, SOLR-3488.patch
--
This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
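[Editor's sketch of Mark Miller's two options above, under the stock Solr 4.x example layout; the collection name, shard and replica counts, and port below are illustrative, not taken from the patch:]

```xml
<!-- solr.xml sketch: predefining a collection "the same way that collection1
     is done" in the stock 4.x example config; names here are illustrative -->
<solr persistent="true">
  <cores adminPath="/admin/cores" defaultCoreName="collection1"
         host="${host:}" hostPort="${jetty.port:}">
    <core name="collection1" instanceDir="collection1" />
  </cores>
</solr>
```

Alternatively, with the nodes started empty, a Collections API call of the form `/admin/collections?action=CREATE&name=mycollection&numShards=2&replicationFactor=2` (against any live node, e.g. port 8983) creates a collection on the fly.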
[jira] [Updated] (SOLR-3733) better organization of javadocs in release
[ https://issues.apache.org/jira/browse/SOLR-3733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Uwe Schindler updated SOLR-3733:

Attachment: SOLR-3733.patch

Häckidy-Hick-Hack-Pätch:
- linter of course does not pass
- links between contribs/core/... not working/enabled
- the documentation directory is structured like in Lucene; all javadocs go to the top-level build/docs folder
- index.html is created there automatically by XSL, including links to the tutorial (more will come later)
- tutorial.html moved from doc-files up into the site/html folder (as it does not relate to the javadocs at all, it's just a plain old HTML page)
- there are still some doc-files folders in core which are completely misplaced (and don't work); they seem to be dead docs
- javadocs-all nuked

I will need Robert's help to get inter-module links working.

better organization of javadocs in release
--
Key: SOLR-3733
URL: https://issues.apache.org/jira/browse/SOLR-3733
Project: Solr
Issue Type: Task
Components: Build
Reporter: Robert Muir
Attachments: SOLR-3733.patch

Just an issue to try to improve the stuff mentioned in SOLR-3690. Currently there is one directory, api/, which is 'all javadocs'; then the solrj javadocs are duplicated again under api/solrj, and now test-framework is underneath as well. But this isn't very navigable, and these additional folders (duplicate or not) are hidden behind the index.html on the website. We can improve this and probably generate a landing page from the build.xml descriptions etc., like Lucene.
--
[JENKINS] Lucene-Solr-4.x-Windows (64bit/jdk1.7.0_07) - Build # 791 - Failure!
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-4.x-Windows/791/ Java: 64bit/jdk1.7.0_07 -XX:+UseSerialGC All tests passed Build Log: [...truncated 20779 lines...] -jenkins-javadocs-lint: javadocs-lint: [...truncated 1615 lines...] javadocs-lint: [exec] Crawl/parse... [exec] Verify... [...truncated 400 lines...] [javadoc] Generating Javadoc [javadoc] Javadoc execution [javadoc] Loading source files for package org.apache.solr... [...]
[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_07) - Build # 1145 - Failure!
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-trunk-Linux/1145/ Java: 32bit/jdk1.7.0_07 -client -XX:+UseConcMarkSweepGC All tests passed Build Log: [...truncated 20878 lines...] -jenkins-javadocs-lint: javadocs-lint: [...truncated 1642 lines...] javadocs-lint: [exec] Crawl/parse... [exec] Verify... [...truncated 400 lines...] [javadoc] Generating Javadoc [javadoc] Javadoc execution [javadoc] Loading source files for package org.apache.solr... [...]
[JENKINS] Lucene-Solr-SmokeRelease-trunk - Build # 15 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-trunk/15/ No tests ran. Build Log: [...truncated 26158 lines...] prepare-release-no-sign: [mkdir] Created dir: /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/fakeRelease [copy] Copying 396 files to /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/fakeRelease/lucene [copy] Copying 4 files to /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/fakeRelease/lucene/changes [get] Getting: http://people.apache.org/keys/group/lucene.asc [get] To: /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/fakeRelease/lucene/KEYS [copy] Copying 189 files to /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/fakeRelease/solr [copy] Copying 1 file to /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/fakeRelease/solr [exec] [exec] Load release URL file:/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/fakeRelease/... [exec] [exec] Test Lucene... [exec] test basics... [exec] get KEYS [exec] 0.1 MB [exec] check changes HTML... [exec] download lucene-5.0-src.tgz... [exec] 26.1 MB [exec] verify md5/sha1 digests [exec] download lucene-5.0.tgz... [exec] 46.9 MB [exec] verify md5/sha1 digests [exec] download lucene-5.0.zip... [exec] 56.1 MB [exec] verify md5/sha1 digests [exec] unpack lucene-5.0.tgz... 
[exec] Traceback (most recent call last):
[exec]   File "dev-tools/scripts/smokeTestRelease.py", line 1105, in <module>
[exec]   File "dev-tools/scripts/smokeTestRelease.py", line 1055, in main
[exec]   File "dev-tools/scripts/smokeTestRelease.py", line 1090, in smokeTest
[exec]   File "dev-tools/scripts/smokeTestRelease.py", line 412, in unpack
[exec]   File "dev-tools/scripts/smokeTestRelease.py", line 454, in verifyUnpacked
[exec] RuntimeError: lucene: classification codecs missing from artifact lucene-5.0.tgz

BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/build.xml:278: exec returned: 1

Total time: 13 minutes 51 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
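[Editor's note: the `verifyUnpacked` failure above is the smoke tester noticing that expected top-level folders are absent from the unpacked source artifact. A minimal sketch of that kind of check, not the actual smokeTestRelease.py code — the function name and error format here are illustrative:]

```python
import os

def verify_unpacked(unpack_dir, expected_dirs):
    """Raise RuntimeError naming any expected folder absent from unpack_dir.

    Sketch of the check smokeTestRelease.py performs after unpacking an
    artifact; names and message format are illustrative.
    """
    present = set(os.listdir(unpack_dir))
    missing = sorted(d for d in expected_dirs if d not in present)
    if missing:
        raise RuntimeError('lucene: %s missing from artifact' % ' '.join(missing))
```

Under this reading, the build broke because the new `classification` module (and `codecs`) were not yet listed in the packaging that produces lucene-5.0.tgz, so the unpacked tree failed the expected-folders check.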