[jira] [Commented] (SOLR-3981) docBoost is compounded on copyField
[ https://issues.apache.org/jira/browse/SOLR-3981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483047#comment-13483047 ] Toke Eskildsen commented on SOLR-3981: -- Thank you for investigating this so quickly, Hoss. Applying the boosts once from all source fields for a given copyField destination seems a bit strange to me, but since it is old behaviour, I understand that it cannot be changed. > docBoost is compounded on copyField > --- > > Key: SOLR-3981 > URL: https://issues.apache.org/jira/browse/SOLR-3981 > Project: Solr > Issue Type: Bug >Affects Versions: 4.0 >Reporter: Hoss Man >Assignee: Hoss Man > Fix For: 4.1 > > Attachments: SOLR-3981.patch, SOLR-3981.patch > > > As noted by Toke in a comment on SOLR-3875... > https://issues.apache.org/jira/browse/SOLR-3875?focusedCommentId=13482233&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13482233 > {quote} > While boosting of multi-value fields is handled correctly in Solr 4.0.0, > boosting for copyFields are not. A sample document: > {code} > > Insane score Example. Score = 10E9 > Document boost broken for copyFields > video ThomasEgense and Toke Eskildsen > Test > bug > something else > bug > bug > > {code} > The fields name, manu, cat, features, keywords and content gets copied to > text and a search for thomasegense matches the text-field with query > explanation > {code} > 70384.67 = (MATCH) weight(text:thomasegense in 0) [DefaultSimilarity], result > of: > 70384.67 = fieldWeight in 0, product of: > 1.0 = tf(freq=1.0), with freq of: > 1.0 = termFreq=1.0 > 0.30685282 = idf(docFreq=1, maxDocs=1) > 229376.0 = fieldNorm(doc=0) > {code} > If the two last fields keywords and content are removed from the sample > document, the score is reduced by a factor 100 (docBoost^2). 
> {quote} > (This is a continuation of some of the problems caused by the changes made > when the concept of docBoost was eliminated from the underlying IndexWriter > code, and overlooked due to the lack of testing of docBoosts at the solr > level - SOLR-3885) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
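The compounding Toke reports can be sketched numerically. This is a rough illustration only, not Solr code: if the document boost is applied once per boosted source field copied into the copyField destination, the destination's norm scales as boost^n, which matches the reported factor-of-100 (docBoost^2) drop when two source fields are removed. The class and method names below are hypothetical.

```java
// Rough numeric illustration (not Solr code) of how a document boost
// compounds on a copyField destination: the boost is applied once per
// boosted source field copied, so the destination norm scales as
// docBoost^numSourceFields. Real Solr scores differ because norms are
// quantized into a single byte before storage.
public class CopyFieldBoost {
  static double effectiveBoost(double docBoost, int numSourceFields) {
    return Math.pow(docBoost, numSourceFields);
  }

  public static void main(String[] args) {
    double full = effectiveBoost(10.0, 6);    // six boosted source fields copied
    double reduced = effectiveBoost(10.0, 4); // two source fields removed
    // Score drops by docBoost^2 == 100, matching the report above.
    System.out.println(full / reduced); // prints 100.0
  }
}
```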
[jira] [Created] (LUCENE-4502) Highlighter does not highlight when using large exact phrase query
Nicolas Labrot created LUCENE-4502: -- Summary: Highlighter does not highlight when using large exact phrase query Key: LUCENE-4502 URL: https://issues.apache.org/jira/browse/LUCENE-4502 Project: Lucene - Core Issue Type: Bug Components: modules/highlighter Affects Versions: 4.0, 3.6 Reporter: Nicolas Labrot For example I have the text {noformat} The text which appears before and after a highlighted term when using the simple formatter This parameter accepts per-field overrides. {noformat} I want to highlight this text with the query {code:java} String query = "\"which appears before and after a highlighted term when using the simple formatter\"" {code} Using the EnglishAnalyzer it does not highlight. Using the WhitespaceAnalyzer it highlights. If the query is smaller the highlight is correct. I have tried to track the issue, but it goes too deeply into Lucene core, at NearSpansUnordered
[jira] [Created] (SOLR-3983) Test failure in SoftAutoCommitTest
Alan Woodward created SOLR-3983: --- Summary: Test failure in SoftAutoCommitTest Key: SOLR-3983 URL: https://issues.apache.org/jira/browse/SOLR-3983 Project: Solr Issue Type: Bug Components: update Affects Versions: 5.0 Reporter: Alan Woodward Priority: Minor [junit4:junit4] 2> NOTE: reproduce with: ant test -Dtestcase=SoftAutoCommitTest -Dtests.method=testSoftAndHardCommitMaxTimeDelete -Dtests.seed=170BD2F6138202CF -Dtests.slow=true -Dtests.locale=it -Dtests.timezone=America/Cancun -Dtests.file.encoding=ISO-8859-1 [junit4:junit4] FAILURE 11.1s | SoftAutoCommitTest.testSoftAndHardCommitMaxTimeDelete <<< [junit4:junit4]> Throwable #1: java.lang.AssertionError: searcher529 wasn't soon enough after soft529: 1351065837489 !< 1351065837316 + 100 (fudge) [junit4:junit4]>at __randomizedtesting.SeedInfo.seed([170BD2F6138202CF:D0476A6B082ACF7F]:0) [junit4:junit4]>at org.junit.Assert.fail(Assert.java:93) [junit4:junit4]>at org.junit.Assert.assertTrue(Assert.java:43) [junit4:junit4]>at org.apache.solr.update.SoftAutoCommitTest.testSoftAndHardCommitMaxTimeDelete(SoftAutoCommitTest.java:256) 100% repeatable for me.
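The failing assertion above is a timing-window check: a new searcher must open no later than the soft commit that triggered it, plus the configured max time, plus a small fudge factor. A minimal sketch of that check (names and structure are hypothetical, not the actual SoftAutoCommitTest code):

```java
// Sketch of the timing check that fails in the log above (names are
// hypothetical): the searcher-open timestamp must fall inside the window
// [softCommitMs, softCommitMs + maxTimeMs + FUDGE_MS).
public class SoftCommitTiming {
  static final long FUDGE_MS = 100;

  static boolean soonEnough(long softCommitMs, long searcherMs, long maxTimeMs) {
    return searcherMs < softCommitMs + maxTimeMs + FUDGE_MS;
  }

  public static void main(String[] args) {
    // The failing case from the log: 1351065837489 !< 1351065837316 + 100
    System.out.println(soonEnough(1351065837316L, 1351065837489L, 0)); // prints false
  }
}
```

The searcher opened 173 ms after the soft commit, outside the 100 ms fudge window, which is why the assertion fires.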
[jira] [Updated] (LUCENE-4502) Highlighter does not highlight when using large exact phrase query
[ https://issues.apache.org/jira/browse/LUCENE-4502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Labrot updated LUCENE-4502: --- Attachment: LUCENE-4502.zip I have attached a Maven project to reproduce the issue > Highlighter does not highlight when using large exact phrase query > --- > > Key: LUCENE-4502 > URL: https://issues.apache.org/jira/browse/LUCENE-4502 > Project: Lucene - Core > Issue Type: Bug > Components: modules/highlighter >Affects Versions: 3.6, 4.0 >Reporter: Nicolas Labrot > Attachments: LUCENE-4502.zip
[jira] [Comment Edited] (LUCENE-4502) Highlighter does not highlight when using large exact phrase query
[ https://issues.apache.org/jira/browse/LUCENE-4502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483070#comment-13483070 ] Nicolas Labrot edited comment on LUCENE-4502 at 10/24/12 8:09 AM: -- I have attached a Maven project which reproduces the issue was (Author: nithril): I joined a maven project to reproduce the issue > Highlighter does not highlight when using large exact phrase query > --- > > Key: LUCENE-4502 > URL: https://issues.apache.org/jira/browse/LUCENE-4502 > Project: Lucene - Core > Issue Type: Bug > Components: modules/highlighter >Affects Versions: 3.6, 4.0 >Reporter: Nicolas Labrot > Attachments: LUCENE-4502.zip
[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.7.0_07) - Build # 1284 - Failure!
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-trunk-Windows/1284/ Java: 32bit/jdk1.7.0_07 -client -XX:+UseG1GC All tests passed Build Log: [...truncated 24580 lines...] -documentation-lint: [echo] Checking for broken links... [exec] [exec] Crawl/parse... [exec] [exec] Verify... [echo] Checking for missing docs... [exec] [exec] build/docs/classification\org\apache\lucene\classification/ClassificationResult.html [exec] missing Constructors: ClassificationResult(java.lang.String, double) [exec] missing Methods: getAssignedClass() [exec] missing Methods: getScore() [exec] [exec] build/docs/classification\org\apache\lucene\classification/KNearestNeighborClassifier.html [exec] missing Constructors: KNearestNeighborClassifier(int) [exec] [exec] Missing javadocs were found! BUILD FAILED C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:60: The following error occurred while executing this line: C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\build.xml:252: The following error occurred while executing this line: C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\common-build.xml:1919: exec returned: 1 Total time: 52 minutes 55 seconds Build step 'Invoke Ant' marked build as failure Archiving artifacts Recording test results Description set: Java: 32bit/jdk1.7.0_07 -client -XX:+UseG1GC Email was triggered for: Failure Sending email for trigger: Failure
[jira] [Assigned] (SOLR-1972) Need additional query stats in admin interface - median, 95th and 99th percentile
[ https://issues.apache.org/jira/browse/SOLR-1972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Woodward reassigned SOLR-1972: --- Assignee: Alan Woodward > Need additional query stats in admin interface - median, 95th and 99th > percentile > - > > Key: SOLR-1972 > URL: https://issues.apache.org/jira/browse/SOLR-1972 > Project: Solr > Issue Type: Improvement > Components: web gui >Affects Versions: 1.4 >Reporter: Shawn Heisey >Assignee: Alan Woodward >Priority: Minor > Fix For: 4.1 > > Attachments: elyograg-1972-3.2.patch, elyograg-1972-3.2.patch, > elyograg-1972-trunk.patch, elyograg-1972-trunk.patch, > SOLR-1972-branch3x-url_pattern.patch, SOLR-1972-branch4x.patch, > SOLR-1972-branch4x.patch, SOLR-1972_metrics.patch, SOLR-1972_metrics.patch, > SOLR-1972_metrics.patch, SOLR-1972.patch, SOLR-1972.patch, SOLR-1972.patch, > SOLR-1972.patch, SOLR-1972-url_pattern.patch > > > I would like to see more detailed query statistics from the admin GUI. This > is what you can get now: > requests : 809 > errors : 0 > timeouts : 0 > totalTime : 70053 > avgTimePerRequest : 86.59209 > avgRequestsPerSecond : 0.8148785 > I'd like to see more data on the time per request - median, 95th percentile, > 99th percentile, and any other statistical function that makes sense to > include. In my environment, the first bunch of queries after startup tend to > take several seconds each. I find that the average value tends to be useless > until it has several thousand queries under its belt and the caches are > thoroughly warmed. The statistical functions I have mentioned would quickly > eliminate the influence of those initial slow queries. > The system will have to store individual data about each query. I don't know > if this is something Solr does already. It would be nice to have a > configurable count of how many of the most recent data points are kept, to > control the amount of memory the feature uses. The default value could be > something like 1024 or 4096. 
[jira] [Updated] (SOLR-1972) Need additional query stats in admin interface - median, 95th and 99th percentile
[ https://issues.apache.org/jira/browse/SOLR-1972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Woodward updated SOLR-1972: Attachment: SOLR-1972_metrics.patch > Need additional query stats in admin interface - median, 95th and 99th > percentile > - > > Key: SOLR-1972 > URL: https://issues.apache.org/jira/browse/SOLR-1972 > Project: Solr > Issue Type: Improvement > Components: web gui >Affects Versions: 1.4 >Reporter: Shawn Heisey >Assignee: Alan Woodward >Priority: Minor > Fix For: 4.1 > > Attachments: elyograg-1972-3.2.patch, elyograg-1972-3.2.patch, > elyograg-1972-trunk.patch, elyograg-1972-trunk.patch, > SOLR-1972-branch3x-url_pattern.patch, SOLR-1972-branch4x.patch, > SOLR-1972-branch4x.patch, SOLR-1972_metrics.patch, SOLR-1972_metrics.patch, > SOLR-1972_metrics.patch, SOLR-1972_metrics.patch, SOLR-1972.patch, > SOLR-1972.patch, SOLR-1972.patch, SOLR-1972.patch, SOLR-1972-url_pattern.patch
[jira] [Created] (SOLR-3984) Unload the core, don't remove the core data from disk for parameter deleteInstanceDir=true
Raintung Li created SOLR-3984: - Summary: Unload the core, don't remove the core data from disk for parameter deleteInstanceDir=true Key: SOLR-3984 URL: https://issues.apache.org/jira/browse/SOLR-3984 Project: Solr Issue Type: Bug Components: SolrCloud Affects Versions: 4.0, 4.0-BETA, 4.0-ALPHA Reporter: Raintung Li Call URL : http://localhost:8983/solr/admin/cores?action=UNLOAD&deleteInstanceDir=true&core=mycollection1&qt=/admin/cores Check the disk path: the folder /apache-solr-4.0.0/example3/solr/mycollection1 still exists, but the caller response is ok.
[jira] [Updated] (SOLR-3984) Unload the core, don't remove the core data from disk for parameter deleteInstanceDir=true
[ https://issues.apache.org/jira/browse/SOLR-3984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Raintung Li updated SOLR-3984: -- Description: Call URL : http://localhost:8983/solr/admin/cores?action=UNLOAD&deleteInstanceDir=true&core=mycollection1&qt=/admin/cores Check the disk path: the folder /apache-solr-4.0.0/example3/solr/mycollection1 still exists, but the caller response is success. was: Call URL : http://localhost:8983/solr/admin/cores?action=UNLOAD&deleteInstanceDir=true&core=mycollection1&qt=/admin/cores Check the disk path: folder: /apache-solr-4.0.0/example3/solr/mycollection1 still exist, but caller response is ok. > Unload the core, don't remove the core data from disk for parameter > deleteInstanceDir=true > -- > > Key: SOLR-3984 > URL: https://issues.apache.org/jira/browse/SOLR-3984 > Project: Solr > Issue Type: Bug > Components: SolrCloud >Affects Versions: 4.0-ALPHA, 4.0-BETA, 4.0 >Reporter: Raintung Li > Original Estimate: 168h > Remaining Estimate: 168h
[jira] [Commented] (SOLR-1972) Need additional query stats in admin interface - median, 95th and 99th percentile
[ https://issues.apache.org/jira/browse/SOLR-1972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483091#comment-13483091 ] Alan Woodward commented on SOLR-1972: - Updated patch, using this.toString() as the scope identifier. Your handlerCount solution wouldn't have been thread-safe, Shawn, but thanks for finding the right method to use! Also adds a test to check that different handlers have different statistics. [~otis] as Shawn says, this is just extending the existing mbeans, so it's already available through JMX. Metrics also exposes everything through JMX by default anyway, so you can get the stats either way. > Need additional query stats in admin interface - median, 95th and 99th > percentile > - > > Key: SOLR-1972 > URL: https://issues.apache.org/jira/browse/SOLR-1972 > Project: Solr > Issue Type: Improvement > Components: web gui >Affects Versions: 1.4 >Reporter: Shawn Heisey >Assignee: Alan Woodward >Priority: Minor > Fix For: 4.1 > > Attachments: elyograg-1972-3.2.patch, elyograg-1972-3.2.patch, > elyograg-1972-trunk.patch, elyograg-1972-trunk.patch, > SOLR-1972-branch3x-url_pattern.patch, SOLR-1972-branch4x.patch, > SOLR-1972-branch4x.patch, SOLR-1972_metrics.patch, SOLR-1972_metrics.patch, > SOLR-1972_metrics.patch, SOLR-1972_metrics.patch, SOLR-1972.patch, > SOLR-1972.patch, SOLR-1972.patch, SOLR-1972.patch, SOLR-1972-url_pattern.patch
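The statistics Shawn asks for (median, 95th, and 99th percentile over a bounded window of recent request times) can be sketched as follows. This is an illustration only, not Solr's implementation; the patch in this thread actually uses the Metrics library, and all names and the window size below are hypothetical:

```java
import java.util.ArrayDeque;
import java.util.Arrays;

// Sketch (not Solr's implementation) of percentile stats over a bounded
// window of the most recent request times, capping memory as the issue
// suggests (e.g. 1024 data points).
public class RequestTimeStats {
  private final ArrayDeque<Long> times = new ArrayDeque<>();
  private final int window;

  RequestTimeStats(int window) { this.window = window; }

  void record(long millis) {
    if (times.size() == window) times.removeFirst(); // evict oldest point
    times.addLast(millis);
  }

  /** Nearest-rank percentile; pct = 50 gives the median. */
  long percentile(double pct) {
    long[] data = times.stream().mapToLong(Long::longValue).toArray();
    Arrays.sort(data);
    int idx = Math.max(0, (int) Math.ceil(pct / 100.0 * data.length) - 1);
    return data[idx];
  }

  public static void main(String[] args) {
    RequestTimeStats stats = new RequestTimeStats(1024);
    // One slow warm-up query among otherwise fast requests:
    for (long t : new long[] {5, 7, 9, 11, 4000, 13, 8, 6, 10, 12}) stats.record(t);
    System.out.println(stats.percentile(50)); // prints 9
    System.out.println(stats.percentile(99)); // prints 4000
  }
}
```

This shows the point of the request: the 4000 ms warm-up query drags the average to over 400 ms, while the median stays at 9 ms.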
[jira] [Updated] (SOLR-3984) Unload the core, don't remove the core data from disk for parameter deleteInstanceDir=true
[ https://issues.apache.org/jira/browse/SOLR-3984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Raintung Li updated SOLR-3984: -- Attachment: patch.txt Patch to fix the bug > Unload the core, don't remove the core data from disk for parameter > deleteInstanceDir=true > -- > > Key: SOLR-3984 > URL: https://issues.apache.org/jira/browse/SOLR-3984 > Project: Solr > Issue Type: Bug > Components: SolrCloud >Affects Versions: 4.0-ALPHA, 4.0-BETA, 4.0 >Reporter: Raintung Li > Attachments: patch.txt > > Original Estimate: 168h > Remaining Estimate: 168h
[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0-ea-b58) - Build # 1962 - Failure!
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-trunk-Linux/1962/ Java: 64bit/jdk1.8.0-ea-b58 -XX:+UseParallelGC All tests passed Build Log: [...truncated 22497 lines...] [javadoc] Generating Javadoc [javadoc] Javadoc execution [javadoc] warning: [options] bootstrap class path not set in conjunction with -source 1.7 [javadoc] Loading source files for package org.apache.lucene... [javadoc] Loading source files for package org.apache.lucene.analysis... [javadoc] Loading source files for package org.apache.lucene.analysis.tokenattributes... [javadoc] Loading source files for package org.apache.lucene.codecs... [javadoc] Loading source files for package org.apache.lucene.codecs.lucene40... [javadoc] Loading source files for package org.apache.lucene.codecs.lucene40.values... [javadoc] Loading source files for package org.apache.lucene.codecs.lucene41... [javadoc] Loading source files for package org.apache.lucene.codecs.perfield... [javadoc] Loading source files for package org.apache.lucene.document... [javadoc] Loading source files for package org.apache.lucene.index... [javadoc] Loading source files for package org.apache.lucene.search... [javadoc] Loading source files for package org.apache.lucene.search.payloads... [javadoc] Loading source files for package org.apache.lucene.search.similarities... [javadoc] Loading source files for package org.apache.lucene.search.spans... [javadoc] Loading source files for package org.apache.lucene.store... [javadoc] Loading source files for package org.apache.lucene.util... [javadoc] Loading source files for package org.apache.lucene.util.automaton... [javadoc] Loading source files for package org.apache.lucene.util.fst... [javadoc] Loading source files for package org.apache.lucene.util.mutable... [javadoc] Loading source files for package org.apache.lucene.util.packed... [javadoc] Constructing Javadoc information... [javadoc] Standard Doclet version 1.8.0-ea [javadoc] Building tree for all the packages and classes... 
[javadoc] Building index for all the packages and classes... [javadoc] Building index for all classes... [javadoc] Generating /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/docs/core/help-doc.html... [javadoc] 1 warning [...truncated 44 lines...] [javadoc] Generating Javadoc [javadoc] Javadoc execution [javadoc] warning: [options] bootstrap class path not set in conjunction with -source 1.7 [javadoc] Loading source files for package org.apache.lucene.analysis.ar... [javadoc] Loading source files for package org.apache.lucene.analysis.bg... [javadoc] Loading source files for package org.apache.lucene.analysis.br... [javadoc] Loading source files for package org.apache.lucene.analysis.ca... [javadoc] Loading source files for package org.apache.lucene.analysis.charfilter... [javadoc] Loading source files for package org.apache.lucene.analysis.cjk... [javadoc] Loading source files for package org.apache.lucene.analysis.commongrams... [javadoc] Loading source files for package org.apache.lucene.analysis.compound... [javadoc] Loading source files for package org.apache.lucene.analysis.compound.hyphenation... [javadoc] Loading source files for package org.apache.lucene.analysis.core... [javadoc] Loading source files for package org.apache.lucene.analysis.cz... [javadoc] Loading source files for package org.apache.lucene.analysis.da... [javadoc] Loading source files for package org.apache.lucene.analysis.de... [javadoc] Loading source files for package org.apache.lucene.analysis.el... [javadoc] Loading source files for package org.apache.lucene.analysis.en... [javadoc] Loading source files for package org.apache.lucene.analysis.es... [javadoc] Loading source files for package org.apache.lucene.analysis.eu... [javadoc] Loading source files for package org.apache.lucene.analysis.fa... [javadoc] Loading source files for package org.apache.lucene.analysis.fi... [javadoc] Loading source files for package org.apache.lucene.analysis.fr... 
[javadoc] Loading source files for package org.apache.lucene.analysis.ga... [javadoc] Loading source files for package org.apache.lucene.analysis.gl... [javadoc] Loading source files for package org.apache.lucene.analysis.hi... [javadoc] Loading source files for package org.apache.lucene.analysis.hu... [javadoc] Loading source files for package org.apache.lucene.analysis.hunspell... [javadoc] Loading source files for package org.apache.lucene.analysis.hy... [javadoc] Loading source files for package org.apache.lucene.analysis.id... [javadoc] Loading source files for package org.apache.lucene.analysis.in... [javadoc] Loading source files for package org.apache.lucene.analysis.it... [javadoc] Loading source files for package org.apache.lucene.analysis.lv... [javadoc] Loading source files for package org.apache.lucene.analysis.miscellaneous... [javadoc] Loading sour
[jira] [Commented] (LUCENE-4494) Add phoenetic algorithm Match Rating approach to lucene
[ https://issues.apache.org/jira/browse/LUCENE-4494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483108#comment-13483108 ] Colm Rice commented on LUCENE-4494: --- Hi Ryan, yes, you're right of course. In short, if I knew how to do it I would; I'm still a bit of a newbie, you see! I received some advice previously (not sure how good it was now) that indicated that the codecs are rarely touched, so I decided to add the patch to the Lucene solution and hope that someone with more experience than I would do some hand-holding with me or relocate it. My apologies :-) > Add phoenetic algorithm Match Rating approach to lucene > --- > > Key: LUCENE-4494 > URL: https://issues.apache.org/jira/browse/LUCENE-4494 > Project: Lucene - Core > Issue Type: Improvement > Components: modules/analysis >Affects Versions: 4.0-ALPHA >Reporter: Colm Rice >Priority: Minor > Fix For: 4.1 > > Attachments: LUCENE-4494.patch > > Original Estimate: 168h > Remaining Estimate: 168h > > I want to add the MatchRatingApproach algorithm to the Lucene project. > What I have at the moment is a class called > org.apache.lucene.analysis.phonetic.MatchRatingApproach implementing > StringEncoder > I have a pretty comprehensive test file located at: > org.apache.lucene.analysis.phonetic.MatchRatingApproachTests > It doesn't exactly follow the existing pattern, so I'm going to need a bit of advice here. > Thanks! Feel free to email. > FYI: It's my first contribution, so be gentle :-) C# is my native language. > Reference: http://en.wikipedia.org/wiki/Match_rating_approach
[JENKINS] Lucene-Solr-Tests-trunk-java7 - Build # 3337 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-java7/3337/ All tests passed Build Log: [...truncated 24620 lines...] -documentation-lint: [echo] Checking for broken links... [exec] [exec] Crawl/parse... [exec] [exec] Verify... [echo] Checking for missing docs... [exec] [exec] build/docs/classification/org/apache/lucene/classification/ClassificationResult.html [exec] missing Constructors: ClassificationResult(java.lang.String, double) [exec] missing Methods: getAssignedClass() [exec] missing Methods: getScore() [exec] [exec] build/docs/classification/org/apache/lucene/classification/KNearestNeighborClassifier.html [exec] missing Constructors: KNearestNeighborClassifier(int) [exec] [exec] Missing javadocs were found! BUILD FAILED /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-java7/build.xml:60: The following error occurred while executing this line: /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-java7/lucene/build.xml:252: The following error occurred while executing this line: /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-java7/lucene/common-build.xml:1919: exec returned: 1 Total time: 45 minutes 27 seconds Build step 'Invoke Ant' marked build as failure Archiving artifacts Recording test results Email was triggered for: Failure Sending email for trigger: Failure
[jira] [Commented] (LUCENE-4345) Create a Classification module
[ https://issues.apache.org/jira/browse/LUCENE-4345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483117#comment-13483117 ] Michael McCandless commented on LUCENE-4345: The builds have been failing because some methods are missing javadocs: {noformat} -documentation-lint: [echo] Checking for broken links... [exec] [exec] Crawl/parse... [exec] [exec] Verify... [echo] Checking for missing docs... [exec] [exec] build/docs/classification/org/apache/lucene/classification/ClassificationResult.html [exec] missing Constructors: ClassificationResult(java.lang.String, double) [exec] missing Methods: getAssignedClass() [exec] missing Methods: getScore() [exec] [exec] build/docs/classification/org/apache/lucene/classification/KNearestNeighborClassifier.html [exec] missing Constructors: KNearestNeighborClassifier(int) [exec] [exec] Missing javadocs were found! {noformat} > Create a Classification module > -- > > Key: LUCENE-4345 > URL: https://issues.apache.org/jira/browse/LUCENE-4345 > Project: Lucene - Core > Issue Type: New Feature >Reporter: Tommaso Teofili >Assignee: Tommaso Teofili >Priority: Minor > Attachments: LUCENE-4345_2.patch, LUCENE-4345.patch, > SOLR-3700_2.patch, SOLR-3700.patch > > > Lucene/Solr can host huge sets of documents containing lots of information in > fields so that these can be used as training examples (w/ features) in order > to very quickly create classifiers algorithms to use on new documents and / > or to provide an additional service. > So the idea is to create a contrib module (called 'classification') to host a > ClassificationComponent that will use already seen data (the indexed > documents / fields) to classify new documents / text fragments. > The first version will contain a (simplistic) Lucene based Naive Bayes > classifier but more implementations should be added in the future.
[jira] [Created] (SOLR-3985) Allow ExternalFileField caches to be reloaded on newSearcher and firstSearcher events
Alan Woodward created SOLR-3985: --- Summary: Allow ExternalFileField caches to be reloaded on newSearcher and firstSearcher events Key: SOLR-3985 URL: https://issues.apache.org/jira/browse/SOLR-3985 Project: Solr Issue Type: Improvement Components: Schema and Analysis Reporter: Alan Woodward Assignee: Alan Woodward Priority: Minor Fix For: 4.1, 5.0 At the moment, ExternalFileField caches can only be refreshed/reloaded by clearing them entirely, which forces a reload the next time they are used in a query. If your external files are big, this can take unacceptably long. Instead, we should allow the caches to be loaded on newSearcher/firstSearcher events, running in the background. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-3985) Allow ExternalFileField caches to be reloaded on newSearcher and firstSearcher events
[ https://issues.apache.org/jira/browse/SOLR-3985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Woodward updated SOLR-3985: Attachment: SOLR-3985.patch Patch, implementing a listener. Also adds some more javadocs to ExternalFileField and FileFloatSource > Allow ExternalFileField caches to be reloaded on newSearcher and > firstSearcher events > - > > Key: SOLR-3985 > URL: https://issues.apache.org/jira/browse/SOLR-3985 > Project: Solr > Issue Type: Improvement > Components: Schema and Analysis >Reporter: Alan Woodward >Assignee: Alan Woodward >Priority: Minor > Fix For: 4.1, 5.0 > > Attachments: SOLR-3985.patch > > > At the moment, ExternalFileField caches can only be refreshed/reloaded by > clearing them entirely, which forces a reload the next time they are used in > a query. If your external files are big, this can take unacceptably long. > Instead, we should allow the caches to be loaded on newSearcher/firstSearcher > events, running in the background. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
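The attached patch isn't reproduced in the message, but based on its description, wiring the listener up in solrconfig.xml would presumably look something like the following. This is a hedged sketch: the ExternalFileFieldReloader class name and the zero-argument registration are assumptions about the patch, not taken from it.

```
<!-- Hypothetical sketch: reload ExternalFileField caches in the background
     when a new searcher is opened, instead of clearing them and paying the
     reload cost on the first query that uses them. -->
<listener event="newSearcher" class="org.apache.solr.schema.ExternalFileFieldReloader"/>
<listener event="firstSearcher" class="org.apache.solr.schema.ExternalFileFieldReloader"/>
```

Registering the same class for both events would cover the initial load on startup (firstSearcher) as well as every subsequent commit (newSearcher).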
[jira] [Commented] (SOLR-3939) Solr Cloud recovery and leader election when unloading leader core
[ https://issues.apache.org/jira/browse/SOLR-3939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483120#comment-13483120 ] Mark Miller commented on SOLR-3939: --- Okay, I've finally got all tests passing reliably for me. I had to add SocketException to the list of exceptions that are okay to consider a peer sync success. I'll try and get this committed tonight. > Solr Cloud recovery and leader election when unloading leader core > -- > > Key: SOLR-3939 > URL: https://issues.apache.org/jira/browse/SOLR-3939 > Project: Solr > Issue Type: Bug > Components: SolrCloud >Affects Versions: 4.0-BETA, 4.0 >Reporter: Joel Bernstein >Assignee: Mark Miller >Priority: Critical > Labels: 4.0.1_Candidate > Fix For: 4.1, 5.0 > > Attachments: cloud2.log, cloud.log, SOLR-3939.patch, SOLR-3939.patch > > > When a leader core is unloaded using the core admin api, the followers in the > shard go into recovery but do not come out. Leader election doesn't take > place and the shard goes down. > This effects the ability to move a micro-shard from one Solr instance to > another Solr instance. > The problem does not occur 100% of the time but a large % of the time. > To setup a test, startup Solr Cloud with a single shard. Add cores to that > shard as replicas using core admin. Then unload the leader core using core > admin. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Assigned] (SOLR-3538) Unloading a SolrCore object and specifying delete does not fully delete all Solr parts
[ https://issues.apache.org/jira/browse/SOLR-3538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson reassigned SOLR-3538: Assignee: Erick Erickson > Unloading a SolrCore object and specifying delete does not fully delete all > Solr parts > -- > > Key: SOLR-3538 > URL: https://issues.apache.org/jira/browse/SOLR-3538 > Project: Solr > Issue Type: Bug > Components: multicore >Affects Versions: 4.0-ALPHA > Environment: Windows >Reporter: Andre' Hazelwood >Assignee: Erick Erickson >Priority: Minor > > If I issue a action=UNLOAD&delete=true request for a specific Solr Core on > the CoreAdminHandler, all files are removed except files located in the tlog > directory under the core. We are trying to manage our cores from an outside > system, so having the core not actually get deleted is a pain. > I would expect all files as well as the Core directory to be removed if the > delete parameter is specified. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Assigned] (SOLR-3984) Unload the core, don't remove the core data from disk for parameter deleteInstanceDir=true
[ https://issues.apache.org/jira/browse/SOLR-3984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson reassigned SOLR-3984: Assignee: Erick Erickson > Unload the core, don't remove the core data from disk for parameter > deleteInstanceDir=true > -- > > Key: SOLR-3984 > URL: https://issues.apache.org/jira/browse/SOLR-3984 > Project: Solr > Issue Type: Bug > Components: SolrCloud >Affects Versions: 4.0-ALPHA, 4.0-BETA, 4.0 >Reporter: Raintung Li >Assignee: Erick Erickson > Attachments: patch.txt > > Original Estimate: 168h > Remaining Estimate: 168h > > Call URL : > http://localhost:8983/solr/admin/cores?action=UNLOAD&deleteInstanceDir=true&core=mycollection1&qt=/admin/cores > Check the disk path: > folder: /apache-solr-4.0.0/example3/solr/mycollection1 still exist, but > caller response is success. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: [jira] [Created] (SOLR-3983) Test failure in SoftAutoCommitTest
Hmmm, works for me, Mac OS X, Lion. Fresh checkout just before testing. FWIW On Wed, Oct 24, 2012 at 4:08 AM, Alan Woodward (JIRA) wrote: > Alan Woodward created SOLR-3983: > --- > > Summary: Test failure in SoftAutoCommitTest > Key: SOLR-3983 > URL: https://issues.apache.org/jira/browse/SOLR-3983 > Project: Solr > Issue Type: Bug > Components: update > Affects Versions: 5.0 > Reporter: Alan Woodward > Priority: Minor > > > [junit4:junit4] 2> NOTE: reproduce with: ant test > -Dtestcase=SoftAutoCommitTest > -Dtests.method=testSoftAndHardCommitMaxTimeDelete > -Dtests.seed=170BD2F6138202CF -Dtests.slow=true -Dtests.locale=it > -Dtests.timezone=America/Cancun -Dtests.file.encoding=ISO-8859-1 > [junit4:junit4] FAILURE 11.1s | > SoftAutoCommitTest.testSoftAndHardCommitMaxTimeDelete <<< > [junit4:junit4]> Throwable #1: java.lang.AssertionError: searcher529 > wasn't soon enough after soft529: 1351065837489 !< 1351065837316 + 100 (fudge) > [junit4:junit4]>at > __randomizedtesting.SeedInfo.seed([170BD2F6138202CF:D0476A6B082ACF7F]:0) > [junit4:junit4]>at org.junit.Assert.fail(Assert.java:93) > [junit4:junit4]>at org.junit.Assert.assertTrue(Assert.java:43) > [junit4:junit4]>at > org.apache.solr.update.SoftAutoCommitTest.testSoftAndHardCommitMaxTimeDelete(SoftAutoCommitTest.java:256) > > 100% repeatable for me. > > -- > This message is automatically generated by JIRA. > If you think it was sent incorrectly, please contact your JIRA administrators > For more information on JIRA, see: http://www.atlassian.com/software/jira > > - > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org > For additional commands, e-mail: dev-h...@lucene.apache.org > - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0-ea-b58) - Build # 1965 - Still Failing!
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-trunk-Linux/1965/ Java: 64bit/jdk1.8.0-ea-b58 -XX:+UseParallelGC All tests passed Build Log: [...truncated 22444 lines...] [javadoc] Generating Javadoc [javadoc] Javadoc execution [javadoc] warning: [options] bootstrap class path not set in conjunction with -source 1.7 [javadoc] Loading source files for package org.apache.lucene... [javadoc] Loading source files for package org.apache.lucene.analysis... [javadoc] Loading source files for package org.apache.lucene.analysis.tokenattributes... [javadoc] Loading source files for package org.apache.lucene.codecs... [javadoc] Loading source files for package org.apache.lucene.codecs.lucene40... [javadoc] Loading source files for package org.apache.lucene.codecs.lucene40.values... [javadoc] Loading source files for package org.apache.lucene.codecs.lucene41... [javadoc] Loading source files for package org.apache.lucene.codecs.perfield... [javadoc] Loading source files for package org.apache.lucene.document... [javadoc] Loading source files for package org.apache.lucene.index... [javadoc] Loading source files for package org.apache.lucene.search... [javadoc] Loading source files for package org.apache.lucene.search.payloads... [javadoc] Loading source files for package org.apache.lucene.search.similarities... [javadoc] Loading source files for package org.apache.lucene.search.spans... [javadoc] Loading source files for package org.apache.lucene.store... [javadoc] Loading source files for package org.apache.lucene.util... [javadoc] Loading source files for package org.apache.lucene.util.automaton... [javadoc] Loading source files for package org.apache.lucene.util.fst... [javadoc] Loading source files for package org.apache.lucene.util.mutable... [javadoc] Loading source files for package org.apache.lucene.util.packed... [javadoc] Constructing Javadoc information... [javadoc] Standard Doclet version 1.8.0-ea [javadoc] Building tree for all the packages and classes... 
[javadoc] Building index for all the packages and classes... [javadoc] Building index for all classes... [javadoc] Generating /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/docs/core/help-doc.html... [javadoc] 1 warning [...truncated 44 lines...] [javadoc] Generating Javadoc [javadoc] Javadoc execution [javadoc] Loading source files for package org.apache.lucene.analysis.ar... [javadoc] warning: [options] bootstrap class path not set in conjunction with -source 1.7 [javadoc] Loading source files for package org.apache.lucene.analysis.bg... [javadoc] Loading source files for package org.apache.lucene.analysis.br... [javadoc] Loading source files for package org.apache.lucene.analysis.ca... [javadoc] Loading source files for package org.apache.lucene.analysis.charfilter... [javadoc] Loading source files for package org.apache.lucene.analysis.cjk... [javadoc] Loading source files for package org.apache.lucene.analysis.commongrams... [javadoc] Loading source files for package org.apache.lucene.analysis.compound... [javadoc] Loading source files for package org.apache.lucene.analysis.compound.hyphenation... [javadoc] Loading source files for package org.apache.lucene.analysis.core... [javadoc] Loading source files for package org.apache.lucene.analysis.cz... [javadoc] Loading source files for package org.apache.lucene.analysis.da... [javadoc] Loading source files for package org.apache.lucene.analysis.de... [javadoc] Loading source files for package org.apache.lucene.analysis.el... [javadoc] Loading source files for package org.apache.lucene.analysis.en... [javadoc] Loading source files for package org.apache.lucene.analysis.es... [javadoc] Loading source files for package org.apache.lucene.analysis.eu... [javadoc] Loading source files for package org.apache.lucene.analysis.fa... [javadoc] Loading source files for package org.apache.lucene.analysis.fi... [javadoc] Loading source files for package org.apache.lucene.analysis.fr... 
[javadoc] Loading source files for package org.apache.lucene.analysis.ga... [javadoc] Loading source files for package org.apache.lucene.analysis.gl... [javadoc] Loading source files for package org.apache.lucene.analysis.hi... [javadoc] Loading source files for package org.apache.lucene.analysis.hu... [javadoc] Loading source files for package org.apache.lucene.analysis.hunspell... [javadoc] Loading source files for package org.apache.lucene.analysis.hy... [javadoc] Loading source files for package org.apache.lucene.analysis.id... [javadoc] Loading source files for package org.apache.lucene.analysis.in... [javadoc] Loading source files for package org.apache.lucene.analysis.it... [javadoc] Loading source files for package org.apache.lucene.analysis.lv... [javadoc] Loading source files for package org.apache.lucene.analysis.miscellaneous... [javadoc] Loading sour
[jira] [Commented] (SOLR-3583) Percentiles for facets, pivot facets, and distributed pivot facets
[ https://issues.apache.org/jira/browse/SOLR-3583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483218#comment-13483218 ] Monica Skidmore commented on SOLR-3583: --- I have internal customers at my company eager to use this feature; I'm excited that you're updating it for 4.0 and hoping it can be committed soon! > Percentiles for facets, pivot facets, and distributed pivot facets > -- > > Key: SOLR-3583 > URL: https://issues.apache.org/jira/browse/SOLR-3583 > Project: Solr > Issue Type: Improvement >Reporter: Chris Russell >Priority: Minor > Labels: newbie, patch > Fix For: 4.1 > > Attachments: SOLR-3583.patch > > > Built on top of SOLR-2894 (includes Apr 25th version) this patch adds > percentiles and averages to facets, pivot facets, and distributed pivot > facets by making use of range facet internals. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-4345) Create a Classification module
[ https://issues.apache.org/jira/browse/LUCENE-4345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483234#comment-13483234 ] Tommaso Teofili commented on LUCENE-4345: - thanks Michael, it should be fixed now. > Create a Classification module > -- > > Key: LUCENE-4345 > URL: https://issues.apache.org/jira/browse/LUCENE-4345 > Project: Lucene - Core > Issue Type: New Feature >Reporter: Tommaso Teofili >Assignee: Tommaso Teofili >Priority: Minor > Attachments: LUCENE-4345_2.patch, LUCENE-4345.patch, > SOLR-3700_2.patch, SOLR-3700.patch > > > Lucene/Solr can host huge sets of documents containing lots of information in > fields so that these can be used as training examples (w/ features) in order > to very quickly create classifier algorithms to use on new documents and/or > to provide an additional service. > So the idea is to create a contrib module (called 'classification') to host a > ClassificationComponent that will use already seen data (the indexed > documents / fields) to classify new documents / text fragments. > The first version will contain a (simplistic) Lucene-based Naive Bayes > classifier but more implementations should be added in the future.
[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.7.0_07) - Build # 1287 - Failure!
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-trunk-Windows/1287/ Java: 32bit/jdk1.7.0_07 -server -XX:+UseParallelGC All tests passed Build Log: [...truncated 24594 lines...] -documentation-lint: [echo] Checking for broken links... [exec] [exec] Crawl/parse... [exec] [exec] Verify... [echo] Checking for missing docs... [exec] [exec] build/docs/classification\org\apache\lucene\classification/ClassificationResult.html [exec] missing Constructors: ClassificationResult(java.lang.String, double) [exec] missing Methods: getAssignedClass() [exec] missing Methods: getScore() [exec] [exec] build/docs/classification\org\apache\lucene\classification/KNearestNeighborClassifier.html [exec] missing Constructors: KNearestNeighborClassifier(int) [exec] [exec] Missing javadocs were found! BUILD FAILED C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:60: The following error occurred while executing this line: C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\build.xml:252: The following error occurred while executing this line: C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\common-build.xml:1919: exec returned: 1 Total time: 62 minutes 14 seconds Build step 'Invoke Ant' marked build as failure Archiving artifacts Recording test results Description set: Java: 32bit/jdk1.7.0_07 -server -XX:+UseParallelGC Email was triggered for: Failure Sending email for trigger: Failure - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-3986) index version and generation not changed in admin UI after delete by query on master
Bill Au created SOLR-3986: - Summary: index version and generation not changed in admin UI after delete by query on master Key: SOLR-3986 URL: https://issues.apache.org/jira/browse/SOLR-3986 Project: Solr Issue Type: Bug Components: web gui Affects Versions: 4.0 Reporter: Bill Au Priority: Minor Here are the steps to reproduce this: - follow steps in the Solr 4.0 tutorial to set up a master and a slave to use Java/HTTP replication - index example documents on master: java -jar post.jar *.xml - make a note of the index version and generation on both the replication section of the summary screen of core collection1 and the replication screen, on both the master and slave - run a delete by query on the master: java -Ddata=args -jar post.jar "name:DDR" - on master, reload the summary screen for core collection1. The Num Docs field decreased but the index version and generation are unchanged in the replication section. The index version and generation are also unchanged in the replication screen. - on the slave, wait for replication to kick in or trigger it manually. On the summary screen for core collection1, the Num Docs field decreased to match what's on the master. The index version and generation of the master remain unchanged but the index version and generation of the slave both changed. The same goes for the index version and generation of the master and slave on the replication screen. The replication handler on the master does report changed index version and generation: localhost:8983/solr/collection1/replication?command=indexversion It is only the admin UI that is reporting the older index version and generation, on both the core summary screen and replication screen. This only happens with delete by query. There is no problem with delete by id or add. Both the index version and generation do get updated on subsequent delete by query, but both remain one cycle behind on the master.
[jira] [Commented] (SOLR-3986) index version and generation not changed in admin UI after delete by query on master
[ https://issues.apache.org/jira/browse/SOLR-3986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483273#comment-13483273 ] Bill Au commented on SOLR-3986: --- By the way, the request URL for the summary screen for core collection1: localhost:8983/solr/#/collection1 ; the request URL for the replication screen: localhost:8983/solr/#/collection1/replication > index version and generation not changed in admin UI after delete by query on > master > > > Key: SOLR-3986 > URL: https://issues.apache.org/jira/browse/SOLR-3986 > Project: Solr > Issue Type: Bug > Components: web gui >Affects Versions: 4.0 >Reporter: Bill Au >Priority: Minor > > Here are the steps to reproduce this: > - follow steps in the Solr 4.0 tutorial to set up a master and a slave to use > Java/HTTP replication > - index example documents on master: > java -jar post.jar *.xml > - make a note of the index version and generation on both the replication > section of the summary screen of core collection1 and the replication screen, > on both the master and slave > - run a delete by query on the master: > java -Ddata=args -jar post.jar "name:DDR" > - on master, reload the summary screen for core collection1. The Num Docs > field decreased but the index version and generation are unchanged in the > replication section. The index version and generation are also unchanged in > the replication screen. > - on the slave, wait for replication to kick in or trigger it manually. On > the summary screen for core collection1, the Num Docs field decreased to > match what's on the master. The index version and generation of the master > remain unchanged but the index version and generation of the slave both > changed. The same goes for the index version and generation of the master > and slave on the replication screen.
> The replication handler on the master does report changed index version and > generation: > localhost:8983/solr/collection1/replication?command=indexversion > It is only the admin UI that is reporting the older index version and generation, > on both the core summary screen and replication screen. > This only happens with delete by query. There is no problem with delete by > id or add. > Both the index version and generation do get updated on subsequent delete by > query, but both remain one cycle behind on the master.
[jira] [Commented] (SOLR-1972) Need additional query stats in admin interface - median, 95th and 99th percentile
[ https://issues.apache.org/jira/browse/SOLR-1972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483277#comment-13483277 ] Shawn Heisey commented on SOLR-1972: I wanted to double-check that the new test would fail with the old constructor, so I installed your new patch and removed this.toString() from the parameter list on those statements. The updated RequestHandlerTests didn't fail like I'd hoped. I suspect that's because the terms handler and the update handler are different classes, so this.getClass() was apparently different. Before the fix, I did have different statistics in the update handler versus the search handlers. It was only handlers of the same type that were the same -- my four search handlers. I did however get a rather spectacular (and repeatable) failure in MBeansHandlerTest. That failure went away when I restored the constructor to the current patch state. When I make it into the office, I will do some additional testing to make sure I did everything right. > Need additional query stats in admin interface - median, 95th and 99th > percentile > - > > Key: SOLR-1972 > URL: https://issues.apache.org/jira/browse/SOLR-1972 > Project: Solr > Issue Type: Improvement > Components: web gui >Affects Versions: 1.4 >Reporter: Shawn Heisey >Assignee: Alan Woodward >Priority: Minor > Fix For: 4.1 > > Attachments: elyograg-1972-3.2.patch, elyograg-1972-3.2.patch, > elyograg-1972-trunk.patch, elyograg-1972-trunk.patch, > SOLR-1972-branch3x-url_pattern.patch, SOLR-1972-branch4x.patch, > SOLR-1972-branch4x.patch, SOLR-1972_metrics.patch, SOLR-1972_metrics.patch, > SOLR-1972_metrics.patch, SOLR-1972_metrics.patch, SOLR-1972.patch, > SOLR-1972.patch, SOLR-1972.patch, SOLR-1972.patch, SOLR-1972-url_pattern.patch > > > I would like to see more detailed query statistics from the admin GUI. 
This > is what you can get now: > requests : 809 > errors : 0 > timeouts : 0 > totalTime : 70053 > avgTimePerRequest : 86.59209 > avgRequestsPerSecond : 0.8148785 > I'd like to see more data on the time per request - median, 95th percentile, > 99th percentile, and any other statistical function that makes sense to > include. In my environment, the first bunch of queries after startup tend to > take several seconds each. I find that the average value tends to be useless > until it has several thousand queries under its belt and the caches are > thoroughly warmed. The statistical functions I have mentioned would quickly > eliminate the influence of those initial slow queries. > The system will have to store individual data about each query. I don't know > if this is something Solr does already. It would be nice to have a > configurable count of how many of the most recent data points are kept, to > control the amount of memory the feature uses. The default value could be > something like 1024 or 4096. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
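The bounded window of recent data points that Shawn proposes can be sketched roughly as below. This is illustrative only, not taken from any of the attached patches (whose `*_metrics.patch` names suggest they use a metrics library instead); the `RecentRequestTimes` class and its method names are made up for this example.

```java
import java.util.Arrays;

// Sketch: keep only the most recent N request times in a ring buffer and
// compute percentiles over that window, so slow warm-up queries age out
// instead of skewing the statistics forever (unlike a running average).
class RecentRequestTimes {
    private final long[] times;   // ring buffer of the newest N timings
    private int next = 0;         // next slot to overwrite
    private long count = 0;       // total timings ever recorded

    RecentRequestTimes(int capacity) {
        this.times = new long[capacity];
    }

    void add(long millis) {
        times[next] = millis;
        next = (next + 1) % times.length;
        count++;
    }

    // Percentile (0..100) over the retained window, nearest-rank method.
    long percentile(double pct) {
        int n = (int) Math.min(count, times.length);
        long[] window = Arrays.copyOf(times, n);
        Arrays.sort(window);
        int rank = (int) Math.ceil(pct / 100.0 * n); // 1-based nearest rank
        return window[Math.max(rank - 1, 0)];
    }
}
```

With a default window of 1024 or 4096 timings, as suggested, a burst of slow startup queries stops influencing the reported median/95th/99th percentiles once the window has turned over, and memory use stays fixed and configurable.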
[jira] [Updated] (SOLR-3984) Unload the core, don't remove the core data from disk for parameter deleteInstanceDir=true
[ https://issues.apache.org/jira/browse/SOLR-3984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson updated SOLR-3984: - Attachment: SOLR-3984.patch Raintung is correct. In fact, deleteInstanceDir seems to be completely broken unless you specify an absolute dir. Here's a reworked patch, because it seems like the trap here (CoreContainer.getInstanceDir()) is just lying in wait for the unwary. I've added a new method getRawInstanceDir and refactored the uses of getInstanceDir to use the right one. I wouldn't dare return the new getInstanceDir to, say, the snapshooter code, and the code that writes solr.xml back out would also be broken if it used the (new) getInstanceDir(). If there are no objections, I'll commit this tomorrow. Thanks Raintung! Your patch gave me a _much_ easier time tackling this! > Unload the core, don't remove the core data from disk for parameter > deleteInstanceDir=true > -- > > Key: SOLR-3984 > URL: https://issues.apache.org/jira/browse/SOLR-3984 > Project: Solr > Issue Type: Bug > Components: SolrCloud >Affects Versions: 4.0-ALPHA, 4.0-BETA, 4.0 >Reporter: Raintung Li >Assignee: Erick Erickson > Attachments: patch.txt, SOLR-3984.patch > > Original Estimate: 168h > Remaining Estimate: 168h > > Call URL : > http://localhost:8983/solr/admin/cores?action=UNLOAD&deleteInstanceDir=true&core=mycollection1&qt=/admin/cores > Check the disk path: > folder: /apache-solr-4.0.0/example3/solr/mycollection1 still exists, but > caller response is success.
[jira] [Updated] (SOLR-3984) Solr Admin Unload with deleteInstanceDir=true fails unless the path is absolute.
[ https://issues.apache.org/jira/browse/SOLR-3984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson updated SOLR-3984: - Fix Version/s: 5.0 4.1 Summary: Solr Admin Unload with deleteInstanceDir=true fails unless the path is absolute. (was: Unload the core, don't remove the core data from disk for parameter deleteInstanceDir=true) > Solr Admin Unload with deleteInstanceDir=true fails unless the path is > absolute. > > > Key: SOLR-3984 > URL: https://issues.apache.org/jira/browse/SOLR-3984 > Project: Solr > Issue Type: Bug > Components: SolrCloud >Affects Versions: 4.0-ALPHA, 4.0-BETA, 4.0 >Reporter: Raintung Li >Assignee: Erick Erickson > Fix For: 4.1, 5.0 > > Attachments: patch.txt, SOLR-3984.patch > > Original Estimate: 168h > Remaining Estimate: 168h > > Call URL : > http://localhost:8983/solr/admin/cores?action=UNLOAD&deleteInstanceDir=true&core=mycollection1&qt=/admin/cores > Check the disk path: > folder: /apache-solr-4.0.0/example3/solr/mycollection1 still exist, but > caller response is success. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-3390) Highlighting issue with multi-word synonyms causes to highlight the wrong terms
[ https://issues.apache.org/jira/browse/SOLR-3390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483292#comment-13483292 ] Jonathan Cummins commented on SOLR-3390: I think you can fix it by using a "custom" synonym filter factory, without setting "luceneMatchVersion" to "LUCENE_33" in solrconfig.xml. You can just do something like:

package your.package.name;

public class CustomSynonymFilterFactory extends SynonymFilterFactory {
    @Override
    public void init(Map args) {
        this.setLuceneMatchVersion(Version.LUCENE_33);
        super.init(args);
    }
}

And then, in your schema, you can do something like this: And that will let it use the "SlowSynonymFilter" from solr 3.3 for just the synonyms, without changing the luceneMatchVersion in solrconfig.xml. It works basically by "tricking" the SynonymFilterFactory class into thinking the lucene version is 3.3 without it actually being 3.3. Hope that helps out! > Highlighting issue with multi-word synonyms causes to highlight the wrong > terms > --- > > Key: SOLR-3390 > URL: https://issues.apache.org/jira/browse/SOLR-3390 > Project: Solr > Issue Type: Bug > Components: highlighter, query parsers >Affects Versions: 3.6 > Environment: Windows 7. (Development machine, not the server) >Reporter: Rahul Babulal > Labels: highlighter, multi-word, solr, synonyms > > I am using solr 3.6 and when I have multi-word synonyms the highlighting > results have the wrong word highlighted. > If I have the below entry in the synonyms file: > dns, domain name system > If I index something like: "A sample dns entry explaining the details". > Searching for "name" (without quotes) in the highlight results/snippets I get > : "A sample dns entry explaining the details". (The token "entry" > overlaps with the token "name" in the analysis.jsp) > Searching for "system" (without quotes) in the highlight results/snippets I > get : "A sample dns entry explaining the details".
(The token > "explaining" overlaps with the token "system" in the analysis.jsp) > Here is my schema fieldType (the XML element tags were stripped in the archive; the surviving attributes are): > positionIncrementGap="100" > one analyzer with a synonym filter (ignoreCase="true" expand="true") and a stop filter (words="stopwords.txt" enablePositionIncrements="true") > the other analyzer with a synonym filter (ignoreCase="true" expand="false") and the same stop filter (words="stopwords.txt" enablePositionIncrements="true") -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-3538) Unloading a SolrCore object and specifying delete does not fully delete all Solr parts
[ https://issues.apache.org/jira/browse/SOLR-3538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson resolved SOLR-3538. -- Resolution: Fixed Fix Version/s: 4.0-BETA Specifying deleteDataDir=true will also remove the tlog directory. > Unloading a SolrCore object and specifying delete does not fully delete all > Solr parts > -- > > Key: SOLR-3538 > URL: https://issues.apache.org/jira/browse/SOLR-3538 > Project: Solr > Issue Type: Bug > Components: multicore >Affects Versions: 4.0-ALPHA > Environment: Windows >Reporter: Andre' Hazelwood >Assignee: Erick Erickson >Priority: Minor > Fix For: 4.0-BETA > > > If I issue a action=UNLOAD&delete=true request for a specific Solr Core on > the CoreAdminHandler, all files are removed except files located in the tlog > directory under the core. We are trying to manage our cores from an outside > system, so having the core not actually get deleted is a pain. > I would expect all files as well as the Core directory to be removed if the > delete parameter is specified. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (LUCENE-4503) MoreLikeThis supports multiple index readers.
Ying Andrews created LUCENE-4503: Summary: MoreLikeThis supports multiple index readers. Key: LUCENE-4503 URL: https://issues.apache.org/jira/browse/LUCENE-4503 Project: Lucene - Core Issue Type: Improvement Reporter: Ying Andrews Priority: Minor -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-4503) MoreLikeThis supports multiple index readers.
[ https://issues.apache.org/jira/browse/LUCENE-4503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ying Andrews updated LUCENE-4503: - Attachment: MoreLikeThis.java.patch Uploading the improved MoreLikeThis in the attached patch file. > MoreLikeThis supports multiple index readers. > - > > Key: LUCENE-4503 > URL: https://issues.apache.org/jira/browse/LUCENE-4503 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Ying Andrews >Priority: Minor > Labels: patch > Attachments: MoreLikeThis.java.patch > > Original Estimate: 72h > Remaining Estimate: 72h > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-4503) MoreLikeThis supports multiple index readers.
[ https://issues.apache.org/jira/browse/LUCENE-4503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483322#comment-13483322 ] Ying Andrews commented on LUCENE-4503: -- Added support for multiple index readers so MoreLikeThis can generate a similarity query based on multiple indexes. This extends the MoreLikeThis feature to work with the Lucene MultiSearcher. For example: due to large size, we may want to divide a sales index into sales_1, sales_2, sales_3, ..., sales_n. In this case we would best use a parallel multi-searcher to do the search. The old MoreLikeThis.java doesn't support this scenario: if the current document of interest comes from index sales_1, then the query returned from like(int) and like(Reader, String) will only be based on index sales_1, which apparently does not reflect the entirety of the whole document population. Modified: constructors MoreLikeThis(IndexReader) and MoreLikeThis(IndexReader, Similarity); private method createQueue(Map). Added: constructors MoreLikeThis(IndexReader, IndexReader[]) and MoreLikeThis(IndexReader, IndexReader[], Similarity). Notes: when invoking the like(int) method of this class, you have to pass in the NORMALIZED document number. You can use the same algorithm used in the Lucene MultiSearcher class, specifically seen in its subSearcher(int) and subDoc(int) methods. > MoreLikeThis supports multiple index readers. > - > > Key: LUCENE-4503 > URL: https://issues.apache.org/jira/browse/LUCENE-4503 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Ying Andrews >Priority: Minor > Labels: patch > Attachments: MoreLikeThis.java.patch > > Original Estimate: 72h > Remaining Estimate: 72h > > -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
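The NORMALIZED document number mentioned in the comment above follows the arithmetic of Lucene's MultiSearcher (its subSearcher(int) and subDoc(int) methods): record the starting global doc ID of each sub-reader, then map a global doc number to a (sub-reader, local doc) pair. Here is a self-contained sketch of that mapping; the class and method names (DocMapper, subReader, localDoc) are illustrative, not Lucene API:

```java
// Maps a "global" document number over several concatenated sub-indexes to
// the owning sub-reader and its local document number, in the style of
// Lucene MultiSearcher.subSearcher(int) / subDoc(int).
public class DocMapper {
    private final int[] starts; // starts[i] = first global docID of sub-reader i

    public DocMapper(int[] maxDocs) {
        starts = new int[maxDocs.length + 1];
        for (int i = 0; i < maxDocs.length; i++) {
            starts[i + 1] = starts[i] + maxDocs[i];
        }
    }

    /** Index of the sub-reader containing the given global doc number. */
    public int subReader(int globalDoc) {
        // binary search for the last start <= globalDoc
        int lo = 0, hi = starts.length - 2;
        while (lo < hi) {
            int mid = (lo + hi + 1) >>> 1;
            if (starts[mid] <= globalDoc) lo = mid; else hi = mid - 1;
        }
        return lo;
    }

    /** Local doc number within that sub-reader. */
    public int localDoc(int globalDoc) {
        return globalDoc - starts[subReader(globalDoc)];
    }
}
```

With three sub-indexes of 3, 4, and 5 documents, global doc 8 falls in the third sub-reader as its local doc 1, matching how MultiSearcher concatenates doc ID ranges.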
[jira] [Commented] (SOLR-2593) A new core admin action 'split' for splitting index
[ https://issues.apache.org/jira/browse/SOLR-2593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483335#comment-13483335 ] Deepak Kumar commented on SOLR-2593: I have a situation that demands merging 2 cores, re-creating data partitions, then splitting and installing into 2 (or more) cores, and this issue seems closest to that area. Basically, there are 2 cores on the same schema, roughly 55G and 35G (and growing), and data keeps getting pushed continuously to the 35G core. We can't allow it to fill up indefinitely, so over a period of time (an offline/maintenance period) we regenerate both cores (by re-indexing to a fresh core) with the desired set of data keyed on some unique key, discard the old oversized cores, and install the fresh ones. Re-indexing is a pain: it eventually creates the same set of documents, but the older core will lose its oldest docs due to the size constraint, and the smaller core will shrink further as docs shift to the bigger one. This can be considered a sliding-time-window core. The basic steps in demand could be: 1.) Merge N cores into 1 big core (high cost). 2.) Scan through all the documents of the big core and create N new cores (N being the number of cores that were merged initially), each up to the allowed size. 3.) Hot-swap the main cores with the fresh ones. 4.) Discard the old cores, probably after backing them up. Step 1 may be omitted if we can directly scan through the documents of the N cores and push the new docs over to the target cores. > A new core admin action 'split' for splitting index > --- > > Key: SOLR-2593 > URL: https://issues.apache.org/jira/browse/SOLR-2593 > Project: Solr > Issue Type: New Feature >Reporter: Noble Paul > Fix For: 4.1 > > > If an index is too large/hot it would be desirable to split it out to another > core. 
> This core may eventually be replicated out to another host. > There can be multiple strategies: > * random split of x or x% > * fq="user:johndoe" > example: > action=split&split=20percent&newcore=my_new_index > or > action=split&fq=user:johndoe&newcore=john_doe_index -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
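The scan-and-partition step described in the comment above needs a deterministic rule assigning each document's unique key to a target core, so that re-running the scan routes every document to the same place. Below is a minimal sketch of hash-based routing on the unique key; this is not Solr's actual split implementation, and the class and method names (CorePartitioner, targetCore) are illustrative:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class CorePartitioner {
    // Route a unique key to one of n target cores. Deterministic, so
    // re-running the scan sends each document to the same core.
    public static int targetCore(String uniqueKey, int n) {
        return Math.floorMod(uniqueKey.hashCode(), n);
    }

    // Group a batch of unique keys by their target core.
    public static Map<Integer, List<String>> partition(List<String> keys, int n) {
        Map<Integer, List<String>> byCore = new TreeMap<>();
        for (String key : keys) {
            byCore.computeIfAbsent(targetCore(key, n), c -> new ArrayList<>()).add(key);
        }
        return byCore;
    }
}
```

A time-based variant (routing on an indexed timestamp instead of a key hash) would give the sliding-window behavior the comment describes; the mechanics are the same.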
[jira] [Commented] (LUCENE-4503) MoreLikeThis supports multiple index readers.
[ https://issues.apache.org/jira/browse/LUCENE-4503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483340#comment-13483340 ] Robert Muir commented on LUCENE-4503: - Can't you just pass a MultiReader instead? > MoreLikeThis supports multiple index readers. > - > > Key: LUCENE-4503 > URL: https://issues.apache.org/jira/browse/LUCENE-4503 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Ying Andrews >Priority: Minor > Labels: patch > Attachments: MoreLikeThis.java.patch > > Original Estimate: 72h > Remaining Estimate: 72h > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-2878) Allow Scorer to expose positions and payloads aka. nuke spans
[ https://issues.apache.org/jira/browse/LUCENE-2878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Woodward updated LUCENE-2878: -- Attachment: LUCENE-2878.patch New patch, does a few things: - adds some Javadocs. Not many, though! This is mainly me trying to understand how things fit together here. - pulls the SnapshotPositionCollector into its own class, and extends OrderedConjunctionIntervalIterator to use it. Also adds a new test illustrating this. - cleans up Interval and IntervalIterator a bit. I'll commit this shortly. > Allow Scorer to expose positions and payloads aka. nuke spans > -- > > Key: LUCENE-2878 > URL: https://issues.apache.org/jira/browse/LUCENE-2878 > Project: Lucene - Core > Issue Type: Improvement > Components: core/search >Affects Versions: Positions Branch >Reporter: Simon Willnauer >Assignee: Simon Willnauer > Labels: gsoc2011, gsoc2012, lucene-gsoc-11, lucene-gsoc-12, > mentor > Fix For: Positions Branch > > Attachments: LUCENE-2878-OR.patch, LUCENE-2878.patch, > LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, > LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, > LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, > LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, > LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, > LUCENE-2878_trunk.patch, LUCENE-2878_trunk.patch, PosHighlighter.patch, > PosHighlighter.patch > > > Currently we have two somewhat separate types of queries, the ones which can > make use of positions (mainly spans) and payloads (spans). Yet Span*Query > doesn't really do scoring comparable to what other queries do, and at the end > of the day they duplicate a lot of code all over lucene. Span*Queries are > also limited to other Span*Query instances, such that you cannot use a > TermQuery or a BooleanQuery with SpanNear or anything like that. 
> Besides the Span*Query limitation, other queries lack a quite interesting > feature: they cannot score based on term proximity, since scorers don't > expose any positional information. All those problems bugged me for a while, > so I started working on that using the bulkpostings API. I would have done > that first cut on trunk, but TermScorer is working on a BlockReader that does not > expose positions, while the one in this branch does. I started adding a new > Positions class which users can pull from a scorer; to prevent unnecessary > positions enums I added ScorerContext#needsPositions and eventually > Scorer#needsPayloads to create the corresponding enum on demand. Yet, > currently only TermQuery / TermScorer implements this API and others simply > return null instead. > To show that the API really works and our BulkPostings work fine too with > positions, I cut over TermSpanQuery to use a TermScorer under the hood and > nuked TermSpans entirely. A nice side effect of this was that the Position > BulkReading implementation got some exercise, which now :) works all with > positions, while Payloads for bulk reading are kind of experimental in the > patch and only work with the Standard codec. > So all spans now work on top of TermScorer (I truly hate spans since today), > including the ones that need Payloads (StandardCodec ONLY)!! I didn't bother > to implement the other codecs yet since I want to get feedback on the API and > on this first cut before I go on with it. I will upload the corresponding > patch in a minute. > I also had to cut over SpanQuery.getSpans(IR) to > SpanQuery.getSpans(AtomicReaderContext), which I should probably do on trunk > first, but after that pain today I need a break first :). > The patch passes all core tests > (org.apache.lucene.search.highlight.HighlighterTest still fails, but I didn't > look into the MemoryIndex BulkPostings API yet) -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-3983) Test failure in SoftAutoCommitTest
[ https://issues.apache.org/jira/browse/SOLR-3983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483366#comment-13483366 ] Steven Rowe commented on SOLR-3983: --- I can't reproduce, with Apple JVM v1.6.0_37 and Oracle JVM v1.7.0_07 on OS X 10.8.2. > Test failure in SoftAutoCommitTest > -- > > Key: SOLR-3983 > URL: https://issues.apache.org/jira/browse/SOLR-3983 > Project: Solr > Issue Type: Bug > Components: update >Affects Versions: 5.0 >Reporter: Alan Woodward >Priority: Minor > > [junit4:junit4] 2> NOTE: reproduce with: ant test > -Dtestcase=SoftAutoCommitTest > -Dtests.method=testSoftAndHardCommitMaxTimeDelete > -Dtests.seed=170BD2F6138202CF -Dtests.slow=true -Dtests.locale=it > -Dtests.timezone=America/Cancun -Dtests.file.encoding=ISO-8859-1 > [junit4:junit4] FAILURE 11.1s | > SoftAutoCommitTest.testSoftAndHardCommitMaxTimeDelete <<< > [junit4:junit4]> Throwable #1: java.lang.AssertionError: searcher529 > wasn't soon enough after soft529: 1351065837489 !< 1351065837316 + 100 (fudge) > [junit4:junit4]> at > __randomizedtesting.SeedInfo.seed([170BD2F6138202CF:D0476A6B082ACF7F]:0) > [junit4:junit4]> at org.junit.Assert.fail(Assert.java:93) > [junit4:junit4]> at org.junit.Assert.assertTrue(Assert.java:43) > [junit4:junit4]> at > org.apache.solr.update.SoftAutoCommitTest.testSoftAndHardCommitMaxTimeDelete(SoftAutoCommitTest.java:256) > 100% repeatable for me. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-3983) Test failure in SoftAutoCommitTest
[ https://issues.apache.org/jira/browse/SOLR-3983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483383#comment-13483383 ] Alan Woodward commented on SOLR-3983: - Odd. Erick couldn't reproduce it either. Still reproduces all the time for me, Apple JVM 1.6.0_37, OS X 10.7.5. I guess this means I have to debug it... :-) > Test failure in SoftAutoCommitTest > -- > > Key: SOLR-3983 > URL: https://issues.apache.org/jira/browse/SOLR-3983 > Project: Solr > Issue Type: Bug > Components: update >Affects Versions: 5.0 >Reporter: Alan Woodward >Priority: Minor > > [junit4:junit4] 2> NOTE: reproduce with: ant test > -Dtestcase=SoftAutoCommitTest > -Dtests.method=testSoftAndHardCommitMaxTimeDelete > -Dtests.seed=170BD2F6138202CF -Dtests.slow=true -Dtests.locale=it > -Dtests.timezone=America/Cancun -Dtests.file.encoding=ISO-8859-1 > [junit4:junit4] FAILURE 11.1s | > SoftAutoCommitTest.testSoftAndHardCommitMaxTimeDelete <<< > [junit4:junit4]> Throwable #1: java.lang.AssertionError: searcher529 > wasn't soon enough after soft529: 1351065837489 !< 1351065837316 + 100 (fudge) > [junit4:junit4]> at > __randomizedtesting.SeedInfo.seed([170BD2F6138202CF:D0476A6B082ACF7F]:0) > [junit4:junit4]> at org.junit.Assert.fail(Assert.java:93) > [junit4:junit4]> at org.junit.Assert.assertTrue(Assert.java:43) > [junit4:junit4]> at > org.apache.solr.update.SoftAutoCommitTest.testSoftAndHardCommitMaxTimeDelete(SoftAutoCommitTest.java:256) > 100% repeatable for me. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: svn commit: r1401778 - /lucene/dev/trunk/solr/core/src/java/org/apache/solr/handler/PingRequestHandler.java
I don't like that this is currently a manual process. Sure, if you had used {@code} here we would have detected it, but in general I think we should be validating the javadocs HTML. E.g. if I put this page http://lucene.apache.org/solr/4_0_0/solr-core/org/apache/solr/handler/PingRequestHandler.html into the w3 validator (http://validator.w3.org/) it complains: Line 191, Column 8: end tag for "CODE" omitted, but its declaration does not permit this. Anyone know of a good way we can improve our checker for this? Then we would be able to keep it correct. On Wed, Oct 24, 2012 at 12:52 PM, wrote: > Author: rmuir > Date: Wed Oct 24 16:52:18 2012 > New Revision: 1401778 > > URL: http://svn.apache.org/viewvc?rev=1401778&view=rev > Log: > fix unclosed tag that makes the whole javadocs page have a huge font > > Modified: > > lucene/dev/trunk/solr/core/src/java/org/apache/solr/handler/PingRequestHandler.java > > Modified: > lucene/dev/trunk/solr/core/src/java/org/apache/solr/handler/PingRequestHandler.java > URL: > http://svn.apache.org/viewvc/lucene/dev/trunk/solr/core/src/java/org/apache/solr/handler/PingRequestHandler.java?rev=1401778&r1=1401777&r2=1401778&view=diff > == > --- > lucene/dev/trunk/solr/core/src/java/org/apache/solr/handler/PingRequestHandler.java > (original) > +++ > lucene/dev/trunk/solr/core/src/java/org/apache/solr/handler/PingRequestHandler.java > Wed Oct 24 16:52:18 2012 > @@ -119,7 +119,7 @@ import org.slf4j.LoggerFactory; > * > * http://.../ping?action=status > * - returns a status code indicating if the healthcheck file exists > - * ("enabled") or not ("disabled") > + * ("enabled") or not ("disabled") > * > * > * > > - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
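One cheap automated check of the kind the message above asks about is to scan each generated javadoc HTML page and verify that open and close counts balance for the tags that cause this failure mode. A naive self-contained sketch follows; it ignores self-closing and implicitly closed tags, and it is not the checker the Lucene build actually uses (the class name TagBalanceChecker is illustrative):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TagBalanceChecker {
    // Matches an opening or closing HTML tag and captures the optional "/"
    // and the tag name, e.g. "<code>", "</code>", "<a href=...>".
    private static final Pattern TAG = Pattern.compile("<(/?)([a-zA-Z]+)[^>]*>");

    /** True if every opening tag of the given name has a matching close tag. */
    public static boolean isBalanced(String html, String tagName) {
        int depth = 0;
        Matcher m = TAG.matcher(html);
        while (m.find()) {
            if (!m.group(2).equalsIgnoreCase(tagName)) continue;
            depth += m.group(1).isEmpty() ? 1 : -1;
            if (depth < 0) return false; // close appeared before its open
        }
        return depth == 0;
    }
}
```

A real validator (like the W3C one mentioned above) wired into the build would be more thorough, but even a count check like this would flag an unclosed tag of the sort the commit fixes.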
[jira] [Commented] (LUCENE-4494) Add phoenetic algorithm Match Rating approach to lucene
[ https://issues.apache.org/jira/browse/LUCENE-4494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483418#comment-13483418 ] Ryan McKinley commented on LUCENE-4494: --- Your patch looks good as is. With the tests and docs you have, I expect it will easily slide into commons codec. re 'codecs are rarely touched' I expect they do not *change* them often (ever?), but adding a new generally useful codec is what the project is for! Nothing in the patch relies on lucene -- I suggest making a patch just like this one and posting it here: https://issues.apache.org/jira/browse/CODEC Just take what you have and replace: {code}lucene/analysis/phonetic/src/test/org/apache/lucene/analysis/phonetic/{code} with {code}src/main/java/org/apache/commons/codec/language/{code} and you should be good. Let me know if you need help. > Add phoenetic algorithm Match Rating approach to lucene > --- > > Key: LUCENE-4494 > URL: https://issues.apache.org/jira/browse/LUCENE-4494 > Project: Lucene - Core > Issue Type: Improvement > Components: modules/analysis >Affects Versions: 4.0-ALPHA >Reporter: Colm Rice >Priority: Minor > Fix For: 4.1 > > Attachments: LUCENE-4494.patch > > Original Estimate: 168h > Remaining Estimate: 168h > > I want to add the MatchRatingApproach algorithm to the Lucene project. > What I have at the moment is a class called > org.apache.lucene.analysis.phoenetic.MatchRatingApproach implementing > StringEncoder > I have a pretty comprehensive test file located at: > org.apache.lucene.analysis.phonetic.MatchRatingApproachTests > It's not exactly the existing pattern, so I'm going to need a bit of advice here. > Thanks! Feel free to email. > FYI: It's my first contribution so be gentle :-) C# is my native language. > Reference: http://en.wikipedia.org/wiki/Match_rating_approach -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: svn commit: r1401778 - /lucene/dev/trunk/solr/core/src/java/org/apache/solr/handler/PingRequestHandler.java
I fixed the address bug from this validator. The charset one seems like a real bug in javadocs: if you use javadoc -charset like we do, it emits an HTML5 construct but the doctype is declared as HTML4. On Wed, Oct 24, 2012 at 1:01 PM, Robert Muir wrote: > I don't like how this is currently a manual process, sure if you had > used {@code here we would have detected it, but in general I think we > should be validating this javadocs html? > > e.g. if i put this page > http://lucene.apache.org/solr/4_0_0/solr-core/org/apache/solr/handler/PingRequestHandler.html > into the w3 validator (http://validator.w3.org/) it complains: > > Line 191, Column 8: end tag for "CODE" omitted, but its declaration > does not permit this > > Anyone know of a good way we can improve our checker for this? Then we > would be able to keep it correct. > > On Wed, Oct 24, 2012 at 12:52 PM, wrote: >> Author: rmuir >> Date: Wed Oct 24 16:52:18 2012 >> New Revision: 1401778 >> >> URL: http://svn.apache.org/viewvc?rev=1401778&view=rev >> Log: >> fix unclosed tag that makes the whole javadocs page have a huge font >> >> Modified: >> >> lucene/dev/trunk/solr/core/src/java/org/apache/solr/handler/PingRequestHandler.java >> >> Modified: >> lucene/dev/trunk/solr/core/src/java/org/apache/solr/handler/PingRequestHandler.java >> URL: >> http://svn.apache.org/viewvc/lucene/dev/trunk/solr/core/src/java/org/apache/solr/handler/PingRequestHandler.java?rev=1401778&r1=1401777&r2=1401778&view=diff >> == >> --- >> lucene/dev/trunk/solr/core/src/java/org/apache/solr/handler/PingRequestHandler.java >> (original) >> +++ >> lucene/dev/trunk/solr/core/src/java/org/apache/solr/handler/PingRequestHandler.java >> Wed Oct 24 16:52:18 2012 >> @@ -119,7 +119,7 @@ import org.slf4j.LoggerFactory; >> * >> * http://.../ping?action=status >> * - returns a status code indicating if the healthcheck file exists >> - * ("enabled") or not ("disabled") >> + * ("enabled") or not ("disabled") >> * >> * >> * >> >> - To unsubscribe, e-mail: 
dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-3971) A collection that is created with numShards=1 turns into a numShards=2 collection after starting up a second core and not specifying numShards.
[ https://issues.apache.org/jira/browse/SOLR-3971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3971: -- Affects Version/s: (was: 5.0) Fix Version/s: 5.0 > A collection that is created with numShards=1 turns into a numShards=2 > collection after starting up a second core and not specifying numShards. > --- > > Key: SOLR-3971 > URL: https://issues.apache.org/jira/browse/SOLR-3971 > Project: Solr > Issue Type: Bug > Components: SolrCloud >Affects Versions: 4.0 >Reporter: Mark Miller >Assignee: Mark Miller > Labels: 4.0.1_Candidate > Fix For: 4.1, 5.0 > > Attachments: SOLR-3971.patch > > > Showing up while I'm working on a different test. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-3939) An empty or just replicated index cannot become the leader of a shard after a leader goes down.
[ https://issues.apache.org/jira/browse/SOLR-3939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3939: -- Summary: An empty or just replicated index cannot become the leader of a shard after a leader goes down. (was: Solr Cloud recovery and leader election when unloading leader core) > An empty or just replicated index cannot become the leader of a shard after a > leader goes down. > --- > > Key: SOLR-3939 > URL: https://issues.apache.org/jira/browse/SOLR-3939 > Project: Solr > Issue Type: Bug > Components: SolrCloud >Affects Versions: 4.0-BETA, 4.0 >Reporter: Joel Bernstein >Assignee: Mark Miller >Priority: Critical > Labels: 4.0.1_Candidate > Fix For: 4.1, 5.0 > > Attachments: cloud2.log, cloud.log, SOLR-3939.patch, SOLR-3939.patch > > > When a leader core is unloaded using the core admin api, the followers in the > shard go into recovery but do not come out. Leader election doesn't take > place and the shard goes down. > This effects the ability to move a micro-shard from one Solr instance to > another Solr instance. > The problem does not occur 100% of the time but a large % of the time. > To setup a test, startup Solr Cloud with a single shard. Add cores to that > shard as replicas using core admin. Then unload the leader core using core > admin. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-1972) Need additional query stats in admin interface - median, 95th and 99th percentile
[ https://issues.apache.org/jira/browse/SOLR-1972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483445#comment-13483445 ] Shawn Heisey commented on SOLR-1972: With this as the constructor:
{code}
public RequestHandlerBase() {
  //numRequests = Metrics.newCounter(RequestHandlerBase.class, "numRequests", this.toString());
  //numErrors = Metrics.newCounter(RequestHandlerBase.class, "numErrors", this.toString());
  //numTimeouts = Metrics.newCounter(RequestHandlerBase.class, "numTimeouts", this.toString());
  //requestTimes = Metrics.newTimer(RequestHandlerBase.class, "requestTimes", this.toString());
  numRequests = Metrics.newCounter(RequestHandlerBase.class, "numRequests");
  numErrors = Metrics.newCounter(RequestHandlerBase.class, "numErrors");
  numTimeouts = Metrics.newCounter(RequestHandlerBase.class, "numTimeouts");
  requestTimes = Metrics.newTimer(RequestHandlerBase.class, "requestTimes");
}
{code}
I get the following as the failure. I suppose I should be glad that it fails, but RequestHandlersTests, which is the test that was modified, continues to pass. {code} [junit4:junit4] Suite: org.apache.solr.handler.admin.MBeansHandlerTest [junit4:junit4] 2> 0 T297 oas.SolrTestCaseJ4.initCore initCore [junit4:junit4] 2> Creating dataDir: /index/src/branch_4x/solr/build/solr-core/test/J0/./solrtest-MBeansHandlerTest-1351102494087 [junit4:junit4] 2> 26 T297 oasc.SolrResourceLoader. new SolrResourceLoader for directory: '/index/src/branch_4x/solr/build/solr-core/test-files/solr/collection1/' [junit4:junit4] 2> 27 T297 oasc.SolrResourceLoader.replaceClassLoader Adding 'file:/index/src/branch_4x/solr/build/solr-core/test-files/solr/collection1/lib/classes/' to classloader [junit4:junit4] 2> 27 T297 oasc.SolrResourceLoader.replaceClassLoader Adding 'file:/index/src/branch_4x/solr/build/solr-core/test-files/solr/collection1/lib/README' to classloader [junit4:junit4] 2> 59 T297 oasc.SolrConfig. 
Using Lucene MatchVersion: LUCENE_41 [junit4:junit4] 2> 98 T297 oasc.SolrConfig. Loaded SolrConfig: solrconfig.xml [junit4:junit4] 2> 98 T297 oass.IndexSchema.readSchema Reading Solr Schema [junit4:junit4] 2> 105 T297 oass.IndexSchema.readSchema Schema name=test [junit4:junit4] 2> 414 T297 oass.OpenExchangeRatesOrgProvider.init Initialized with rates=open-exchange-rates.json, refreshInterval=1440. [junit4:junit4] 2> 420 T297 oass.IndexSchema.readSchema default search field in schema is text [junit4:junit4] 2> 422 T297 oass.IndexSchema.readSchema unique key field: id [junit4:junit4] 2> 430 T297 oass.FileExchangeRateProvider.reload Reloading exchange rates from file currency.xml [junit4:junit4] 2> 432 T297 oass.FileExchangeRateProvider.reload Reloading exchange rates from file currency.xml [junit4:junit4] 2> 434 T297 oass.OpenExchangeRatesOrgProvider.reload Reloading exchange rates from open-exchange-rates.json [junit4:junit4] 2> 435 T297 oass.OpenExchangeRatesOrgProvider.reload Reloading exchange rates from open-exchange-rates.json [junit4:junit4] 2> 436 T297 oasc.SolrResourceLoader.locateSolrHome JNDI not configured for solr (NoInitialContextEx) [junit4:junit4] 2> 436 T297 oasc.SolrResourceLoader.locateSolrHome using system property solr.solr.home: /index/src/branch_4x/solr/build/solr-core/test-files/solr [junit4:junit4] 2> 436 T297 oasc.SolrResourceLoader. new SolrResourceLoader for directory: '/index/src/branch_4x/solr/build/solr-core/test-files/solr/' [junit4:junit4] 2> 442 T297 oasc.CoreContainer. New CoreContainer 1897850152 [junit4:junit4] 2> 442 T297 oasc.SolrCore. [collection1] Opening new SolrCore at /index/src/branch_4x/solr/build/solr-core/test-files/solr/collection1/, dataDir=/index/src/branch_4x/solr/build/solr-core/test/J0/./solrtest-MBeansHandlerTest-1351102494087/ [junit4:junit4] 2> 443 T297 oasc.JmxMonitoredMap. JMX monitoring is enabled. 
Adding Solr mbeans to JMX Server: com.sun.jmx.mbeanserver.JmxMBeanServer@201a970 [junit4:junit4] 2> 443 T297 oasc.SolrCore.getNewIndexDir New index directory detected: old=null new=/index/src/branch_4x/solr/build/solr-core/test/J0/./solrtest-MBeansHandlerTest-1351102494087/index/ [junit4:junit4] 2> 444 T297 oasc.SolrCore.initIndex WARNING [collection1] Solr index directory '/index/src/branch_4x/solr/build/solr-core/test/J0/./solrtest-MBeansHandlerTest-1351102494087/index' doesn't exist. Creating new index... [junit4:junit4] 2> 460 T297 oasc.CachingDirectoryFactory.get return new directory for /index/src/branch_4x/solr/build/solr-core/test/J0/./solrtest-MBeansHandlerTest-1351102494087/index forceNew:false [junit4:junit4] 2> 462 T297 oasc.SolrDeletionPolicy.onCommit SolrDeletionPolicy.onCommit: commits:num=1 [junit4:junit4] 2> commit{dir=MockDirWrapper(org.apache.lucene.store.SimpleFSDirectory@/index/src/branch_4x/solr/build/solr-core/test/J0/index41037410
[jira] [Comment Edited] (SOLR-1972) Need additional query stats in admin interface - median, 95th and 99th percentile
[ https://issues.apache.org/jira/browse/SOLR-1972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483445#comment-13483445 ] Shawn Heisey edited comment on SOLR-1972 at 10/24/12 6:22 PM: -- With this as the constructor:
{code}
public RequestHandlerBase() {
  //numRequests = Metrics.newCounter(RequestHandlerBase.class, "numRequests", this.toString());
  //numErrors = Metrics.newCounter(RequestHandlerBase.class, "numErrors", this.toString());
  //numTimeouts = Metrics.newCounter(RequestHandlerBase.class, "numTimeouts", this.toString());
  //requestTimes = Metrics.newTimer(RequestHandlerBase.class, "requestTimes", this.toString());
  numRequests = Metrics.newCounter(RequestHandlerBase.class, "numRequests");
  numErrors = Metrics.newCounter(RequestHandlerBase.class, "numErrors");
  numTimeouts = Metrics.newCounter(RequestHandlerBase.class, "numTimeouts");
  requestTimes = Metrics.newTimer(RequestHandlerBase.class, "requestTimes");
}
{code}
I get the following as the failure. I suppose I should be glad that it fails, but RequestHandlersTests, which is the test that was modified, continues to pass. {code} [junit4:junit4] Suite: org.apache.solr.handler.admin.MBeansHandlerTest [junit4:junit4] 2> 0 T297 oas.SolrTestCaseJ4.initCore initCore [junit4:junit4] 2> Creating dataDir: /index/src/branch_4x/solr/build/solr-core/test/J0/./solrtest-MBeansHandlerTest-1351102494087 [junit4:junit4] 2> 26 T297 oasc.SolrResourceLoader. new SolrResourceLoader for directory: '/index/src/branch_4x/solr/build/solr-core/test-files/solr/collection1/' [junit4:junit4] 2> 27 T297 oasc.SolrResourceLoader.replaceClassLoader Adding 'file:/index/src/branch_4x/solr/build/solr-core/test-files/solr/collection1/lib/classes/' to classloader [junit4:junit4] 2> 27 T297 oasc.SolrResourceLoader.replaceClassLoader Adding 'file:/index/src/branch_4x/solr/build/solr-core/test-files/solr/collection1/lib/README' to classloader [junit4:junit4] 2> 59 T297 oasc.SolrConfig. 
Using Lucene MatchVersion: LUCENE_41 [junit4:junit4] 2> 98 T297 oasc.SolrConfig. Loaded SolrConfig: solrconfig.xml [junit4:junit4] 2> 98 T297 oass.IndexSchema.readSchema Reading Solr Schema [junit4:junit4] 2> 105 T297 oass.IndexSchema.readSchema Schema name=test [junit4:junit4] 2> 414 T297 oass.OpenExchangeRatesOrgProvider.init Initialized with rates=open-exchange-rates.json, refreshInterval=1440. [junit4:junit4] 2> 420 T297 oass.IndexSchema.readSchema default search field in schema is text [junit4:junit4] 2> 422 T297 oass.IndexSchema.readSchema unique key field: id [junit4:junit4] 2> 430 T297 oass.FileExchangeRateProvider.reload Reloading exchange rates from file currency.xml [junit4:junit4] 2> 432 T297 oass.FileExchangeRateProvider.reload Reloading exchange rates from file currency.xml [junit4:junit4] 2> 434 T297 oass.OpenExchangeRatesOrgProvider.reload Reloading exchange rates from open-exchange-rates.json [junit4:junit4] 2> 435 T297 oass.OpenExchangeRatesOrgProvider.reload Reloading exchange rates from open-exchange-rates.json [junit4:junit4] 2> 436 T297 oasc.SolrResourceLoader.locateSolrHome JNDI not configured for solr (NoInitialContextEx) [junit4:junit4] 2> 436 T297 oasc.SolrResourceLoader.locateSolrHome using system property solr.solr.home: /index/src/branch_4x/solr/build/solr-core/test-files/solr [junit4:junit4] 2> 436 T297 oasc.SolrResourceLoader. new SolrResourceLoader for directory: '/index/src/branch_4x/solr/build/solr-core/test-files/solr/' [junit4:junit4] 2> 442 T297 oasc.CoreContainer. New CoreContainer 1897850152 [junit4:junit4] 2> 442 T297 oasc.SolrCore. [collection1] Opening new SolrCore at /index/src/branch_4x/solr/build/solr-core/test-files/solr/collection1/, dataDir=/index/src/branch_4x/solr/build/solr-core/test/J0/./solrtest-MBeansHandlerTest-1351102494087/ [junit4:junit4] 2> 443 T297 oasc.JmxMonitoredMap. JMX monitoring is enabled. 
Adding Solr mbeans to JMX Server: com.sun.jmx.mbeanserver.JmxMBeanServer@201a970 [junit4:junit4] 2> 443 T297 oasc.SolrCore.getNewIndexDir New index directory detected: old=null new=/index/src/branch_4x/solr/build/solr-core/test/J0/./solrtest-MBeansHandlerTest-1351102494087/index/ [junit4:junit4] 2> 444 T297 oasc.SolrCore.initIndex WARNING [collection1] Solr index directory '/index/src/branch_4x/solr/build/solr-core/test/J0/./solrtest-MBeansHandlerTest-1351102494087/index' doesn't exist. Creating new index... [junit4:junit4] 2> 460 T297 oasc.CachingDirectoryFactory.get return new directory for /index/src/branch_4x/solr/build/solr-core/test/J0/./solrtest-MBeansHandlerTest-1351102494087/index forceNew:false [junit4:junit4] 2> 462 T297 oasc.SolrDeletionPolicy.onCommit SolrDeletionPolicy.onCommit: commits:num=1 [junit4:junit4] 2> commit{dir=MockDirWrapper(org.apache.lucene.store.SimpleFSDirectory@/index/src/b
[jira] [Commented] (SOLR-1972) Need additional query stats in admin interface - median, 95th and 99th percentile
[ https://issues.apache.org/jira/browse/SOLR-1972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483449#comment-13483449 ] Alan Woodward commented on SOLR-1972: - There's a test bug in there - it's comparing the NamedList objects when it should be comparing their values. Will put up a patch in a bit... > Need additional query stats in admin interface - median, 95th and 99th > percentile > - > > Key: SOLR-1972 > URL: https://issues.apache.org/jira/browse/SOLR-1972 > Project: Solr > Issue Type: Improvement > Components: web gui >Affects Versions: 1.4 >Reporter: Shawn Heisey >Assignee: Alan Woodward >Priority: Minor > Fix For: 4.1 > > Attachments: elyograg-1972-3.2.patch, elyograg-1972-3.2.patch, > elyograg-1972-trunk.patch, elyograg-1972-trunk.patch, > SOLR-1972-branch3x-url_pattern.patch, SOLR-1972-branch4x.patch, > SOLR-1972-branch4x.patch, SOLR-1972_metrics.patch, SOLR-1972_metrics.patch, > SOLR-1972_metrics.patch, SOLR-1972_metrics.patch, SOLR-1972.patch, > SOLR-1972.patch, SOLR-1972.patch, SOLR-1972.patch, SOLR-1972-url_pattern.patch > > > I would like to see more detailed query statistics from the admin GUI. This > is what you can get now: > requests : 809 > errors : 0 > timeouts : 0 > totalTime : 70053 > avgTimePerRequest : 86.59209 > avgRequestsPerSecond : 0.8148785 > I'd like to see more data on the time per request - median, 95th percentile, > 99th percentile, and any other statistical function that makes sense to > include. In my environment, the first bunch of queries after startup tend to > take several seconds each. I find that the average value tends to be useless > until it has several thousand queries under its belt and the caches are > thoroughly warmed. The statistical functions I have mentioned would quickly > eliminate the influence of those initial slow queries. > The system will have to store individual data about each query. I don't know > if this is something Solr does already. 
It would be nice to have a > configurable count of how many of the most recent data points are kept, to > control the amount of memory the feature uses. The default value could be > something like 1024 or 4096. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
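The bounded-window statistics described above can be sketched in plain Java. This is a hypothetical illustration, not Solr's actual implementation: class and method names are invented, and integer "nearest-rank" percentile math is used to avoid floating-point edge cases.

```java
import java.util.Arrays;

// Hypothetical sketch: keep the most recent N request times in a ring
// buffer and compute percentiles over that window, so the initial slow
// queries after startup age out of the statistics.
class RequestTimeStats {
    private final long[] window;
    private int count = 0; // total samples recorded so far

    RequestTimeStats(int windowSize) {
        window = new long[windowSize]; // e.g. 1024 or 4096, as suggested above
    }

    void record(long elapsedMs) {
        window[count % window.length] = elapsedMs; // overwrite the oldest slot
        count++;
    }

    // pct is a whole-number percentile: 50 = median, 95, 99
    long percentile(int pct) {
        int n = Math.min(count, window.length);
        long[] sorted = Arrays.copyOf(window, n);
        Arrays.sort(sorted);
        int idx = (pct * n + 99) / 100 - 1; // nearest-rank method, integer math
        return sorted[Math.max(idx, 0)];
    }
}
```

Recording each request's elapsed time and reporting `percentile(50)`, `percentile(95)`, and `percentile(99)` alongside the existing counters would give exactly the numbers the issue asks for.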
[jira] [Created] (SOLR-3987) Provide Collection API request results beyond manual inspection.
Mark Miller created SOLR-3987: - Summary: Provide Collection API request results beyond manual inspection. Key: SOLR-3987 URL: https://issues.apache.org/jira/browse/SOLR-3987 Project: Solr Issue Type: Improvement Components: SolrCloud Reporter: Mark Miller Fix For: 4.1, 5.0
[jira] [Created] (LUCENE-4504) Empty results from IndexSearcher.searchAfter() when sorting by FunctionValues
TomShally created LUCENE-4504: - Summary: Empty results from IndexSearcher.searchAfter() when sorting by FunctionValues Key: LUCENE-4504 URL: https://issues.apache.org/jira/browse/LUCENE-4504 Project: Lucene - Core Issue Type: Bug Components: modules/other Affects Versions: 4.0 Reporter: TomShally Priority: Minor IS.searchAfter() always returns an empty result when using FunctionValues for sorting. The culprit is ValueSourceComparator.compareDocToValue() returning -1 when it should return +1.
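The class of bug described here is easy to reproduce in miniature. The sketch below is hypothetical (names are stand-ins, not Lucene's actual code): searchAfter keeps only hits that sort strictly after the anchor value, so a comparator with an inverted sign makes every remaining candidate look like it comes before the anchor, and the result page is empty.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the sign bug: searchAfter-style paging keeps a
// hit only when compareDocToValue(hit, anchor) > 0. Inverting the sign
// rejects every candidate that should be on the next page.
class SearchAfterSketch {
    // correct contract: negative if docValue < after, positive if docValue > after
    static int compareDocToValue(long docValue, long after) {
        return Long.compare(docValue, after);
    }

    // the buggy variant returns the opposite sign
    static int buggyCompareDocToValue(long docValue, long after) {
        return Long.compare(after, docValue);
    }

    static List<Long> searchAfter(long[] candidates, long after, boolean buggy) {
        List<Long> hits = new ArrayList<>();
        for (long v : candidates) {
            int cmp = buggy ? buggyCompareDocToValue(v, after)
                            : compareDocToValue(v, after);
            if (cmp > 0) hits.add(v); // keep only hits strictly after the anchor
        }
        return hits;
    }
}
```

With candidates {3, 4, 5} and anchor 2, the correct sign returns all three; the buggy sign returns nothing, matching the "always empty" symptom in the report.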
[jira] [Updated] (LUCENE-4504) Empty results from IndexSearcher.searchAfter() when sorting by FunctionValues
[ https://issues.apache.org/jira/browse/LUCENE-4504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] TomShally updated LUCENE-4504: -- Attachment: LUCENE-4504.patch Patch against trunk
[jira] [Commented] (LUCENE-2878) Allow Scorer to expose positions and payloads aka. nuke spans
[ https://issues.apache.org/jira/browse/LUCENE-2878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483466#comment-13483466 ] Simon Willnauer commented on LUCENE-2878: - ALAN! you have no idea how happy I am that you're picking this up again. I put a lot of work into this already and I really think we are close. Only MultiTermSloppyPhrase doesn't work at this point, and I honestly think we can just mark this as unsupported (what a crazy scorer) anyway. We really need to clean this stuff up, and you basically did the first step towards this. +1 to commit! :) > Allow Scorer to expose positions and payloads aka. nuke spans > -- > > Key: LUCENE-2878 > URL: https://issues.apache.org/jira/browse/LUCENE-2878 > Project: Lucene - Core > Issue Type: Improvement > Components: core/search >Affects Versions: Positions Branch >Reporter: Simon Willnauer >Assignee: Simon Willnauer > Labels: gsoc2011, gsoc2012, lucene-gsoc-11, lucene-gsoc-12, > mentor > Fix For: Positions Branch > > Attachments: LUCENE-2878-OR.patch, LUCENE-2878.patch, > LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, > LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, > LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, > LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, > LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, > LUCENE-2878_trunk.patch, LUCENE-2878_trunk.patch, PosHighlighter.patch, > PosHighlighter.patch > > > Currently we have two somewhat separate types of queries: those which can > make use of positions (mainly spans) and payloads (spans), and those which cannot. Yet Span*Query > doesn't really do scoring comparable to what other queries do, and at the end > of the day they duplicate a lot of code all over Lucene. Span*Queries are > also limited to other Span*Query instances, such that you cannot use a > TermQuery or a BooleanQuery with SpanNear or anything like that. > Besides the Span*Query limitation, other queries lack a quite interesting > feature: they cannot score based on term proximity, since scorers don't > expose any positional information. All those problems bugged me for a while, > so I started working on this using the bulkpostings API. I would have done > the first cut on trunk, but TermScorer there works on a BlockReader that does not > expose positions, while the one in this branch does. I started adding a new > Positions class which users can pull from a scorer; to prevent unnecessary > positions enums I added ScorerContext#needsPositions and eventually > Scorer#needsPayloads to create the corresponding enum on demand. Yet, > currently only TermQuery / TermScorer implements this API, and others simply > return null instead. > To show that the API really works, and that our BulkPostings work fine with > positions too, I cut over TermSpanQuery to use a TermScorer under the hood and > nuked TermSpans entirely. A nice side effect of this was that the Position > BulkReading implementation got some exercise, and it now :) works entirely with > positions, while Payloads for bulk reading are kind of experimental in the > patch and only work with the Standard codec. > So all spans now work on top of TermScorer (I truly hate spans since today), > including the ones that need Payloads (StandardCodec ONLY)!! I didn't bother > to implement the other codecs yet, since I want to get feedback on the API and > on this first cut before I go on with it. I will upload the corresponding > patch in a minute. > I also had to cut over SpanQuery.getSpans(IR) to > SpanQuery.getSpans(AtomicReaderContext), which I should probably do on trunk > first, but after today's pain I need a break first :). > The patch passes all core tests > (org.apache.lucene.search.highlight.HighlighterTest still fails, but I didn't > look into the MemoryIndex BulkPostings API yet)
[jira] [Commented] (SOLR-3039) ExtendedDismaxQParser should allow for extension of parsing-related behavior
[ https://issues.apache.org/jira/browse/SOLR-3039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483476#comment-13483476 ] Danny Dvinov commented on SOLR-3039: All comments are more than welcome btw! > ExtendedDismaxQParser should allow for extension of parsing-related behavior > > > Key: SOLR-3039 > URL: https://issues.apache.org/jira/browse/SOLR-3039 > Project: Solr > Issue Type: Improvement > Components: query parsers >Affects Versions: 5.0 >Reporter: Danny Dvinov >Priority: Minor > Labels: edismax, parser, parsing > Fix For: 5.0 > > Attachments: SOLR-3039.patch, SOLR-3039.patch, SOLR-3039.patch, > SOLR-3039.patch > > Original Estimate: 24h > Remaining Estimate: 24h > > ExtendedDismaxQParser.parse does not currently allow for things like query > pre-processing prior to its parsing, specifying the parser to be used, and > whether a particular clause should be included in the query being parsed. > Furthermore, ExtendedDismaxQParser and the inner ExtendedSolrQueryParser cannot > be subclassed. By resolving this issue, we'll provide a way for Solr > implementations to extend the parser and parsing-related behavior.
[jira] [Commented] (LUCENE-4503) MoreLikeThis supports multiple index readers.
[ https://issues.apache.org/jira/browse/LUCENE-4503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483486#comment-13483486 ] Ying Andrews commented on LUCENE-4503: -- Thanks for pointing it out, Robert. In the application I worked on, we had to support a mix of local and remote searchers. Due to the large scale and heterogeneous nature of our systems, we had to be able to search anything that implements "Searchable". We also had to take advantage of ParallelMultiSearcher to boost performance. In a special case we had a ParallelMultiSearcher consisting of a group of local file indexes, a group of remote searchers whose data may come from yet other remote searchers (kind of like a tree), and one searcher that gets data from a SolrServer over the network. Therefore we had to adopt the MultiSearcher strategy instead of the MultiReader strategy. We recently added the MoreLikeThis feature to our heterogeneous system. As you can see, MultiReader is not an option in our environment. The links below roughly explain my situation. Thank you. http://lucene.472066.n3.nabble.com/MultiSearcher-vs-MultiReader-td546968.html http://mail-archives.apache.org/mod_mbox/lucene-java-user/200712.mbox/%3cof924d8f48.261c9541-onc22573a5.0077e70d-c22573a5.007a9...@il.ibm.com%3E > MoreLikeThis supports multiple index readers. > - > > Key: LUCENE-4503 > URL: https://issues.apache.org/jira/browse/LUCENE-4503 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Ying Andrews >Priority: Minor > Labels: patch > Attachments: MoreLikeThis.java.patch > > Original Estimate: 72h > Remaining Estimate: 72h >
[jira] [Commented] (SOLR-139) Support updateable/modifiable documents
[ https://issues.apache.org/jira/browse/SOLR-139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483495#comment-13483495 ] Mike commented on SOLR-139: --- Can we get this on the Wiki somewhere? I've been looking around, haven't been able to find it. Not sure where to put it... > Support updateable/modifiable documents > --- > > Key: SOLR-139 > URL: https://issues.apache.org/jira/browse/SOLR-139 > Project: Solr > Issue Type: New Feature > Components: update >Reporter: Ryan McKinley > Fix For: 4.1 > > Attachments: Eriks-ModifiableDocument.patch, > Eriks-ModifiableDocument.patch, Eriks-ModifiableDocument.patch, > Eriks-ModifiableDocument.patch, Eriks-ModifiableDocument.patch, > Eriks-ModifiableDocument.patch, getStoredFields.patch, getStoredFields.patch, > getStoredFields.patch, getStoredFields.patch, getStoredFields.patch, > SOLR-139_createIfNotExist.patch, SOLR-139-IndexDocumentCommand.patch, > SOLR-139-IndexDocumentCommand.patch, SOLR-139-IndexDocumentCommand.patch, > SOLR-139-IndexDocumentCommand.patch, SOLR-139-IndexDocumentCommand.patch, > SOLR-139-IndexDocumentCommand.patch, SOLR-139-IndexDocumentCommand.patch, > SOLR-139-IndexDocumentCommand.patch, SOLR-139-IndexDocumentCommand.patch, > SOLR-139-IndexDocumentCommand.patch, SOLR-139-IndexDocumentCommand.patch, > SOLR-139-ModifyInputDocuments.patch, SOLR-139-ModifyInputDocuments.patch, > SOLR-139-ModifyInputDocuments.patch, SOLR-139-ModifyInputDocuments.patch, > SOLR-139.patch, SOLR-139.patch, SOLR-139-XmlUpdater.patch, > SOLR-269+139-ModifiableDocumentUpdateProcessor.patch > > > It would be nice to be able to update some fields on a document without > having to insert the entire document. > Given the way lucene is structured, (for now) one can only modify stored > fields. > While we are at it, we can support incrementing an existing value - I think > this only makes sense for numbers. 
> for background, see: > http://www.nabble.com/loading-many-documents-by-ID-tf3145666.html#a8722293
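The "update some fields without resending the whole document" idea above amounts to sending the unique key plus per-field modifiers. The sketch below builds such a request body as plain nested maps; the "set"/"inc" modifier names follow the atomic-update syntax Solr eventually adopted, but treat the exact shape and field names here as illustrative, not an exact wire format.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of a partial-update request body: the document id
// plus per-field modifier maps, instead of the full document.
class PartialUpdateSketch {
    static Map<String, Object> modifier(String op, Object value) {
        Map<String, Object> m = new LinkedHashMap<>();
        m.put(op, value); // e.g. {"inc": 1} or {"set": 12.99}
        return m;
    }

    static Map<String, Object> buildUpdate(String id) {
        Map<String, Object> doc = new LinkedHashMap<>();
        doc.put("id", id);                              // which document to modify
        doc.put("popularity", modifier("inc", 1));      // increment a numeric field
        doc.put("price", modifier("set", 12.99));       // overwrite one stored field
        return doc;
    }
}
```

Note the "inc" modifier only makes sense for numeric fields, exactly as the issue description anticipates.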
[jira] [Commented] (SOLR-139) Support updateable/modifiable documents
[ https://issues.apache.org/jira/browse/SOLR-139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483496#comment-13483496 ] Matt Altermatt commented on SOLR-139: - I will be out of the office until the 29th of October. If you need immediate assistance, please contact IT Integration (itintegrat...@paml.com) or my manager Jon Tolley (jtol...@paml.com). Thanks.
[jira] [Commented] (SOLR-3981) docBoost is compounded on copyField
[ https://issues.apache.org/jira/browse/SOLR-3981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483508#comment-13483508 ] Hoss Man commented on SOLR-3981: bq. that adoc() you are using doesnt work with boosts. (I found this from another test) Grr... thanks rmuir, never would have even thought to check that ... easy fix. bq. Applying the boosts once from all source fields for a given copyField destination seems a bit strange to me, but since it is old behaviour, I understand that it cannot be changed. right ... copyField has always copied the _field_ boosts; the bug here is the compounded docBoost. FWIW: we could add a ton more options to copyField to give more fine-grained control over stuff like this, if you'd like to file some Jiras for feature improvements along those lines -- but personally I think: a) update processors make more sense for stuff like this; b) people should move away from doc/field boosts and start doing more with functions on numeric fields (and ultimately DocValues fields), where you have a lot more control over this stuff > docBoost is compounded on copyField > --- > > Key: SOLR-3981 > URL: https://issues.apache.org/jira/browse/SOLR-3981 > Project: Solr > Issue Type: Bug >Affects Versions: 4.0 >Reporter: Hoss Man >Assignee: Hoss Man > Fix For: 4.1 > > Attachments: SOLR-3981.patch, SOLR-3981.patch > > > As noted by Toke in a comment on SOLR-3875... > https://issues.apache.org/jira/browse/SOLR-3875?focusedCommentId=13482233&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13482233 > {quote} > While boosting of multi-value fields is handled correctly in Solr 4.0.0, > boosting for copyFields is not. A sample document: > {code} > > Insane score Example. Score = 10E9 > Document boost broken for copyFields > video ThomasEgense and Toke Eskildsen > Test > bug > something else > bug > bug > > {code} > The fields name, manu, cat, features, keywords and content get copied to > text, and a search for thomasegense matches the text field with query > explanation > {code} > 70384.67 = (MATCH) weight(text:thomasegense in 0) [DefaultSimilarity], result > of: > 70384.67 = fieldWeight in 0, product of: > 1.0 = tf(freq=1.0), with freq of: > 1.0 = termFreq=1.0 > 0.30685282 = idf(docFreq=1, maxDocs=1) > 229376.0 = fieldNorm(doc=0) > {code} > If the two last fields, keywords and content, are removed from the sample > document, the score is reduced by a factor of 100 (docBoost^2). > {quote} > (This is a continuation of some of the problems caused by the changes made > when the concept of docBoost was eliminated from the underlying IndexWriter > code, and overlooked due to the lack of testing of docBoosts at the Solr > level - SOLR-3885)
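The arithmetic in the report above can be made explicit. This is an illustrative sketch of the bug's effect, not Solr's code: if docBoost is applied once per copyField source instead of once per document, the destination field's boost compounds as docBoost^k for k copied source fields, so dropping two of the six source fields divides the score by docBoost^2.

```java
// Hypothetical model of the compounding bug: one docBoost multiplication
// per copied source field, instead of a single application per document.
class DocBoostCompounding {
    static double compoundedBoost(double docBoost, int copiedSourceFields) {
        double boost = 1.0;
        for (int i = 0; i < copiedSourceFields; i++) {
            boost *= docBoost; // the bug: applied once per source field
        }
        return boost;
    }
}
```

With docBoost = 10 and six copied fields this gives 10^6 instead of 10, and removing two source fields changes it by exactly the factor 100 seen in the report.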
[jira] [Created] (SOLR-3988) SolrTestCaseJ4.adoc(SolrInputDocument) ignores field & docboots
Hoss Man created SOLR-3988: -- Summary: SolrTestCaseJ4.adoc(SolrInputDocument) ignores field & docboots Key: SOLR-3988 URL: https://issues.apache.org/jira/browse/SOLR-3988 Project: Solr Issue Type: Sub-task Affects Versions: 4.0, 3.6.1 Reporter: Hoss Man Fix For: 4.1 Discovered while writing a test for SOLR-3981. I intend to commit the fix as part of that issue, but creating a subtask to track it as a distinct bug since it may be affecting other users of the test-framework
[jira] [Updated] (SOLR-3981) docBoost is compounded on copyField
[ https://issues.apache.org/jira/browse/SOLR-3981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hoss Man updated SOLR-3981: --- Attachment: SOLR-3981.patch updated patch to include fix for the test-harness. Still running exhaustive tests
[jira] [Commented] (SOLR-3988) SolrTestCaseJ4.adoc(SolrInputDocument) ignores field & docboots
[ https://issues.apache.org/jira/browse/SOLR-3988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483535#comment-13483535 ] Robert Muir commented on SOLR-3988: --- This trapped me before too, writing a similar test: it would be great to fix it, since it can easily cause a lot of wasted time!
[jira] [Updated] (SOLR-1972) Need additional query stats in admin interface - median, 95th and 99th percentile
[ https://issues.apache.org/jira/browse/SOLR-1972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Woodward updated SOLR-1972: Attachment: SOLR-1972_metrics.patch Patch correcting the test bug (after an hour or so of swearing at Java object equality semantics...)
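The "Java object equality semantics" pitfall behind the test bug is worth spelling out: a container class that does not override equals() falls back to reference identity, so two instances holding identical entries still compare unequal, and a test must compare the extracted values instead. The sketch below is illustrative; "NamedListLike" is a stand-in, not Solr's NamedList.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for a container without an equals() override:
// equals() inherited from Object compares references, not contents.
class NamedListLike {
    final List<Object> entries = new ArrayList<>();

    void add(String name, Object value) {
        entries.add(name);
        entries.add(value);
    }
}
```

Comparing two such objects directly (the original test bug) always fails for distinct instances; comparing `entries` succeeds, because List.equals() is defined element-by-element.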
[jira] [Commented] (SOLR-3981) docBoost is compounded on copyField
[ https://issues.apache.org/jira/browse/SOLR-3981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483561#comment-13483561 ] Hoss Man commented on SOLR-3981: tests & precommit look good ... unless anyone spots any problems i'll commit later today. > docBoost is compounded on copyField > --- > > Key: SOLR-3981 > URL: https://issues.apache.org/jira/browse/SOLR-3981 > Project: Solr > Issue Type: Bug >Affects Versions: 4.0 >Reporter: Hoss Man >Assignee: Hoss Man > Fix For: 4.1 > > Attachments: SOLR-3981.patch, SOLR-3981.patch, SOLR-3981.patch > > > As noted by Toke in a comment on SOLR-3875... > https://issues.apache.org/jira/browse/SOLR-3875?focusedCommentId=13482233&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13482233 > {quote} > While boosting of multi-value fields is handled correctly in Solr 4.0.0, > boosting for copyFields are not. A sample document: > {code} > > Insane score Example. Score = 10E9 > Document boost broken for copyFields > video ThomasEgense and Toke Eskildsen > Test > bug > something else > bug > bug > > {code} > The fields name, manu, cat, features, keywords and content gets copied to > text and a search for thomasegense matches the text-field with query > explanation > {code} > 70384.67 = (MATCH) weight(text:thomasegense in 0) [DefaultSimilarity], result > of: > 70384.67 = fieldWeight in 0, product of: > 1.0 = tf(freq=1.0), with freq of: > 1.0 = termFreq=1.0 > 0.30685282 = idf(docFreq=1, maxDocs=1) > 229376.0 = fieldNorm(doc=0) > {code} > If the two last fields keywords and content are removed from the sample > document, the score is reduced by a factor 100 (docBoost^2). 
> {quote} > (This is a continuation of some of the problems caused by the changes made > when the concept of docBoost was eliminated from the underlying IndexWriter > code, and overlooked due to the lack of testing of docBoosts at the Solr > level - SOLR-3885) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
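The factor-of-100 jump Toke reports falls straight out of the arithmetic: if a document boost B is wrongly re-applied once per source field copied into the destination, the destination's norm carries B^N instead of B. A toy calculation (plain arithmetic, not Solr code; the class and method names are invented for illustration) shows why dropping two of six source fields divides the score by B^2:

```java
/** Toy arithmetic for SOLR-3981; illustrative only, not actual Solr code. */
class CompoundedBoost {

    /** Buggy behaviour: docBoost compounds once per copied source field. */
    static double compounded(double docBoost, int sourceFields) {
        return Math.pow(docBoost, sourceFields);
    }

    /** Intended behaviour: docBoost is applied once, regardless of how many
     *  source fields feed the copyField destination. */
    static double intended(double docBoost, int sourceFields) {
        return docBoost;
    }
}
```

With docBoost = 10 and six copied source fields, the compounded boost is 10^6; removing keywords and content leaves 10^4, a ratio of 10^2 = 100, matching the score drop in the report.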
[jira] [Commented] (LUCENE-2878) Allow Scorer to expose positions and payloads aka. nuke spans
[ https://issues.apache.org/jira/browse/LUCENE-2878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483565#comment-13483565 ] Alan Woodward commented on LUCENE-2878: --- Heh, it was a long two weeks :-) As another step towards making the API prettier, I'd like to rename the queries: - OrderedConjunctionQuery => OrderedNearQuery - BrouwerianQuery => NonOverlappingQuery And maybe add an UnorderedNearQuery that just wraps a BooleanQuery and a WithinIntervalFilter. These names are probably a bit more intuitive to people unversed in IR theory... > Allow Scorer to expose positions and payloads aka. nuke spans > -- > > Key: LUCENE-2878 > URL: https://issues.apache.org/jira/browse/LUCENE-2878 > Project: Lucene - Core > Issue Type: Improvement > Components: core/search >Affects Versions: Positions Branch >Reporter: Simon Willnauer >Assignee: Simon Willnauer > Labels: gsoc2011, gsoc2012, lucene-gsoc-11, lucene-gsoc-12, > mentor > Fix For: Positions Branch > > Attachments: LUCENE-2878-OR.patch, LUCENE-2878.patch, > LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, > LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, > LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, > LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, > LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, > LUCENE-2878_trunk.patch, LUCENE-2878_trunk.patch, PosHighlighter.patch, > PosHighlighter.patch > > > Currently we have two somewhat separate types of queries, the one which can > make use of positions (mainly spans) and payloads (spans). Yet Span*Query > doesn't really do scoring comparable to what other queries do and at the end > of the day they are duplicating lot of code all over lucene. 
Span*Queries are > also limited to other Span*Query instances, such that you cannot use a > TermQuery or a BooleanQuery with SpanNear or anything like that. > Besides the Span*Query limitation, other queries lack a quite interesting > feature: they cannot score based on term proximity, since scorers don't > expose any positional information. All those problems bugged me for a while > now, so I started working on that using the bulkpostings API. I would have done > that first cut on trunk, but TermScorer there works on a BlockReader that does not > expose positions, while the one in this branch does. I started adding a new > Positions class which users can pull from a scorer; to prevent unnecessary > positions enums I added ScorerContext#needsPositions and eventually > Scorer#needsPayloads to create the corresponding enum on demand. Yet, > currently only TermQuery / TermScorer implements this API and others simply > return null instead. > To show that the API really works and our BulkPostings work fine with > positions too, I cut over TermSpanQuery to use a TermScorer under the hood and > nuked TermSpans entirely. A nice side effect of this was that the positions > bulk-reading implementation got some exercise and now all works with > positions :), while payloads for bulk reading are kind of experimental in the > patch and only work with the Standard codec. > So all spans now work on top of TermScorer (I truly hate spans since today), > including the ones that need payloads (StandardCodec ONLY)!! I didn't bother > to implement the other codecs yet since I want to get feedback on the API and > on this first cut before I go on with it. I will upload the corresponding > patch in a minute. > I also had to cut over SpanQuery.getSpans(IR) to > SpanQuery.getSpans(AtomicReaderContext), which I should probably do on trunk > first, but after that pain today I need a break first :). 
> The patch passes all core tests > (org.apache.lucene.search.highlight.HighlighterTest still fails but I didn't > look into the MemoryIndex BulkPostings API yet) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-3939) An empty or just replicated index cannot become the leader of a shard after a leader goes down.
[ https://issues.apache.org/jira/browse/SOLR-3939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483567#comment-13483567 ] Yonik Seeley commented on SOLR-3939: Trying to think if this could happen when there are versions too... say that instead of having no versions, we just have old versions from before we did the replication. This may argue for somehow marking the start of a replication in the transaction log and then never retrieving versions older than that. > An empty or just replicated index cannot become the leader of a shard after a > leader goes down. > --- > > Key: SOLR-3939 > URL: https://issues.apache.org/jira/browse/SOLR-3939 > Project: Solr > Issue Type: Bug > Components: SolrCloud >Affects Versions: 4.0-BETA, 4.0 >Reporter: Joel Bernstein >Assignee: Mark Miller >Priority: Critical > Labels: 4.0.1_Candidate > Fix For: 4.1, 5.0 > > Attachments: cloud2.log, cloud.log, SOLR-3939.patch, SOLR-3939.patch > > > When a leader core is unloaded using the core admin api, the followers in the > shard go into recovery but do not come out. Leader election doesn't take > place and the shard goes down. > This affects the ability to move a micro-shard from one Solr instance to > another Solr instance. > The problem does not occur 100% of the time but a large % of the time. > To set up a test, start up Solr Cloud with a single shard. Add cores to that > shard as replicas using core admin. Then unload the leader core using core > admin. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-3788) core creation UI screen should redirect browser to details about newly created core
[ https://issues.apache.org/jira/browse/SOLR-3788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483574#comment-13483574 ] Mike commented on SOLR-3788: Hrm, two bugs: 1. The new core doesn't show up in the side bar after it's created, requiring a browser refresh. 2. If you run into the problem that Erick did (above), then refresh, you get a giant warning in your browser. > core creation UI screen should redirect browser to details about newly > created core > --- > > Key: SOLR-3788 > URL: https://issues.apache.org/jira/browse/SOLR-3788 > Project: Solr > Issue Type: Improvement > Components: web gui >Affects Versions: 4.0-BETA >Reporter: Hoss Man >Assignee: Stefan Matheis (steffkes) > Fix For: 4.1 > > Attachments: SOLR-3788.patch > > > Got confused while testing SOLR-3679 because when you create a new SolrCore > using the Admin UI, the form goes away, and you are still looking at the > "core admin details" page for whatever SolrCore you were on when you clicked > the "Add Core" button -- it would be nice if the successful completion of the > "Add Core" form would redirect you to the sub-page for the core you just > added. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-3939) An empty or just replicated index cannot become the leader of a shard after a leader goes down.
[ https://issues.apache.org/jira/browse/SOLR-3939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483595#comment-13483595 ] Yonik Seeley commented on SOLR-3939: Thinking of some scenarios where this could happen: 1. R1,R2 both up and active, add docs 1,2,3 2. bring R2 down 3. add docs 4 through 1million 4. bring R2 up, peersync fails, replication is kicked off 5. R2 finishes replication and becomes active, but its recent versions still list 1,2,3 6. bring R1 down, R2 becomes the leader 7. bring R1 up, it does a peer-sync with R2, which looks like it has really old versions (and succeeds because of that) 8. if the leader (R2) does a peer-sync back with R1, it will fail (not sure of the consequences of this) Another variation... if there's an update between 6 and 7: 6.5. add doc 1million+1 This will cause recent versions of R2 to be 1,2,3,101 It would be good to verify that peersync to the leader will either fail (causing full replication), or pick up the new document. > An empty or just replicated index cannot become the leader of a shard after a > leader goes down. > --- > > Key: SOLR-3939 > URL: https://issues.apache.org/jira/browse/SOLR-3939 > Project: Solr > Issue Type: Bug > Components: SolrCloud >Affects Versions: 4.0-BETA, 4.0 >Reporter: Joel Bernstein >Assignee: Mark Miller >Priority: Critical > Labels: 4.0.1_Candidate > Fix For: 4.1, 5.0 > > Attachments: cloud2.log, cloud.log, SOLR-3939.patch, SOLR-3939.patch > > > When a leader core is unloaded using the core admin api, the followers in the > shard go into recovery but do not come out. Leader election doesn't take > place and the shard goes down. > This affects the ability to move a micro-shard from one Solr instance to > another Solr instance. > The problem does not occur 100% of the time but a large % of the time. > To set up a test, start up Solr Cloud with a single shard. Add cores to that > shard as replicas using core admin. 
Then unload the leader core using core > admin. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-3939) An empty or just replicated index cannot become the leader of a shard after a leader goes down.
[ https://issues.apache.org/jira/browse/SOLR-3939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483600#comment-13483600 ] Mark Miller commented on SOLR-3939: --- Currently the leader does not peer sync back to a replica coming up because it would have to buffer updates. I think that if a replica is somehow ahead of the leader when coming back, peersync should fail and it should replicate. I think since this is not a common case, that is much simpler than trying to peersync back from the leader to the replica in this case. > An empty or just replicated index cannot become the leader of a shard after a > leader goes down. > --- > > Key: SOLR-3939 > URL: https://issues.apache.org/jira/browse/SOLR-3939 > Project: Solr > Issue Type: Bug > Components: SolrCloud >Affects Versions: 4.0-BETA, 4.0 >Reporter: Joel Bernstein >Assignee: Mark Miller >Priority: Critical > Labels: 4.0.1_Candidate > Fix For: 4.1, 5.0 > > Attachments: cloud2.log, cloud.log, SOLR-3939.patch, SOLR-3939.patch > > > When a leader core is unloaded using the core admin api, the followers in the > shard go into recovery but do not come out. Leader election doesn't take > place and the shard goes down. > This affects the ability to move a micro-shard from one Solr instance to > another Solr instance. > The problem does not occur 100% of the time but a large % of the time. > To set up a test, start up Solr Cloud with a single shard. Add cores to that > shard as replicas using core admin. Then unload the leader core using core > admin. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
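The rule Mark proposes above is compact enough to state as a predicate. The sketch below is a toy model, not Solr's actual PeerSync code, and the class and method names are invented: given each node's window of recent update versions, a recovering replica that turns out to be ahead of the leader gives up on peer sync and falls back to full replication.

```java
import java.util.Collections;
import java.util.List;

/** Toy model of the recovery decision discussed above; not Solr's PeerSync. */
class RecoveryDecision {

    /** True when the recovering replica should skip peer sync and do a full
     *  replication instead: its newest known version is ahead of the
     *  leader's newest version, so the leader cannot bring it up to date
     *  via peer sync alone. */
    static boolean mustReplicate(List<Long> leaderVersions, List<Long> replicaVersions) {
        long leaderMax = leaderVersions.isEmpty() ? 0L : Collections.max(leaderVersions);
        long replicaMax = replicaVersions.isEmpty() ? 0L : Collections.max(replicaVersions);
        return replicaMax > leaderMax;
    }
}
```

In Yonik's scenario, the just-replicated leader R2 reports old-looking versions {1,2,3}, so a replica R1 holding versions up to 1 million is "ahead" by this check and would replicate rather than trust peer sync.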
[jira] [Commented] (SOLR-1972) Need additional query stats in admin interface - median, 95th and 99th percentile
[ https://issues.apache.org/jira/browse/SOLR-1972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483614#comment-13483614 ] Shawn Heisey commented on SOLR-1972: You probably already knew this, but now the updated test fails properly for me when the constructor doesn't set the scope. I still get the failure in MBeansHandlerTest, which without knowledge of how all this stuff works internally, is really really mystifying. Since all solr tests pass when I don't monkey with the constructor, I guess it's not a big deal. Not that my vote really counts, but +1 for committing to 4x and trunk from me. Take out those enemy death cannons. > Need additional query stats in admin interface - median, 95th and 99th > percentile > - > > Key: SOLR-1972 > URL: https://issues.apache.org/jira/browse/SOLR-1972 > Project: Solr > Issue Type: Improvement > Components: web gui >Affects Versions: 1.4 >Reporter: Shawn Heisey >Assignee: Alan Woodward >Priority: Minor > Fix For: 4.1 > > Attachments: elyograg-1972-3.2.patch, elyograg-1972-3.2.patch, > elyograg-1972-trunk.patch, elyograg-1972-trunk.patch, > SOLR-1972-branch3x-url_pattern.patch, SOLR-1972-branch4x.patch, > SOLR-1972-branch4x.patch, SOLR-1972_metrics.patch, SOLR-1972_metrics.patch, > SOLR-1972_metrics.patch, SOLR-1972_metrics.patch, SOLR-1972_metrics.patch, > SOLR-1972.patch, SOLR-1972.patch, SOLR-1972.patch, SOLR-1972.patch, > SOLR-1972-url_pattern.patch > > > I would like to see more detailed query statistics from the admin GUI. This > is what you can get now: > requests : 809 > errors : 0 > timeouts : 0 > totalTime : 70053 > avgTimePerRequest : 86.59209 > avgRequestsPerSecond : 0.8148785 > I'd like to see more data on the time per request - median, 95th percentile, > 99th percentile, and any other statistical function that makes sense to > include. In my environment, the first bunch of queries after startup tend to > take several seconds each. 
I find that the average value tends to be useless > until it has several thousand queries under its belt and the caches are > thoroughly warmed. The statistical functions I have mentioned would quickly > eliminate the influence of those initial slow queries. > The system will have to store individual data about each query. I don't know > if this is something Solr does already. It would be nice to have a > configurable count of how many of the most recent data points are kept, to > control the amount of memory the feature uses. The default value could be > something like 1024 or 4096. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
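Shawn's idea above, a bounded window of the most recent request times with percentile readout, can be sketched in a few lines. This is a hedged illustration only, not the code in the attached patches; the class and method names are invented:

```java
import java.util.Arrays;

/** Fixed-size ring buffer of recent request times with percentile lookup.
 *  Illustrative sketch for the feature discussed in SOLR-1972. */
class RequestTimeStats {
    private final long[] samples;  // ring buffer of recent request times (ms)
    private int count = 0;         // total samples ever recorded

    RequestTimeStats(int windowSize) { samples = new long[windowSize]; }

    /** Record one request time; the oldest sample is overwritten once the
     *  window is full, bounding memory at windowSize longs. */
    void record(long millis) { samples[count++ % samples.length] = millis; }

    /** Percentile over the retained window, e.g. p = 95 for the 95th. */
    long percentile(double p) {
        int n = Math.min(count, samples.length);
        long[] window = Arrays.copyOf(samples, n);
        Arrays.sort(window);
        int idx = (int) Math.ceil(p / 100.0 * n) - 1;
        return window[Math.max(idx, 0)];
    }
}
```

A window of 1024 or 4096 samples, as suggested, costs only 8-32 KB per handler, and the initial slow warm-up queries age out of the window instead of skewing a lifetime average.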
[jira] [Commented] (SOLR-3939) An empty or just replicated index cannot become the leader of a shard after a leader goes down.
[ https://issues.apache.org/jira/browse/SOLR-3939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483649#comment-13483649 ] Yonik Seeley commented on SOLR-3939: bq. Currently the leader does not peer sync back to a replica coming up because it would have to buffer updates. Peer sync doesn't require buffering updates. AFAIK, we don't do that until we realize we need to replicate? > An empty or just replicated index cannot become the leader of a shard after a > leader goes down. > --- > > Key: SOLR-3939 > URL: https://issues.apache.org/jira/browse/SOLR-3939 > Project: Solr > Issue Type: Bug > Components: SolrCloud >Affects Versions: 4.0-BETA, 4.0 >Reporter: Joel Bernstein >Assignee: Mark Miller >Priority: Critical > Labels: 4.0.1_Candidate > Fix For: 4.1, 5.0 > > Attachments: cloud2.log, cloud.log, SOLR-3939.patch, SOLR-3939.patch > > > When a leader core is unloaded using the core admin api, the followers in the > shard go into recovery but do not come out. Leader election doesn't take > place and the shard goes down. > This affects the ability to move a micro-shard from one Solr instance to > another Solr instance. > The problem does not occur 100% of the time but a large % of the time. > To set up a test, start up Solr Cloud with a single shard. Add cores to that > shard as replicas using core admin. Then unload the leader core using core > admin. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-3989) RuntimeException thrown by SolrZkClient should wrap cause, have a message, or be SolrException
Colin Bartolome created SOLR-3989: - Summary: RuntimeException thrown by SolrZkClient should wrap cause, have a message, or be SolrException Key: SOLR-3989 URL: https://issues.apache.org/jira/browse/SOLR-3989 Project: Solr Issue Type: Improvement Components: clients - java Affects Versions: 4.0 Reporter: Colin Bartolome In a few spots, but notably in the constructor for SolrZkClient, a try-catch block will catch Throwable and throw a new RuntimeException with no cause or message. Either the RuntimeException should wrap the Throwable that was caught, some sort of message should be added, or the type of the exception should be changed to SolrException so calling code can catch these exceptions without casting too broad of a net. Reproduce this by creating a CloudSolrServer that points to a URL that is valid, but has no server running: CloudSolrServer server = new CloudSolrServer("localhost:9983"); server.connect(); -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
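The report boils down to one line: rethrow with the cause and a message attached. A minimal sketch of the suggested fix, assuming nothing about the real SolrZkClient internals (the class, method, and message text below are invented for illustration):

```java
/** Sketch of the fix proposed in SOLR-3989: preserve the cause and add a
 *  message instead of throwing a bare RuntimeException. Illustrative only;
 *  not the actual SolrZkClient code. */
class ZkClientSketch {

    static void connect(String zkHost) {
        try {
            if (zkHost == null) {
                throw new IllegalArgumentException("zkHost must not be null");
            }
            // ... real code would open the ZooKeeper connection here ...
        } catch (Throwable t) {
            // Before: throw new RuntimeException();   (no cause, no message)
            // After:  keep the message and the original stack trace, so
            // callers can diagnose the failure without guessing.
            throw new RuntimeException("Could not connect to ZooKeeper at " + zkHost, t);
        }
    }
}
```

Alternatively, throwing a SolrException here would let callers catch Solr-specific failures without having to catch every RuntimeException, which is the other option the report suggests.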
[jira] [Commented] (SOLR-3939) An empty or just replicated index cannot become the leader of a shard after a leader goes down.
[ https://issues.apache.org/jira/browse/SOLR-3939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483664#comment-13483664 ] Mark Miller commented on SOLR-3939: --- As far as I remember, if updates are coming in when you try and peer sync, we fail it? Isn't that what capturing the starting versions is all about? When a leader syncs with its replicas on leader election, we know docs are not coming in, so we don't worry about that starting versions check - but if you want to peer sync from the leader to a replica that is coming back up, and updates are coming in, you are going to force a replication anyway. Since it's already an uncommon case, it doesn't seem worth tackling. I mention buffering because it seemed you would have to, to be able to peer sync while updates are coming in (or block updates). > An empty or just replicated index cannot become the leader of a shard after a > leader goes down. > --- > > Key: SOLR-3939 > URL: https://issues.apache.org/jira/browse/SOLR-3939 > Project: Solr > Issue Type: Bug > Components: SolrCloud >Affects Versions: 4.0-BETA, 4.0 >Reporter: Joel Bernstein >Assignee: Mark Miller >Priority: Critical > Labels: 4.0.1_Candidate > Fix For: 4.1, 5.0 > > Attachments: cloud2.log, cloud.log, SOLR-3939.patch, SOLR-3939.patch > > > When a leader core is unloaded using the core admin api, the followers in the > shard go into recovery but do not come out. Leader election doesn't take > place and the shard goes down. > This affects the ability to move a micro-shard from one Solr instance to > another Solr instance. > The problem does not occur 100% of the time but a large % of the time. > To set up a test, start up Solr Cloud with a single shard. Add cores to that > shard as replicas using core admin. Then unload the leader core using core > admin. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-3981) docBoost is compounded on copyField
[ https://issues.apache.org/jira/browse/SOLR-3981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hoss Man resolved SOLR-3981. Resolution: Fixed Fix Version/s: 5.0 Committed revision 1401916. - trunk Committed revision 1401920. - 4x > docBoost is compounded on copyField > --- > > Key: SOLR-3981 > URL: https://issues.apache.org/jira/browse/SOLR-3981 > Project: Solr > Issue Type: Bug >Affects Versions: 4.0 >Reporter: Hoss Man >Assignee: Hoss Man > Fix For: 4.1, 5.0 > > Attachments: SOLR-3981.patch, SOLR-3981.patch, SOLR-3981.patch > > > As noted by Toke in a comment on SOLR-3875... > https://issues.apache.org/jira/browse/SOLR-3875?focusedCommentId=13482233&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13482233 > {quote} > While boosting of multi-value fields is handled correctly in Solr 4.0.0, > boosting for copyFields is not. A sample document: > {code} > > Insane score Example. Score = 10E9 > Document boost broken for copyFields > video ThomasEgense and Toke Eskildsen > Test > bug > something else > bug > bug > > {code} > The fields name, manu, cat, features, keywords and content get copied to > text, and a search for thomasegense matches the text-field with query > explanation > {code} > 70384.67 = (MATCH) weight(text:thomasegense in 0) [DefaultSimilarity], result > of: > 70384.67 = fieldWeight in 0, product of: > 1.0 = tf(freq=1.0), with freq of: > 1.0 = termFreq=1.0 > 0.30685282 = idf(docFreq=1, maxDocs=1) > 229376.0 = fieldNorm(doc=0) > {code} > If the last two fields, keywords and content, are removed from the sample > document, the score is reduced by a factor of 100 (docBoost^2). > {quote} > (This is a continuation of some of the problems caused by the changes made > when the concept of docBoost was eliminated from the underlying IndexWriter > code, and overlooked due to the lack of testing of docBoosts at the Solr > level - SOLR-3885) -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-3988) SolrTestCaseJ4.adoc(SolrInputDocument) ignores field & doc boosts
[ https://issues.apache.org/jira/browse/SOLR-3988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hoss Man resolved SOLR-3988. Resolution: Fixed Fix Version/s: 5.0 Assignee: Hoss Man Committed revision 1401916. - trunk Committed revision 1401920. - 4x > SolrTestCaseJ4.adoc(SolrInputDocument) ignores field & doc boosts > --- > > Key: SOLR-3988 > URL: https://issues.apache.org/jira/browse/SOLR-3988 > Project: Solr > Issue Type: Sub-task >Affects Versions: 3.6.1, 4.0 >Reporter: Hoss Man >Assignee: Hoss Man > Fix For: 4.1, 5.0 > > > Discovered while writing a test for SOLR-3981. I intend to commit the fix as > part of that issue, but creating a subtask to track it as a distinct bug > since it may be affecting other users of the test-framework -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-880) SolrCore should have a lazy startup option
[ https://issues.apache.org/jira/browse/SOLR-880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson updated SOLR-880: Attachment: SOLR-880.patch New version. Removed TODOs and my initials. > Got rid of the extra test directory that I wasn't happy with anyway. > Took a whack at returning SolrExceptions from CoreContainer. This required > that I change a number of tests; I'd particularly appreciate anyone looking > at that whole thing. > All tests pass. I'll commit this in a couple of days if nobody objects. > SolrCore should have a lazy startup option > > > Key: SOLR-880 > URL: https://issues.apache.org/jira/browse/SOLR-880 > Project: Solr > Issue Type: Improvement > Components: multicore >Reporter: Noble Paul >Assignee: Erick Erickson > Attachments: SOLR-880.patch, SOLR-880.patch > > > * a core should have an option of loadOnStartup=true|false. The default should be > true. > If there are too many cores (tens of thousands) where each of them may be > used occasionally, we should not load all of them at once. At runtime I > should be able to STOP and START a core on demand. A listing command would > let me know which ones are present and what is up and what is down. A stopped > core must not use any resources -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
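The lazy-load idea in the SOLR-880 description can be sketched as a registry that only instantiates cores flagged loadOnStartup=false on first access. This is an illustrative sketch only, assuming nothing about CoreContainer's real API; all names are invented, and a String stands in for a real core object:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

/** Sketch of lazy core loading for SOLR-880; illustrative, not Solr's API. */
class LazyCoreRegistry {
    // Loaders for cores that have been registered but not yet started.
    private final Map<String, Supplier<String>> pending = new ConcurrentHashMap<>();
    // Cores that are actually loaded and using resources.
    private final Map<String, String> loaded = new ConcurrentHashMap<>();

    /** loadOnStartup=true loads immediately; false defers to first access. */
    void register(String name, boolean loadOnStartup, Supplier<String> loader) {
        if (loadOnStartup) loaded.put(name, loader.get());
        else pending.put(name, loader);
    }

    /** Returns the core, loading it on demand if it was registered lazily. */
    String getCore(String name) {
        return loaded.computeIfAbsent(name, n -> pending.remove(n).get());
    }

    boolean isLoaded(String name) { return loaded.containsKey(name); }
}
```

With tens of thousands of rarely used cores, only the handful actually queried would ever consume memory; a STOP command would simply move a core back from the loaded map to the pending map.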
[jira] [Commented] (SOLR-3920) CloudSolrServer doesn't allow to index multiple collections with one instance of server
[ https://issues.apache.org/jira/browse/SOLR-3920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483783#comment-13483783 ] Mark Miller commented on SOLR-3920: --- I started hitting this rarely in a test while working on another issue - I added to the test so that it would catch a problem here. Fix coming soon. > CloudSolrServer doesn't allow to index multiple collections with one instance > of server > --- > > Key: SOLR-3920 > URL: https://issues.apache.org/jira/browse/SOLR-3920 > Project: Solr > Issue Type: Bug > Components: SolrCloud >Affects Versions: 4.0-BETA >Reporter: Grzegorz Sobczyk >Assignee: Mark Miller > Fix For: 4.1, 5.0 > > > With one instance of CloudSolrServer I can't add documents to multiple > collections, for example: > {code} > @Test > public void shouldSendToSecondCore() throws Exception { > //given > try { > CloudSolrServer server = new CloudSolrServer("localhost:9983"); > UpdateRequest commit1 = new UpdateRequest(); > commit1.setAction(ACTION.COMMIT, true, true); > commit1.setParam("collection", "collection1"); > //this commit is bug's cause > commit1.process(server); > > SolrInputDocument doc = new SolrInputDocument(); > doc.addField("id", "id"); > doc.addField("name", "name"); > > UpdateRequest update2 = new UpdateRequest(); > update2.setParam("collection", "collection2"); > update2.add(doc); > update2.process(server); > > UpdateRequest commit2 = new UpdateRequest(); > commit2.setAction(ACTION.COMMIT, true, true); > commit2.setParam("collection", "collection2"); > commit2.process(server); > SolrQuery q1 = new SolrQuery("id:id"); > q1.set("collection", "collection1"); > SolrQuery q2 = new SolrQuery("id:id"); > q2.set("collection", "collection2"); > > //when > QueryResponse resp1 = server.query(q1); > QueryResponse resp2 = server.query(q2); > > //then > Assert.assertEquals(0L, resp1.getResults().getNumFound()); > Assert.assertEquals(1L, resp2.getResults().getNumFound()); > } finally { > CloudSolrServer 
server1 = new CloudSolrServer("localhost:9983"); > server1.setDefaultCollection("collection1"); > server1.deleteByQuery("id:id"); > server1.commit(true, true); > > CloudSolrServer server2 = new CloudSolrServer("localhost:9983"); > server2.setDefaultCollection("collection2"); > server2.deleteByQuery("id:id"); > server2.commit(true, true); > } > } > {code} > Second update goes to first collection. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-3920) CloudSolrServer doesn't allow to index multiple collections with one instance of server
[ https://issues.apache.org/jira/browse/SOLR-3920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3920: -- Labels: 4.0.1_Candidate (was: ) > CloudSolrServer doesn't allow to index multiple collections with one instance > of server > --- > > Key: SOLR-3920 > URL: https://issues.apache.org/jira/browse/SOLR-3920 > Project: Solr > Issue Type: Bug > Components: SolrCloud >Affects Versions: 4.0-BETA >Reporter: Grzegorz Sobczyk >Assignee: Mark Miller > Labels: 4.0.1_Candidate > Fix For: 4.1, 5.0 > > > With one instance of CloudSolrServer I can't add documents to multiple > collections, for example: > {code} > @Test > public void shouldSendToSecondCore() throws Exception { > //given > try { > CloudSolrServer server = new CloudSolrServer("localhost:9983"); > UpdateRequest commit1 = new UpdateRequest(); > commit1.setAction(ACTION.COMMIT, true, true); > commit1.setParam("collection", "collection1"); > //this commit is bug's cause > commit1.process(server); > > SolrInputDocument doc = new SolrInputDocument(); > doc.addField("id", "id"); > doc.addField("name", "name"); > > UpdateRequest update2 = new UpdateRequest(); > update2.setParam("collection", "collection2"); > update2.add(doc); > update2.process(server); > > UpdateRequest commit2 = new UpdateRequest(); > commit2.setAction(ACTION.COMMIT, true, true); > commit2.setParam("collection", "collection2"); > commit2.process(server); > SolrQuery q1 = new SolrQuery("id:id"); > q1.set("collection", "collection1"); > SolrQuery q2 = new SolrQuery("id:id"); > q2.set("collection", "collection2"); > > //when > QueryResponse resp1 = server.query(q1); > QueryResponse resp2 = server.query(q2); > > //then > Assert.assertEquals(0L, resp1.getResults().getNumFound()); > Assert.assertEquals(1L, resp2.getResults().getNumFound()); > } finally { > CloudSolrServer server1 = new CloudSolrServer("localhost:9983"); > server1.setDefaultCollection("collection1"); > server1.deleteByQuery("id:id"); > 
server1.commit(true, true); > > CloudSolrServer server2 = new CloudSolrServer("localhost:9983"); > server2.setDefaultCollection("collection2"); > server2.deleteByQuery("id:id"); > server2.commit(true, true); > } > } > {code} > Second update goes to first collection. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.6.0_35) - Build # 1978 - Failure!
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-trunk-Linux/1978/
Java: 64bit/jdk1.6.0_35 -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 23898 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:60: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build.xml:235: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/common-build.xml:1578: Tidy was unable to process file /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/docs/analyzers-common/constant-values.html, 76 returned.

Total time: 27 minutes 9 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Description set: Java: 64bit/jdk1.6.0_35 -XX:+UseSerialGC
Email was triggered for: Failure
Sending email for trigger: Failure
Re: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.6.0_35) - Build # 1978 - Failure!
java6 basically generates totally bogus html here. My other checker has issues too. I will just declare documentation-lint unsupported: this jdk has too many bugs.

On Wed, Oct 24, 2012 at 11:24 PM, Policeman Jenkins Server wrote:
> Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-trunk-Linux/1978/
> [...]
[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.6.0_35) - Build # 1294 - Failure!
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-trunk-Windows/1294/
Java: 64bit/jdk1.6.0_35 -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 23909 lines...]
BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:60: The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\build.xml:235: The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\common-build.xml:1578: Tidy was unable to process file C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\build\docs\analyzers-common\constant-values.html, 76 returned.

Total time: 53 minutes 56 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Description set: Java: 64bit/jdk1.6.0_35 -XX:+UseConcMarkSweepGC
Email was triggered for: Failure
Sending email for trigger: Failure
[JENKINS] Lucene-Solr-Tests-4.x-Java6 - Build # 807 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java6/807/

All tests passed

Build Log:
[...truncated 23841 lines...]
BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java6/build.xml:60: The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java6/lucene/build.xml:235: The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java6/lucene/common-build.xml:1578: Tidy was unable to process file /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java6/lucene/build/docs/analyzers-common/constant-values.html, 82 returned.

Total time: 50 minutes 30 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure
[jira] [Created] (SOLR-3990) index size unavailable in gui/mbeans unless replication handler configured
Shawn Heisey created SOLR-3990:
----------------------------------

             Summary: index size unavailable in gui/mbeans unless replication handler configured
                 Key: SOLR-3990
                 URL: https://issues.apache.org/jira/browse/SOLR-3990
             Project: Solr
          Issue Type: Improvement
          Components: web gui
    Affects Versions: 4.0
            Reporter: Shawn Heisey
            Priority: Minor
             Fix For: 4.1

Unless you configure the replication handler, the on-disk size of each core's index seems to be unavailable in the GUI or from the mbeans handler. If you are not doing replication, you should still be able to get the size of each index without configuring things that won't be used.

Also, I would like to get the size of the index in a consistent unit of measurement, probably MB. I understand the desire to give people a human-readable unit next to a number that's not enormous, but it's difficult to do programmatic comparisons between values such as 787.33 MB and 23.56 GB. That may mean the number needs to be available twice: one format shown in the admin GUI, and both formats available from the mbeans handler for scripting.
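The comparison problem the report describes can be sketched with a small helper that normalizes human-readable sizes back to a byte count. The class and method names below are hypothetical (this is not Solr code), and the choice of binary (1024-based) multiples is an assumption; the point is only that a raw-bytes or fixed-unit figure is directly scriptable where mixed strings like "787.33 MB" and "23.56 GB" are not.

```java
// Hypothetical helper (not part of Solr) showing why a consistent unit, or raw
// bytes, is easier to script against than mixed human-readable size strings.
import java.util.Locale;

public class IndexSizeParser {

    // Parses strings like "787.33 MB" into a byte count. Binary (1024-based)
    // multiples are an assumption here; a formatter may use decimal ones.
    public static long toBytes(String humanReadable) {
        String[] parts = humanReadable.trim().split("\\s+");
        double value = Double.parseDouble(parts[0]);
        String unit = parts.length > 1 ? parts[1].toUpperCase(Locale.ROOT) : "B";
        long multiplier = 1L;
        if (unit.equals("KB")) multiplier = 1024L;
        else if (unit.equals("MB")) multiplier = 1024L * 1024;
        else if (unit.equals("GB")) multiplier = 1024L * 1024 * 1024;
        else if (unit.equals("TB")) multiplier = 1024L * 1024 * 1024 * 1024;
        return (long) (value * multiplier);
    }

    public static void main(String[] args) {
        // The two values from the issue description become directly comparable.
        System.out.println(IndexSizeParser.toBytes("23.56 GB")
                > IndexSizeParser.toBytes("787.33 MB")); // prints "true"
    }
}
```

Exposing the byte count alongside the pretty-printed value, as the report suggests, avoids this round trip entirely.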
[JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) - Build # 1292 - Failure!
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-4.x-Windows/1292/
Java: 32bit/jdk1.7.0_07 -server -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 24464 lines...]
BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\build.xml:60: The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\lucene\build.xml:235: The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\lucene\common-build.xml:1577: java.lang.OutOfMemoryError: Java heap space
        at java.util.Arrays.copyOf(Arrays.java:2271)
        at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:113)
        at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
        at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:140)
        at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
        at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:282)
        at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:125)
        at java.io.OutputStreamWriter.write(OutputStreamWriter.java:207)
        at java.io.BufferedWriter.flushBuffer(BufferedWriter.java:129)
        at java.io.BufferedWriter.write(BufferedWriter.java:230)
        at java.io.PrintWriter.write(PrintWriter.java:456)
        at java.io.PrintWriter.write(PrintWriter.java:473)
        at java.io.PrintWriter.print(PrintWriter.java:603)
        at java.io.PrintWriter.println(PrintWriter.java:739)
        at org.w3c.tidy.Report.printMessage(Report.java:754)
        at org.w3c.tidy.Report.attrError(Report.java:1171)
        at org.w3c.tidy.AttrCheckImpl$CheckName.check(AttrCheckImpl.java:843)
        at org.w3c.tidy.AttVal.checkAttribute(AttVal.java:265)
        at org.w3c.tidy.Node.checkAttributes(Node.java:343)
        at org.w3c.tidy.TagCheckImpl$CheckAnchor.check(TagCheckImpl.java:489)
        at org.w3c.tidy.Lexer.getToken(Lexer.java:2431)
        at org.w3c.tidy.ParserImpl$ParseBlock.parse(ParserImpl.java:2051)
        at org.w3c.tidy.ParserImpl.parseTag(ParserImpl.java:203)
        at org.w3c.tidy.ParserImpl$ParseBody.parse(ParserImpl.java:971)
        at org.w3c.tidy.ParserImpl.parseTag(ParserImpl.java:203)
        at org.w3c.tidy.ParserImpl$ParseHTML.parse(ParserImpl.java:483)
        at org.w3c.tidy.ParserImpl.parseDocument(ParserImpl.java:3401)
        at org.w3c.tidy.Tidy.parse(Tidy.java:433)
        at org.w3c.tidy.Tidy.parse(Tidy.java:263)
        at org.w3c.tidy.ant.JTidyTask.processFile(JTidyTask.java:457)
        at org.w3c.tidy.ant.JTidyTask.executeSet(JTidyTask.java:420)
        at org.w3c.tidy.ant.JTidyTask.execute(JTidyTask.java:364)

Total time: 56 minutes 3 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Description set: Java: 32bit/jdk1.7.0_07 -server -XX:+UseG1GC
Email was triggered for: Failure
Sending email for trigger: Failure
[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.7.0_07) - Build # 1295 - Still Failing!
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-trunk-Windows/1295/
Java: 32bit/jdk1.7.0_07 -client -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 24607 lines...]
BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:60: The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\build.xml:235: The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\common-build.xml:1577: java.lang.OutOfMemoryError: Java heap space
        at java.util.Arrays.copyOf(Arrays.java:2271)
        at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:113)
        at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
        at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:140)
        at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
        at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:282)
        at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:125)
        at java.io.OutputStreamWriter.write(OutputStreamWriter.java:207)
        at java.io.BufferedWriter.flushBuffer(BufferedWriter.java:129)
        at java.io.BufferedWriter.write(BufferedWriter.java:230)
        at java.io.PrintWriter.write(PrintWriter.java:456)
        at java.io.PrintWriter.write(PrintWriter.java:473)
        at java.io.PrintWriter.print(PrintWriter.java:603)
        at java.io.PrintWriter.println(PrintWriter.java:739)
        at org.w3c.tidy.Report.printMessage(Report.java:754)
        at org.w3c.tidy.Report.errorSummary(Report.java:1572)
        at org.w3c.tidy.Tidy.parse(Tidy.java:608)
        at org.w3c.tidy.Tidy.parse(Tidy.java:263)
        at org.w3c.tidy.ant.JTidyTask.processFile(JTidyTask.java:457)
        at org.w3c.tidy.ant.JTidyTask.executeSet(JTidyTask.java:420)
        at org.w3c.tidy.ant.JTidyTask.execute(JTidyTask.java:364)
        at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
        at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
        at org.apache.tools.ant.Task.perform(Task.java:348)
        at org.apache.tools.ant.taskdefs.Sequential.execute(Sequential.java:68)
        at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
        at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)

Total time: 50 minutes 42 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Description set: Java: 32bit/jdk1.7.0_07 -client -XX:+UseSerialGC
Email was triggered for: Failure
Sending email for trigger: Failure
[jira] [Created] (SOLR-3991) SOLR stuck on initialization with warmup and spellcheck collation on for /select handler
Alexey Kudinov created SOLR-3991:
------------------------------------

             Summary: SOLR stuck on initialization with warmup and spellcheck collation on for /select handler
                 Key: SOLR-3991
                 URL: https://issues.apache.org/jira/browse/SOLR-3991
             Project: Solr
          Issue Type: Bug
          Components: SearchComponents - other, spellchecker
    Affects Versions: 4.0
         Environment: Windows 7/Tomcat 6
            Reporter: Alexey Kudinov

The main thread calls the replication handler's getStatistics(), which in turn tries to get the searcher and waits. In the meantime, warmup is triggered and the query runs. If a spell checker is defined for the query component and collation is on, the collation executor also tries to fetch the searcher, creating a deadlock.

To reproduce:
1. Define the warmup query.
2. Add the spell checker configuration to the /select search handler.
3. Set spellcheck.collation=true.

Configuration: zkRun, collection1, 2 shards, 1 node, 4 cores.
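The hang described above has a classic deadlock shape: one thread holds a resource while waiting for the searcher, and the thread that would publish the searcher needs that same resource. The sketch below is a standalone simulation with invented names (statsLock, searcherReady), not Solr's actual locking; it uses a timeout in place of the real indefinite wait so the hang can be observed as a return value.

```java
// Standalone simulation of the deadlock shape: invented names, not Solr's
// actual internals. The "stats" thread holds a lock while waiting for the
// searcher; the "warmup" thread needs that lock to publish the searcher.
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.ReentrantLock;

public class WarmupDeadlockSketch {

    // Returns true if the simulated threads deadlocked: the searcher was
    // never published while the statistics thread was still waiting for it.
    public static boolean simulate() throws InterruptedException {
        ReentrantLock statsLock = new ReentrantLock();
        CountDownLatch searcherReady = new CountDownLatch(1);
        CountDownLatch statsLockHeld = new CountDownLatch(1);
        AtomicBoolean searcherSeen = new AtomicBoolean(false);

        // "getStatistics()": takes the lock, then blocks waiting for the
        // searcher. A timeout stands in for the hang so the sketch terminates.
        Thread stats = new Thread(() -> {
            statsLock.lock();
            try {
                statsLockHeld.countDown();
                searcherSeen.set(searcherReady.await(500, TimeUnit.MILLISECONDS));
            } catch (InterruptedException ignored) {
            } finally {
                statsLock.unlock();
            }
        });

        // Warmup/collation: must take the same lock before it can publish.
        Thread warmup = new Thread(() -> {
            statsLock.lock();
            try {
                searcherReady.countDown();
            } finally {
                statsLock.unlock();
            }
        });

        stats.start();
        statsLockHeld.await();  // ensure the stats thread holds the lock first
        warmup.start();
        stats.join();
        warmup.join();
        return !searcherSeen.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(simulate()); // prints "true": the threads deadlocked
    }
}
```

Either acquiring the searcher before taking the lock, or publishing the searcher without holding it, breaks the cycle in the simulation; whether the same ordering change applies to the real code paths is for the fix to determine.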