[jira] [Commented] (SOLR-8339) SolrDocument and SolrInputDocument should have a common interface
[ https://issues.apache.org/jira/browse/SOLR-8339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15035869#comment-15035869 ]

Shalin Shekhar Mangar commented on SOLR-8339:
---------------------------------------------

bq. With this change, can we remove org.apache.solr.client.solrj.util.ClientUtils#toSolrInputDocument method? And org.apache.solr.client.solrj.util.ClientUtils#toSolrDocument?

We can deprecate those in 5.x and remove in trunk.

[~ichattopadhyaya] - SolrDocument used to be Serializable, but you have removed that in your patch; the same goes for SolrInputDocument. That is a compatibility break. Can you put that back?

> SolrDocument and SolrInputDocument should have a common interface
> -----------------------------------------------------------------
>
>                 Key: SOLR-8339
>                 URL: https://issues.apache.org/jira/browse/SOLR-8339
>             Project: Solr
>          Issue Type: Bug
>            Reporter: Ishan Chattopadhyaya
>            Assignee: Shalin Shekhar Mangar
>         Attachments: SOLR-8339.patch, SOLR-8339.patch, SOLR-8339.patch
>
> Currently, both share a Map interface (SOLR-928). However, there are many
> common methods like createField(), setField() etc. that should perhaps go
> into an interface/abstract class.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6914) DecimalDigitFilter skips characters in some cases (supplemental?)
[ https://issues.apache.org/jira/browse/LUCENE-6914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15035905#comment-15035905 ]

Robert Muir commented on LUCENE-6914:
-------------------------------------

Looks buggy indeed. Can we also add a simple test (ퟙxퟡxퟠxퟜ -> 1x9x8x4)?

> DecimalDigitFilter skips characters in some cases (supplemental?)
> -----------------------------------------------------------------
>
>                 Key: LUCENE-6914
>                 URL: https://issues.apache.org/jira/browse/LUCENE-6914
>             Project: Lucene - Core
>          Issue Type: Bug
>    Affects Versions: 5.4
>            Reporter: Hoss Man
>         Attachments: LUCENE-6914.patch, LUCENE-6914.patch, LUCENE-6914.patch
>
> Found this while writing up the solr ref guide for DecimalDigitFilter.
> With input like "ퟙퟡퟠퟜ" ("Double Struck" 1984) the filter produces "1ퟡ8ퟜ"
> (1, double struck 9, 8, double struck 4). Add some non-decimal characters in
> between the digits (ie: "ퟙxퟡxퟠxퟜ") and you get the expected output
> ("1x9x8x4"). This doesn't affect all decimal characters though, as evident
> by the existing test cases.
> Perhaps this is an off-by-one bug in the "if the original was supplementary,
> shrink the string" code path?
[jira] [Resolved] (SOLR-8339) SolrDocument and SolrInputDocument should have a common interface
[ https://issues.apache.org/jira/browse/SOLR-8339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shalin Shekhar Mangar resolved SOLR-8339.
-----------------------------------------
    Resolution: Fixed

Thanks Ishan!

> SolrDocument and SolrInputDocument should have a common interface
> -----------------------------------------------------------------
>
>                 Key: SOLR-8339
>                 URL: https://issues.apache.org/jira/browse/SOLR-8339
>             Project: Solr
>          Issue Type: Improvement
>            Reporter: Ishan Chattopadhyaya
>            Assignee: Shalin Shekhar Mangar
>             Fix For: Trunk, 5.5
>
>         Attachments: SOLR-8339.patch, SOLR-8339.patch, SOLR-8339.patch,
> SOLR-8339.patch
>
> Currently, both share a Map interface (SOLR-928). However, there are many
> common methods like createField(), setField() etc. that should perhaps go
> into an interface/abstract class.
[jira] [Updated] (SOLR-8339) SolrDocument and SolrInputDocument should have a common interface
[ https://issues.apache.org/jira/browse/SOLR-8339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shalin Shekhar Mangar updated SOLR-8339:
----------------------------------------
    Fix Version/s: 5.5
                   Trunk
       Issue Type: Improvement  (was: Bug)
[jira] [Commented] (SOLR-8339) SolrDocument and SolrInputDocument should have a common interface
[ https://issues.apache.org/jira/browse/SOLR-8339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15036034#comment-15036034 ]

ASF subversion and git services commented on SOLR-8339:
-------------------------------------------------------

Commit 1717657 from sha...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1717657 ]
SOLR-8339: Refactor SolrDocument and SolrInputDocument to have a common base
abstract class called SolrDocumentBase. Deprecated methods toSolrInputDocument
and toSolrDocument in ClientUtils.
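[Editor's note] The commit above pulls shared methods into a common base class. The following is a hedged, self-contained sketch of that shape only; the names `DocBase` and `SimpleDoc` are illustrative stand-ins, not the actual SolrDocumentBase API:

```java
import java.io.Serializable;
import java.util.Collection;
import java.util.LinkedHashMap;
import java.util.Map;

public class DocBaseSketch {

    // Illustrative stand-in for a common base class: the shared mutators and
    // accessors live here, and the base implements Serializable so both
    // concrete document types keep their serializability (the compatibility
    // concern raised earlier in the thread).
    abstract static class DocBase implements Serializable {
        public abstract void setField(String name, Object value);
        public abstract Object getFieldValue(String name);
        public abstract Collection<String> getFieldNames();
    }

    // Minimal concrete subclass, analogous in spirit to SolrDocument or
    // SolrInputDocument but not their real implementation.
    static class SimpleDoc extends DocBase {
        private final Map<String, Object> fields = new LinkedHashMap<>();
        @Override public void setField(String name, Object value) { fields.put(name, value); }
        @Override public Object getFieldValue(String name) { return fields.get(name); }
        @Override public Collection<String> getFieldNames() { return fields.keySet(); }
    }

    public static void main(String[] args) {
        DocBase doc = new SimpleDoc();
        doc.setField("id", "1");
        System.out.println(doc.getFieldValue("id"));
    }
}
```

Code that only needs the shared operations can then accept the base type and work with either document flavor.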
[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.7.0_80) - Build # 14802 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14802/
Java: 64bit/jdk1.7.0_80 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Doc with id=2 not found in http://127.0.0.1:33823/c8n_1x2_leader_session_loss due to: Path not found: /id; rsp={doc=null}

Stack Trace:
java.lang.AssertionError: Doc with id=2 not found in http://127.0.0.1:33823/c8n_1x2_leader_session_loss due to: Path not found: /id; rsp={doc=null}
        at __randomizedtesting.SeedInfo.seed([BAA3ED3C54BFC05F:32F7D2E6FA43ADA7]:0)
        at org.junit.Assert.fail(Assert.java:93)
        at org.junit.Assert.assertTrue(Assert.java:43)
        at org.apache.solr.cloud.HttpPartitionTest.assertDocExists(HttpPartitionTest.java:620)
        at org.apache.solr.cloud.HttpPartitionTest.assertDocsExistInAllReplicas(HttpPartitionTest.java:575)
        at org.apache.solr.cloud.HttpPartitionTest.testLeaderZkSessionLoss(HttpPartitionTest.java:523)
        at org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:120)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
        at
[jira] [Created] (LUCENE-6916) BaseDirectoryTestCase should use try-with-resources for its Directories
Alan Woodward created LUCENE-6916:
-------------------------------------

             Summary: BaseDirectoryTestCase should use try-with-resources for its Directories
                 Key: LUCENE-6916
                 URL: https://issues.apache.org/jira/browse/LUCENE-6916
             Project: Lucene - Core
          Issue Type: Improvement
            Reporter: Alan Woodward
            Assignee: Alan Woodward
            Priority: Minor

I'm playing around with writing a nio2 FileSystem implementation for HDFS that will work with Directory, and it currently leaks threads everywhere, because if a BaseDirectoryTestCase test fails it doesn't close its Directory. This obviously won't be a problem if everything passes, but it will probably be a while before that's true, and it makes iterative development a bit of a pain.
[jira] [Updated] (LUCENE-5868) JoinUtil support for NUMERIC docValues fields
[ https://issues.apache.org/jira/browse/LUCENE-5868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mikhail Khludnev updated LUCENE-5868:
-------------------------------------
    Attachment: LUCENE-5868.patch

[^LUCENE-5868.patch] It's done, after all. I had to tweak {{TestJoinUtil}} for the random tests. It now generates values ordered similarly for strings and numbers. It also dedupes values for the {{from}} and {{to}} fields, because numeric docvalues store duplicates and that impacts scoring in the tests.

For now there is no "simple" test coverage for numeric join. I don't think it's necessary; perhaps I'll cover it in a simple Solr case.

I want to commit this *next week* into *trunk* and *5.x* to let it out in *5.5*. Please let me know if you wish to veto it. Reviews and ideas are welcome as usual!!

> JoinUtil support for NUMERIC docValues fields
> ---------------------------------------------
>
>                 Key: LUCENE-5868
>                 URL: https://issues.apache.org/jira/browse/LUCENE-5868
>             Project: Lucene - Core
>          Issue Type: New Feature
>            Reporter: Mikhail Khludnev
>            Assignee: Mikhail Khludnev
>            Priority: Minor
>             Fix For: 5.5
>
>         Attachments: LUCENE-5868-lambdarefactoring.patch,
> LUCENE-5868-lambdarefactoring.patch, LUCENE-5868.patch, LUCENE-5868.patch,
> LUCENE-5868.patch, LUCENE-5868.patch, qtj.diff
>
> While polishing SOLR-6234 I found that JoinUtil can't join int dv fields, at
> least. I plan to provide a test/patch. It might be important, because Solr's
> join can do that. Please vote if you care!
[jira] [Updated] (SOLR-8339) SolrDocument and SolrInputDocument should have a common interface
[ https://issues.apache.org/jira/browse/SOLR-8339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ishan Chattopadhyaya updated SOLR-8339:
---------------------------------------
    Attachment: SOLR-8339.patch

Thanks [~shalinmangar]. Added Serializable to the base class.
[jira] [Commented] (LUCENE-6916) BaseDirectoryTestCase should use try-with-resources for its Directories
[ https://issues.apache.org/jira/browse/LUCENE-6916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15035901#comment-15035901 ]

Robert Muir commented on LUCENE-6916:
-------------------------------------

I'm not sure we should do this; it makes test cases significantly harder to read, but at what benefit? We should not be failing on file leaks if the test already fails.

> BaseDirectoryTestCase should use try-with-resources for its Directories
> -----------------------------------------------------------------------
>
>                 Key: LUCENE-6916
>                 URL: https://issues.apache.org/jira/browse/LUCENE-6916
>             Project: Lucene - Core
>          Issue Type: Improvement
>            Reporter: Alan Woodward
>            Assignee: Alan Woodward
>            Priority: Minor
>         Attachments: LUCENE-6916.patch
>
> I'm playing around with writing a nio2 FileSystem implementation for HDFS
> that will work with Directory, and it currently leaks threads everywhere
> because if a BaseDirectoryTestCase test fails it doesn't close its Directory.
> This obviously won't be a problem if everything passes, but it will probably
> be a while before that's true and it makes iterative development a bit of a
> pain.
[jira] [Commented] (LUCENE-6914) DecimalDigitFilter skips characters in some cases (supplemental?)
[ https://issues.apache.org/jira/browse/LUCENE-6914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15035911#comment-15035911 ]

Robert Muir commented on LUCENE-6914:
-------------------------------------

Or maybe the doubleStruck test is good enough? I guess the only oddity is that it actually has whitespace in between the characters, but at a glance it's hard to tell they're not, e.g., full-width digits. Maybe we should just replace the spaces with x's.
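[Editor's note] The folding under discussion hinges on supplementary code points: double-struck digits live above U+FFFF, so each occupies two Java chars, and a correct loop must advance by `Character.charCount(cp)` rather than by 1. A hedged sketch of such a fold (illustrative only, not Lucene's DecimalDigitFilter implementation):

```java
public class DigitFoldSketch {

    // Fold any Unicode decimal digit to its ASCII form. Double-struck digits
    // (e.g. U+1D7D9, double-struck one) are supplementary code points, so
    // they occupy two chars; advancing i by 1 instead of charCount(cp) is
    // the kind of off-by-one the issue suspects.
    static String fold(String in) {
        StringBuilder out = new StringBuilder();
        int i = 0;
        while (i < in.length()) {
            int cp = in.codePointAt(i);
            if (Character.isDigit(cp)) {
                out.append((char) ('0' + Character.getNumericValue(cp)));
            } else {
                out.appendCodePoint(cp);
            }
            i += Character.charCount(cp); // 2 for supplementary code points
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // Surrogate pairs below spell the double-struck 1, 9, 8, 4 with x's
        // in between, i.e. the test input proposed in the comment above.
        System.out.println(fold("\uD835\uDFD9x\uD835\uDFE1x\uD835\uDFE0x\uD835\uDFDC"));
    }
}
```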
[jira] [Commented] (LUCENE-6916) BaseDirectoryTestCase should use try-with-resources for its Directories
[ https://issues.apache.org/jira/browse/LUCENE-6916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15035909#comment-15035909 ]

Alan Woodward commented on LUCENE-6916:
---------------------------------------

It's FileSystem leaks rather than file leaks for me, but it's a fairly unusual case, I know. It doesn't seem harder to read to me, but I guess that's subjective anyway :-) I can always just copy the test into my own project and make the changes there if it makes things more difficult for everybody else.
[jira] [Commented] (SOLR-8339) SolrDocument and SolrInputDocument should have a common interface
[ https://issues.apache.org/jira/browse/SOLR-8339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15036009#comment-15036009 ]

ASF subversion and git services commented on SOLR-8339:
-------------------------------------------------------

Commit 1717654 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1717654 ]
SOLR-8339: Refactor SolrDocument and SolrInputDocument to have a common base
abstract class called SolrDocumentBase. Deprecated methods toSolrInputDocument
and toSolrDocument in ClientUtils.
[jira] [Updated] (LUCENE-6916) BaseDirectoryTestCase should use try-with-resources for its Directories
[ https://issues.apache.org/jira/browse/LUCENE-6916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alan Woodward updated LUCENE-6916:
----------------------------------
    Attachment: LUCENE-6916.patch

Patch. I've only wrapped Directory usage with try-with-resources for now, but it could be extended to all the input and output streams as well.
[jira] [Commented] (LUCENE-6916) BaseDirectoryTestCase should use try-with-resources for its Directories
[ https://issues.apache.org/jira/browse/LUCENE-6916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15035919#comment-15035919 ]

Robert Muir commented on LUCENE-6916:
-------------------------------------

Well, my issue is that:
* try-with definitely adds complexity; it's like try/catch but with more complexity on top.
* it's "optimizing for a failure case".
* it is not an issue for any directories in lucene.
* once it's done here, i am worried about a precedent, where every single lucene test must be in try-with.
[jira] [Commented] (LUCENE-6916) BaseDirectoryTestCase should use try-with-resources for its Directories
[ https://issues.apache.org/jira/browse/LUCENE-6916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15035927#comment-15035927 ]

Robert Muir commented on LUCENE-6916:
-------------------------------------

And by the way, I am ok with the change if it's contained to just this one file, because its purpose is to test directories. But I don't think we should do this across all tests in general; it has a real cost.
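[Editor's note] The trade-off debated in this thread can be seen in miniature: with try-with-resources, a resource is closed even when an assertion inside the block throws, which is exactly the failure-case behavior the patch targets. A minimal sketch with a stand-in `AutoCloseable` (not a real Lucene Directory):

```java
public class TryWithResourcesSketch {

    // Stand-in for a Directory-like resource; records whether close() ran.
    static class FakeDirectory implements AutoCloseable {
        boolean closed = false;
        @Override public void close() { closed = true; }
    }

    // Simulates a failing test body: the failure propagates out of the try
    // block, but try-with-resources still closes the resource first.
    static boolean closedDespiteFailure() {
        FakeDirectory dir = new FakeDirectory();
        try (FakeDirectory d = dir) {
            throw new AssertionError("simulated test failure");
        } catch (AssertionError expected) {
            // a test runner would report this failure
        }
        return dir.closed;
    }

    public static void main(String[] args) {
        System.out.println(closedDespiteFailure()); // true: close() ran anyway
    }
}
```

Without the try-with-resources wrapper, the thrown assertion would skip any trailing `dir.close()` call, which is the thread/FileSystem leak described in the issue.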
[jira] [Created] (SOLR-8359) Restrict child classes from using parent logger's state
Mike Drob created SOLR-8359:
-------------------------------

             Summary: Restrict child classes from using parent logger's state
                 Key: SOLR-8359
                 URL: https://issues.apache.org/jira/browse/SOLR-8359
             Project: Solr
          Issue Type: Sub-task
            Reporter: Mike Drob

In SOLR-8330 we split up a lot of loggers. However, there are a few classes that still use their parent's logging state and configuration indirectly. {{HdfsUpdateLog}} and {{HdfsTransactionLog}} both use their parent class's cached read of {{boolean debug = log.isDebugEnabled()}}, when they should check their own loggers instead.
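[Editor's note] The pitfall described above can be modeled in a few lines. This is a hedged sketch with a toy `Logger` class (not slf4j, and not the real UpdateLog hierarchy): the parent caches `isDebugEnabled()` from its own logger at construction, so the child inherits a stale value unless it re-reads its own logger, which is the fix the issue proposes.

```java
public class LoggerStateSketch {

    // Toy logger: just answers whether debug is enabled.
    static final class Logger {
        private final boolean debugEnabled;
        Logger(boolean debugEnabled) { this.debugEnabled = debugEnabled; }
        boolean isDebugEnabled() { return debugEnabled; }
    }

    // Parent mirrors the pattern from the issue: it caches the answer from
    // ITS OWN logger once, in a field the subclass then reuses.
    static class UpdateLog {
        static final Logger log = new Logger(false);    // parent logger: debug off
        protected boolean debug = log.isDebugEnabled(); // cached read
    }

    // Child has its own logger (debug on) but, without the fix, keeps the
    // parent's cached value.
    static class HdfsUpdateLog extends UpdateLog {
        static final Logger log = new Logger(true);     // child's own logger
        HdfsUpdateLog(boolean applyFix) {
            if (applyFix) debug = log.isDebugEnabled(); // re-check own logger
        }
    }

    static boolean childDebug(boolean applyFix) {
        return new HdfsUpdateLog(applyFix).debug;
    }

    public static void main(String[] args) {
        System.out.println(childDebug(false)); // stale value from parent's logger
        System.out.println(childDebug(true));  // value from child's own logger
    }
}
```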
[jira] [Commented] (SOLR-7339) Upgrade Jetty from 9.2 to 9.3
[ https://issues.apache.org/jira/browse/SOLR-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15036185#comment-15036185 ]

Joakim Erdfelt commented on SOLR-7339:
--------------------------------------

The Jetty Project developers took a look at this and ...

If you turn on DEBUG logging for Jetty, you'll see that the test does not send correct data to the server. The request that fails sends data using chunked encoding, but sends one chunk of length 0x59 and does not send the terminal chunk. The logs show Jetty reporting that the processing is complete but with unconsumed input, and it therefore closes the connection.

Could this be a Solr client bug in disguise?

Look at the attached SolrExampleStreamingBinaryTest.testUpdateField-jetty93.pcapng
Filter: tcp.stream eq 4

POST /solr/collection1/update?wt=javabin&version=2 HTTP/1.1
User-Agent: Solr[org.apache.solr.client.solrj.impl.HttpSolrClient] 1.0
Content-Type: application/javabin
Transfer-Encoding: chunked
Host: 127.0.0.1:52488
Connection: Keep-Alive

59
...'docsMap.?"id)_version_4.'price_f ..#setT...

HTTP/1.1 409 Conflict
Date: Tue, 01 Dec 2015 19:44:38 GMT
Content-Type: application/octet-stream
Content-Length: 144
Server: Jetty(9.3.6.v20151106)

responseHeader..%QTimeP..%error..#msg?4version conflict for unique expected=1519385657324077057 actual=1519385657324077056.$codeY.

> Upgrade Jetty from 9.2 to 9.3
> -----------------------------
>
>                 Key: SOLR-7339
>                 URL: https://issues.apache.org/jira/browse/SOLR-7339
>             Project: Solr
>          Issue Type: Improvement
>            Reporter: Gregg Donovan
>            Assignee: Shalin Shekhar Mangar
>             Fix For: Trunk, 6.0
>
>         Attachments: SOLR-7339.patch, SOLR-7339.patch,
> SolrExampleStreamingBinaryTest.testUpdateField-jetty92.pcapng,
> SolrExampleStreamingBinaryTest.testUpdateField-jetty93.pcapng
>
> Jetty 9.3 offers support for HTTP/2. Interest in HTTP/2 or its predecessor
> SPDY was shown in [SOLR-6699|https://issues.apache.org/jira/browse/SOLR-6699]
> and [on the mailing list|http://markmail.org/message/jyhcmwexn65gbdsx].
> Among the HTTP/2 benefits over HTTP/1.1 relevant to Solr are:
> * multiplexing requests over a single TCP connection ("streams")
> * canceling a single request without closing the TCP connection
> * removing [head-of-line
> blocking|https://http2.github.io/faq/#why-is-http2-multiplexed]
> * header compression
> Caveats:
> * Jetty 9.3 is at M2, not released.
> * Full Solr support for HTTP/2 would require more work than just upgrading
> Jetty. The server configuration would need to change and a new HTTP client
> ([Jetty's own
> client|https://github.com/eclipse/jetty.project/tree/master/jetty-http2],
> [Square's OkHttp|http://square.github.io/okhttp/],
> [etc.|https://github.com/http2/http2-spec/wiki/Implementations]) would need
> to be selected and wired up. Perhaps this is worthy of a branch?
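[Editor's note] The framing problem Joakim describes follows from the chunked-encoding grammar: each chunk is a hex length, CRLF, the data, CRLF, and a body is only complete once a zero-length terminal chunk (`0\r\n\r\n`) arrives. A small sketch of well-formed framing (illustrative, not the SolrJ or Jetty code; it assumes ASCII data so char count equals byte count):

```java
public class ChunkedFramingSketch {

    // Encode one chunk: hex length, CRLF, data, CRLF.
    // Assumes ASCII data, so String length equals the byte count.
    static String chunk(String data) {
        return Integer.toHexString(data.length()) + "\r\n" + data + "\r\n";
    }

    // A complete chunked body ends with the zero-length terminal chunk.
    // Omitting it, as in the failing request above, leaves the message
    // unfinished from the server's point of view.
    static String completeBody(String... chunks) {
        StringBuilder body = new StringBuilder();
        for (String c : chunks) body.append(chunk(c));
        return body.append("0\r\n\r\n").toString();
    }

    public static void main(String[] args) {
        // Show the CRLFs explicitly for readability.
        System.out.println(completeBody("hello").replace("\r\n", "\\r\\n"));
    }
}
```

The capture above shows a request that sent its `59`-length chunk but never the terminal `0` chunk, which is why Jetty treated the body as unfinished.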
[jira] [Commented] (SOLR-8220) Read field from docValues for non stored fields
[ https://issues.apache.org/jira/browse/SOLR-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15036219#comment-15036219 ]

Ishan Chattopadhyaya commented on SOLR-8220:
--------------------------------------------

bq. I've updated the patch to now check for the schema version to be >=1.6

As per an offline discussion with [~hossman], he mentioned that in the past we've never tied runtime behaviour to specific schema versions. He suggested, as did Yonik above I think, that we use some attribute like docValueAsIfStored=true/false (with a default of false for previous schema versions). I'll try to tackle this, update the patch, and drop the naive check for the schema version.

> Read field from docValues for non stored fields
> -----------------------------------------------
>
>                 Key: SOLR-8220
>                 URL: https://issues.apache.org/jira/browse/SOLR-8220
>             Project: Solr
>          Issue Type: Improvement
>            Reporter: Keith Laban
>         Attachments: SOLR-8220-ishan.patch, SOLR-8220-ishan.patch,
> SOLR-8220-ishan.patch, SOLR-8220-ishan.patch, SOLR-8220.patch,
> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch,
> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch,
> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch,
> SOLR-8220.patch
>
> Many times a value will be both stored="true" and docValues="true", which
> requires redundant data to be stored on disk. Since reading from docValues is
> both efficient and a common practice (facets, analytics, streaming, etc),
> reading values from docValues when a stored version of the field does not
> exist would be a valuable disk usage optimization.
> The only caveat with this that I can see would be for multiValued fields, as
> they would always be returned sorted in the docValues approach. I believe
> this is a fair compromise.
> I've done a rough implementation for this as a field transform, but I think
> it should live closer to where stored fields are loaded, in the
> SolrIndexSearcher.
> Two open questions/observations:
> 1) There doesn't seem to be a standard way to read values for docValues;
> facets, analytics, streaming, etc. all seem to be doing it their own ways, so
> perhaps some of this logic should be centralized.
> 2) What will the API behavior be? (Below is my proposed implementation.)
> Parameters for fl:
> - fl="docValueField"
> -- return the field from docValues if the field is not stored and is in
>    docValues; if the field is stored, return it from stored fields
> - fl="*"
> -- return only stored fields
> - fl="+"
> -- return stored fields and docValue fields
> 2a would be the easiest implementation and might be sufficient for a first
> pass. 2b is the current behavior.
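[Editor's note] To make the proposed fl semantics concrete, here is a hedged sketch of the lookup rule as written in the description. This is purely illustrative; none of it is Solr's actual field-resolution code, and the proposal itself was still under discussion in this thread:

```java
public class FlResolutionSketch {

    // Where a requested field's value would come from under the proposal:
    // a stored copy wins; otherwise fall back to docValues when present.
    static String source(boolean stored, boolean docValues) {
        if (stored) return "stored";
        if (docValues) return "docValues";
        return "absent";
    }

    public static void main(String[] args) {
        System.out.println(source(true, true));   // stored copy wins
        System.out.println(source(false, true));  // docValues-only field
        System.out.println(source(false, false)); // field not retrievable
    }
}
```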
[jira] [Commented] (SOLR-8359) Restrict child classes from using parent logger's state
[ https://issues.apache.org/jira/browse/SOLR-8359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15036334#comment-15036334 ]

Jason Gerlowski commented on SOLR-8359:
---------------------------------------

Sure, good catch Mike, I didn't even think of this. I did a quick grep around for other instances of this, so add {{CdcrTransactionLog}} to the list. Hopefully we're not missing any other classes here.

Should anything go in CHANGES.txt for this JIRA? I lean towards "no", since this is just fallout from another change that's already mentioned in CHANGES.txt. But I'm not very familiar with when/how/why things get added to that file, so maybe I've got the wrong approach.

I'm running tests on a patch now; will upload it shortly if all goes well.
[JENKINS] Lucene-Solr-5.x-Solaris (multiarch/jdk1.7.0) - Build # 225 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Solaris/225/ Java: multiarch/jdk1.7.0 -d32 -client -XX:+UseG1GC 1 tests failed. FAILED: org.apache.solr.update.AutoCommitTest.testCommitWithin Error Message: Exception during query Stack Trace: java.lang.RuntimeException: Exception during query at __randomizedtesting.SeedInfo.seed([B41A710D852CCB71:EC81E7506022564]:0) at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:749) at org.apache.solr.update.AutoCommitTest.testCommitWithin(AutoCommitTest.java:326) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.RuntimeException: REQUEST FAILED: xpath=//result[@numFound=1] xml response was: 00 request was:rows=20=standard=2.2=0=id:529 at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:742) ... 40 more Build Log: [...truncated 11038 lines...] [junit4] Suite: org.apache.solr.update.AutoCommitTest
[jira] [Comment Edited] (SOLR-7339) Upgrade Jetty from 9.2 to 9.3
[ https://issues.apache.org/jira/browse/SOLR-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15036185#comment-15036185 ] Joakim Erdfelt edited comment on SOLR-7339 at 12/2/15 5:25 PM: --- The Jetty Project developers took a look at this and ... If you turn on DEBUG logging for Jetty, you'll see that the test does not send correct data to the server. The request that fails sends data using chunked encoding, but sends one chunk of length 0x59, but does not send the terminal chunk. The logs show Jetty reporting that the processing is complete but with unconsumed input, and therefore closes the connection. Could this be a Solr client bug in disguise? Look at the attached SolrExampleStreamingBinaryTest.testUpdateField-jetty93.pcapng Filter: tcp.stream eq 4 {code} POST /solr/collection1/update?wt=javabin=2 HTTP/1.1 User-Agent: Solr[org.apache.solr.client.solrj.impl.HttpSolrClient] 1.0 Content-Type: application/javabin Transfer-Encoding: chunked Host: 127.0.0.1:52488 Connection: Keep-Alive 59 ...'docsMap.?"id)_version_4.'price_f ..#setT... HTTP/1.1 409 Conflict Date: Tue, 01 Dec 2015 19:44:38 GMT Content-Type: application/octet-stream Content-Length: 144 Server: Jetty(9.3.6.v20151106) responseHeader..%QTimeP..%error..#msg?4version conflict for unique expected=1519385657324077057 actual=1519385657324077056.$codeY. {code} was (Author: joakime): The Jetty Project developers took a look at this and ... If you turn on DEBUG logging for Jetty, you'll see that the test does not send correct data to the server. The request that fails sends data using chunked encoding, but sends one chunk of length 0x59, but does not send the terminal chunk. The logs show Jetty reporting that the processing is complete but with unconsumed input, and therefore closes the connection. Could this be a Solr client bug in disguise? 
Look at the attached SolrExampleStreamingBinaryTest.testUpdateField-jetty93.pcapng Filter: tcp.stream eq 4 POST /solr/collection1/update?wt=javabin=2 HTTP/1.1 User-Agent: Solr[org.apache.solr.client.solrj.impl.HttpSolrClient] 1.0 Content-Type: application/javabin Transfer-Encoding: chunked Host: 127.0.0.1:52488 Connection: Keep-Alive 59 ...'docsMap.?"id)_version_4.'price_f ..#setT... HTTP/1.1 409 Conflict Date: Tue, 01 Dec 2015 19:44:38 GMT Content-Type: application/octet-stream Content-Length: 144 Server: Jetty(9.3.6.v20151106) responseHeader..%QTimeP..%error..#msg?4version conflict for unique expected=1519385657324077057 actual=1519385657324077056.$codeY. > Upgrade Jetty from 9.2 to 9.3 > - > > Key: SOLR-7339 > URL: https://issues.apache.org/jira/browse/SOLR-7339 > Project: Solr > Issue Type: Improvement >Reporter: Gregg Donovan >Assignee: Shalin Shekhar Mangar > Fix For: Trunk, 6.0 > > Attachments: SOLR-7339.patch, SOLR-7339.patch, > SolrExampleStreamingBinaryTest.testUpdateField-jetty92.pcapng, > SolrExampleStreamingBinaryTest.testUpdateField-jetty93.pcapng > > > Jetty 9.3 offers support for HTTP/2. Interest in HTTP/2 or its predecessor > SPDY was shown in [SOLR-6699|https://issues.apache.org/jira/browse/SOLR-6699] > and [on the mailing list|http://markmail.org/message/jyhcmwexn65gbdsx]. > Among the HTTP/2 benefits over HTTP/1.1 relevant to Solr are: > * multiplexing requests over a single TCP connection ("streams") > * canceling a single request without closing the TCP connection > * removing [head-of-line > blocking|https://http2.github.io/faq/#why-is-http2-multiplexed] > * header compression > Caveats: > * Jetty 9.3 is at M2, not released. > * Full Solr support for HTTP/2 would require more work than just upgrading > Jetty. 
The server configuration would need to change and a new HTTP client > ([Jetty's own > client|https://github.com/eclipse/jetty.project/tree/master/jetty-http2], > [Square's OkHttp|http://square.github.io/okhttp/], > [etc.|https://github.com/http2/http2-spec/wiki/Implementations]) would need > to be selected and wired up. Perhaps this is worthy of a branch? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
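For readers following the chunked-encoding analysis above: a correctly framed chunked body must end with the terminal zero-length chunk (`0\r\n\r\n`); the failing request in the capture sent its 0x59-byte data chunk but never the terminator, so Jetty saw unconsumed input. A minimal sketch of correct framing (assuming a single-byte-per-char ASCII payload; real framing counts bytes):

```java
public class ChunkedBody {
    // Frames a payload as one data chunk (hex length prefix) plus the
    // mandatory terminal chunk. Illustrative only, not Solr/Jetty code.
    public static String frame(String payload) {
        return Integer.toHexString(payload.length()) + "\r\n"
                + payload + "\r\n"
                + "0\r\n\r\n"; // terminal chunk: zero length, empty trailer
    }

    public static void main(String[] args) {
        String body = frame("x".repeat(0x59));
        System.out.println(body.startsWith("59\r\n")); // chunk length in hex, as in the capture
        System.out.println(body.endsWith("0\r\n\r\n")); // properly terminated
    }
}
```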
[jira] [Updated] (SOLR-8360) ExternalFileField.getValueSource uses req.datadir but this.schema
[ https://issues.apache.org/jira/browse/SOLR-8360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke updated SOLR-8360: -- Attachment: SOLR-8360-option2.patch SOLR-8360-option1.patch alternative patches against trunk > ExternalFileField.getValueSource uses req.datadir but this.schema > - > > Key: SOLR-8360 > URL: https://issues.apache.org/jira/browse/SOLR-8360 > Project: Solr > Issue Type: Task >Reporter: Christine Poerschke >Assignee: Christine Poerschke > Attachments: SOLR-8360-option1.patch, SOLR-8360-option2.patch > > > {{ExternalFileField.getValueSource(SchemaField field, QParser parser)}} has > available: > * datadir > ** parser.getReq().getCore().getDataDir() > ** this.schema.getResourceLoader().getDataDir() > * schema > ** parser.getReq().getSchema() > ** this.schema > {{ExternalFileField.getValueSource}} uses > {{parser.getReq().getCore().getDataDir()}} explicitly but implicitly > {{this.schema}} - should it use {{parser.getReq().getSchema()}} instead > (Option 1 patch)? Or if in practice actually req.datadir and this.datadir are > always the same could we stop using the parser argument (Option 2 patch (1 > line))? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-8360) ExternalFileField.getValueSource uses req.datadir but this.schema
Christine Poerschke created SOLR-8360: - Summary: ExternalFileField.getValueSource uses req.datadir but this.schema Key: SOLR-8360 URL: https://issues.apache.org/jira/browse/SOLR-8360 Project: Solr Issue Type: Task Reporter: Christine Poerschke Assignee: Christine Poerschke {{ExternalFileField.getValueSource(SchemaField field, QParser parser)}} has available: * datadir ** parser.getReq().getCore().getDataDir() ** this.schema.getResourceLoader().getDataDir() * schema ** parser.getReq().getSchema() ** this.schema {{ExternalFileField.getValueSource}} uses {{parser.getReq().getCore().getDataDir()}} explicitly but implicitly {{this.schema}} - should it use {{parser.getReq().getSchema()}} instead (Option 1 patch)? Or if in practice actually req.datadir and this.datadir are always the same could we stop using the parser argument (Option 2 patch (1 line))? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-7339) Upgrade Jetty from 9.2 to 9.3
[ https://issues.apache.org/jira/browse/SOLR-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15036273#comment-15036273 ] Ishan Chattopadhyaya edited comment on SOLR-7339 at 12/2/15 6:07 PM: - Yesterday, I ran the same test using a debugger, and I put a breakpoint at SolrCore.postDecorateResponse's first line (and subsequently I hit resume at every instance of the breakpoint), then the test passes. I suspect there is some timing issue here. I couldn't find anything obvious. Even tried upgrading to the November release of the jetty-server artifacts, same results. was (Author: ichattopadhyaya): Yesterday, I ran the same test using a debugger, and I put a breakpoint at SolrCore.postDecorateResponse's first line (and subsequently I hit resume at every instance of the breakpoint), then the test passes. I suspect there is some timing issue here. I couldn't find anything obvious. > Upgrade Jetty from 9.2 to 9.3 > - > > Key: SOLR-7339 > URL: https://issues.apache.org/jira/browse/SOLR-7339 > Project: Solr > Issue Type: Improvement >Reporter: Gregg Donovan >Assignee: Shalin Shekhar Mangar > Fix For: Trunk, 6.0 > > Attachments: SOLR-7339.patch, SOLR-7339.patch, > SolrExampleStreamingBinaryTest.testUpdateField-jetty92.pcapng, > SolrExampleStreamingBinaryTest.testUpdateField-jetty93.pcapng > > > Jetty 9.3 offers support for HTTP/2. Interest in HTTP/2 or its predecessor > SPDY was shown in [SOLR-6699|https://issues.apache.org/jira/browse/SOLR-6699] > and [on the mailing list|http://markmail.org/message/jyhcmwexn65gbdsx]. > Among the HTTP/2 benefits over HTTP/1.1 relevant to Solr are: > * multiplexing requests over a single TCP connection ("streams") > * canceling a single request without closing the TCP connection > * removing [head-of-line > blocking|https://http2.github.io/faq/#why-is-http2-multiplexed] > * header compression > Caveats: > * Jetty 9.3 is at M2, not released. 
> * Full Solr support for HTTP/2 would require more work than just upgrading > Jetty. The server configuration would need to change and a new HTTP client > ([Jetty's own > client|https://github.com/eclipse/jetty.project/tree/master/jetty-http2], > [Square's OkHttp|http://square.github.io/okhttp/], > [etc.|https://github.com/http2/http2-spec/wiki/Implementations]) would need > to be selected and wired up. Perhaps this is worthy of a branch? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8220) Read field from docValues for non stored fields
[ https://issues.apache.org/jira/browse/SOLR-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15036165#comment-15036165 ] Ishan Chattopadhyaya commented on SOLR-8220: bq. I've updated the patch to now check for the schema version to be >=1.6 In the patch, I did some clumsy >= check for a float. Would've been better to just check for > 1.5. I'll fix it with the next patch. I'm working on bumping up the version of the schema for the out of the box configsets. > Read field from docValues for non stored fields > --- > > Key: SOLR-8220 > URL: https://issues.apache.org/jira/browse/SOLR-8220 > Project: Solr > Issue Type: Improvement >Reporter: Keith Laban > Attachments: SOLR-8220-ishan.patch, SOLR-8220-ishan.patch, > SOLR-8220-ishan.patch, SOLR-8220-ishan.patch, SOLR-8220.patch, > SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, > SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, > SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, > SOLR-8220.patch > > > Many times a value will be both stored="true" and docValues="true" which > requires redundant data to be stored on disk. Since reading from docValues is > both efficient and a common practice (facets, analytics, streaming, etc), > reading values from docValues when a stored version of the field does not > exist would be a valuable disk usage optimization. > The only caveat with this that I can see would be for multiValued fields as > they would always be returned sorted in the docValues approach. I believe > this is a fair compromise. > I've done a rough implementation for this as a field transform, but I think > it should live closer to where stored fields are loaded in the > SolrIndexSearcher. > Two open questions/observations: > 1) There doesn't seem to be a standard way to read values for docValues, > facets, analytics, streaming, etc, all seem to be doing their own ways, > perhaps some of this logic should be centralized. 
> 2) What will the API behavior be? (Below is my proposed implementation) > Parameters for fl: > - fl="docValueField" > -- return field from docValue if the field is not stored and in docValues, > if the field is stored return it from stored fields > - fl="*" > -- return only stored fields > - fl="+" >-- return stored fields and docValue fields > 2a - would be easiest implementation and might be sufficient for a first > pass. 2b - is current behavior -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
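On the schema-version check mentioned in the comment above: comparing a parsed float with "> 1.5" sidesteps the clumsiness of ">= 1.6", since 1.6 is not exactly representable in binary floating point while 1.5 is. A hypothetical sketch (method and parameter names are illustrative, not Solr's):

```java
public class SchemaVersionCheck {
    // Guard for schema 1.6+ behavior without an equality-flavored float test.
    public static boolean useDocValuesAsStored(float schemaVersion) {
        return schemaVersion > 1.5f; // 1.5 is exactly representable, so this is safe
    }

    public static void main(String[] args) {
        System.out.println(useDocValuesAsStored(Float.parseFloat("1.6"))); // true
        System.out.println(useDocValuesAsStored(Float.parseFloat("1.5"))); // false
    }
}
```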
[jira] [Updated] (SOLR-2798) Local Param parsing does not support multivalued params
[ https://issues.apache.org/jira/browse/SOLR-2798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Hatcher updated SOLR-2798: --- Description: As noted by Demian on the solr-user mailing list, Local Param parsing seems to use a "last one wins" approach when parsing multivalued params. In this example, the value of "111" is completely ignored: {code} http://localhost:8983/solr/select?debug=query={!dismax%20bq=111%20bq=222}foo {code} was: As noted by Demian on the solr-user mailing list, Local Param parsing seems to use a "last one wins" approach when parsing multivalued params. In this example, the value of "111" is completely ignored... http://localhost:8983/solr/select?debug=query={!dismax%20bq=111%20bq=222}foo > Local Param parsing does not support multivalued params > --- > > Key: SOLR-2798 > URL: https://issues.apache.org/jira/browse/SOLR-2798 > Project: Solr > Issue Type: Bug >Reporter: Hoss Man >Assignee: Anshum Gupta > > As noted by Demian on the solr-user mailing list, Local Param parsing seems > to use a "last one wins" approach when parsing multivalued params. > In this example, the value of "111" is completely ignored: > {code} > http://localhost:8983/solr/select?debug=query={!dismax%20bq=111%20bq=222}foo > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
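The "last one wins" behavior described above is what falls out of collecting local params into a single-valued map: a repeated key such as {{bq}} silently replaces the earlier value. A toy model, not Solr's actual parser:

```java
import java.util.HashMap;
import java.util.Map;

public class LastOneWins {
    // Collects key/value pairs into a single-valued map.
    public static Map<String, String> collect(String[][] pairs) {
        Map<String, String> params = new HashMap<>();
        for (String[] kv : pairs) {
            params.put(kv[0], kv[1]); // a later bq overwrites an earlier one
        }
        return params;
    }

    public static void main(String[] args) {
        Map<String, String> p = collect(new String[][] {{"bq", "111"}, {"bq", "222"}});
        System.out.println(p.get("bq")); // 222 -- the 111 clause is dropped
    }
}
```

Supporting multivalued params would mean mapping each key to a list of values instead.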
[jira] [Resolved] (SOLR-8325) Declare loggers as private static final
[ https://issues.apache.org/jira/browse/SOLR-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mike Drob resolved SOLR-8325. - Resolution: Duplicate Fix Version/s: (was: Trunk) Taken care of as part of SOLR-8330 > Declare loggers as private static final > --- > > Key: SOLR-8325 > URL: https://issues.apache.org/jira/browse/SOLR-8325 > Project: Solr > Issue Type: Sub-task >Reporter: Mike Drob > Attachments: SOLR-8325.patch > > > This will touch a lot of class files, but shouldn't affect more than a single > declaration in each. If a logger ends up being used outside of its declaring > class, then it will be easiest to expand the visibility on it for now. > Non-PSF loggers will be targeted in future improvements. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8359) Restrict child classes from using parent logger's state
[ https://issues.apache.org/jira/browse/SOLR-8359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15036117#comment-15036117 ] Mike Drob commented on SOLR-8359: - [~gerlowskija] - do you want to take a look at this? I didn't notice it missing when looking at the original patch, but it looks like a pretty straightforward fix. > Restrict child classes from using parent logger's state > --- > > Key: SOLR-8359 > URL: https://issues.apache.org/jira/browse/SOLR-8359 > Project: Solr > Issue Type: Sub-task >Reporter: Mike Drob > Fix For: Trunk > > > In SOLR-8330 we split up a lot of loggers. However, there are a few classes > that still use their parent's logging state and configuration indirectly. > {{HdfsUpdateLog}} and {{HdfsTransactionLog}} both use their parent class > cached read of {{boolean debug = log.isDebugEnabled()}}, when they should > check their own loggers instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-4116) Log Replay [recoveryExecutor-8-thread-1] - : java.io.EOFException
[ https://issues.apache.org/jira/browse/SOLR-4116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15036160#comment-15036160 ] Mike Drob commented on SOLR-4116: - I saw similar stack traces recently with the same EOFException when running on HDFS. Looking at the javadoc on {{LogReader.next}} it suggests that we should return null in case of EOF, or is that only when we hit a "nice" EOF at a record boundary? Alternatively, if we're going to continue throwing the EOF to signal that a file is probably corrupt (or at least truncated), we should include the file name in the message. > Log Replay [recoveryExecutor-8-thread-1] - : java.io.EOFException > - > > Key: SOLR-4116 > URL: https://issues.apache.org/jira/browse/SOLR-4116 > Project: Solr > Issue Type: Bug > Components: SolrCloud >Affects Versions: 5.1 > Environment: 5.0.0.2012.11.28.10.42.06 > Debian Squeeze, Tomcat 6, Sun Java 6, 10 nodes, 10 shards, rep. factor 2. >Reporter: Markus Jelsma > Fix For: 5.4, Trunk > > > With SOLR-4032 fixed we see other issues when randomly taking down nodes > (nicely via tomcat restart) while indexing a few million web pages from > Hadoop. We do make sure that at least one node is up for a shard but due to > recovery issues it may not be live. 
> {code} > 2012-11-28 11:32:33,086 WARN [solr.update.UpdateLog] - > [recoveryExecutor-8-thread-1] - : Starting log replay > tlog{file=/opt/solr/cores/openindex_e/data/tlog/tlog.028 > refcount=2} active=false starting pos=0 > 2012-11-28 11:32:41,873 ERROR [solr.update.UpdateLog] - > [recoveryExecutor-8-thread-1] - : java.io.EOFException > at > org.apache.solr.common.util.FastInputStream.readFully(FastInputStream.java:151) > at > org.apache.solr.common.util.JavaBinCodec.readStr(JavaBinCodec.java:479) > at > org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:176) > at > org.apache.solr.common.util.JavaBinCodec.readSolrInputDocument(JavaBinCodec.java:374) > at > org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:225) > at > org.apache.solr.common.util.JavaBinCodec.readArray(JavaBinCodec.java:451) > at > org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:182) > at > org.apache.solr.update.TransactionLog$LogReader.next(TransactionLog.java:618) > at > org.apache.solr.update.UpdateLog$LogReplayer.doReplay(UpdateLog.java:1198) > at > org.apache.solr.update.UpdateLog$LogReplayer.run(UpdateLog.java:1143) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441) > at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) > at java.util.concurrent.FutureTask.run(FutureTask.java:138) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441) > at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) > at java.util.concurrent.FutureTask.run(FutureTask.java:138) > at > java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) > at java.lang.Thread.run(Thread.java:662) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
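The reader contract discussed in the comment above can be sketched as follows: a "nice" EOF at a record boundary yields null, while an EOF mid-record is rethrown with the file name so a truncated tlog is identifiable. The length-prefixed record format here is hypothetical, not the actual transaction log format:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

public class LogReaderSketch {
    public static byte[] next(DataInputStream in, String fileName) throws IOException {
        int len;
        try {
            len = in.readInt(); // a clean EOF can only happen here, between records
        } catch (EOFException e) {
            return null; // end of log, aligned on a record boundary
        }
        byte[] record = new byte[len];
        try {
            in.readFully(record); // EOF here means the file was truncated
        } catch (EOFException e) {
            throw new EOFException("truncated record in " + fileName);
        }
        return record;
    }

    public static void main(String[] args) throws IOException {
        // One complete record: 4-byte length prefix followed by the payload.
        byte[] whole = {0, 0, 0, 2, 42, 43};
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(whole));
        System.out.println(next(in, "tlog.028").length); // 2
        System.out.println(next(in, "tlog.028"));        // null: clean EOF
    }
}
```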
[jira] [Commented] (SOLR-8359) Restrict child classes from using parent logger's state
[ https://issues.apache.org/jira/browse/SOLR-8359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15036275#comment-15036275 ] Mike Drob commented on SOLR-8359: - {{PeerSync}} also has this, but no other classes use the cached debug value there. > Restrict child classes from using parent logger's state > --- > > Key: SOLR-8359 > URL: https://issues.apache.org/jira/browse/SOLR-8359 > Project: Solr > Issue Type: Sub-task >Reporter: Mike Drob > Fix For: Trunk > > > In SOLR-8330 we split up a lot of loggers. However, there are a few classes > that still use their parent's logging state and configuration indirectly. > {{HdfsUpdateLog}} and {{HdfsTransactionLog}} both use their parent class > cached read of {{boolean debug = log.isDebugEnabled()}}, when they should > check their own loggers instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (LUCENE-4159) Code review before 4.0 release
[ https://issues.apache.org/jira/browse/LUCENE-4159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tommaso Teofili resolved LUCENE-4159. - Resolution: Invalid Fix Version/s: (was: 4.9) (was: Trunk) 4.0? resolving as invalid :) > Code review before 4.0 release > -- > > Key: LUCENE-4159 > URL: https://issues.apache.org/jira/browse/LUCENE-4159 > Project: Lucene - Core > Issue Type: Task >Reporter: Tommaso Teofili >Priority: Minor > > Before the 4.0 release I think it makes sense to plan for a (Lucene and Solr) > comprehensive code review in order to improve APIs, performance and code > style. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6914) DecimalDigitFilter skips characters in some cases (supplemental?)
[ https://issues.apache.org/jira/browse/LUCENE-6914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15036156#comment-15036156 ] Hoss Man commented on LUCENE-6914: -- I wrote it thinking it would be helpful to first demonstrate what worked (x, or space, or some non-candidate char in between digits as a "gap"), but then when you replace those "gaps" with empty space it breaks. Now that we know the problem and can trivially reproduce with randomized tests, I'm not sure it's really relevant -- but you could always just clone those asserts and do a bunch of different varieties for the "gap" (single space, x, some wide supplemental character that isn't a digit, etc...) > DecimalDigitFilter skips characters in some cases (supplemental?) > - > > Key: LUCENE-6914 > URL: https://issues.apache.org/jira/browse/LUCENE-6914 > Project: Lucene - Core > Issue Type: Bug >Affects Versions: 5.4 >Reporter: Hoss Man > Attachments: LUCENE-6914.patch, LUCENE-6914.patch, LUCENE-6914.patch > > > Found this while writing up the solr ref guide for DecimalDigitFilter. > With input like "𝟙𝟡𝟠𝟜" ("Double Struck" 1984) the filter produces "1𝟡8𝟜" (1, > double struck 9, 8, double struck 4). Add some non-decimal characters in > between the digits (ie: "𝟙x𝟡x𝟠x𝟜") and you get the expected output > ("1x9x8x4"). This doesn't affect all decimal characters though, as evident > by the existing test cases. > Perhaps this is an off-by-one bug in the "if the original was supplementary, > shrink the string" code path? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
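A standalone model of what the filter should do, iterating by code point so supplementary digits (surrogate pairs) are consumed whole; the skipped characters in the report suggest the filter's position bookkeeping is off by one after a supplementary digit shrinks to a single char. This sketch is illustrative, not DecimalDigitFilter's actual implementation:

```java
public class DigitFoldSketch {
    public static String foldDigits(String s) {
        StringBuilder out = new StringBuilder();
        int i = 0;
        while (i < s.length()) {
            int cp = s.codePointAt(i);
            if (Character.isDigit(cp)) {
                out.append((char) ('0' + Character.digit(cp, 10))); // fold to ASCII
            } else {
                out.appendCodePoint(cp);
            }
            i += Character.charCount(cp); // 2 for a surrogate pair, 1 otherwise
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // "Double Struck" 1x9x8x4 (U+1D7D9, U+1D7E1, U+1D7E0, U+1D7DC)
        String in = "\uD835\uDFD9x\uD835\uDFE1x\uD835\uDFE0x\uD835\uDFDC";
        System.out.println(foldDigits(in)); // 1x9x8x4
    }
}
```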
[jira] [Comment Edited] (SOLR-8353) Support regex for skipping license checksums
[ https://issues.apache.org/jira/browse/SOLR-8353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15036329#comment-15036329 ] Mark Miller edited comment on SOLR-8353 at 12/2/15 6:34 PM: bq. I can see it easily leading us down a road to developer sloppiness if this is allowed. One, it's not allowed. (eg it's just a local option - you can't make it a default or official jenkins override) Two, every developer can make all kinds of optional changes to the build - they can even override the version via sys prop! But they all happen to know that if they change the build params, they can't really expect the official build to pass anymore and if they forget, like other deviations, jenkins will admonish them. was (Author: markrmil...@gmail.com): bq. I can see it easily leading us down a road to developer sloppiness if this is allowed. One, it's not allowed. Two, every developer can make all kinds of optional changes to the build - they can even override the version via sys prop! But they all happen to know that if they change the build params, they can't really expect the official build to pass anymore and if they forget, like other deviations, jenkins will admonish them. > Support regex for skipping license checksums > > > Key: SOLR-8353 > URL: https://issues.apache.org/jira/browse/SOLR-8353 > Project: Solr > Issue Type: Improvement > Components: Build >Reporter: Gregory Chanan > Attachments: SOLR-8353.patch > > > It would be useful to be able to specify a regex for license checksums to > skip in the build. Currently there are only two supported values: > 1) skipChecksum (i.e. regex=*) > 2) skipSnapshotsChecksum (i.e. regex=*-SNAPSHOT-*) > A regex would be more flexible and allow testing the entire build while > skipping a more limited set of checksums, e.g.: > a) an individual library (i.e. regex=joda-time*) > b) a suite of libraries (i.e. 
regex=hadoop*) > We could make skipChecksum and skipSnapshotsChecksum continue to work for > backwards compatibility reasons. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8353) Support regex for skipping license checksums
[ https://issues.apache.org/jira/browse/SOLR-8353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15036329#comment-15036329 ] Mark Miller commented on SOLR-8353: --- bq. I can see it easily leading us down a road to developer sloppiness if this is allowed. One, it's not allowed. Two, every developer can make all kinds of optional changes to the build - they can even override the version via sys prop! But they all happen to know that if they change the build params, they can't really expect the official build to pass anymore and if they forget, like other deviations, jenkins will admonish them. > Support regex for skipping license checksums > > > Key: SOLR-8353 > URL: https://issues.apache.org/jira/browse/SOLR-8353 > Project: Solr > Issue Type: Improvement > Components: Build >Reporter: Gregory Chanan > Attachments: SOLR-8353.patch > > > It would be useful to be able to specify a regex for license checksums to > skip in the build. Currently there are only two supported values: > 1) skipChecksum (i.e. regex=*) > 2) skipSnapshotsChecksum (i.e. regex=*-SNAPSHOT-*) > A regex would be more flexible and allow testing the entire build while > skipping a more limited set of checksums, e.g.: > a) an individual library (i.e. regex=joda-time*) > b) a suite of libraries (i.e. regex=hadoop*) > We could make skipChecksum and skipSnapshotsChecksum continue to work for > backwards compatibility reasons. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
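The skip patterns proposed above (joda-time*, hadoop*) read as globs rather than regexes; one way to honor them is to translate the glob into a regex before matching. The method name and exact translation rules below are illustrative, not the build's implementation:

```java
public class ChecksumSkip {
    // Converts a simple glob (only "*" wildcards) into a regex and tests a
    // jar name against it.
    public static boolean matches(String jarName, String glob) {
        String regex = glob.replace(".", "\\.").replace("*", ".*");
        return jarName.matches(regex); // matches() requires a full match
    }

    public static void main(String[] args) {
        System.out.println(matches("hadoop-common-2.2.0.jar", "hadoop*")); // true
        System.out.println(matches("joda-time-2.2.jar", "hadoop*"));       // false
    }
}
```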
[jira] [Commented] (SOLR-7339) Upgrade Jetty from 9.2 to 9.3
[ https://issues.apache.org/jira/browse/SOLR-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15036273#comment-15036273 ] Ishan Chattopadhyaya commented on SOLR-7339: Yesterday, I ran the same test using a debugger, and I put a breakpoint at SolrCore.postDecorateResponse's first line (and subsequently I hit resume at every instance of the breakpoint), then the test passes. I suspect there is some timing issue here. I couldn't find anything obvious. > Upgrade Jetty from 9.2 to 9.3 > - > > Key: SOLR-7339 > URL: https://issues.apache.org/jira/browse/SOLR-7339 > Project: Solr > Issue Type: Improvement >Reporter: Gregg Donovan >Assignee: Shalin Shekhar Mangar > Fix For: Trunk, 6.0 > > Attachments: SOLR-7339.patch, SOLR-7339.patch, > SolrExampleStreamingBinaryTest.testUpdateField-jetty92.pcapng, > SolrExampleStreamingBinaryTest.testUpdateField-jetty93.pcapng > > > Jetty 9.3 offers support for HTTP/2. Interest in HTTP/2 or its predecessor > SPDY was shown in [SOLR-6699|https://issues.apache.org/jira/browse/SOLR-6699] > and [on the mailing list|http://markmail.org/message/jyhcmwexn65gbdsx]. > Among the HTTP/2 benefits over HTTP/1.1 relevant to Solr are: > * multiplexing requests over a single TCP connection ("streams") > * canceling a single request without closing the TCP connection > * removing [head-of-line > blocking|https://http2.github.io/faq/#why-is-http2-multiplexed] > * header compression > Caveats: > * Jetty 9.3 is at M2, not released. > * Full Solr support for HTTP/2 would require more work than just upgrading > Jetty. The server configuration would need to change and a new HTTP client > ([Jetty's own > client|https://github.com/eclipse/jetty.project/tree/master/jetty-http2], > [Square's OkHttp|http://square.github.io/okhttp/], > [etc.|https://github.com/http2/http2-spec/wiki/Implementations]) would need > to be selected and wired up. Perhaps this is worthy of a branch? 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8345) Wrong query parsing
[ https://issues.apache.org/jira/browse/SOLR-8345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15036274#comment-15036274 ] Hoss Man commented on SOLR-8345: to clarify the specific logic taking place here... * {{\( ... \)}} -- parens indicate grouping, typically for the purpose of constructing a multiclause sub-BooleanQuery. * {{myfield:}} is syntax for indicating that "myfield" should override the default field * {{myfield:\( ... \)}} indicates that when parsing the contents of parens, myfield overrides the default field for _all_ clauses found in the group. * {{-foo}} is the syntax for a negated clause * {{myfield:\(-1\)}} constructs a negated clause for the term {{myfield:1}}; since it is the only clause in the group, no sub-BooleanQuery is created. since it is the only clause in the top level query, and it is negated, Solr adds the necessary {{\*:\*}} this is really no different than something like {{myfield:\(+foo -bar yaz\)}} .. the prefix "\+" and "\-" characters are *always* treated as MUST and MUST_NOT modifiers unless they are quoted or escaped. ie... {noformat} myfield:("+foo" \-bar yaz) {noformat} > Wrong query parsing > --- > > Key: SOLR-8345 > URL: https://issues.apache.org/jira/browse/SOLR-8345 > Project: Solr > Issue Type: Bug >Reporter: Saar Carmi >Priority: Minor > > When sending a query for a numeric field with q=myfield:(-1) the query is > parsed as -myfield:1 > I would expect it to be either parsed as myfield:"-1" or an exception to be > returned. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
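For a client that wants the literal term rather than a negated clause, escaping before querying is the usual fix; SolrJ ships ClientUtils.escapeQueryChars for exactly this. The sketch below is a simplified stand-in for that kind of escaping, not the actual SolrJ implementation:

```java
public class QueryEscape {
    // Escape characters the query parser treats specially, so a leading '-'
    // becomes part of the term instead of a MUST_NOT modifier.
    static String escape(String term) {
        StringBuilder sb = new StringBuilder();
        for (char c : term.toCharArray()) {
            // subset of the query parser's special characters
            if ("+-!(){}[]^\"~*?:\\/".indexOf(c) >= 0) {
                sb.append('\\');
            }
            sb.append(c);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // the escaped form searches for the literal term "-1" in myfield
        System.out.println("myfield:(" + escape("-1") + ")"); // prints myfield:(\-1)
    }
}
```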
Solr Ref Guide for 5.4
I'll offer to manage the Ref Guide release for 5.4. Uwe, will you update the CWIKI javadoc links to point to the 5_4_0 paths? As described here: https://cwiki.apache.org/confluence/display/solr/Internal+-+How+Javadoc+Links+Work#Internal-HowJavadocLinksWork-GuidePublishing:ChangingVersionsInAllLinks . Hoss and others have already done a lot of work organizing the TODO list and getting most of the new features & changes documented, but we always miss a few items in each release and have a backlog of smaller changes. Please take a look and help out where you can: https://cwiki.apache.org/confluence/display/solr/Internal+-+TODO+List. I'd like to get an RC up as soon as there is an RC for Lucene/Solr, and we're pretty close to having one of those. If you do take on some edits, please communicate (in the TODO list or reply to this thread) so we know where you're at if a Lucene/Solr RC is released. I've put up a pre-RC of the Ref Guide at http://people.apache.org/~ctargett/SolrRefGuide-drafts/ApacheSolrRefGuide-pre5.4.pdf for people to take a look in advance of an RC. (I see one problem already that I will work on fixing, which is the nav that appears in the web UI at the bottom of every page is now showing up in the PDF. That is supposed to be hidden, so something has changed in Confluence between releases.) Thanks, Cassandra
[jira] [Updated] (LUCENE-6908) TestGeoUtils.testGeoRelations is buggy with irregular rectangles
[ https://issues.apache.org/jira/browse/LUCENE-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicholas Knize updated LUCENE-6908: --- Attachment: LUCENE-6908.patch Thanks [~mikemccand]! Fixed the patch to include GeoRelationUtils and changes to SloppyMath. > TestGeoUtils.testGeoRelations is buggy with irregular rectangles > > > Key: LUCENE-6908 > URL: https://issues.apache.org/jira/browse/LUCENE-6908 > Project: Lucene - Core > Issue Type: Bug >Reporter: Nicholas Knize > Attachments: LUCENE-6908.patch, LUCENE-6908.patch, LUCENE-6908.patch > > > The {{.testGeoRelations}} method doesn't exactly test the behavior of > GeoPoint*Query as its using the BKD split technique (instead of quad cell > division) to divide the space on each pass. For "large" distance queries this > can create a lot of irregular rectangles producing large radial distortion > error when using the cartesian approximation methods provided by > {{GeoUtils}}. This issue improves the accuracy of GeoUtils cartesian > approximation methods on irregular rectangles without having to cut over to > an expensive oblate geometry approach. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (LUCENE-6914) DecimalDigitFilter skips characters in some cases (supplemental?)
[ https://issues.apache.org/jira/browse/LUCENE-6914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15036156#comment-15036156 ] Hoss Man edited comment on LUCENE-6914 at 12/2/15 5:29 PM: --- i wrote it thinking it would be helpful to first demonstrate what worked (x, or space, or some non candidate char in between digits as a "gap"), but then when you replace those "gaps" when empty -space- strings it breaks. now that we know the problem and can trivially reproduce with randomized tests, i'm not sure it's really relevant -- but you could always just clone those asserts and do a bunch of different varieties for the "gap" (single space, x, some wide supplemental character that isn't a digit, etc...) was (Author: hossman): i wrote it thinking it would be helpful to first demonstrate what worked (x, or space, or some non candidate char in between digits as a "gap"), but then when you replace those "gaps" when empty space it breaks. now that we know the problem and can trivially reproduce with randomized tests, i'm not sure it's really relevant -- but you could always just clone those asserts and do a bunch of different varieties for the "gap" (single space, x, some wide supplemental character that isn't a digit, etc...) > DecimalDigitFilter skips characters in some cases (supplemental?) > - > > Key: LUCENE-6914 > URL: https://issues.apache.org/jira/browse/LUCENE-6914 > Project: Lucene - Core > Issue Type: Bug >Affects Versions: 5.4 >Reporter: Hoss Man > Attachments: LUCENE-6914.patch, LUCENE-6914.patch, LUCENE-6914.patch > > > Found this while writing up the solr ref guide for DecimalDigitFilter. > With input like "ퟙퟡퟠퟜ" ("Double Struck" 1984) the filter produces "1ퟡ8ퟜ" (1, > double struck 9, 8, double struck 4) add some non-decimal characters in > between the digits (ie: "ퟙxퟡxퟠxퟜ") and you get the expected output > ("1x9x8x4"). This doesn't affect all decimal characters though, as evident > by the existing test cases. 
> Perhaps this is an off by one bug in the "if the original was supplementary, > shrink the string" code path? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
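The safe way to handle supplementary digits like the "double struck" block is to walk the input by code point rather than by char, so a surrogate pair is consumed as one unit. A minimal illustration of the expected folding behavior — not the actual DecimalDigitFilter code:

```java
public class DigitFold {
    static String fold(String in) {
        StringBuilder out = new StringBuilder();
        int i = 0;
        while (i < in.length()) {
            int cp = in.codePointAt(i);       // reads a full surrogate pair if present
            int d = Character.digit(cp, 10);  // -1 if cp is not a decimal digit
            if (d >= 0) {
                out.append((char) ('0' + d)); // fold to ASCII 0-9
            } else {
                out.appendCodePoint(cp);      // pass non-digits through unchanged
            }
            i += Character.charCount(cp);     // 2 for supplementary code points
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // "double struck" 1,9,8,4 (U+1D7D9, U+1D7E1, U+1D7E0, U+1D7DC) as surrogate pairs
        String input = "\uD835\uDFD9x\uD835\uDFE1x\uD835\uDFE0x\uD835\uDFDC";
        System.out.println(fold(input)); // prints 1x9x8x4
    }
}
```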
[jira] [Commented] (SOLR-8353) Support regex for skipping license checksums
[ https://issues.apache.org/jira/browse/SOLR-8353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15036322#comment-15036322 ] Mark Miller commented on SOLR-8353: --- This is desirable for the same reason we added the other skip options - some of us have a fork of the build that has to deal with changing dependencies. Our build systems all get to work according to their needs - which is a good thing. None of these options change the experience for our Jenkins build machines, or a devs checkout. The downside doesn't really exist. > Support regex for skipping license checksums > > > Key: SOLR-8353 > URL: https://issues.apache.org/jira/browse/SOLR-8353 > Project: Solr > Issue Type: Improvement > Components: Build >Reporter: Gregory Chanan > Attachments: SOLR-8353.patch > > > It would be useful to be able to specify a regex for license checksums to > skip in the build. Currently there are only two supported values: > 1) skipChecksum (i.e. regex=*) > 2) skipSnapshotsChecksum (i.e. regex=*-SNAPSHOT-*) > A regex would be more flexible and allow testing the entire build while > skipping a more limited set of checksums, e.g.: > a) an individual library (i.e. regex=joda-time*) > b) a suite of libraries (i.e. regex=hadoop*) > We could make skipChecksum and skipSnapshotsChecksum continue to work for > backwards compatbility reasons. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: Solr Ref Guide for 5.4
FWIW, the problem I mentioned in the pre-RC is fixed so those nav elements will disappear when I make an RC of the guide. On Wed, Dec 2, 2015 at 12:26 PM, Cassandra Targettwrote: > I'll offer to manage the Ref Guide release for 5.4. > > Uwe, will you update the CWIKI javadoc links to point to the 5_4_0 paths? > As described here: > https://cwiki.apache.org/confluence/display/solr/Internal+-+How+Javadoc+Links+Work#Internal-HowJavadocLinksWork-GuidePublishing:ChangingVersionsInAllLinks > . > > Hoss and others have already done a lot of work organizing the TODO list > and getting most of the new features & changes documented, but we always > miss a few items in each release and have a backlog of smaller changes. > Please take a look and help out where you can: > https://cwiki.apache.org/confluence/display/solr/Internal+-+TODO+List. > > I'd like to get an RC up as soon as there is an RC for Lucene/Solr, and > we're pretty close to having one of those. If you do take on some edits, > please communicate (in the TODO list or reply to this thread) so we know > where you're at if a Lucene/Solr RC is released. > > I've put up a pre-RC of the Ref Guide at > http://people.apache.org/~ctargett/SolrRefGuide-drafts/ApacheSolrRefGuide-pre5.4.pdf > for people to take a look in advance of an RC. > > (I see one problem already that I will work on fixing, which is the nav > that appears in the web UI at the bottom of every page is now showing up in > the PDF. That is supposed to be hidden, so something has changed in > Confluence between releases.) > > Thanks, > Cassandra > >
[jira] [Commented] (SOLR-8225) Leader should send update requests to replicas in recovery asynchronously
[ https://issues.apache.org/jira/browse/SOLR-8225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15036443#comment-15036443 ] Ayon Sinha commented on SOLR-8225: -- This problem is becoming a deal-breaker for any Solr cluster. The larger the cluster becomes, the higher the likelihood of at least one replica being unhealthy/slow/recovering. Right now, as it stands, indexing comes to a grinding halt when one or more replicas are recovering. To begin this fix, we MUST at least add a setting where the leader does not send the update to a recovering replica at all. It should get that update from wherever it's recovering from. [~yo...@apache.org] Can you please comment on the best way to handle this, and we can take this on and submit the patch? This patch and https://issues.apache.org/jira/browse/SOLR-8227 need to be considered together. > Leader should send update requests to replicas in recovery asynchronously > - > > Key: SOLR-8225 > URL: https://issues.apache.org/jira/browse/SOLR-8225 > Project: Solr > Issue Type: Improvement > Components: SolrCloud >Reporter: Timothy Potter > > When a replica goes into recovery, the leader still sends docs to that > replica while it is recovering. What I'm seeing is that the recovering node > is still slow to respond to the leader (at least slower than the healthy > replicas). Thus it would be good if the leader could send the updates to the > recovering replica asynchronously, i.e. the leader will block as it does > today when forwarding updates to healthy / active replicas, but send updates > to recovering replicas async, thus preventing the whole update request from > being slowed down by a potentially degraded replica. > FWIW - I've actually seen this occur in an environment that has more than 3 > replicas per shard. One of the replicas went into recovery and then was much > slower to handle requests than the healthy replicas, but the leader had to > wait for the slowest replica. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
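One possible shape for the behavior proposed above — block on updates to active replicas as today, but track updates to recovering replicas with a future so a slow recovering replica cannot stall the whole request. All class and method names here are illustrative, not Solr's actual distributed-update code:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncForward {
    enum State { ACTIVE, RECOVERING }

    // records which replicas received the update (stand-in for HTTP responses)
    static final List<String> forwarded = Collections.synchronizedList(new ArrayList<>());

    static CompletableFuture<Void> forward(String replica, State state, Executor pool) {
        Runnable send = () -> forwarded.add(replica); // stand-in for the HTTP update
        if (state == State.ACTIVE) {
            send.run();                               // leader blocks, as it does today
            return CompletableFuture.completedFuture(null);
        }
        return CompletableFuture.runAsync(send, pool); // recovering: don't block the request
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        CompletableFuture<Void> f = forward("recovering-replica", State.RECOVERING, pool);
        forward("active-replica", State.ACTIVE, pool);
        f.join();            // failures on the async path still surface eventually
        pool.shutdown();
        System.out.println(forwarded.size()); // 2
    }
}
```

The open design question this sketch glosses over is error handling: if the async send fails, the leader still needs to ask the recovering replica to recover again, so the futures cannot simply be dropped.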
[jira] [Created] (LUCENE-6917) Move NumericField out of core to backwards-codecs
Michael McCandless created LUCENE-6917: -- Summary: Move NumericField out of core to backwards-codecs Key: LUCENE-6917 URL: https://issues.apache.org/jira/browse/LUCENE-6917 Project: Lucene - Core Issue Type: Improvement Reporter: Michael McCandless Assignee: Michael McCandless Fix For: 6.0 DimensionalValues seems to be better across the board (indexing time, indexing size, search-speed, search-time heap required) than NumericField, at least in my testing so far. I think for 6.0 we should move {{IntField}}, {{LongField}}, {{FloatField}}, {{DoubleField}} and {{NumericRangeQuery}} to {{backward-codecs}}, and rename with {{Legacy}} prefix? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-8349) Allow sharing of large in memory data structures across cores
[ https://issues.apache.org/jira/browse/SOLR-8349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15031082#comment-15031082 ] Gus Heck edited comment on SOLR-8349 at 12/2/15 7:53 PM: - Patch implementing general feature (on 6.x). Patch for SOLR-3443 to be attached there shortly. was (Author: gus_heck): Patch implementing general feature. Patch for SOLR-3443 to be attached there shortly. > Allow sharing of large in memory data structures across cores > - > > Key: SOLR-8349 > URL: https://issues.apache.org/jira/browse/SOLR-8349 > Project: Solr > Issue Type: Improvement > Components: Server >Affects Versions: 5.3 >Reporter: Gus Heck > Attachments: SOLR-8349.patch > > > In some cases search components or analysis classes may utilize a large > dictionary or other in-memory structure. When multiple cores are loaded with > identical configurations utilizing this large in memory structure, each core > holds its own copy in memory. This has been noted in the past and a specific > case reported in SOLR-3443. This patch provides a generalized capability, and > if accepted, this capability will then be used to fix SOLR-3443. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
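The general idea can be sketched as a container-level registry that hands out one shared instance per configuration key, so N cores with identical configs hold one copy of a large dictionary instead of N. The names below are illustrative and may not match the attached patch's API:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

public class SharedObjectRegistry {
    // one entry per configuration key, shared by all cores in the container
    private static final ConcurrentHashMap<String, Object> SHARED = new ConcurrentHashMap<>();

    @SuppressWarnings("unchecked")
    static <T> T getOrCreate(String key, Supplier<T> loader) {
        // computeIfAbsent guarantees the loader runs at most once per key,
        // even when several cores load concurrently
        return (T) SHARED.computeIfAbsent(key, k -> loader.get());
    }

    public static void main(String[] args) {
        // two "cores" asking for the same dictionary get the same instance
        Object a = getOrCreate("big-dictionary-v1", () -> new java.util.HashMap<String, String>());
        Object b = getOrCreate("big-dictionary-v1", () -> new java.util.HashMap<String, String>());
        System.out.println(a == b); // true
    }
}
```

The hard part a real implementation must add — and this sketch omits — is lifecycle: reference counting or weak references so the shared structure can be released once the last core using it closes.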
[jira] [Commented] (SOLR-8359) Restrict child classes from using parent logger's state
[ https://issues.apache.org/jira/browse/SOLR-8359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15036483#comment-15036483 ] Mike Drob commented on SOLR-8359: - Not sure it makes sense to always make these cached booleans {{private static final}}. {{UpdateLog.preSoftCommit}} would refresh the values; I think {{HdfsUpdateLog}} now misses out on that, though I haven't confirmed. Logging levels can change dynamically via the Solr UI or through file watchdogs, depending on the impl used. Also, I don't see changes to {{CdcrTransactionLog}} like you mentioned - was that a false positive in your previous comment? > Restrict child classes from using parent logger's state > --- > > Key: SOLR-8359 > URL: https://issues.apache.org/jira/browse/SOLR-8359 > Project: Solr > Issue Type: Sub-task >Reporter: Mike Drob > Fix For: Trunk > > Attachments: SOLR-8359.patch > > > In SOLR-8330 we split up a lot of loggers. However, there are a few classes > that still use their parent's logging state and configuration indirectly. > {{HdfsUpdateLog}} and {{HdfsTransactionLog}} both use their parent class's > cached read of {{boolean debug = log.isDebugEnabled()}}, when they should > check their own loggers instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
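The caching pitfall described in the issue is easy to reproduce in miniature: a subclass that reads a debug flag the parent cached from the parent's own logger sees the wrong value once the two loggers are configured differently. (Solr uses slf4j; java.util.logging stands in here only to keep the sketch self-contained.)

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LoggerCache {
    static class Parent {
        static final Logger log = Logger.getLogger("Parent");
        final boolean debug = log.isLoggable(Level.FINE); // cached from Parent's logger
    }

    static class Child extends Parent {
        static final Logger log = Logger.getLogger("Child"); // shadows Parent.log
        final boolean ownDebug = log.isLoggable(Level.FINE); // what the child should use
    }

    public static void main(String[] args) {
        Logger parentLog = Logger.getLogger("Parent");
        Logger childLog = Logger.getLogger("Child");
        parentLog.setLevel(Level.INFO); // debug off for the parent class
        childLog.setLevel(Level.FINE);  // debug on for the child class

        Child c = new Child();
        System.out.println(c.debug);    // false - inherited cache reflects Parent's config
        System.out.println(c.ownDebug); // true  - child's own logger is enabled
    }
}
```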
[jira] [Assigned] (SOLR-8353) Support regex for skipping license checksums
[ https://issues.apache.org/jira/browse/SOLR-8353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gregory Chanan reassigned SOLR-8353: Assignee: Gregory Chanan > Support regex for skipping license checksums > > > Key: SOLR-8353 > URL: https://issues.apache.org/jira/browse/SOLR-8353 > Project: Solr > Issue Type: Improvement > Components: Build >Reporter: Gregory Chanan >Assignee: Gregory Chanan > Attachments: SOLR-8353.patch > > > It would be useful to be able to specify a regex for license checksums to > skip in the build. Currently there are only two supported values: > 1) skipChecksum (i.e. regex=*) > 2) skipSnapshotsChecksum (i.e. regex=*-SNAPSHOT-*) > A regex would be more flexible and allow testing the entire build while > skipping a more limited set of checksums, e.g.: > a) an individual library (i.e. regex=joda-time*) > b) a suite of libraries (i.e. regex=hadoop*) > We could make skipChecksum and skipSnapshotsChecksum continue to work for > backwards compatbility reasons. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-6917) Move NumericField out of core to backwards-codecs
[ https://issues.apache.org/jira/browse/LUCENE-6917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael McCandless updated LUCENE-6917: --- Attachment: LUCENE-6917.patch This is mostly a "rote" rename, except there was some hair involved because I moved {{precisionStep}} and {{NumericType}} out of core's {{FieldType}} and down into the {{XXXField}} themselves. This also required adding some deps on modules to {{backward-codecs}}, though I was able to cutover at least {{lucene/demo}}'s usage to dimensional values ... > Move NumericField out of core to backwards-codecs > - > > Key: LUCENE-6917 > URL: https://issues.apache.org/jira/browse/LUCENE-6917 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Michael McCandless >Assignee: Michael McCandless > Fix For: 6.0 > > Attachments: LUCENE-6917.patch > > > DimensionalValues seems to be better across the board (indexing time, > indexing size, search-speed, search-time heap required) than NumericField, at > least in my testing so far. > I think for 6.0 we should move {{IntField}}, {{LongField}}, {{FloatField}}, > {{DoubleField}} and {{NumericRangeQuery}} to {{backward-codecs}}, and rename > with {{Legacy}} prefix? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-6595) Improve error response in case distributed collection cmd fails
[ https://issues.apache.org/jira/browse/SOLR-6595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15036518#comment-15036518 ] mugeesh commented on SOLR-6595: --- In the conversation above, nobody explains clearly how to solve it; I am getting the same error in Solr 5.3. Please provide the exact command for creating a collection/core. > Improve error response in case distributed collection cmd fails > --- > > Key: SOLR-6595 > URL: https://issues.apache.org/jira/browse/SOLR-6595 > Project: Solr > Issue Type: Bug > Components: SolrCloud >Affects Versions: 4.10 > Environment: SolrCloud with Client SSL >Reporter: Sindre Fiskaa >Priority: Minor > > Followed the description > https://cwiki.apache.org/confluence/display/solr/Enabling+SSL and generated a > self signed key pair. Configured a few solr-nodes and used the collection api > to create a new collection. -I get error message when specify the nodes with > the createNodeSet param. When I don't use createNodeSet param the collection > gets created without error on random nodes. Could this be a bug related to > the createNodeSet param?- *Update: It failed due to what turned out to be > invalid client certificate on the overseer, and returned the following > response:* > {code:xml} > > 0 name="QTime">185 > > org.apache.solr.client.solrj.SolrServerException:IOException occured > when talking to server at: https://vt-searchln04:443/solr > > > {code} > *Update: Three problems:* > # Status=0 when the cmd did not succeed (only ZK was updated, but cores not > created due to failing to connect to shard nodes to talk to core admin API). > # The error printed does not tell which action failed. Would be helpful to > either get the msg from the original exception or at least some message > saying "Failed to create core, see log on Overseer > # State of collection is not clean since it exists as far as ZK is concerned > but cores not created. Thus retrying the CREATECOLLECTION cmd would fail. 
> Should Overseer detect error in distributed cmds and rollback changes already > made in ZK? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: Solr Ref Guide for 5.4
Will do in an hour! Uwe Am 2. Dezember 2015 19:26:07 MEZ, schrieb Cassandra Targett: >I'll offer to manage the Ref Guide release for 5.4. > >Uwe, will you update the CWIKI javadoc links to point to the 5_4_0 >paths? >As described here: >https://cwiki.apache.org/confluence/display/solr/Internal+-+How+Javadoc+Links+Work#Internal-HowJavadocLinksWork-GuidePublishing:ChangingVersionsInAllLinks >. > >Hoss and others have already done a lot of work organizing the TODO >list >and getting most of the new features & changes documented, but we >always >miss a few items in each release and have a backlog of smaller changes. >Please take a look and help out where you can: >https://cwiki.apache.org/confluence/display/solr/Internal+-+TODO+List. > >I'd like to get an RC up as soon as there is an RC for Lucene/Solr, and >we're pretty close to having one of those. If you do take on some >edits, >please communicate (in the TODO list or reply to this thread) so we >know >where you're at if a Lucene/Solr RC is released. > >I've put up a pre-RC of the Ref Guide at >http://people.apache.org/~ctargett/SolrRefGuide-drafts/ApacheSolrRefGuide-pre5.4.pdf >for people to take a look in advance of an RC. > >(I see one problem already that I will work on fixing, which is the nav >that appears in the web UI at the bottom of every page is now showing >up in >the PDF. That is supposed to be hidden, so something has changed in >Confluence between releases.) > >Thanks, >Cassandra -- Uwe Schindler H.-H.-Meier-Allee 63, 28213 Bremen http://www.thetaphi.de
[jira] [Commented] (SOLR-8353) Support regex for skipping license checksums
[ https://issues.apache.org/jira/browse/SOLR-8353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15036351#comment-15036351 ] Mark Miller commented on SOLR-8353: --- Patch looks good. A couple weeks ago I tried to do this via simple regex exclusion in the ant build file, but quickly learned it was going to take this route and punted. > Support regex for skipping license checksums > > > Key: SOLR-8353 > URL: https://issues.apache.org/jira/browse/SOLR-8353 > Project: Solr > Issue Type: Improvement > Components: Build >Reporter: Gregory Chanan > Attachments: SOLR-8353.patch > > > It would be useful to be able to specify a regex for license checksums to > skip in the build. Currently there are only two supported values: > 1) skipChecksum (i.e. regex=*) > 2) skipSnapshotsChecksum (i.e. regex=*-SNAPSHOT-*) > A regex would be more flexible and allow testing the entire build while > skipping a more limited set of checksums, e.g.: > a) an individual library (i.e. regex=joda-time*) > b) a suite of libraries (i.e. regex=hadoop*) > We could make skipChecksum and skipSnapshotsChecksum continue to work for > backwards compatbility reasons. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_66) - Build # 5439 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5439/ Java: 64bit/jdk1.8.0_66 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: org.apache.solr.handler.component.SpellCheckComponentTest.test Error Message: List size mismatch @ spellcheck/suggestions Stack Trace: java.lang.RuntimeException: List size mismatch @ spellcheck/suggestions at __randomizedtesting.SeedInfo.seed([110D525A21D16B1:8944EAFF0CE17B49]:0) at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:837) at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:784) at org.apache.solr.handler.component.SpellCheckComponentTest.test(SpellCheckComponentTest.java:96) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) Build Log: [...truncated 10826 lines...] [junit4] Suite: org.apache.solr.handler.component.SpellCheckComponentTest [junit4] 2> Creating dataDir:
[JENKINS] Lucene-Solr-NightlyTests-5.4 - Build # 7 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.4/7/ 1 tests failed. FAILED: org.apache.solr.handler.TestReplicationHandler.doTestStressReplication Error Message: timed out waiting for collection1 startAt time to exceed: Wed Dec 02 15:04:23 BRT 2015 Stack Trace: java.lang.AssertionError: timed out waiting for collection1 startAt time to exceed: Wed Dec 02 15:04:23 BRT 2015 at __randomizedtesting.SeedInfo.seed([8CB3B583F8651A0A:5718B545FD4D73B9]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.handler.TestReplicationHandler.watchCoreStartAt(TestReplicationHandler.java:1415) at org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:767) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) Build Log: [...truncated 10744 lines...] [junit4] Suite: org.apache.solr.handler.TestReplicationHandler [junit4] 2> Creating dataDir:
[jira] [Assigned] (SOLR-2556) Spellcheck component not returned with numeric queries
[ https://issues.apache.org/jira/browse/SOLR-2556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] James Dyer reassigned SOLR-2556: Assignee: James Dyer > Spellcheck component not returned with numeric queries > -- > > Key: SOLR-2556 > URL: https://issues.apache.org/jira/browse/SOLR-2556 > Project: Solr > Issue Type: Bug > Components: spellchecker >Affects Versions: 3.1 >Reporter: Markus Jelsma >Assignee: James Dyer > Attachments: SOLR-2556.patch > > > The spell check component's output is not written when sending queries that > consist of numbers only. Clients depending on the availability of the > spellcheck output need to check if the output is actually there. > See also: > http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201105.mbox/%3c201105301607.41956.markus.jel...@openindex.io%3E -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-2556) Spellcheck component not returned with numeric queries
[ https://issues.apache.org/jira/browse/SOLR-2556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] James Dyer updated SOLR-2556: - Attachment: SOLR-2556.patch Here is a patch with just failing unit tests. I think the best way to fix this is to make SpellingQueryConverter more robust to recognize digits as terms subject to corrections / suggestions. I do think the complete absence of the spellcheck section on the response was probably by design. If there are no terms in the query that can be corrected, it leaves it off altogether. > Spellcheck component not returned with numeric queries > -- > > Key: SOLR-2556 > URL: https://issues.apache.org/jira/browse/SOLR-2556 > Project: Solr > Issue Type: Bug > Components: spellchecker >Affects Versions: 3.1 >Reporter: Markus Jelsma > Attachments: SOLR-2556.patch > > > The spell check component's output is not written when sending queries that > consist of numbers only. Clients depending on the availability of the > spellcheck output need to check if the output is actually there. > See also: > http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201105.mbox/%3c201105301607.41956.markus.jel...@openindex.io%3E -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
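James Dyer's suggested direction above — making SpellingQueryConverter treat runs of digits as terms subject to correction — can be sketched with plain JDK regexes. This is an illustrative stand-in, not Solr's actual SpellingQueryConverter:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative converter: unlike a letters-only pattern, this one also accepts
// Unicode decimal digits (\p{Nd}), so an all-numeric query like "1984" still
// yields a term the spellcheck component can try to correct, and the
// spellcheck section is emitted instead of being silently dropped.
class NumericAwareQueryConverter {
    private static final Pattern TERM = Pattern.compile("[\\p{L}\\p{Nd}_]+");

    static List<String> convert(String query) {
        List<String> terms = new ArrayList<>();
        Matcher m = TERM.matcher(query);
        while (m.find()) {
            terms.add(m.group());
        }
        return terms;
    }
}
```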
[jira] [Updated] (SOLR-8359) Restrict child classes from using parent logger's state
[ https://issues.apache.org/jira/browse/SOLR-8359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Gerlowski updated SOLR-8359: -- Attachment: SOLR-8359.patch Tests passed; here's the patch. > Restrict child classes from using parent logger's state > --- > > Key: SOLR-8359 > URL: https://issues.apache.org/jira/browse/SOLR-8359 > Project: Solr > Issue Type: Sub-task >Reporter: Mike Drob > Fix For: Trunk > > Attachments: SOLR-8359.patch > > > In SOLR-8330 we split up a lot of loggers. However, there are a few classes > that still use their parent's logging state and configuration indirectly. > {{HdfsUpdateLog}} and {{HdfsTransactionLog}} both use their parent class > cached read of {{boolean debug = log.isDebugEnabled()}}, when they should > check their own loggers instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
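The anti-pattern described in the issue, and the suggested fix, can be sketched in self-contained form (java.util.logging is used here so the example runs without Solr's slf4j dependency; the class names only mirror Solr's, they are not its code):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Anti-pattern from the issue: the parent caches a flag read from ITS logger,
// and subclasses silently inherit that cached value.
class UpdateLogSketch {
    protected static final Logger log = Logger.getLogger(UpdateLogSketch.class.getName());
    protected final boolean debug = log.isLoggable(Level.FINE); // cached once, from the parent's logger
}

class HdfsUpdateLogSketch extends UpdateLogSketch {
    // Fix: the subclass declares its own logger and consults it directly, so
    // enabling FINE for just this class actually takes effect here.
    private static final Logger log = Logger.getLogger(HdfsUpdateLogSketch.class.getName());

    boolean debugEnabledNow() {
        return log.isLoggable(Level.FINE);
    }
}
```

Enabling FINE for the subclass's logger flips {{debugEnabledNow()}} but leaves the inherited cached {{debug}} flag untouched, which is exactly the mismatch the issue describes.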
[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_66) - Build # 14803 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14803/ Java: 64bit/jdk1.8.0_66 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: org.apache.solr.cloud.HttpPartitionTest.test Error Message: Didn't see all replicas for shard shard1 in c8n_1x2 come up within 3 ms! ClusterState: { "collection1":{ "replicationFactor":"1", "shards":{ "shard1":{ "range":"8000-", "state":"active", "replicas":{"core_node2":{ "core":"collection1", "base_url":"http://127.0.0.1:34836/hf;, "node_name":"127.0.0.1:34836_hf", "state":"active", "leader":"true"}}}, "shard2":{ "range":"0-7fff", "state":"active", "replicas":{ "core_node1":{ "core":"collection1", "base_url":"http://127.0.0.1:39538/hf;, "node_name":"127.0.0.1:39538_hf", "state":"active", "leader":"true"}, "core_node3":{ "core":"collection1", "base_url":"http://127.0.0.1:46331/hf;, "node_name":"127.0.0.1:46331_hf", "state":"active", "router":{"name":"compositeId"}, "maxShardsPerNode":"1", "autoAddReplicas":"false", "autoCreated":"true"}, "control_collection":{ "replicationFactor":"1", "shards":{"shard1":{ "range":"8000-7fff", "state":"active", "replicas":{"core_node1":{ "core":"collection1", "base_url":"http://127.0.0.1:54933/hf;, "node_name":"127.0.0.1:54933_hf", "state":"active", "leader":"true", "router":{"name":"compositeId"}, "maxShardsPerNode":"1", "autoAddReplicas":"false", "autoCreated":"true"}, "c8n_1x2":{ "replicationFactor":"2", "shards":{"shard1":{ "range":"8000-7fff", "state":"active", "replicas":{ "core_node1":{ "core":"c8n_1x2_shard1_replica2", "base_url":"http://127.0.0.1:54933/hf;, "node_name":"127.0.0.1:54933_hf", "state":"recovering"}, "core_node2":{ "core":"c8n_1x2_shard1_replica1", "base_url":"http://127.0.0.1:34836/hf;, "node_name":"127.0.0.1:34836_hf", "state":"active", "leader":"true", "router":{"name":"compositeId"}, "maxShardsPerNode":"1", "autoAddReplicas":"false"}, "collMinRf_1x3":{ "replicationFactor":"3", "shards":{"shard1":{ "range":"8000-7fff", "state":"active", 
"replicas":{ "core_node1":{ "core":"collMinRf_1x3_shard1_replica2", "base_url":"http://127.0.0.1:46331/hf;, "node_name":"127.0.0.1:46331_hf", "state":"active", "leader":"true"}, "core_node2":{ "core":"collMinRf_1x3_shard1_replica3", "base_url":"http://127.0.0.1:39538/hf;, "node_name":"127.0.0.1:39538_hf", "state":"active"}, "core_node3":{ "core":"collMinRf_1x3_shard1_replica1", "base_url":"http://127.0.0.1:54933/hf;, "node_name":"127.0.0.1:54933_hf", "state":"active", "router":{"name":"compositeId"}, "maxShardsPerNode":"1", "autoAddReplicas":"false"}} Stack Trace: java.lang.AssertionError: Didn't see all replicas for shard shard1 in c8n_1x2 come up within 3 ms! ClusterState: { "collection1":{ "replicationFactor":"1", "shards":{ "shard1":{ "range":"8000-", "state":"active", "replicas":{"core_node2":{ "core":"collection1", "base_url":"http://127.0.0.1:34836/hf;, "node_name":"127.0.0.1:34836_hf", "state":"active", "leader":"true"}}}, "shard2":{ "range":"0-7fff", "state":"active", "replicas":{ "core_node1":{ "core":"collection1", "base_url":"http://127.0.0.1:39538/hf;, "node_name":"127.0.0.1:39538_hf", "state":"active", "leader":"true"}, "core_node3":{ "core":"collection1", "base_url":"http://127.0.0.1:46331/hf;, "node_name":"127.0.0.1:46331_hf", "state":"active", "router":{"name":"compositeId"}, "maxShardsPerNode":"1", "autoAddReplicas":"false", "autoCreated":"true"}, "control_collection":{ "replicationFactor":"1", "shards":{"shard1":{ "range":"8000-7fff", "state":"active", "replicas":{"core_node1":{ "core":"collection1", "base_url":"http://127.0.0.1:54933/hf;, "node_name":"127.0.0.1:54933_hf", "state":"active", "leader":"true", "router":{"name":"compositeId"},
[jira] [Created] (SOLR-8361) Failed to create collection in Solrcloud
mugeesh created SOLR-8361: - Summary: Failed to create collection in Solrcloud Key: SOLR-8361 URL: https://issues.apache.org/jira/browse/SOLR-8361 Project: Solr Issue Type: Bug Components: SolrCloud Affects Versions: 5.3 Reporter: mugeesh Fix For: 5.3 Hi, I am using 3 servers: solr1, solr2 and solr3. I have set up 3 instances of ZooKeeper. When I create a collection with 1 shard and 2 replicas, it works fine. But when I try to create a collection with 1 shard and 3 replicas using this command: bin/solr create -c abc -n abcr -shards 1 -replicationFactor 3 I get the error below: ERROR: Failed to create collection 'abc' due to: org.apache.solr.client.solrj.SolrServerException:IOException occured when talking to server at: http://xx.yyy.zz:8985/solr (server solr3). I started that server using this command: bin/solr start -cloud -p 8985 -s "example/cloud/node1/solr" -z solr2:2181,solr2:2182,solr3:2183 I am unable to solve this issue; I thought it might be a bug in Solr 5.3. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-8353) Support regex for skipping license checksums
[ https://issues.apache.org/jira/browse/SOLR-8353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gregory Chanan updated SOLR-8353: - Attachment: SOLR-8353.patch Thanks for taking a look, Mark. New patch fixes one of the log messages. > Support regex for skipping license checksums > > > Key: SOLR-8353 > URL: https://issues.apache.org/jira/browse/SOLR-8353 > Project: Solr > Issue Type: Improvement > Components: Build >Reporter: Gregory Chanan >Assignee: Gregory Chanan > Attachments: SOLR-8353.patch, SOLR-8353.patch > > > It would be useful to be able to specify a regex for license checksums to > skip in the build. Currently there are only two supported values: > 1) skipChecksum (i.e. regex=*) > 2) skipSnapshotsChecksum (i.e. regex=*-SNAPSHOT-*) > A regex would be more flexible and allow testing the entire build while > skipping a more limited set of checksums, e.g.: > a) an individual library (i.e. regex=joda-time*) > b) a suite of libraries (i.e. regex=hadoop*) > We could make skipChecksum and skipSnapshotsChecksum continue to work for > backwards compatibility reasons. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
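A minimal sketch of the proposed check, under the assumption that the build would compare each dependency's file name against a single user-supplied regex (the class name here is hypothetical, not part of the patch):

```java
import java.util.regex.Pattern;

// Hypothetical sketch: one user-supplied regex decides which dependency file
// names may skip license checksum validation. The two old switches map onto
// it naturally: skipChecksum ~ ".*", skipSnapshotsChecksum ~ ".*-SNAPSHOT.*".
class ChecksumSkipper {
    private final Pattern skip;

    ChecksumSkipper(String regex) {
        this.skip = Pattern.compile(regex);
    }

    boolean skipChecksum(String jarName) {
        return skip.matcher(jarName).matches();
    }
}
```

This gives the finer granularity the issue asks for: a regex like {{hadoop.*}} skips a whole suite of libraries while every other checksum is still verified.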
[jira] [Commented] (SOLR-8308) Core gets inaccessible after RENAME operation with special characters
[ https://issues.apache.org/jira/browse/SOLR-8308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15035491#comment-15035491 ] Alan Woodward commented on SOLR-8308: - Hi Erik, could you move the name checks to a separate validateCoreName() method, and call it from both createCore() and rename(), rather than in registerCore()? At the moment if the validation fails after creation there's still a SolrCore object in the container taking up resources, it's just not registered in the cluster state. > Core gets inaccessible after RENAME operation with special characters > - > > Key: SOLR-8308 > URL: https://issues.apache.org/jira/browse/SOLR-8308 > Project: Solr > Issue Type: Bug >Reporter: Adam Johnson > Attachments: SOLR-8308.patch, SOLR-8308.patch > > > You can rename a core using the following modified URL > https://SOLR:PORT/solr/admin/cores?wt=json=false=RENAME=test_app_shared2_replica2=%3Csvg+onload%3Dalert(1)%3E&_=1445468005152. > The core becomes inaccessible / unusable. There should be more form > validation to the core name assignment -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
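Alan's suggested refactor — validating before any SolrCore object is constructed, on both the create and rename paths — might look like this sketch (method and exception names are illustrative, not Solr's actual API):

```java
// Sketch of the suggested refactor: one shared check, invoked on both paths
// BEFORE any core object exists, so an invalid name never allocates resources
// the way a failure inside registerCore() would.
class CoreAdminSketch {
    static void validateCoreName(String name) {
        if (name == null || !name.matches("[\\._A-Za-z0-9]+")) {
            throw new IllegalArgumentException("Invalid core name: " + name);
        }
    }

    void createCore(String name) {
        validateCoreName(name); // fail fast, before constructing the core
        // ... construct and register the core ...
    }

    void rename(String oldName, String newName) {
        validateCoreName(newName); // same check on the RENAME path
        // ... swap registry entries ...
    }
}
```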
[jira] [Commented] (SOLR-8330) Restrict logger visibility throughout the codebase to private so that only the file that declares it can use it
[ https://issues.apache.org/jira/browse/SOLR-8330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15035526#comment-15035526 ] Uwe Schindler commented on SOLR-8330: - +1 Thanks! > Restrict logger visibility throughout the codebase to private so that only > the file that declares it can use it > --- > > Key: SOLR-8330 > URL: https://issues.apache.org/jira/browse/SOLR-8330 > Project: Solr > Issue Type: Sub-task >Affects Versions: Trunk >Reporter: Jason Gerlowski >Assignee: Anshum Gupta > Labels: logging > Fix For: Trunk > > Attachments: SOLR-8330-combined.patch, SOLR-8330-detector.patch, > SOLR-8330-detector.patch, SOLR-8330.patch, SOLR-8330.patch, SOLR-8330.patch, > SOLR-8330.patch, SOLR-8330.patch, SOLR-8330.patch, SOLR-8330.patch > > > As Mike Drob pointed out in Solr-8324, many loggers in Solr are > unintentionally shared between classes. Many instances of this are caused by > overzealous copy-paste. This can make debugging tougher, as messages appear > to come from an incorrect location. > As discussed in the comments on SOLR-8324, there also might be legitimate > reasons for sharing loggers between classes. Where any ambiguity exists, > these instances shouldn't be touched. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
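One copy-paste-proof way to declare a per-class logger — guarding against exactly the overzealous copy-paste this issue describes — is to derive the class name from the declaration site via MethodHandles, so pasting the line into another file cannot silently reuse the old class's logger (shown with java.util.logging to stay self-contained; the same lookup can be paired with slf4j's LoggerFactory):

```java
import java.lang.invoke.MethodHandles;
import java.util.logging.Logger;

// MethodHandles.lookup().lookupClass() resolves to the class that contains
// this line, so the declaration is identical in every file yet always names
// the correct class -- no class literal to forget to update when copying.
class MethodHandlesLoggerSketch {
    private static final Logger log =
        Logger.getLogger(MethodHandles.lookup().lookupClass().getName());

    static String loggerName() {
        return log.getName();
    }
}
```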
Re: [JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_66) - Build # 5437 - Still Failing!
I committed a fix: it was a fun failure, where the randomized merge policy it got had such a tiny maxMergedSegmentMB that no merging ever took place and so the test never saw the merge exceptions it was waiting for! Mike McCandless http://blog.mikemccandless.com On Wed, Dec 2, 2015 at 4:15 AM, Michael McCandless wrote: > Thanks Dawid, I'll dig ... > > Mike McCandless > > http://blog.mikemccandless.com > > > On Wed, Dec 2, 2015 at 2:48 AM, Dawid Weiss wrote: >> This reproduces. Looks like it picks CheapBastard codec and it doesn't >> play too nice with the tests? >> >> Dawid >> >> On Wed, Dec 2, 2015 at 1:31 AM, Policeman Jenkins Server >> wrote: >>> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5437/ >>> Java: 64bit/jdk1.8.0_66 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC >>> >>> 2 tests failed. >>> FAILED: >>> org.apache.lucene.index.TestIndexWriterExceptions.testMergeExceptionIsTragic >>> >>> Error Message: >>> Test abandoned because suite timeout was reached. >>> >>> Stack Trace: >>> java.lang.Exception: Test abandoned because suite timeout was reached. >>> at __randomizedtesting.SeedInfo.seed([8DA8EC6B89D77C2E]:0) >>> >>> >>> FAILED: >>> junit.framework.TestSuite.org.apache.lucene.index.TestIndexWriterExceptions >>> >>> Error Message: >>> Suite timeout exceeded (>= 720 msec). >>> >>> Stack Trace: >>> java.lang.Exception: Suite timeout exceeded (>= 720 msec). >>> at __randomizedtesting.SeedInfo.seed([8DA8EC6B89D77C2E]:0) >>> >>> >>> >>> >>> Build Log: >>> [...truncated 1819 lines...] >>>[junit4] Suite: org.apache.lucene.index.TestIndexWriterExceptions >>>[junit4] 2> Noll 02, 2015 2:31:19 A.M. 
>>> com.carrotsearch.randomizedtesting.ThreadLeakControl$2 evaluate >>>[junit4] 2> WARNING: Suite execution timed out: >>> org.apache.lucene.index.TestIndexWriterExceptions >>>[junit4] 2>1) Thread[id=1489, >>> name=TEST-TestIndexWriterExceptions.testMergeExceptionIsTragic-seed#[8DA8EC6B89D77C2E], >>> state=RUNNABLE, group=TGRP-TestIndexWriterExceptions] >>>[junit4] 2> at java.util.HashMap.hash(HashMap.java:338) >>>[junit4] 2> at java.util.HashMap.put(HashMap.java:611) >>>[junit4] 2> at java.util.HashSet.add(HashSet.java:219) >>>[junit4] 2> at >>> org.apache.lucene.index.IndexWriter$ReaderPool.noDups(IndexWriter.java:679) >>>[junit4] 2> at >>> org.apache.lucene.index.IndexWriter$ReaderPool.get(IndexWriter.java:668) >>>[junit4] 2> at >>> org.apache.lucene.index.IndexWriter.numDeletedDocs(IndexWriter.java:694) >>>[junit4] 2> at >>> org.apache.lucene.index.MergePolicy.size(MergePolicy.java:451) >>>[junit4] 2> at >>> org.apache.lucene.index.TieredMergePolicy$SegmentByteSizeDescending.compare(TieredMergePolicy.java:249) >>>[junit4] 2> at >>> org.apache.lucene.index.TieredMergePolicy$SegmentByteSizeDescending.compare(TieredMergePolicy.java:238) >>>[junit4] 2> at java.util.TimSort.mergeHi(TimSort.java:837) >>>[junit4] 2> at java.util.TimSort.mergeAt(TimSort.java:516) >>>[junit4] 2> at >>> java.util.TimSort.mergeForceCollapse(TimSort.java:457) >>>[junit4] 2> at java.util.TimSort.sort(TimSort.java:254) >>>[junit4] 2> at java.util.Arrays.sort(Arrays.java:1512) >>>[junit4] 2> at java.util.ArrayList.sort(ArrayList.java:1454) >>>[junit4] 2> at java.util.Collections.sort(Collections.java:175) >>>[junit4] 2> at >>> org.apache.lucene.index.TieredMergePolicy.findMerges(TieredMergePolicy.java:292) >>>[junit4] 2> at >>> org.apache.lucene.index.IndexWriter.updatePendingMerges(IndexWriter.java:1952) >>>[junit4] 2> at >>> org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:1916) >>>[junit4] 2> at >>> org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:454) 
>>>[junit4] 2> at >>> org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:86) >>>[junit4] 2> at >>> org.apache.lucene.index.TestIndexWriterExceptions.testMergeExceptionIsTragic(TestIndexWriterExceptions.java:2340) >>>[junit4] 2> at >>> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) >>>[junit4] 2> at >>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) >>>[junit4] 2> at >>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) >>>[junit4] 2> at java.lang.reflect.Method.invoke(Method.java:497) >>>[junit4] 2> at >>> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) >>>[junit4] 2> at >>> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) >>>
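Mike's diagnosis earlier in this thread — a randomized, near-zero maxMergedSegmentMB cap meaning no merge is ever selected — can be illustrated with a toy merge picker (a deliberate simplification, not TieredMergePolicy's actual selection logic):

```java
import java.util.ArrayList;
import java.util.List;

// Toy merge picker: like TieredMergePolicy's size cap, a merge is only
// selected if the merged segment would stay under maxMergedSegmentMb. With a
// randomized, near-zero cap every candidate is rejected, so merging never
// happens -- and a test waiting for merge-time exceptions waits forever.
class TinyCapMergeDemo {
    static List<int[]> pickMerges(List<Integer> segmentSizesMb, double maxMergedSegmentMb) {
        List<int[]> merges = new ArrayList<>();
        for (int i = 0; i + 1 < segmentSizesMb.size(); i += 2) {
            if (segmentSizesMb.get(i) + segmentSizesMb.get(i + 1) <= maxMergedSegmentMb) {
                merges.add(new int[] {i, i + 1}); // merge this adjacent pair
            }
        }
        return merges;
    }
}
```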
[jira] [Commented] (SOLR-8308) XSS vulnerability
[ https://issues.apache.org/jira/browse/SOLR-8308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15035456#comment-15035456 ] Uwe Schindler commented on SOLR-8308: - Hi, we should rename this issue (summary title) so it does not incorrectly mention XSS. This is not an XSS issue, so we should be careful about alarming people on the release. There is no security-related issue involved. That the core becomes inaccessible is the real bug. > XSS vulnerability > - > > Key: SOLR-8308 > URL: https://issues.apache.org/jira/browse/SOLR-8308 > Project: Solr > Issue Type: Bug >Reporter: Adam Johnson > Attachments: SOLR-8308.patch, SOLR-8308.patch > > > You can rename a core using the following modified URL > https://SOLR:PORT/solr/admin/cores?wt=json=false=RENAME=test_app_shared2_replica2=%3Csvg+onload%3Dalert(1)%3E&_=1445468005152. > The core becomes inaccessible / unusable. There should be more form > validation to the core name assignment -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-8308) Core gets inaccessible after RENAME operation with special characters
[ https://issues.apache.org/jira/browse/SOLR-8308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler updated SOLR-8308: Summary: Core gets inaccessible after RENAME operation with special characters (was: XSS vulnerability) > Core gets inaccessible after RENAME operation with special characters > - > > Key: SOLR-8308 > URL: https://issues.apache.org/jira/browse/SOLR-8308 > Project: Solr > Issue Type: Bug >Reporter: Adam Johnson > Attachments: SOLR-8308.patch, SOLR-8308.patch > > > You can rename a core using the following modified URL > https://SOLR:PORT/solr/admin/cores?wt=json=false=RENAME=test_app_shared2_replica2=%3Csvg+onload%3Dalert(1)%3E&_=1445468005152. > The core becomes inaccessible / unusable. There should be more form > validation to the core name assignment -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 676 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/676/ 1 tests failed. FAILED: org.apache.solr.cloud.TestDistribDocBasedVersion.test Error Message: expected:<409> but was:<500> Stack Trace: java.lang.AssertionError: expected:<409> but was:<500> at __randomizedtesting.SeedInfo.seed([D0D0A38C017B0182:58849C56AF876C7A]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at org.junit.Assert.assertEquals(Assert.java:456) at org.apache.solr.cloud.TestDistribDocBasedVersion.vaddFail(TestDistribDocBasedVersion.java:266) at org.apache.solr.cloud.TestDistribDocBasedVersion.doTestHardFail(TestDistribDocBasedVersion.java:131) at org.apache.solr.cloud.TestDistribDocBasedVersion.doTestHardFail(TestDistribDocBasedVersion.java:124) at org.apache.solr.cloud.TestDistribDocBasedVersion.test(TestDistribDocBasedVersion.java:101) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:964) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:939) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at
[jira] [Updated] (SOLR-8330) Restrict logger visibility throughout the codebase to private so that only the file that declares it can use it
[ https://issues.apache.org/jira/browse/SOLR-8330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anshum Gupta updated SOLR-8330: --- Attachment: SOLR-8330.patch Patch with a few more fixes that I came across while running ant clean, test, validate, and precommit. The tests pass and all looks good. I'll commit this now. Might have to deal with branch_5x merge next. > Restrict logger visibility throughout the codebase to private so that only > the file that declares it can use it > --- > > Key: SOLR-8330 > URL: https://issues.apache.org/jira/browse/SOLR-8330 > Project: Solr > Issue Type: Sub-task >Affects Versions: Trunk >Reporter: Jason Gerlowski >Assignee: Anshum Gupta > Labels: logging > Fix For: Trunk > > Attachments: SOLR-8330-combined.patch, SOLR-8330-detector.patch, > SOLR-8330-detector.patch, SOLR-8330.patch, SOLR-8330.patch, SOLR-8330.patch, > SOLR-8330.patch, SOLR-8330.patch, SOLR-8330.patch, SOLR-8330.patch > > > As Mike Drob pointed out in Solr-8324, many loggers in Solr are > unintentionally shared between classes. Many instances of this are caused by > overzealous copy-paste. This can make debugging tougher, as messages appear > to come from an incorrect location. > As discussed in the comments on SOLR-8324, there also might be legitimate > reasons for sharing loggers between classes. Where any ambiguity exists, > these instances shouldn't be touched. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: [JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_66) - Build # 5437 - Still Failing!
Thanks Dawid, I'll dig ... Mike McCandless http://blog.mikemccandless.com On Wed, Dec 2, 2015 at 2:48 AM, Dawid Weiss wrote: > This reproduces. Looks like it picks CheapBastard codec and it doesn't > play too nice with the tests? > > Dawid > > On Wed, Dec 2, 2015 at 1:31 AM, Policeman Jenkins Server > wrote: >> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5437/ >> Java: 64bit/jdk1.8.0_66 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC >> >> 2 tests failed. >> FAILED: >> org.apache.lucene.index.TestIndexWriterExceptions.testMergeExceptionIsTragic >> >> Error Message: >> Test abandoned because suite timeout was reached. >> >> Stack Trace: >> java.lang.Exception: Test abandoned because suite timeout was reached. >> at __randomizedtesting.SeedInfo.seed([8DA8EC6B89D77C2E]:0) >> >> >> FAILED: >> junit.framework.TestSuite.org.apache.lucene.index.TestIndexWriterExceptions >> >> Error Message: >> Suite timeout exceeded (>= 720 msec). >> >> Stack Trace: >> java.lang.Exception: Suite timeout exceeded (>= 720 msec). >> at __randomizedtesting.SeedInfo.seed([8DA8EC6B89D77C2E]:0) >> >> >> >> >> Build Log: >> [...truncated 1819 lines...] >>[junit4] Suite: org.apache.lucene.index.TestIndexWriterExceptions >>[junit4] 2> Noll 02, 2015 2:31:19 A.M. 
>> com.carrotsearch.randomizedtesting.ThreadLeakControl$2 evaluate >>[junit4] 2> WARNING: Suite execution timed out: >> org.apache.lucene.index.TestIndexWriterExceptions >>[junit4] 2>1) Thread[id=1489, >> name=TEST-TestIndexWriterExceptions.testMergeExceptionIsTragic-seed#[8DA8EC6B89D77C2E], >> state=RUNNABLE, group=TGRP-TestIndexWriterExceptions] >>[junit4] 2> at java.util.HashMap.hash(HashMap.java:338) >>[junit4] 2> at java.util.HashMap.put(HashMap.java:611) >>[junit4] 2> at java.util.HashSet.add(HashSet.java:219) >>[junit4] 2> at >> org.apache.lucene.index.IndexWriter$ReaderPool.noDups(IndexWriter.java:679) >>[junit4] 2> at >> org.apache.lucene.index.IndexWriter$ReaderPool.get(IndexWriter.java:668) >>[junit4] 2> at >> org.apache.lucene.index.IndexWriter.numDeletedDocs(IndexWriter.java:694) >>[junit4] 2> at >> org.apache.lucene.index.MergePolicy.size(MergePolicy.java:451) >>[junit4] 2> at >> org.apache.lucene.index.TieredMergePolicy$SegmentByteSizeDescending.compare(TieredMergePolicy.java:249) >>[junit4] 2> at >> org.apache.lucene.index.TieredMergePolicy$SegmentByteSizeDescending.compare(TieredMergePolicy.java:238) >>[junit4] 2> at java.util.TimSort.mergeHi(TimSort.java:837) >>[junit4] 2> at java.util.TimSort.mergeAt(TimSort.java:516) >>[junit4] 2> at >> java.util.TimSort.mergeForceCollapse(TimSort.java:457) >>[junit4] 2> at java.util.TimSort.sort(TimSort.java:254) >>[junit4] 2> at java.util.Arrays.sort(Arrays.java:1512) >>[junit4] 2> at java.util.ArrayList.sort(ArrayList.java:1454) >>[junit4] 2> at java.util.Collections.sort(Collections.java:175) >>[junit4] 2> at >> org.apache.lucene.index.TieredMergePolicy.findMerges(TieredMergePolicy.java:292) >>[junit4] 2> at >> org.apache.lucene.index.IndexWriter.updatePendingMerges(IndexWriter.java:1952) >>[junit4] 2> at >> org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:1916) >>[junit4] 2> at >> org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:454) >>[junit4] 2> at >> 
org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:86) >>[junit4] 2> at >> org.apache.lucene.index.TestIndexWriterExceptions.testMergeExceptionIsTragic(TestIndexWriterExceptions.java:2340) >>[junit4] 2> at >> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) >>[junit4] 2> at >> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) >>[junit4] 2> at >> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) >>[junit4] 2> at java.lang.reflect.Method.invoke(Method.java:497) >>[junit4] 2> at >> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) >>[junit4] 2> at >> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) >>[junit4] 2> at >> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) >>[junit4] 2> at >> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) >>[junit4] 2> at >> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) >>[junit4] 2> at >>
[jira] [Commented] (SOLR-8330) Restrict logger visibility throughout the codebase to private so that only the file that declares it can use it
[ https://issues.apache.org/jira/browse/SOLR-8330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15035524#comment-15035524 ] ASF subversion and git services commented on SOLR-8330: --- Commit 1717590 from [~anshumg] in branch 'dev/trunk' [ https://svn.apache.org/r1717590 ] SOLR-8330: Standardize and fix logger creation and usage so that they aren't shared across source files. > Restrict logger visibility throughout the codebase to private so that only > the file that declares it can use it > --- > > Key: SOLR-8330 > URL: https://issues.apache.org/jira/browse/SOLR-8330 > Project: Solr > Issue Type: Sub-task >Affects Versions: Trunk >Reporter: Jason Gerlowski >Assignee: Anshum Gupta > Labels: logging > Fix For: Trunk > > Attachments: SOLR-8330-combined.patch, SOLR-8330-detector.patch, > SOLR-8330-detector.patch, SOLR-8330.patch, SOLR-8330.patch, SOLR-8330.patch, > SOLR-8330.patch, SOLR-8330.patch, SOLR-8330.patch, SOLR-8330.patch > > > As Mike Drob pointed out in Solr-8324, many loggers in Solr are > unintentionally shared between classes. Many instances of this are caused by > overzealous copy-paste. This can make debugging tougher, as messages appear > to come from an incorrect location. > As discussed in the comments on SOLR-8324, there also might be legitimate > reasons for sharing loggers between classes. Where any ambiguity exists, > these instances shouldn't be touched. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8308) Core gets inaccessible after RENAME operation with special characters
[ https://issues.apache.org/jira/browse/SOLR-8308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15035532#comment-15035532 ] Uwe Schindler commented on SOLR-8308: - bq. {{Pattern p = Pattern.compile("^[\\._A-Za-z0-9]*$");}} You should define the pattern as {{static final}} on the class. Also, the {{^}} and {{$}} are not needed, because you use {{Matcher#matches()}}, which always matches the whole String. > Core gets inaccessible after RENAME operation with special characters > - > > Key: SOLR-8308 > URL: https://issues.apache.org/jira/browse/SOLR-8308 > Project: Solr > Issue Type: Bug >Reporter: Adam Johnson > Attachments: SOLR-8308.patch, SOLR-8308.patch > > > You can rename a core using the following modified URL > https://SOLR:PORT/solr/admin/cores?wt=json=false=RENAME=test_app_shared2_replica2=%3Csvg+onload%3Dalert(1)%3E&_=1445468005152. > The core becomes inaccessible / unusable. There should be more form > validation on the core name assignment. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
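Uwe's two points can be sketched as follows (the class and field names here are hypothetical, not taken from the attached patch): compile the pattern once as a `static final` constant instead of on every call, and drop the `^`/`$` anchors because `Matcher#matches()` already requires the entire input to match:

```java
import java.util.regex.Pattern;

public class CoreNameValidator {
    // Compiled once per class load rather than on every validation call.
    // No ^/$ anchors needed: matches() succeeds only if the ENTIRE input
    // matches the pattern (unlike find(), which looks for any substring).
    private static final Pattern VALID_CORE_NAME =
            Pattern.compile("[\\._A-Za-z0-9]*");

    public static boolean isValid(String name) {
        return VALID_CORE_NAME.matcher(name).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValid("test_app_shared2_replica2")); // true
        System.out.println(isValid("<svg onload=alert(1)>"));     // false
    }
}
```

The second call rejects the injected markup from the report above, since `<`, spaces, and parentheses fall outside the allowed character class.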
[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1619: POMs out of sync
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1619/ No tests ran. Build Log: [...truncated 37266 lines...] -validate-maven-dependencies: [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-test-framework:6.0.0-SNAPSHOT: checking for updates from maven-restlet [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-test-framework:6.0.0-SNAPSHOT: checking for updates from releases.cloudera.com [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-parent:6.0.0-SNAPSHOT: checking for updates from sonatype.releases [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-parent:6.0.0-SNAPSHOT: checking for updates from maven-restlet [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-parent:6.0.0-SNAPSHOT: checking for updates from releases.cloudera.com [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-analyzers-common:6.0.0-SNAPSHOT: checking for updates from maven-restlet [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-analyzers-common:6.0.0-SNAPSHOT: checking for updates from releases.cloudera.com [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-analyzers-kuromoji:6.0.0-SNAPSHOT: checking for updates from maven-restlet [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-analyzers-kuromoji:6.0.0-SNAPSHOT: checking for updates from releases.cloudera.com [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-analyzers-phonetic:6.0.0-SNAPSHOT: checking for updates from maven-restlet [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-analyzers-phonetic:6.0.0-SNAPSHOT: checking for updates from releases.cloudera.com [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-backward-codecs:6.0.0-SNAPSHOT: checking for updates from maven-restlet [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-backward-codecs:6.0.0-SNAPSHOT: checking for updates from releases.cloudera.com [artifact:dependencies] 
[INFO] snapshot org.apache.lucene:lucene-codecs:6.0.0-SNAPSHOT: checking for updates from maven-restlet [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-codecs:6.0.0-SNAPSHOT: checking for updates from releases.cloudera.com [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-core:6.0.0-SNAPSHOT: checking for updates from maven-restlet [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-core:6.0.0-SNAPSHOT: checking for updates from releases.cloudera.com [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-expressions:6.0.0-SNAPSHOT: checking for updates from maven-restlet [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-expressions:6.0.0-SNAPSHOT: checking for updates from releases.cloudera.com [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-grouping:6.0.0-SNAPSHOT: checking for updates from maven-restlet [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-grouping:6.0.0-SNAPSHOT: checking for updates from releases.cloudera.com [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-highlighter:6.0.0-SNAPSHOT: checking for updates from maven-restlet [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-highlighter:6.0.0-SNAPSHOT: checking for updates from releases.cloudera.com [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-join:6.0.0-SNAPSHOT: checking for updates from maven-restlet [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-join:6.0.0-SNAPSHOT: checking for updates from releases.cloudera.com [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-memory:6.0.0-SNAPSHOT: checking for updates from maven-restlet [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-memory:6.0.0-SNAPSHOT: checking for updates from releases.cloudera.com [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-misc:6.0.0-SNAPSHOT: checking for updates from maven-restlet [artifact:dependencies] [INFO] 
snapshot org.apache.lucene:lucene-misc:6.0.0-SNAPSHOT: checking for updates from releases.cloudera.com [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-queries:6.0.0-SNAPSHOT: checking for updates from maven-restlet [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-queries:6.0.0-SNAPSHOT: checking for updates from releases.cloudera.com [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-queryparser:6.0.0-SNAPSHOT: checking for updates from maven-restlet [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-queryparser:6.0.0-SNAPSHOT: checking for updates from releases.cloudera.com [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-sandbox:6.0.0-SNAPSHOT: checking for updates from maven-restlet [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-sandbox:6.0.0-SNAPSHOT: checking for updates from releases.cloudera.com [artifact:dependencies]
[jira] [Commented] (SOLR-5031) Fix Language Analysis page to use images for non latin in/out examples
[ https://issues.apache.org/jira/browse/SOLR-5031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15036663#comment-15036663 ] Hoss Man commented on SOLR-5031: more pain, serbian description/examples: https://cwiki.apache.org/confluence/pages/diffpagesbyversion.action?pageId=32604242=61=62 > Fix Language Analysis page to use images for non latin in/out examples > -- > > Key: SOLR-5031 > URL: https://issues.apache.org/jira/browse/SOLR-5031 > Project: Solr > Issue Type: Sub-task > Components: documentation >Reporter: Hoss Man >Assignee: Hoss Man > > See SOLR-4886 for more background, but the gist of the problem... > * confluence PDF exporting has problems with RTL languages (ie: Arabic & > Hindi) and other problems with some char->glyph conversion. > * since our "official" published version of these docs is going to be PDF, we > need to ensure it looks right there -- we can't just have a caveat that they > need to view the original wiki in a browser to see the examples correctly > * we need to update the non-latin language sections to either remove the > problematic examples, or switch them to use image attachments (perhaps > screenshots of the admin UI analysis tool) -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS-EA] Lucene-Solr-5.4-Linux (64bit/jdk1.9.0-ea-b93) - Build # 302 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.4-Linux/302/ Java: 64bit/jdk1.9.0-ea-b93 -XX:-UseCompressedOops -XX:+UseSerialGC -XX:-CompactStrings 3 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.core.TestLazyCores Error Message: ERROR: SolrIndexSearcher opens=51 closes=50 Stack Trace: java.lang.AssertionError: ERROR: SolrIndexSearcher opens=51 closes=50 at __randomizedtesting.SeedInfo.seed([D4A2D4D2B1A2E5A1]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.SolrTestCaseJ4.endTrackingSearchers(SolrTestCaseJ4.java:452) at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:222) at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:520) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:747) FAILED: junit.framework.TestSuite.org.apache.solr.core.TestLazyCores Error Message: 1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 1) Thread[id=3158, name=searcherExecutor-1417-thread-1, state=WAITING, group=TGRP-TestLazyCores] at jdk.internal.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:178) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2061) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) at java.lang.Thread.run(Thread.java:747) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 1) Thread[id=3158, name=searcherExecutor-1417-thread-1, state=WAITING, group=TGRP-TestLazyCores] at jdk.internal.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:178) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2061) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) at java.lang.Thread.run(Thread.java:747) at __randomizedtesting.SeedInfo.seed([D4A2D4D2B1A2E5A1]:0) FAILED: junit.framework.TestSuite.org.apache.solr.core.TestLazyCores Error Message: There are still zombie threads that couldn't be
[jira] [Commented] (SOLR-6673) MDC based logging of collection, shard etc.
[ https://issues.apache.org/jira/browse/SOLR-6673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15036786#comment-15036786 ] Mike Drob commented on SOLR-6673: - Did this ever make it into the wiki or ref guide? > MDC based logging of collection, shard etc. > --- > > Key: SOLR-6673 > URL: https://issues.apache.org/jira/browse/SOLR-6673 > Project: Solr > Issue Type: Improvement >Reporter: Ishan Chattopadhyaya >Assignee: Noble Paul > Labels: logging > Fix For: 5.1, Trunk > > Attachments: SOLR-6673-log-more.patch, SOLR-6673.patch, > SOLR-6673.patch, SOLR-6673.patch, SOLR-6673.patch, SOLR-6673.patch, > SOLR-6673.patch, SOLR-6673.patch, SOLR-6673.patch, SOLR-6673_NPE_fix.patch, > log4j.properties, log4j.properties > > > In cloud mode, the many log items don't contain the collection name, shard > name, core name etc. Debugging becomes specially difficult when many > collections/shards are hosted on the same node. > The proposed solution adds MDC based stamping of collection, shard, core to > the thread. > See also: SOLR-5969, SOLR-5277 -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
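The MDC (Mapped Diagnostic Context) mechanism this issue refers to comes from SLF4J/Log4j: a per-thread key/value map that the log layout can read, so every message emitted by that thread is stamped with collection, shard, and core. A minimal self-contained sketch of the idea (the `SimpleMDC` class below is an illustration, not Solr's actual code or the SLF4J API):

```java
import java.util.HashMap;
import java.util.Map;

public class SimpleMDC {
    // Each thread gets its own context map, mirroring what org.slf4j.MDC does.
    private static final ThreadLocal<Map<String, String>> CTX =
            ThreadLocal.withInitial(HashMap::new);

    public static void put(String key, String val) { CTX.get().put(key, val); }
    public static String get(String key)           { return CTX.get().get(key); }
    public static void clear()                     { CTX.get().clear(); }

    // Renders a log line stamped with the context, the way a pattern layout
    // using %X{collection} and %X{shard} would.
    public static String format(String msg) {
        return "[" + get("collection") + "/" + get("shard") + "] " + msg;
    }

    public static void main(String[] args) {
        put("collection", "gettingstarted");
        put("shard", "shard1");
        System.out.println(format("Opening new searcher"));
        // prints: [gettingstarted/shard1] Opening new searcher
    }
}
```

The point of the design is that code doing the logging never has to thread the collection/shard names through every call: the request handler sets them once, and every log line on that thread carries them.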
[jira] [Created] (SOLR-8362) Add docValues support for TextField
Hoss Man created SOLR-8362: -- Summary: Add docValues support for TextField Key: SOLR-8362 URL: https://issues.apache.org/jira/browse/SOLR-8362 Project: Solr Issue Type: Improvement Reporter: Hoss Man At the last lucene/solr revolution, Toke asked a question about why TextField doesn't support docValues. The short answer is because no one ever added it, but the longer answer was because we would have to think through carefully the _intent_ of supporting docValues for a "tokenized" field like TextField, and how to support various conflicting usecases where they could be handy. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-8361) Failed to create collection in Solrcloud
[ https://issues.apache.org/jira/browse/SOLR-8361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson resolved SOLR-8361. -- Resolution: Invalid Please raise this kind of issue on the user's list first; we try to reserve JIRAs for known bugs/improvements, so if you're not _sure_ it's a bug it's best to ask there first. Also, more people see the question faster and you may get the help you need more quickly. When you do ask your question there, please include the results of looking at your Solr log when you issue the create command; any errors reported there may contain more useful information. To subscribe to the user's list, see the "mailing lists" entry here: http://lucene.apache.org/solr/resources.html > Failed to create collection in Solrcloud > > > Key: SOLR-8361 > URL: https://issues.apache.org/jira/browse/SOLR-8361 > Project: Solr > Issue Type: Bug > Components: SolrCloud >Affects Versions: 5.3 >Reporter: mugeesh > Fix For: 5.3 > > Original Estimate: 504h > Remaining Estimate: 504h > > Hi, > I am using 3 servers: solr1, solr2 and solr3. > I have set up 3 instances of ZooKeeper on server solr2. > When I try to create 1 shard and 2 replicas, it works fine. > When I try to create a core with 1 shard and 3 replicas using this > command: bin/solr create -c abc -n abcr -shards 1 -replicationFactor 3 > I get the error below: > ERROR: Failed to create collection 'abc' due to: > org.apache.solr.client.solrj.SolrServerException: IOException occurred when > talking to server at: http://xx.yyy.zz:8985/solr (server solr3) > solr3: I started this server using this command: bin/solr start -cloud -p > 8985 -s "example/cloud/node1/solr" -z solr2:2181,solr2:2182,solr3:2183 > I am unable to solve this issue. > I thought it is a bug in Solr 5.3 -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-5868) JoinUtil support for NUMERIC docValues fields
[ https://issues.apache.org/jira/browse/LUCENE-5868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated LUCENE-5868: - Attachment: LUCENE-5868.patch even more efficient shuffling in test [^LUCENE-5868.patch] > JoinUtil support for NUMERIC docValues fields > -- > > Key: LUCENE-5868 > URL: https://issues.apache.org/jira/browse/LUCENE-5868 > Project: Lucene - Core > Issue Type: New Feature >Reporter: Mikhail Khludnev >Assignee: Mikhail Khludnev >Priority: Minor > Fix For: 5.5 > > Attachments: LUCENE-5868-lambdarefactoring.patch, > LUCENE-5868-lambdarefactoring.patch, LUCENE-5868.patch, LUCENE-5868.patch, > LUCENE-5868.patch, LUCENE-5868.patch, LUCENE-5868.patch, qtj.diff > > > while polishing SOLR-6234 I found that JoinUtil can't join int dv fields at > least. > I plan to provide test/patch. It might be important, because Solr's join can > do that. Please vote if you care! -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6908) TestGeoUtils.testGeoRelations is buggy with irregular rectangles
[ https://issues.apache.org/jira/browse/LUCENE-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15036726#comment-15036726 ] Michael McCandless commented on LUCENE-6908: Thanks [~nknize]! Lots of geo math that I don't understand :) It hit a failure after some beasting: {noformat} [junit4] Started J0 PID(28075@localhost). [junit4] Suite: org.apache.lucene.util.TestGeoUtils [junit4] 1> doc=2304 matched but should not [junit4] 1> lon=-52.68950667232275 lat=-89.24937685020268 distanceMeters=267809.96722662856 vs radiusMeters=263432.23741533695 [junit4] 2> NOTE: reproduce with: ant test -Dtestcase=TestGeoUtils -Dtests.method=testGeoRelations -Dtests.seed=3B46325723FB86A9 -Dtests.multiplier=5 -Dtests.slow=true -Dtests.linedocsfile=/lucenedata/hudson.enwiki.random.lines.txt.fixed -Dtests.locale=vi_VN -Dtests.timezone=America/Halifax -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [junit4] FAILURE 2.61s | TestGeoUtils.testGeoRelations <<< [junit4]> Throwable #1: java.lang.AssertionError: 1 incorrect hits (see above) [junit4]>at __randomizedtesting.SeedInfo.seed([3B46325723FB86A9:F96526E2570CF017]:0) [junit4]>at org.apache.lucene.util.TestGeoUtils.testGeoRelations(TestGeoUtils.java:533) [junit4]>at java.lang.Thread.run(Thread.java:745) [junit4] 2> NOTE: test params are: codec=HighCompressionCompressingStoredFields(storedFieldsFormat=CompressingStoredFieldsFormat(compressionMode=HIGH_COMPRESSION, chunkSize=9, maxDocsPerChunk=323, blockSize=5), termVectorsFormat=CompressingTermVectorsFormat(compressionMode=HIGH_COMPRESSION, chunkSize=9, blockSize=5)), sim=RandomSimilarityProvider(queryNorm=true,coord=yes): {}, locale=vi_VN, timezone=America/Halifax [junit4] 2> NOTE: Linux 3.13.0-61-generic amd64/Oracle Corporation 1.8.0_60 (64-bit)/cpus=8,threads=1,free=396588944,total=504889344 [junit4] 2> NOTE: All tests run in this JVM: [TestGeoUtils] [junit4] Completed [1/1] in 2.77s, 1 test, 1 failure <<< FAILURES! {noformat} Also, this comment seems stale now? 
(We seem to be using this method?): {noformat} // NOTE: not used for 2d at the moment. used for 3d w/ altitude (we can keep or add back) {noformat} > TestGeoUtils.testGeoRelations is buggy with irregular rectangles > > > Key: LUCENE-6908 > URL: https://issues.apache.org/jira/browse/LUCENE-6908 > Project: Lucene - Core > Issue Type: Bug >Reporter: Nicholas Knize > Attachments: LUCENE-6908.patch, LUCENE-6908.patch, LUCENE-6908.patch > > > The {{.testGeoRelations}} method doesn't exactly test the behavior of > GeoPoint*Query as its using the BKD split technique (instead of quad cell > division) to divide the space on each pass. For "large" distance queries this > can create a lot of irregular rectangles producing large radial distortion > error when using the cartesian approximation methods provided by > {{GeoUtils}}. This issue improves the accuracy of GeoUtils cartesian > approximation methods on irregular rectangles without having to cut over to > an expensive oblate geometry approach. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8220) Read field from docValues for non stored fields
[ https://issues.apache.org/jira/browse/SOLR-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15036972#comment-15036972 ] Keith Laban commented on SOLR-8220: --- My only concern is that we may have to add a flag for this and for whatever we decide in SOLR-8344, which is just going to add to the confusion. Thoughts? > Read field from docValues for non stored fields > --- > > Key: SOLR-8220 > URL: https://issues.apache.org/jira/browse/SOLR-8220 > Project: Solr > Issue Type: Improvement >Reporter: Keith Laban > Attachments: SOLR-8220-ishan.patch, SOLR-8220-ishan.patch, > SOLR-8220-ishan.patch, SOLR-8220-ishan.patch, SOLR-8220.patch, > SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, > SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, > SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, > SOLR-8220.patch > > > Many times a value will be both stored="true" and docValues="true" which > requires redundant data to be stored on disk. Since reading from docValues is > both efficient and a common practice (facets, analytics, streaming, etc), > reading values from docValues when a stored version of the field does not > exist would be a valuable disk usage optimization. > The only caveat with this that I can see would be for multiValued fields as > they would always be returned sorted in the docValues approach. I believe > this is a fair compromise. > I've done a rough implementation for this as a field transform, but I think > it should live closer to where stored fields are loaded in the > SolrIndexSearcher. > Two open questions/observations: > 1) There doesn't seem to be a standard way to read values for docValues, > facets, analytics, streaming, etc, all seem to be doing their own ways, > perhaps some of this logic should be centralized. > 2) What will the API behavior be? 
(Below is my proposed implementation) > Parameters for fl: > - fl="docValueField" > -- return field from docValue if the field is not stored and in docValues, > if the field is stored return it from stored fields > - fl="*" > -- return only stored fields > - fl="+" >-- return stored fields and docValue fields > 2a - would be easiest implementation and might be sufficient for a first > pass. 2b - is current behavior -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
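The proposed {{fl}} semantics above can be sketched as a small field-resolution function. Everything here is hypothetical — a reading of the proposal, not Solr's implementation; the `FieldInfo` class and field names are made up for illustration: `*` keeps stored fields only (current behavior), `+` adds docValues-only fields, and an explicitly named field is returned if it is either stored or in docValues:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class FlResolver {
    // Hypothetical schema entry: whether a field is stored and/or has docValues.
    static class FieldInfo {
        final boolean stored, docValues;
        FieldInfo(boolean stored, boolean docValues) {
            this.stored = stored;
            this.docValues = docValues;
        }
    }

    /** Resolve which fields an fl spec would return, per the proposal. */
    static List<String> resolve(String fl, Map<String, FieldInfo> schema) {
        List<String> out = new ArrayList<>();
        for (Map.Entry<String, FieldInfo> e : schema.entrySet()) {
            FieldInfo f = e.getValue();
            boolean keep;
            if (fl.equals("*")) {
                keep = f.stored;                      // current behavior: stored only
            } else if (fl.equals("+")) {
                keep = f.stored || f.docValues;       // stored + docValues fields
            } else {
                // explicit field: stored wins; fall back to docValues if present
                keep = fl.equals(e.getKey()) && (f.stored || f.docValues);
            }
            if (keep) out.add(e.getKey());
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, FieldInfo> schema = new LinkedHashMap<>();
        schema.put("id",    new FieldInfo(true,  true));
        schema.put("price", new FieldInfo(false, true));  // docValues only
        schema.put("body",  new FieldInfo(true,  false)); // stored only
        System.out.println(resolve("*", schema));     // [id, body]
        System.out.println(resolve("+", schema));     // [id, price, body]
        System.out.println(resolve("price", schema)); // [price]
    }
}
```

Under this reading, "2a" (explicit field names resolve through docValues when unstored) is the smallest change, while "2b" (plain `*` stays stored-only) preserves today's behavior.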
[JENKINS] Lucene-Solr-trunk-Solaris (64bit/jdk1.8.0) - Build # 229 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Solaris/229/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC 3 tests failed. FAILED: org.apache.solr.schema.TestCloudManagedSchemaConcurrent.test Error Message: java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 127.0.0.1:52067 within 3 ms Stack Trace: org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 127.0.0.1:52067 within 3 ms at __randomizedtesting.SeedInfo.seed([68D09B5EF7F5107A:E084A48459097D82]:0) at org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:181) at org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:115) at org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:110) at org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:97) at org.apache.solr.cloud.AbstractDistribZkTestBase.printLayout(AbstractDistribZkTestBase.java:283) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.distribTearDown(AbstractFullDistribZkTestBase.java:1475) at org.apache.solr.schema.TestCloudManagedSchemaConcurrent.distribTearDown(TestCloudManagedSchemaConcurrent.java:69) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:942) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) Caused by: java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 127.0.0.1:52067 within 3 ms at org.apache.solr.common.cloud.ConnectionManager.waitForConnected(ConnectionManager.java:209) at
[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 870 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/870/ 3 tests failed. FAILED: org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testReplicationAfterLeaderChange Error Message: Timeout while trying to assert number of documents on collection: target_collection Stack Trace: java.lang.AssertionError: Timeout while trying to assert number of documents on collection: target_collection at __randomizedtesting.SeedInfo.seed([8FA0AF1B622EEDB7:5D50E3F83C814B85]:0) at org.apache.solr.cloud.BaseCdcrDistributedZkTest.assertNumDocs(BaseCdcrDistributedZkTest.java:265) at org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testReplicationAfterLeaderChange(CdcrReplicationDistributedZkTest.java:292) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (SOLR-8318) SimpleQueryParser doesn't use MultiTermAnalysis for Fuzzy Queries
[ https://issues.apache.org/jira/browse/SOLR-8318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15037191#comment-15037191 ] Erick Erickson commented on SOLR-8318: -- OK. LGTM then. I'll check this in in a day or so unless someone objects. > SimpleQueryParser doesn't use MultiTermAnalysis for Fuzzy Queries > - > > Key: SOLR-8318 > URL: https://issues.apache.org/jira/browse/SOLR-8318 > Project: Solr > Issue Type: Bug > Components: query parsers >Affects Versions: 5.3 >Reporter: Tom Hill >Assignee: Erick Erickson >Priority: Trivial > Attachments: sqp_fuzzy_multiterm.patch > > > Fuzzy queries in SimpleQParserPlugin don't seem to use the multi-term > analysis chain. Prefix queries do, and SolrQueryParser does use multi-term > analysis for fuzzy queries, so it seems like SimpleQParserPlugin should as > well. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8330) Restrict logger visibility throughout the codebase to private so that only the file that declares it can use it
[ https://issues.apache.org/jira/browse/SOLR-8330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15037218#comment-15037218 ] ASF subversion and git services commented on SOLR-8330: --- Commit 1717707 from [~anshumg] in branch 'dev/branches/lucene_solr_5_4' [ https://svn.apache.org/r1717707 ] SOLR-8330: Standardize and fix logger creation and usage so that they aren't shared across source files.(merge from branch_5x) > Restrict logger visibility throughout the codebase to private so that only > the file that declares it can use it > --- > > Key: SOLR-8330 > URL: https://issues.apache.org/jira/browse/SOLR-8330 > Project: Solr > Issue Type: Sub-task >Affects Versions: Trunk >Reporter: Jason Gerlowski >Assignee: Anshum Gupta > Labels: logging > Fix For: Trunk > > Attachments: SOLR-8330-combined.patch, SOLR-8330-detector.patch, > SOLR-8330-detector.patch, SOLR-8330.patch, SOLR-8330.patch, SOLR-8330.patch, > SOLR-8330.patch, SOLR-8330.patch, SOLR-8330.patch, SOLR-8330.patch > > > As Mike Drob pointed out in Solr-8324, many loggers in Solr are > unintentionally shared between classes. Many instances of this are caused by > overzealous copy-paste. This can make debugging tougher, as messages appear > to come from an incorrect location. > As discussed in the comments on SOLR-8324, there also might be legitimate > reasons for sharing loggers between classes. Where any ambiguity exists, > these instances shouldn't be touched. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6837) Add N-best output capability to JapaneseTokenizer
[ https://issues.apache.org/jira/browse/LUCENE-6837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15037353#comment-15037353 ] ASF subversion and git services commented on LUCENE-6837: - Commit 1717713 from [~cm] in branch 'dev/trunk' [ https://svn.apache.org/r1717713 ] Add n-best output to JapaneseTokenizer (LUCENE-6837) > Add N-best output capability to JapaneseTokenizer > - > > Key: LUCENE-6837 > URL: https://issues.apache.org/jira/browse/LUCENE-6837 > Project: Lucene - Core > Issue Type: Improvement > Components: modules/analysis >Affects Versions: 5.3 >Reporter: KONNO, Hiroharu >Assignee: Christian Moen >Priority: Minor > Attachments: LUCENE-6837.patch, LUCENE-6837.patch, LUCENE-6837.patch, > LUCENE-6837.patch, LUCENE-6837.patch > > > Japanese morphological analyzers often generate mis-segmented tokens. N-best > output reduces the impact of mis-segmentation on search result. N-best output > is more meaningful than character N-gram, and it increases hit count too. > If you use N-best output, you can get decompounded tokens (ex: > "シニアソフトウェアエンジニア" => {"シニア", "シニアソフトウェアエンジニア", "ソフトウェア", "エンジニア"}) and > overwrapped tokens (ex: "数学部長谷川" => {"数学", "部", "部長", "長谷川", "谷川"}), > depending on the dictionary and N-best parameter settings. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-6915) SaslZkACLProvider and Kerberos Test Using MiniKdc
[ https://issues.apache.org/jira/browse/SOLR-6915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15037375#comment-15037375 ] Mark Miller commented on SOLR-6915: --- Another option is just hard coding to one working locale for now right? > SaslZkACLProvider and Kerberos Test Using MiniKdc > - > > Key: SOLR-6915 > URL: https://issues.apache.org/jira/browse/SOLR-6915 > Project: Solr > Issue Type: Improvement > Components: SolrCloud >Reporter: Gregory Chanan >Assignee: Gregory Chanan > Fix For: 5.1, Trunk > > Attachments: SOLR-6915.patch, SOLR-6915.patch, fail.log, fail.log, > tests-failures.txt > > > We should provide a ZkACLProvider that requires SASL authentication. This > provider will be useful for administration in a kerberos environment. In > such an environment, the administrator wants solr to authenticate to > zookeeper using SASL, since this is only way to authenticate with zookeeper > via kerberos. > The authorization model in such a setup can vary, e.g. you can imagine a > scenario where solr owns (is the only writer of) the non-config znodes, but > some set of trusted users are allowed to modify the configs. It's hard to > predict all the possibilities here, but one model that seems generally useful > is to have a model where solr itself owns all the znodes and all actions that > require changing the znodes are routed to Solr APIs. That seems simple and > reasonable as a first version. > As for testing, I noticed while working on SOLR-6625 that we don't really > have any infrastructure for testing kerberos integration in unit tests. > Internally, I've been testing using kerberos-enabled VM clusters, but this > isn't great since we won't notice any breakages until someone actually spins > up a VM. So part of this JIRA is to provide some infrastructure for testing > kerberos at the unit test level (using Hadoop's MiniKdc, HADOOP-9848). 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
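Mark's suggestion above (hard-coding to one working locale) can be sketched as a small save/set/restore guard around the locale-sensitive test body. This is a hedged sketch, not Solr test-framework code; `LocalePin` is a hypothetical helper name:

```java
import java.util.Locale;

// Hedged sketch of "hard coding to one working locale": pin the JVM default
// locale while a locale-sensitive test (e.g. one using MiniKdc) runs, then
// restore the previous default. LocalePin is a hypothetical class name.
final class LocalePin implements AutoCloseable {
    private final Locale saved = Locale.getDefault();

    LocalePin(Locale pinned) {
        Locale.setDefault(pinned);
    }

    @Override
    public void close() {
        Locale.setDefault(saved); // always restore, even if the test fails
    }
}
```

With try-with-resources, `try (LocalePin p = new LocalePin(Locale.ENGLISH)) { ... }` keeps the pinning scoped to one test and guarantees restoration.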
[jira] [Comment Edited] (SOLR-8359) Restrict child classes from using parent logger's state
[ https://issues.apache.org/jira/browse/SOLR-8359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15037088#comment-15037088 ] Jason Gerlowski edited comment on SOLR-8359 at 12/3/15 2:37 AM: Ah, I forgot to save in Eclipse before making the patch. Find the updated version attached. I see your point about the cached {{debug}} variables not getting refreshed ever. {{UpdateLog.preSoftCommit()}} will update the cached value in the abstract parent, but won't affect the values in {{HdfsUpdateLog}} etc. One way to fix this would be to implement a {{preSoftCommit()}} method in each of these children which resets the cached value in the child before calling {{super.preSoftCommit()}}. Not sure if there's a better solution though. It seems like there should be a better way to do this... was (Author: gerlowskija): Ah, I forgot to save in Eclipse before making the patch. Find the updated version attached. I see your point about the cached {{debug}} variables not getting refreshed ever. {{UpdateLog.preSoftCommit()}} will update the cached value in the abstract parent, but won't affect the values in {{CdcrTransactionLog}} etc. One way to fix this would be to implement a {{preSoftCommit()}} method in each of this children which resets the cached value in the child before calling {{super.preSoftCommit()}}. Not sure if there's a better solution though. It seems like there should be a better way to do this... > Restrict child classes from using parent logger's state > --- > > Key: SOLR-8359 > URL: https://issues.apache.org/jira/browse/SOLR-8359 > Project: Solr > Issue Type: Sub-task >Reporter: Mike Drob > Fix For: Trunk > > Attachments: SOLR-8359.patch, SOLR-8359.patch > > > In SOLR-8330 we split up a lot of loggers. However, there are a few classes > that still use their parent's logging state and configuration indirectly. 
> {{HdfsUpdateLog}} and {{HdfsTransactionLog}} both use their parent class > cached read of {{boolean debug = log.isDebugEnabled()}}, when they should > check their own loggers instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
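The pattern discussed in this issue can be sketched in miniature: the parent class caches its own logger's debug flag in {{preSoftCommit()}}, so a subclass with its own logger must override that method to refresh its own cached copy before delegating upward. The class and method names below are simplified stand-ins for UpdateLog/HdfsUpdateLog, with plain booleans standing in for each class's own `log.isDebugEnabled()` call:

```java
// Simplified sketch of the fix discussed above. ParentLog/ChildLog are
// hypothetical stand-ins; parentDebugSource()/childDebugSource() stand in
// for each class's own log.isDebugEnabled() call.
class ParentLog {
    protected boolean debug;

    protected boolean parentDebugSource() { return true; }

    public void preSoftCommit() {
        debug = parentDebugSource(); // parent refreshes only its own cache
    }
}

class ChildLog extends ParentLog {
    private boolean childDebug; // cached from the child's own logger

    protected boolean childDebugSource() { return true; }

    @Override
    public void preSoftCommit() {
        childDebug = childDebugSource(); // refresh the child's cache first...
        super.preSoftCommit();           // ...then let the parent refresh its own
    }

    boolean isChildDebug() { return childDebug; }
}
```

Without the override, only the parent's `debug` field is ever refreshed, which is exactly the staleness described in the issue.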
[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_66) - Build # 5440 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5440/ Java: 64bit/jdk1.8.0_66 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 3 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.core.TestLazyCores Error Message: ERROR: SolrIndexSearcher opens=51 closes=50 Stack Trace: java.lang.AssertionError: ERROR: SolrIndexSearcher opens=51 closes=50 at __randomizedtesting.SeedInfo.seed([B43D8A0D2090BF9]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.SolrTestCaseJ4.endTrackingSearchers(SolrTestCaseJ4.java:453) at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:225) at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) FAILED: junit.framework.TestSuite.org.apache.solr.core.TestLazyCores Error Message: 1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 1) Thread[id=10239, name=searcherExecutor-4978-thread-1, state=WAITING, group=TGRP-TestLazyCores] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 1) Thread[id=10239, name=searcherExecutor-4978-thread-1, state=WAITING, group=TGRP-TestLazyCores] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) at __randomizedtesting.SeedInfo.seed([B43D8A0D2090BF9]:0) FAILED: junit.framework.TestSuite.org.apache.solr.core.TestLazyCores Error Message: There are still zombie threads that couldn't be terminated:1) Thread[id=10239,
Re: [jira] [Commented] (SOLR-8220) Read field from docValues for non stored fields
Please follow the instructions here to unsubscribe. Note, you _MUST_ use the exact same e-mail you subscribed with. See the "unsubscribe" link here: http://lucene.apache.org/solr/resources.html If you have problems, have you tried to follow the instructions here? https://wiki.apache.org/solr/Unsubscribing%20from%20mailing%20lists Best, Erick On Wed, Dec 2, 2015 at 5:55 PM, Johny Johnwrote: > dev-unsubscr...@lucene.apache.org > > Sent from Windows Mail > > From: Keith Laban (JIRA) > Sent: Thursday, December 3, 2015 6:19 AM > To: dev@lucene.apache.org > > > [ > https://issues.apache.org/jira/browse/SOLR-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15036972#comment-15036972 > ] > > Keith Laban commented on SOLR-8220: > --- > > My only concern is that we may have to add a flag for this and for whatever > we decide in SOLR-8344 which is just going to add to he confusion. Thoughts? > >> Read field from docValues for non stored fields >> --- >> >> Key: SOLR-8220 >> URL: https://issues.apache.org/jira/browse/SOLR-8220 >> Project: Solr >> Issue Type: Improvement >>Reporter: Keith Laban >> Attachments: SOLR-8220-ishan.patch, SOLR-8220-ishan.patch, >> SOLR-8220-ishan.patch, SOLR-8220-ishan.patch, SOLR-8220.patch, >> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, >> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, >> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, >> SOLR-8220.patch >> >> >> Many times a value will be both stored="true" and docValues="true" which >> requires redundant data to be stored on disk. Since reading from docValues >> is both efficient and a common practice (facets, analytics, streaming, etc), >> reading values from docValues when a stored version of the field does not >> exist would be a valuable disk usage optimization. 
>> The only caveat with this that I can see would be for multiValued fields >> as they would always be returned sorted in the docValues approach. I believe >> this is a fair compromise. >> I've done a rough implementation for this as a field transform, but I >> think it should live closer to where stored fields are loaded in the >> SolrIndexSearcher. >> Two open questions/observations: >> 1) There doesn't seem to be a standard way to read values for docValues, >> facets, analytics, streaming, etc, all seem to be doing their own ways, >> perhaps some of this logic should be centralized. >> 2) What will the API behavior be? (Below is my proposed implementation) >> Parameters for fl: >> - fl="docValueField" >> -- return field from docValue if the field is not stored and in >> docValues, if the field is stored return it from stored fields >> - fl="*" >> -- return only stored fields >> - fl="+" >>-- return stored fields and docValue fields >> 2a - would be easiest implementation and might be sufficient for a first >> pass. 2b - is current behavior > > > > -- > This message was sent by Atlassian JIRA > (v6.3.4#6332) > > - > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org > For additional commands, e-mail: dev-h...@lucene.apache.org > - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
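The proposed fl semantics from the comment above can be restated as a small decision function. This is a sketch of the proposal only, not existing Solr behavior; all names here are hypothetical:

```java
// Hypothetical restatement of the proposed fl behavior:
//   explicit field name -> stored value if available, else docValues
//   fl="*"              -> stored fields only
//   fl="+"              -> stored fields plus docValues-only fields
enum Source { STORED, DOC_VALUES, ABSENT }

final class FlSketch {
    static Source resolve(String fl, boolean isStored, boolean hasDocValues) {
        if ("*".equals(fl)) {
            return isStored ? Source.STORED : Source.ABSENT; // stored only
        }
        // "+" and an explicit field name both fall back to docValues
        if (isStored) {
            return Source.STORED;
        }
        return hasDocValues ? Source.DOC_VALUES : Source.ABSENT;
    }
}
```

The open question in the thread is exactly which of these rows should be the default; the sketch just makes the alternatives concrete.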
[jira] [Updated] (SOLR-8359) Restrict child classes from using parent logger's state
[ https://issues.apache.org/jira/browse/SOLR-8359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Gerlowski updated SOLR-8359: -- Attachment: SOLR-8359-nonfinal-values.patch I've attached a version of the patch (SOLR-8359-nonfinal-values.patch) which allows the cached {{debug}} value in {{HdfsUpdateLog}} to be updated on each call to {{preSoftCommit}}. Not sure that we want to take the approach I mentioned for this, but I couldn't think of any other way to do it, so I figured I'd upload it so people can at least offer feedback/suggest alternatives. > Restrict child classes from using parent logger's state > --- > > Key: SOLR-8359 > URL: https://issues.apache.org/jira/browse/SOLR-8359 > Project: Solr > Issue Type: Sub-task >Reporter: Mike Drob > Fix For: Trunk > > Attachments: SOLR-8359-nonfinal-values.patch, SOLR-8359.patch, > SOLR-8359.patch > > > In SOLR-8330 we split up a lot of loggers. However, there are a few classes > that still use their parent's logging state and configuration indirectly. > {{HdfsUpdateLog}} and {{HdfsTransactionLog}} both use their parent class > cached read of {{boolean debug = log.isDebugEnabled()}}, when they should > check their own loggers instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8330) Restrict logger visibility throughout the codebase to private so that only the file that declares it can use it
[ https://issues.apache.org/jira/browse/SOLR-8330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15037236#comment-15037236 ] Anshum Gupta commented on SOLR-8330: It's now in 5.4. > Restrict logger visibility throughout the codebase to private so that only > the file that declares it can use it > --- > > Key: SOLR-8330 > URL: https://issues.apache.org/jira/browse/SOLR-8330 > Project: Solr > Issue Type: Sub-task >Affects Versions: Trunk >Reporter: Jason Gerlowski >Assignee: Anshum Gupta > Labels: logging > Fix For: 5.4, Trunk > > Attachments: SOLR-8330-combined.patch, SOLR-8330-detector.patch, > SOLR-8330-detector.patch, SOLR-8330.patch, SOLR-8330.patch, SOLR-8330.patch, > SOLR-8330.patch, SOLR-8330.patch, SOLR-8330.patch, SOLR-8330.patch > > > As Mike Drob pointed out in Solr-8324, many loggers in Solr are > unintentionally shared between classes. Many instances of this are caused by > overzealous copy-paste. This can make debugging tougher, as messages appear > to come from an incorrect location. > As discussed in the comments on SOLR-8324, there also might be legitimate > reasons for sharing loggers between classes. Where any ambiguity exists, > these instances shouldn't be touched. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-8330) Restrict logger visibility throughout the codebase to private so that only the file that declares it can use it
[ https://issues.apache.org/jira/browse/SOLR-8330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anshum Gupta updated SOLR-8330: --- Fix Version/s: 5.4 > Restrict logger visibility throughout the codebase to private so that only > the file that declares it can use it > --- > > Key: SOLR-8330 > URL: https://issues.apache.org/jira/browse/SOLR-8330 > Project: Solr > Issue Type: Sub-task >Affects Versions: Trunk >Reporter: Jason Gerlowski >Assignee: Anshum Gupta > Labels: logging > Fix For: 5.4, Trunk > > Attachments: SOLR-8330-combined.patch, SOLR-8330-detector.patch, > SOLR-8330-detector.patch, SOLR-8330.patch, SOLR-8330.patch, SOLR-8330.patch, > SOLR-8330.patch, SOLR-8330.patch, SOLR-8330.patch, SOLR-8330.patch > > > As Mike Drob pointed out in Solr-8324, many loggers in Solr are > unintentionally shared between classes. Many instances of this are caused by > overzealous copy-paste. This can make debugging tougher, as messages appear > to come from an incorrect location. > As discussed in the comments on SOLR-8324, there also might be legitimate > reasons for sharing loggers between classes. Where any ambiguity exists, > these instances shouldn't be touched. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-5205) SpanQueryParser with recursion, analysis and syntax very similar to classic QueryParser
[ https://issues.apache.org/jira/browse/LUCENE-5205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15037358#comment-15037358 ] Modassar Ather commented on LUCENE-5205: Hi [~talli...@mitre.org] A prefix in double quotes is getting eaten up by the analyzer. E.g. f:"term*" is getting parsed to (+f:refasten). What I understand the wildcard queries are not analyzed. I noticed that the same works fine within a phrase. E.g. f:"term1 term*". Please let me know if it is an issue or it should not used the way it is. Thanks, Modassar > SpanQueryParser with recursion, analysis and syntax very similar to classic > QueryParser > --- > > Key: LUCENE-5205 > URL: https://issues.apache.org/jira/browse/LUCENE-5205 > Project: Lucene - Core > Issue Type: Improvement > Components: core/queryparser >Reporter: Tim Allison > Labels: patch > Attachments: LUCENE-5205-cleanup-tests.patch, > LUCENE-5205-date-pkg-prvt.patch, LUCENE-5205.patch.gz, LUCENE-5205.patch.gz, > LUCENE-5205_dateTestReInitPkgPrvt.patch, > LUCENE-5205_improve_stop_word_handling.patch, > LUCENE-5205_smallTestMods.patch, LUCENE_5205.patch, > SpanQueryParser_v1.patch.gz, patch.txt > > > This parser extends QueryParserBase and includes functionality from: > * Classic QueryParser: most of its syntax > * SurroundQueryParser: recursive parsing for "near" and "not" clauses. > * ComplexPhraseQueryParser: can handle "near" queries that include multiterms > (wildcard, fuzzy, regex, prefix), > * AnalyzingQueryParser: has an option to analyze multiterms. > At a high level, there's a first pass BooleanQuery/field parser and then a > span query parser handles all terminal nodes and phrases. 
> Same as classic syntax: > * term: test > * fuzzy: roam~0.8, roam~2 > * wildcard: te?t, test*, t*st > * regex: /\[mb\]oat/ > * phrase: "jakarta apache" > * phrase with slop: "jakarta apache"~3 > * default "or" clause: jakarta apache > * grouping "or" clause: (jakarta apache) > * boolean and +/-: (lucene OR apache) NOT jakarta; +lucene +apache -jakarta > * multiple fields: title:lucene author:hatcher > > Main additions in SpanQueryParser syntax vs. classic syntax: > * Can require "in order" for phrases with slop with the \~> operator: > "jakarta apache"\~>3 > * Can specify "not near": "fever bieber"!\~3,10 :: > find "fever" but not if "bieber" appears within 3 words before or 10 > words after it. > * Fully recursive phrasal queries with \[ and \]; as in: \[\[jakarta > apache\]~3 lucene\]\~>4 :: > find "jakarta" within 3 words of "apache", and that hit has to be within > four words before "lucene" > * Can also use \[\] for single level phrasal queries instead of " as in: > \[jakarta apache\] > * Can use "or grouping" clauses in phrasal queries: "apache (lucene solr)"\~3 > :: find "apache" and then either "lucene" or "solr" within three words. > * Can use multiterms in phrasal queries: "jakarta\~1 ap*che"\~2 > * Did I mention full recursion: \[\[jakarta\~1 ap*che\]\~2 (solr~ > /l\[ou\]\+\[cs\]\[en\]\+/)]\~10 :: Find something like "jakarta" within two > words of "ap*che" and that hit has to be within ten words of something like > "solr" or that "lucene" regex. > * Can require at least x number of hits at boolean level: "apache AND (lucene > solr tika)~2 > * Can use negative only query: -jakarta :: Find all docs that don't contain > "jakarta" > * Can use an edit distance > 2 for fuzzy query via SlowFuzzyQuery (beware of > potential performance issues!). 
> Trivial additions: > * Can specify prefix length in fuzzy queries: jakarta~1,2 (edit distance =1, > prefix =2) > * Can specifiy Optimal String Alignment (OSA) vs Levenshtein for distance > <=2: (jakarta~1 (OSA) vs jakarta~>1(Levenshtein) > This parser can be very useful for concordance tasks (see also LUCENE-5317 > and LUCENE-5318) and for analytical search. > Until LUCENE-2878 is closed, this might have a use for fans of SpanQuery. > Most of the documentation is in the javadoc for SpanQueryParser. > Any and all feedback is welcome. Thank you. > Until this is added to the Lucene project, I've added a standalone > lucene-addons repo (with jars compiled for the latest stable build of Lucene) > on [github|https://github.com/tballison/lucene-addons]. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (LUCENE-5205) SpanQueryParser with recursion, analysis and syntax very similar to classic QueryParser
[ https://issues.apache.org/jira/browse/LUCENE-5205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15037358#comment-15037358 ] Modassar Ather edited comment on LUCENE-5205 at 12/3/15 6:41 AM: - Hi [~talli...@mitre.org] A prefix in double quotes is getting eaten up by the analyzer. E.g. f:"term*" is getting parsed to (+f:term). What I understand the wildcard queries are not analyzed. I noticed that the same works fine within a phrase. E.g. f:"term1 term*". Please let me know if it is an issue or it should not be used the way it is. Thanks, Modassar was (Author: modassar): Hi [~talli...@mitre.org] A prefix in double quotes is getting eaten up by the analyzer. E.g. f:"term*" is getting parsed to (+f:refasten). What I understand the wildcard queries are not analyzed. I noticed that the same works fine within a phrase. E.g. f:"term1 term*". Please let me know if it is an issue or it should not used the way it is. Thanks, Modassar > SpanQueryParser with recursion, analysis and syntax very similar to classic > QueryParser > --- > > Key: LUCENE-5205 > URL: https://issues.apache.org/jira/browse/LUCENE-5205 > Project: Lucene - Core > Issue Type: Improvement > Components: core/queryparser >Reporter: Tim Allison > Labels: patch > Attachments: LUCENE-5205-cleanup-tests.patch, > LUCENE-5205-date-pkg-prvt.patch, LUCENE-5205.patch.gz, LUCENE-5205.patch.gz, > LUCENE-5205_dateTestReInitPkgPrvt.patch, > LUCENE-5205_improve_stop_word_handling.patch, > LUCENE-5205_smallTestMods.patch, LUCENE_5205.patch, > SpanQueryParser_v1.patch.gz, patch.txt > > > This parser extends QueryParserBase and includes functionality from: > * Classic QueryParser: most of its syntax > * SurroundQueryParser: recursive parsing for "near" and "not" clauses. > * ComplexPhraseQueryParser: can handle "near" queries that include multiterms > (wildcard, fuzzy, regex, prefix), > * AnalyzingQueryParser: has an option to analyze multiterms. 
> At a high level, there's a first pass BooleanQuery/field parser and then a > span query parser handles all terminal nodes and phrases. > Same as classic syntax: > * term: test > * fuzzy: roam~0.8, roam~2 > * wildcard: te?t, test*, t*st > * regex: /\[mb\]oat/ > * phrase: "jakarta apache" > * phrase with slop: "jakarta apache"~3 > * default "or" clause: jakarta apache > * grouping "or" clause: (jakarta apache) > * boolean and +/-: (lucene OR apache) NOT jakarta; +lucene +apache -jakarta > * multiple fields: title:lucene author:hatcher > > Main additions in SpanQueryParser syntax vs. classic syntax: > * Can require "in order" for phrases with slop with the \~> operator: > "jakarta apache"\~>3 > * Can specify "not near": "fever bieber"!\~3,10 :: > find "fever" but not if "bieber" appears within 3 words before or 10 > words after it. > * Fully recursive phrasal queries with \[ and \]; as in: \[\[jakarta > apache\]~3 lucene\]\~>4 :: > find "jakarta" within 3 words of "apache", and that hit has to be within > four words before "lucene" > * Can also use \[\] for single level phrasal queries instead of " as in: > \[jakarta apache\] > * Can use "or grouping" clauses in phrasal queries: "apache (lucene solr)"\~3 > :: find "apache" and then either "lucene" or "solr" within three words. > * Can use multiterms in phrasal queries: "jakarta\~1 ap*che"\~2 > * Did I mention full recursion: \[\[jakarta\~1 ap*che\]\~2 (solr~ > /l\[ou\]\+\[cs\]\[en\]\+/)]\~10 :: Find something like "jakarta" within two > words of "ap*che" and that hit has to be within ten words of something like > "solr" or that "lucene" regex. > * Can require at least x number of hits at boolean level: "apache AND (lucene > solr tika)~2 > * Can use negative only query: -jakarta :: Find all docs that don't contain > "jakarta" > * Can use an edit distance > 2 for fuzzy query via SlowFuzzyQuery (beware of > potential performance issues!). 
> Trivial additions: > * Can specify prefix length in fuzzy queries: jakarta~1,2 (edit distance =1, > prefix =2) > * Can specifiy Optimal String Alignment (OSA) vs Levenshtein for distance > <=2: (jakarta~1 (OSA) vs jakarta~>1(Levenshtein) > This parser can be very useful for concordance tasks (see also LUCENE-5317 > and LUCENE-5318) and for analytical search. > Until LUCENE-2878 is closed, this might have a use for fans of SpanQuery. > Most of the documentation is in the javadoc for SpanQueryParser. > Any and all feedback is welcome. Thank you. > Until this is added to the Lucene project, I've added a standalone > lucene-addons repo (with jars compiled for the latest stable build of Lucene) > on [github|https://github.com/tballison/lucene-addons]. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (SOLR-8131) Make ManagedIndexSchemaFactory as the default in Solr
[ https://issues.apache.org/jira/browse/SOLR-8131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Thacker updated SOLR-8131: Attachment: SOLR-8131.patch Updated patch against trunk which takes Erik's feedback into account. With the current patch this will be the change in behaviour on trunk. - If no schema factory is mentioned in the solrconfig.xml file then ManagedSchemaFactory will be used by default if Lucene_Version > 6.0 else ClassicSchemaFactory will be used. - All shipped configs don't explicitly mention a schema factory. Hence it will default to ManagedSchemaFactory automatically in trunk. > Make ManagedIndexSchemaFactory as the default in Solr > - > > Key: SOLR-8131 > URL: https://issues.apache.org/jira/browse/SOLR-8131 > Project: Solr > Issue Type: Wish > Components: Data-driven Schema, Schema and Analysis >Reporter: Shalin Shekhar Mangar > Labels: difficulty-easy, impact-high > Fix For: 5.4, Trunk > > Attachments: SOLR-8131.patch, SOLR-8131.patch, SOLR-8131.patch > > > The techproducts and other examples shipped with Solr all use the > ClassicIndexSchemaFactory which disables all Schema APIs which need to modify > schema. It'd be nice to be able to support both read/write schema APIs > without needing to enable data-driven or schema-less mode. > I propose to change all 5.x examples to explicitly use > ManagedIndexSchemaFactory and to enable ManagedIndexSchemaFactory by default > in trunk (6.x). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
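The behavior change described above hinges on whether solrconfig.xml names a schema factory explicitly. A minimal sketch of pinning the factory so the new default does not apply (the managed-schema variant is shown commented out for comparison):

```xml
<!-- Keep the classic, hand-edited schema.xml regardless of the default: -->
<schemaFactory class="ClassicIndexSchemaFactory"/>

<!-- Or opt in to the managed schema explicitly:
<schemaFactory class="ManagedIndexSchemaFactory">
  <bool name="mutable">true</bool>
  <str name="managedSchemaResourceName">managed-schema</str>
</schemaFactory>
-->
```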
[jira] [Comment Edited] (LUCENE-5205) SpanQueryParser with recursion, analysis and syntax very similar to classic QueryParser
[ https://issues.apache.org/jira/browse/LUCENE-5205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15037358#comment-15037358 ] Modassar Ather edited comment on LUCENE-5205 at 12/3/15 6:43 AM: - Hi [~talli...@mitre.org] A wildcard (*) in double quotes is getting eaten up by the analyzer. E.g. f:"term*" is getting parsed to (+f:term). As I understand it, wildcard queries are not analyzed. I noticed that the same works fine within a phrase, e.g. f:"term1 term*". Please let me know if this is an issue or whether it should not be used the way it is in the example, i.e. a single wildcard term within double quotes. Thanks, Modassar was (Author: modassar): Hi [~talli...@mitre.org] A prefix in double quotes is getting eaten up by the analyzer. E.g. f:"term*" is getting parsed to (+f:term). As I understand it, wildcard queries are not analyzed. I noticed that the same works fine within a phrase, e.g. f:"term1 term*". Please let me know if this is an issue or whether it should not be used the way it is. Thanks, Modassar > SpanQueryParser with recursion, analysis and syntax very similar to classic > QueryParser > --- > > Key: LUCENE-5205 > URL: https://issues.apache.org/jira/browse/LUCENE-5205 > Project: Lucene - Core > Issue Type: Improvement > Components: core/queryparser >Reporter: Tim Allison > Labels: patch > Attachments: LUCENE-5205-cleanup-tests.patch, > LUCENE-5205-date-pkg-prvt.patch, LUCENE-5205.patch.gz, LUCENE-5205.patch.gz, > LUCENE-5205_dateTestReInitPkgPrvt.patch, > LUCENE-5205_improve_stop_word_handling.patch, > LUCENE-5205_smallTestMods.patch, LUCENE_5205.patch, > SpanQueryParser_v1.patch.gz, patch.txt > > > This parser extends QueryParserBase and includes functionality from: > * Classic QueryParser: most of its syntax > * SurroundQueryParser: recursive parsing for "near" and "not" clauses. 
> * ComplexPhraseQueryParser: can handle "near" queries that include multiterms > (wildcard, fuzzy, regex, prefix), > * AnalyzingQueryParser: has an option to analyze multiterms. > At a high level, there's a first pass BooleanQuery/field parser and then a > span query parser handles all terminal nodes and phrases. > Same as classic syntax: > * term: test > * fuzzy: roam~0.8, roam~2 > * wildcard: te?t, test*, t*st > * regex: /\[mb\]oat/ > * phrase: "jakarta apache" > * phrase with slop: "jakarta apache"~3 > * default "or" clause: jakarta apache > * grouping "or" clause: (jakarta apache) > * boolean and +/-: (lucene OR apache) NOT jakarta; +lucene +apache -jakarta > * multiple fields: title:lucene author:hatcher > > Main additions in SpanQueryParser syntax vs. classic syntax: > * Can require "in order" for phrases with slop with the \~> operator: > "jakarta apache"\~>3 > * Can specify "not near": "fever bieber"!\~3,10 :: > find "fever" but not if "bieber" appears within 3 words before or 10 > words after it. > * Fully recursive phrasal queries with \[ and \]; as in: \[\[jakarta > apache\]~3 lucene\]\~>4 :: > find "jakarta" within 3 words of "apache", and that hit has to be within > four words before "lucene" > * Can also use \[\] for single level phrasal queries instead of " as in: > \[jakarta apache\] > * Can use "or grouping" clauses in phrasal queries: "apache (lucene solr)"\~3 > :: find "apache" and then either "lucene" or "solr" within three words. > * Can use multiterms in phrasal queries: "jakarta\~1 ap*che"\~2 > * Did I mention full recursion: \[\[jakarta\~1 ap*che\]\~2 (solr~ > /l\[ou\]\+\[cs\]\[en\]\+/)]\~10 :: Find something like "jakarta" within two > words of "ap*che" and that hit has to be within ten words of something like > "solr" or that "lucene" regex. 
> * Can require at least x number of hits at boolean level: "apache AND (lucene > solr tika)~2 > * Can use negative only query: -jakarta :: Find all docs that don't contain > "jakarta" > * Can use an edit distance > 2 for fuzzy query via SlowFuzzyQuery (beware of > potential performance issues!). > Trivial additions: > * Can specify prefix length in fuzzy queries: jakarta~1,2 (edit distance =1, > prefix =2) > * Can specify Optimal String Alignment (OSA) vs Levenshtein for distance > <=2: (jakarta~1 (OSA) vs jakarta~>1 (Levenshtein)) > This parser can be very useful for concordance tasks (see also LUCENE-5317 > and LUCENE-5318) and for analytical search. > Until LUCENE-2878 is closed, this might have a use for fans of SpanQuery. > Most of the documentation is in the javadoc for SpanQueryParser. > Any and all feedback is welcome. Thank you. > Until this is added to the Lucene project, I've added a standalone > lucene-addons repo (with jars compiled for the latest stable build of Lucene) > on [github|https://github.com/tballison/lucene-addons]. --
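[Editor's note] The OSA-vs-Levenshtein option in the description above hinges on how adjacent transpositions are counted. A rough Python sketch of the two distances (not the parser's actual implementation, which relies on Lucene's automaton-based fuzzy matching):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic edit distance: insertions, deletions, substitutions each cost 1."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]


def osa(a: str, b: str) -> int:
    """Optimal String Alignment: Levenshtein plus adjacent transposition at cost 1."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
            # Count swapping two adjacent characters as a single edit.
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)
    return d[m][n]
```

For example, "jakarta" vs "jakarat" (last two letters swapped) is distance 2 under Levenshtein but 1 under OSA, so `jakarta~1` with OSA would match "jakarat" while `jakarta~>1` (Levenshtein) would not.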
[jira] [Comment Edited] (LUCENE-5205) SpanQueryParser with recursion, analysis and syntax very similar to classic QueryParser
[ https://issues.apache.org/jira/browse/LUCENE-5205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15037358#comment-15037358 ] Modassar Ather edited comment on LUCENE-5205 at 12/3/15 6:44 AM: - Hi [~talli...@mitre.org] A wildcard * in double quotes is getting eaten up by the analyzer. E.g. f:"term*" is getting parsed to (+f:term). As I understand it, wildcard queries are not analyzed. I noticed that the same works fine within a phrase, e.g. f:"term1 term*". Please let me know if this is an issue or whether it should not be used the way it is in the example, i.e. a single wildcard term within double quotes. Thanks, Modassar was (Author: modassar): Hi [~talli...@mitre.org] A wildcard (*) in double quotes is getting eaten up by the analyzer. E.g. f:"term*" is getting parsed to (+f:term). As I understand it, wildcard queries are not analyzed. I noticed that the same works fine within a phrase, e.g. f:"term1 term*". Please let me know if this is an issue or whether it should not be used the way it is in the example, i.e. a single wildcard term within double quotes. Thanks, Modassar > SpanQueryParser with recursion, analysis and syntax very similar to classic > QueryParser > --- > > Key: LUCENE-5205 > URL: https://issues.apache.org/jira/browse/LUCENE-5205 > Project: Lucene - Core > Issue Type: Improvement > Components: core/queryparser >Reporter: Tim Allison > Labels: patch > Attachments: LUCENE-5205-cleanup-tests.patch, > LUCENE-5205-date-pkg-prvt.patch, LUCENE-5205.patch.gz, LUCENE-5205.patch.gz, > LUCENE-5205_dateTestReInitPkgPrvt.patch, > LUCENE-5205_improve_stop_word_handling.patch, > LUCENE-5205_smallTestMods.patch, LUCENE_5205.patch, > SpanQueryParser_v1.patch.gz, patch.txt > > > This parser extends QueryParserBase and includes functionality from: > * Classic QueryParser: most of its syntax > * SurroundQueryParser: recursive parsing for "near" and "not" clauses. 
> * ComplexPhraseQueryParser: can handle "near" queries that include multiterms > (wildcard, fuzzy, regex, prefix), > * AnalyzingQueryParser: has an option to analyze multiterms. > At a high level, there's a first pass BooleanQuery/field parser and then a > span query parser handles all terminal nodes and phrases. > Same as classic syntax: > * term: test > * fuzzy: roam~0.8, roam~2 > * wildcard: te?t, test*, t*st > * regex: /\[mb\]oat/ > * phrase: "jakarta apache" > * phrase with slop: "jakarta apache"~3 > * default "or" clause: jakarta apache > * grouping "or" clause: (jakarta apache) > * boolean and +/-: (lucene OR apache) NOT jakarta; +lucene +apache -jakarta > * multiple fields: title:lucene author:hatcher > > Main additions in SpanQueryParser syntax vs. classic syntax: > * Can require "in order" for phrases with slop with the \~> operator: > "jakarta apache"\~>3 > * Can specify "not near": "fever bieber"!\~3,10 :: > find "fever" but not if "bieber" appears within 3 words before or 10 > words after it. > * Fully recursive phrasal queries with \[ and \]; as in: \[\[jakarta > apache\]~3 lucene\]\~>4 :: > find "jakarta" within 3 words of "apache", and that hit has to be within > four words before "lucene" > * Can also use \[\] for single level phrasal queries instead of " as in: > \[jakarta apache\] > * Can use "or grouping" clauses in phrasal queries: "apache (lucene solr)"\~3 > :: find "apache" and then either "lucene" or "solr" within three words. > * Can use multiterms in phrasal queries: "jakarta\~1 ap*che"\~2 > * Did I mention full recursion: \[\[jakarta\~1 ap*che\]\~2 (solr~ > /l\[ou\]\+\[cs\]\[en\]\+/)]\~10 :: Find something like "jakarta" within two > words of "ap*che" and that hit has to be within ten words of something like > "solr" or that "lucene" regex. 
> * Can require at least x number of hits at boolean level: "apache AND (lucene > solr tika)~2 > * Can use negative only query: -jakarta :: Find all docs that don't contain > "jakarta" > * Can use an edit distance > 2 for fuzzy query via SlowFuzzyQuery (beware of > potential performance issues!). > Trivial additions: > * Can specify prefix length in fuzzy queries: jakarta~1,2 (edit distance =1, > prefix =2) > * Can specify Optimal String Alignment (OSA) vs Levenshtein for distance > <=2: (jakarta~1 (OSA) vs jakarta~>1 (Levenshtein)) > This parser can be very useful for concordance tasks (see also LUCENE-5317 > and LUCENE-5318) and for analytical search. > Until LUCENE-2878 is closed, this might have a use for fans of SpanQuery. > Most of the documentation is in the javadoc for SpanQueryParser. > Any and all feedback is welcome. Thank you. > Until this is added to the Lucene project, I've added a standalone > lucene-addons repo (with jars compiled for the latest stable build of Lucene)
[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_66) - Build # 15099 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15099/ Java: 32bit/jdk1.8.0_66 -server -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test Error Message: Error from server at http://127.0.0.1:33148/awholynewcollection_0: Expected mime type application/octet-stream but got text/html.Error 500HTTP ERROR: 500 Problem accessing /awholynewcollection_0/select. Reason: {msg=Error trying to proxy request for url: http://127.0.0.1:38731/awholynewcollection_0/select,trace=org.apache.solr.common.SolrException: Error trying to proxy request for url: http://127.0.0.1:38731/awholynewcollection_0/select at org.apache.solr.servlet.HttpSolrCall.remoteQuery(HttpSolrCall.java:591) at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:441) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668) at org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:111) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668) at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:45) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:224) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1158) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1090) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) 
at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:437) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:119) at org.eclipse.jetty.server.Server.handle(Server.java:517) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:242) at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:261) at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95) at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:75) at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:213) at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:147) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654) at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572) at java.lang.Thread.run(Thread.java:745) Caused by: org.apache.http.conn.ConnectionPoolTimeoutException: Timeout waiting for connection from pool at org.apache.http.impl.conn.PoolingClientConnectionManager.leaseConnection(PoolingClientConnectionManager.java:226) at org.apache.http.impl.conn.PoolingClientConnectionManager$1.getConnection(PoolingClientConnectionManager.java:195) at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:423) at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55) at org.apache.solr.servlet.HttpSolrCall.remoteQuery(HttpSolrCall.java:558) ... 
28 more ,code=500} http://eclipse.org/jetty;>Powered by Jetty:// 9.3.6.v20151106 Stack Trace: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:33148/awholynewcollection_0: Expected mime type application/octet-stream but got text/html. Error 500 HTTP ERROR: 500 Problem accessing /awholynewcollection_0/select. Reason: {msg=Error trying to proxy request for url: http://127.0.0.1:38731/awholynewcollection_0/select,trace=org.apache.solr.common.SolrException: Error trying to proxy request for url: http://127.0.0.1:38731/awholynewcollection_0/select at org.apache.solr.servlet.HttpSolrCall.remoteQuery(HttpSolrCall.java:591) at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:441) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
[jira] [Commented] (SOLR-8131) Make ManagedIndexSchemaFactory as the default in Solr
[ https://issues.apache.org/jira/browse/SOLR-8131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15037376#comment-15037376 ] Varun Thacker commented on SOLR-8131: - Hi Erik, bq. ShowFileRequestHandlerTest.java: maybe use the DEFAULT_MANAGED_SCHEMA_RESOURCE_NAME constant instead on "QueryRequest request = new QueryRequest(params("file","managed-schema"));" I didn't make this change, the argument being that we should leave hardcoded constants in our tests: that way, if someone in the future accidentally changed DEFAULT_MANAGED_SCHEMA_RESOURCE_NAME, the tests would at least catch it as a break. bq. explicitly in all the shipped configs and just leave it out entirely. (for one, no one really needs to change the resource name) Incorporated in the patch. > Make ManagedIndexSchemaFactory as the default in Solr > - > > Key: SOLR-8131 > URL: https://issues.apache.org/jira/browse/SOLR-8131 > Project: Solr > Issue Type: Wish > Components: Data-driven Schema, Schema and Analysis >Reporter: Shalin Shekhar Mangar > Labels: difficulty-easy, impact-high > Fix For: 5.4, Trunk > > Attachments: SOLR-8131.patch, SOLR-8131.patch, SOLR-8131.patch > > > The techproducts and other examples shipped with Solr all use the > ClassicIndexSchemaFactory which disables all Schema APIs which need to modify > schema. It'd be nice to be able to support both read/write schema APIs > without needing to enable data-driven or schema-less mode. > I propose to change all 5.x examples to explicitly use > ManagedIndexSchemaFactory and to enable ManagedIndexSchemaFactory by default > in trunk (6.x). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
RE: [JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_66) - Build # 5439 - Failure!
: Date: Wed, 2 Dec 2015 16:20:52 -0600 : From: "Dyer, James": Reply-To: dev@lucene.apache.org : To: "dev@lucene.apache.org" : Subject: RE: [JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_66) - Build # : 5439 - Failure! : : I'm looking at this failure. I cannot reproduce this on Linux using: : : ant test -Dtests.class="*.SpellCheckComponentTest" -Dtests.seed=110D525A21D16B1:8944EAFF0CE17B49 James - not sure if you noticed but assertJQ logs the entire response when an assertion fails... [junit4] 2> expected=/spellcheck=={'suggestions':['documemt',{'numFound':1,'startOffset':0,'endOffset':8,'suggestion':['document']}]} [junit4] 2> response = { [junit4] 2> "responseHeader":{ [junit4] 2> "status":0, [junit4] 2> "QTime":0}, [junit4] 2> "response":{"numFound":0,"start":0,"docs":[] [junit4] 2> }, [junit4] 2> "spellcheck":{ [junit4] 2> "suggestions":[]}} [junit4] 2> [junit4] 2> request = q=documemt=spellCheckCompRH=true=xml ...I haven't dug into the code being tested, or even the logs from jenkins, but if it's rebuilding any spellcheck structures in a newSearcher listener, then maybe that's slowing down opening the searcher so that when the test is run the docs (and suggestions) aren't always visible yet? maybe try adding a waitSearcher=true to the commit() calls? or switch to some explicitly blocking client call to generate those spellcheck structures/indexes? : Tomorrow I will try this on Windows. : : James Dyer : Ingram Content Group : : -Original Message- : From: Policeman Jenkins Server [mailto:jenk...@thetaphi.de] : Sent: Wednesday, December 02, 2015 12:53 PM : To: ans...@apache.org; mikemcc...@apache.org; sha...@apache.org; romseyg...@apache.org; dev@lucene.apache.org : Subject: [JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_66) - Build # 5439 - Failure! : : Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5439/ : Java: 64bit/jdk1.8.0_66 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC : : 1 tests failed. 
: FAILED: org.apache.solr.handler.component.SpellCheckComponentTest.test : : Error Message: : List size mismatch @ spellcheck/suggestions : : Stack Trace: : java.lang.RuntimeException: List size mismatch @ spellcheck/suggestions : at __randomizedtesting.SeedInfo.seed([110D525A21D16B1:8944EAFF0CE17B49]:0) : at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:837) : at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:784) : at org.apache.solr.handler.component.SpellCheckComponentTest.test(SpellCheckComponentTest.java:96) : at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) : at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) : at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) : at java.lang.reflect.Method.invoke(Method.java:497) : at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) : at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) : at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) : at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) : at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) : at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) : at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) : at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) : at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) : at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) : at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) : at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) : at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) : at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) : at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) : at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) : at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) : at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
[jira] [Commented] (SOLR-8353) Support regex for skipping license checksums
[ https://issues.apache.org/jira/browse/SOLR-8353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15036870#comment-15036870 ] Gregory Chanan commented on SOLR-8353: -- Hmm, when I don't specify a regex it gets passed to the Ant task as the substitution, e.g. "${skipChecksumRegex}" > Support regex for skipping license checksums > > > Key: SOLR-8353 > URL: https://issues.apache.org/jira/browse/SOLR-8353 > Project: Solr > Issue Type: Improvement > Components: Build >Reporter: Gregory Chanan >Assignee: Gregory Chanan > Attachments: SOLR-8353.patch, SOLR-8353.patch > > > It would be useful to be able to specify a regex for license checksums to > skip in the build. Currently there are only two supported values: > 1) skipChecksum (i.e. regex=*) > 2) skipSnapshotsChecksum (i.e. regex=*-SNAPSHOT-*) > A regex would be more flexible and allow testing the entire build while > skipping a more limited set of checksums, e.g.: > a) an individual library (i.e. regex=joda-time*) > b) a suite of libraries (i.e. regex=hadoop*) > We could make skipChecksum and skipSnapshotsChecksum continue to work for > backwards compatibility reasons. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
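[Editor's note] A sketch of how the proposed regex check might behave, including a guard for the Ant quirk noted in the comment above: Ant leaves an unset property as the literal `${skipChecksumRegex}` text. All names here are illustrative, not the actual build code:

```python
import re

def should_skip_checksum(jar_name, regex_prop):
    """Decide whether to skip license-checksum validation for one jar.

    Ant substitutes unset properties as the literal "${...}" string,
    so treat that (and None) as "no regex supplied".
    """
    if regex_prop is None or regex_prop.startswith("${"):
        return False
    return re.fullmatch(regex_prop, jar_name) is not None

# The two legacy flags map naturally onto regexes (hypothetical mapping):
SKIP_CHECKSUM_REGEX = ".*"               # skipChecksum: match everything
SKIP_SNAPSHOTS_REGEX = ".*-SNAPSHOT.*"   # skipSnapshotsChecksum

print(should_skip_checksum("hadoop-common-2.7.1.jar", "hadoop.*"))        # True
print(should_skip_checksum("joda-time-2.2.jar", "hadoop.*"))              # False
print(should_skip_checksum("joda-time-2.2.jar", "${skipChecksumRegex}"))  # False
```

The guard in the first branch is one way to address the comment's observation that an unspecified property reaches the task as the raw `${...}` placeholder rather than an empty string.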
[jira] [Updated] (LUCENE-6908) TestGeoUtils.testGeoRelations is buggy with irregular rectangles
[ https://issues.apache.org/jira/browse/LUCENE-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicholas Knize updated LUCENE-6908: --- Attachment: LUCENE-6908.patch Thanks [~mikemccand]. In {{isClosestPointOnRectWithinRange}} it was still using {{SloppyMath.haversin}} where it should have been using the Sinnott haversine. Looks like it was caused by a premature diff from my benchmark workspace. A quick interesting benchmark re: distance methods. Using 2M iterations I benchmarked average computation time, average spatial error, and maximum spatial error across 5 different geospatial distance methods: * Vincenty * Karney - Modified Vincenty that uses Newton's series expansion to fix problems w/ antipodal points * HaverECF - Same as {{SloppyMath.haversin}} but uses ECEF to accurately compute the semi-major axis at the given latitude * Sinnott's Haversine - Implementation from the original haversine publication * SloppyMath || Distance Method || Avg Computation (ns) || Avg Error (%) || Max Error (%) || | Vincenty | 1286.5529689998828 | 8.4561794778802E-11 | 7.2565793775802E-10 | | Karney | 31770.47920646 | 8.4561794778802E-11 | 7.2565793775802E-10 | | HaverECF | 717.113985152 | 0.18409042301798453 | 0.6681179192384695 | | SloppyMath.haversin | 159.309524995 | 0.22594450803222424 | 0.6539314586426048 | | Sinnott Haversine | 146.8123669738 | 0.18158435835918563 | 0.4931242857748373 | I need to run some better descriptive statistics before drawing any conclusions. At the moment, the discrepancy between {{SloppyMath.haversin}} and Sinnott's implementation is consistent with the error described in the original publication (which is where the tolerance of 0.5% originated). 
More to come > TestGeoUtils.testGeoRelations is buggy with irregular rectangles > > > Key: LUCENE-6908 > URL: https://issues.apache.org/jira/browse/LUCENE-6908 > Project: Lucene - Core > Issue Type: Bug >Reporter: Nicholas Knize > Attachments: LUCENE-6908.patch, LUCENE-6908.patch, LUCENE-6908.patch, > LUCENE-6908.patch > > > The {{.testGeoRelations}} method doesn't exactly test the behavior of > GeoPoint*Query as its using the BKD split technique (instead of quad cell > division) to divide the space on each pass. For "large" distance queries this > can create a lot of irregular rectangles producing large radial distortion > error when using the cartesian approximation methods provided by > {{GeoUtils}}. This issue improves the accuracy of GeoUtils cartesian > approximation methods on irregular rectangles without having to cut over to > an expensive oblate geometry approach. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
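[Editor's note] For reference, the Sinnott haversine formula benchmarked above can be sketched in Python. This is a plain spherical-earth implementation for illustration; the mean radius of 6371 km is an assumption here, not necessarily the value Lucene uses, and the spherical model is itself the source of the ~0.2-0.5% error reported in the table:

```python
from math import radians, sin, cos, asin, sqrt

def sinnott_haversine(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance in km via Sinnott's haversine formula.

    Numerically stable for small separations (unlike the spherical law of
    cosines), but it models the earth as a sphere, which distorts distances
    relative to the WGS84 ellipsoid.
    """
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2.0) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2.0) ** 2
    # min() clamps against floating-point overshoot past 1.0
    return 2.0 * radius_km * asin(min(1.0, sqrt(a)))

# One degree of longitude along the equator is roughly 111.2 km.
print(round(sinnott_haversine(0.0, 0.0, 0.0, 1.0), 1))  # 111.2
```

Vincenty/Karney trade this simplicity for ellipsoidal accuracy, which is why their error column is ~11 orders of magnitude smaller at 10-200x the computation cost.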