Re: Looks like I broke Solr 5.2.0 - do we need a 5.2.1?
I’m +1 to this. We can consider changing our mind once 5.3 is knocking on the door if there are serious problems.

On Tue, Jun 9, 2015 at 2:36 PM Erick Erickson erickerick...@gmail.com wrote:

Upayavira: I'm a little reluctant to try to port the simpler patch to 5.2.1 as this is all new functionality. I can be argued into it, though. It seems that the goal here is to get mileage out of the Angular JS port before making it the default. What do you (and others) think about changing 5.3 to use Angular JS by default for the admin UI? That'll drive lots of usage and help us solidify it for official release without trying to shoehorn a patch into 5.2.1 that's not critical to normal Solr/Lucene functioning.

On Tue, Jun 9, 2015 at 9:42 AM, david.w.smi...@gmail.com wrote: Yeah, I’ll port that too.

On Tue, Jun 9, 2015 at 12:36 PM Karl Wright daddy...@gmail.com wrote: There may be a prerequisite ticket fix that needs pulling up too?

r1683532 | dsmiley | 2015-06-04 08:32:45 -0400 (Thu, 04 Jun 2015) | 1 line
LUCENE-6520: Geo3D GeoPath.done() would throw an NPE if adjacent path segments were co-linear

Karl

On Tue, Jun 9, 2015 at 12:30 PM, david.w.smi...@gmail.com wrote: LUCENE-6535 is another one.

On Tue, Jun 9, 2015 at 10:57 AM Shalin Shekhar Mangar shalinman...@gmail.com wrote: Thanks Steve!

On Tue, Jun 9, 2015 at 7:25 PM, Steve Rowe sar...@gmail.com wrote:

On Jun 9, 2015, at 8:57 AM, Shalin Shekhar Mangar shalinman...@gmail.com wrote: Looks like there are several small fixes that need to be added. I'll cut an RC tomorrow morning India time so that we have enough time to back-port these items. I'll also set up a local Jenkins build for 5.2.

I’ll re-enable the ASF Jenkins 5.2 jobs now.

Steve
www.lucidworks.com

- To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org

-- Regards, Shalin Shekhar Mangar.
[jira] [Commented] (LUCENE-6536) Migrate HDFSDirectory from solr to lucene-hadoop
[ https://issues.apache.org/jira/browse/LUCENE-6536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579400#comment-14579400 ]

Greg Bowyer commented on LUCENE-6536:

Oh wow, the Blur store might be exactly what I am looking for.

Migrate HDFSDirectory from solr to lucene-hadoop
Key: LUCENE-6536
URL: https://issues.apache.org/jira/browse/LUCENE-6536
Project: Lucene - Core
Issue Type: Improvement
Reporter: Greg Bowyer
Labels: hadoop, hdfs, lucene, solr
Attachments: LUCENE-6536.patch

I am currently working on a search engine that is throughput-orientated and works entirely in Apache Spark. As part of this, I need a directory implementation that can operate on HDFS directly. This got me thinking: can I take the one that was worked on so hard for solr hadoop? As such I migrated the HDFS and blockcache directories out to a lucene-hadoop module. Having done this work, I am not sure if it is actually a good change; it feels a bit messy, and I don't like how the Metrics class gets extended and abused. Thoughts, anyone?

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (LUCENE-6520) Geo3D GeoPath: co-linear end-points result in NPE
[ https://issues.apache.org/jira/browse/LUCENE-6520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579334#comment-14579334 ] Karl Wright commented on LUCENE-6520: - The fix for LUCENE-6535 makes no sense without this ticket. I think basically it would be hard to do this without going the whole way to the new code base. Geo3D GeoPath: co-linear end-points result in NPE - Key: LUCENE-6520 URL: https://issues.apache.org/jira/browse/LUCENE-6520 Project: Lucene - Core Issue Type: Bug Components: modules/spatial Affects Versions: 5.2 Reporter: David Smiley Assignee: David Smiley Fix For: 5.3 Attachments: LUCENE-6520.patch FAILED: org.apache.lucene.spatial.spatial4j.Geo3dRptTest.testOperations {#2 seed=[4AB0FA45EF43F0C3:2240DF3E6EDF83C]} {noformat} Stack Trace: java.lang.NullPointerException at __randomizedtesting.SeedInfo.seed([4AB0FA45EF43F0C3:2240DF3E6EDF83C]:0) at org.apache.lucene.spatial.spatial4j.geo3d.GeoPath$SegmentEndpoint.init(GeoPath.java:480) at org.apache.lucene.spatial.spatial4j.geo3d.GeoPath.done(GeoPath.java:121) at org.apache.lucene.spatial.spatial4j.Geo3dRptTest.randomQueryShape(Geo3dRptTest.java:195) at org.apache.lucene.spatial.prefix.RandomSpatialOpStrategyTestCase.testOperationRandomShapes(RandomSpatialOpStrategyTestCase.java:53) at org.apache.lucene.spatial.spatial4j.Geo3dRptTest.testOperations(Geo3dRptTest.java:100) {noformat} [~daddywri] says: bq. This is happening because the endpoints that define two path segments are co-linear. There's a check for that too, but clearly it's not firing properly in this case for some reason. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
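The "co-linear end-points" condition described in the ticket above can be pictured with a standard cross-product test. The sketch below is illustrative only, not Lucene's actual Geo3D code: three 3-D points are co-linear when the two segment vectors they define are (near-)parallel, i.e. their cross product is (near-)zero, and that degenerate geometry is what a guard in code like GeoPath.done() has to detect before constructing a plane joining adjacent path segments.

```java
// Illustrative sketch only -- NOT the actual Lucene Geo3D implementation.
public class CoLinearCheck {

    /** Returns true if a, b, c lie (within eps) on one straight line. */
    static boolean coLinear(double[] a, double[] b, double[] c, double eps) {
        // Vectors along the two adjacent segments that share endpoint b.
        double ux = b[0] - a[0], uy = b[1] - a[1], uz = b[2] - a[2];
        double vx = c[0] - b[0], vy = c[1] - b[1], vz = c[2] - b[2];
        // Cross product u x v; its magnitude is ~0 iff the segments are parallel.
        double cx = uy * vz - uz * vy;
        double cy = uz * vx - ux * vz;
        double cz = ux * vy - uy * vx;
        return cx * cx + cy * cy + cz * cz < eps * eps;
    }

    public static void main(String[] args) {
        double[] a = {0, 0, 0}, b = {1, 1, 1}, c = {2, 2, 2}; // all on one line
        double[] d = {1, 0, 0};                               // off that line
        System.out.println(coLinear(a, b, c, 1e-12)); // true
        System.out.println(coLinear(a, d, c, 1e-12)); // false
    }
}
```

A plane through three co-linear points is underdetermined, which is consistent with the NPE surfacing inside SegmentEndpoint construction rather than as an explicit error.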
Re: Looks like I broke Solr 5.2.0 - do we need a 5.2.1?
I'd say, let's get this release out - I'm happy with either patch getting in. Then, let's see the flow of bug reports we get. That'll help us understand how stable (or otherwise) it is, and thus whether we should be shooting for 5.3 or 5.4.

Upayavira

On Tue, Jun 9, 2015, at 07:07 PM, Erick Erickson wrote:

Upayavira: I'm a little reluctant to try to port the simpler patch to 5.2.1 as this is all new functionality. I can be argued into it, though. It seems that the goal here is to get mileage out of the Angular JS port before making it the default. What do you (and others) think about changing 5.3 to use Angular JS by default for the admin UI? That'll drive lots of usage and help us solidify it for official release without trying to shoehorn a patch into 5.2.1 that's not critical to normal Solr/Lucene functioning.

On Tue, Jun 9, 2015 at 9:42 AM, david.w.smi...@gmail.com wrote: Yeah, I’ll port that too.

On Tue, Jun 9, 2015 at 12:36 PM Karl Wright daddy...@gmail.com wrote: There may be a prerequisite ticket fix that needs pulling up too?

r1683532 | dsmiley | 2015-06-04 08:32:45 -0400 (Thu, 04 Jun 2015) | 1 line
LUCENE-6520: Geo3D GeoPath.done() would throw an NPE if adjacent path segments were co-linear

Karl

On Tue, Jun 9, 2015 at 12:30 PM, david.w.smi...@gmail.com wrote: LUCENE-6535 is another one.

On Tue, Jun 9, 2015 at 10:57 AM Shalin Shekhar Mangar shalinman...@gmail.com wrote: Thanks Steve!

On Tue, Jun 9, 2015 at 7:25 PM, Steve Rowe sar...@gmail.com wrote:

On Jun 9, 2015, at 8:57 AM, Shalin Shekhar Mangar shalinman...@gmail.com wrote: Looks like there are several small fixes that need to be added. I'll cut an RC tomorrow morning India time so that we have enough time to back-port these items. I'll also set up a local Jenkins build for 5.2.

I’ll re-enable the ASF Jenkins 5.2 jobs now.
Steve
www.lucidworks.com

-- Regards, Shalin Shekhar Mangar.
[jira] [Commented] (LUCENE-5954) Store lucene version in segment_N
[ https://issues.apache.org/jira/browse/LUCENE-5954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579288#comment-14579288 ] Ryan Ernst commented on LUCENE-5954: +1 to the new patch. I think the inline comment about {{// use safe maps}} on the {{VERSION_53}} constant can be removed? Seems like a copy/paste issue from the previous comment? Store lucene version in segment_N - Key: LUCENE-5954 URL: https://issues.apache.org/jira/browse/LUCENE-5954 Project: Lucene - Core Issue Type: Bug Reporter: Ryan Ernst Assignee: Michael McCandless Fix For: Trunk, 5.3 Attachments: LUCENE-5954.patch, LUCENE-5954.patch It would be nice to have the version of lucene that wrote segments_N, so that we can use this to determine which major version an index was written with (for upgrading across major versions). I think this could be squeezed in just after the segments_N header. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
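The ticket's core idea -- a version stamp "squeezed in just after" the file header so a reader can tell which major version wrote the file before decoding anything else -- can be sketched with plain java.io streams. The magic constant and layout below are invented for illustration; this is NOT the real segments_N on-disk format.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

// Illustrative sketch of "version stamp right after the header" -- the magic
// value and field order are assumptions for the example, not Lucene's format.
public class VersionStampSketch {

    static final int MAGIC = 0x53454731; // "SEG1" in ASCII; invented for the demo

    static byte[] write(String version, byte[] payload) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bytes);
            out.writeInt(MAGIC);     // header
            out.writeUTF(version);   // version stamp squeezed in just after it
            out.write(payload);      // everything else follows unchanged
            out.flush();
            return bytes.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    static String readVersion(byte[] data) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
            if (in.readInt() != MAGIC) throw new IOException("not a segments file");
            // The version is available before any payload is touched, so an
            // upgrade decision can be made up front.
            return in.readUTF();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Placing the stamp ahead of the payload is what makes cross-major-version detection cheap: old payload encodings never need to be parsed just to learn they are old.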
[jira] [Commented] (LUCENE-6520) Geo3D GeoPath: co-linear end-points result in NPE
[ https://issues.apache.org/jira/browse/LUCENE-6520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579304#comment-14579304 ]

David Smiley commented on LUCENE-6520:

Woops; I started back-porting this but then realized that this bug is related to the WGS84 feature (BNGS-6487), which is in 5.3. I'll revert my commits to CHANGES.txt.

Geo3D GeoPath: co-linear end-points result in NPE
Key: LUCENE-6520
URL: https://issues.apache.org/jira/browse/LUCENE-6520
Project: Lucene - Core
Issue Type: Bug
Components: modules/spatial
Affects Versions: 5.2
Reporter: David Smiley
Assignee: David Smiley
Fix For: 5.3
Attachments: LUCENE-6520.patch

FAILED: org.apache.lucene.spatial.spatial4j.Geo3dRptTest.testOperations {#2 seed=[4AB0FA45EF43F0C3:2240DF3E6EDF83C]}

{noformat}
Stack Trace:
java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([4AB0FA45EF43F0C3:2240DF3E6EDF83C]:0)
at org.apache.lucene.spatial.spatial4j.geo3d.GeoPath$SegmentEndpoint.init(GeoPath.java:480)
at org.apache.lucene.spatial.spatial4j.geo3d.GeoPath.done(GeoPath.java:121)
at org.apache.lucene.spatial.spatial4j.Geo3dRptTest.randomQueryShape(Geo3dRptTest.java:195)
at org.apache.lucene.spatial.prefix.RandomSpatialOpStrategyTestCase.testOperationRandomShapes(RandomSpatialOpStrategyTestCase.java:53)
at org.apache.lucene.spatial.spatial4j.Geo3dRptTest.testOperations(Geo3dRptTest.java:100)
{noformat}

[~daddywri] says:
bq. This is happening because the endpoints that define two path segments are co-linear. There's a check for that too, but clearly it's not firing properly in this case for some reason.
[jira] [Commented] (SOLR-7638) Angular UI cloud pane broken
[ https://issues.apache.org/jira/browse/SOLR-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579398#comment-14579398 ]

Erick Erickson commented on SOLR-7638:

OK, [~ehatcher], I'll leave it to you then.

Angular UI cloud pane broken
Key: SOLR-7638
URL: https://issues.apache.org/jira/browse/SOLR-7638
Project: Solr
Issue Type: Bug
Affects Versions: 5.2
Reporter: Upayavira
Priority: Minor
Attachments: SOLR-7638-simple.patch, SOLR-7638.patch

I suspect the backend behind the Cloud pane changed, meaning the Cloud tab in Angular doesn't work. A patch will come soon.
[jira] [Commented] (LUCENE-6520) Geo3D GeoPath: co-linear end-points result in NPE
[ https://issues.apache.org/jira/browse/LUCENE-6520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579269#comment-14579269 ] ASF subversion and git services commented on LUCENE-6520: - Commit 1684483 from [~dsmiley] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1684483 ] LUCENE-6520: back-port to 5.2.1 Geo3D GeoPath: co-linear end-points result in NPE - Key: LUCENE-6520 URL: https://issues.apache.org/jira/browse/LUCENE-6520 Project: Lucene - Core Issue Type: Bug Components: modules/spatial Affects Versions: 5.2 Reporter: David Smiley Assignee: David Smiley Fix For: 5.3 Attachments: LUCENE-6520.patch FAILED: org.apache.lucene.spatial.spatial4j.Geo3dRptTest.testOperations {#2 seed=[4AB0FA45EF43F0C3:2240DF3E6EDF83C]} {noformat} Stack Trace: java.lang.NullPointerException at __randomizedtesting.SeedInfo.seed([4AB0FA45EF43F0C3:2240DF3E6EDF83C]:0) at org.apache.lucene.spatial.spatial4j.geo3d.GeoPath$SegmentEndpoint.init(GeoPath.java:480) at org.apache.lucene.spatial.spatial4j.geo3d.GeoPath.done(GeoPath.java:121) at org.apache.lucene.spatial.spatial4j.Geo3dRptTest.randomQueryShape(Geo3dRptTest.java:195) at org.apache.lucene.spatial.prefix.RandomSpatialOpStrategyTestCase.testOperationRandomShapes(RandomSpatialOpStrategyTestCase.java:53) at org.apache.lucene.spatial.spatial4j.Geo3dRptTest.testOperations(Geo3dRptTest.java:100) {noformat} [~daddywri] says: bq. This is happening because the endpoints that define two path segments are co-linear. There's a check for that too, but clearly it's not firing properly in this case for some reason. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-6531) Make PhraseQuery immutable
[ https://issues.apache.org/jira/browse/LUCENE-6531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adrien Grand updated LUCENE-6531: - Attachment: (was: LUCENE-6531.patch) Make PhraseQuery immutable -- Key: LUCENE-6531 URL: https://issues.apache.org/jira/browse/LUCENE-6531 Project: Lucene - Core Issue Type: Improvement Reporter: Adrien Grand Assignee: Adrien Grand Priority: Minor Fix For: 6.0 Attachments: LUCENE-6531.patch, LUCENE-6531.patch Mutable queries are an issue for automatic filter caching since modifying a query after it has been put into the cache will corrupt the cache. We should make all queries immutable (up to the boost) to avoid this issue. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
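The caching hazard that motivates this ticket -- mutating a query after it has been put into a hash-based cache -- is easy to demonstrate. The MutableQuery class below is a hypothetical stand-in, not Lucene's PhraseQuery, but the failure mode is the same: once a cached key's hashCode changes, the entry is stranded in the wrong bucket and can never be found again, yet still occupies cache memory.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for a mutable query class -- not Lucene's PhraseQuery.
class MutableQuery {
    private String term;
    MutableQuery(String term) { this.term = term; }
    void setTerm(String term) { this.term = term; } // mutation after caching is the hazard
    @Override public int hashCode() { return term.hashCode(); }
    @Override public boolean equals(Object o) {
        return o instanceof MutableQuery && ((MutableQuery) o).term.equals(term);
    }
}

public class QueryCacheCorruption {
    /** Returns "hitForOldKey,hitForNewKey,cacheSize" after mutating a cached key. */
    static String demo() {
        Map<MutableQuery, String> cache = new HashMap<>();
        MutableQuery q = new MutableQuery("foo");
        cache.put(q, "cached result");

        q.setTerm("bar"); // the key already inside the cache just changed its hashCode

        // The entry sits in the bucket chosen by "foo"'s hash, but its equals()
        // now only matches "bar": neither lookup finds it, yet size() is still 1.
        boolean oldHit = cache.get(new MutableQuery("foo")) != null;
        boolean newHit = cache.get(new MutableQuery("bar")) != null;
        return oldHit + "," + newHit + "," + cache.size();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // false,false,1
    }
}
```

Making queries immutable (up to the boost, as the ticket says) removes this failure mode entirely, because a key's hashCode can never change after insertion.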
Re: BadApple Nightly?
don't have such persistent issues on non nightly runs. Is it possible to ignore specified tests as BadApples or AwaitsFix but only for nightly runs?

Sure, that's what test group filtering was added for. It's interesting that test-help doesn't show anything... ES has more verbose info:

You can also filter tests by certain annotations, i.e.:
* `@Slow` - tests that are known to take a long time to execute
* `@Nightly` - tests that only run in nightly builds (disabled by default)
* `@Integration` - integration tests
* `@Backwards` - backwards compatibility tests (disabled by default)
* `@AwaitsFix` - tests that are waiting for a bugfix (disabled by default)
* `@BadApple` - tests that are known to fail randomly (disabled by default)

Those annotation names can be combined into a filter expression like:

mvn test -Dtests.filter="@nightly and not @slow"

to run all nightly tests but not the ones that are slow. `tests.filter` supports the boolean operators `and, or, not` and grouping, i.e.:

---
mvn test -Dtests.filter="@nightly and not(@slow or @backwards)"
---

The same works for Lucene (-Dtests.filter=...), try it.

Dawid
Re: Looks like I broke Solr 5.2.0 - do we need a 5.2.1?
Upayavira: I'm a little reluctant to try to port the simpler patch to 5.2.1 as this is all new functionality. I can be argued into it, though. It seems that the goal here is to get mileage out of the Angular JS port before making it the default. What do you (and others) think about changing 5.3 to use Angular JS by default for the admin UI? That'll drive lots of usage and help us solidify it for official release without trying to shoehorn a patch into 5.2.1 that's not critical to normal Solr/Lucene functioning.

On Tue, Jun 9, 2015 at 9:42 AM, david.w.smi...@gmail.com wrote: Yeah, I’ll port that too.

On Tue, Jun 9, 2015 at 12:36 PM Karl Wright daddy...@gmail.com wrote: There may be a prerequisite ticket fix that needs pulling up too?

r1683532 | dsmiley | 2015-06-04 08:32:45 -0400 (Thu, 04 Jun 2015) | 1 line
LUCENE-6520: Geo3D GeoPath.done() would throw an NPE if adjacent path segments were co-linear

Karl

On Tue, Jun 9, 2015 at 12:30 PM, david.w.smi...@gmail.com wrote: LUCENE-6535 is another one.

On Tue, Jun 9, 2015 at 10:57 AM Shalin Shekhar Mangar shalinman...@gmail.com wrote: Thanks Steve!

On Tue, Jun 9, 2015 at 7:25 PM, Steve Rowe sar...@gmail.com wrote:

On Jun 9, 2015, at 8:57 AM, Shalin Shekhar Mangar shalinman...@gmail.com wrote: Looks like there are several small fixes that need to be added. I'll cut an RC tomorrow morning India time so that we have enough time to back-port these items. I'll also set up a local Jenkins build for 5.2.

I’ll re-enable the ASF Jenkins 5.2 jobs now.

Steve
www.lucidworks.com

-- Regards, Shalin Shekhar Mangar.
[jira] [Updated] (LUCENE-6529) NumericFields + SlowCompositeReaderWrapper + UninvertedReader + -Dtests.codec=random can results in incorrect SortedSetDocValues
[ https://issues.apache.org/jira/browse/LUCENE-6529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hoss Man updated LUCENE-6529:
Attachment: LUCENE-6529.patch

Updated patch that removes the DocTermOrds so it always uses OrdWrappedTermsEnum instead of conditionally using it based on the underlying reader. The patch still contains nocommits about increasing randomness in the tests -- I'm going to let my machine hammer on this a bit, then I plan to resolve those remaining test tweaks and commit to trunk later today.

NumericFields + SlowCompositeReaderWrapper + UninvertedReader + -Dtests.codec=random can results in incorrect SortedSetDocValues
Key: LUCENE-6529
URL: https://issues.apache.org/jira/browse/LUCENE-6529
Project: Lucene - Core
Issue Type: Bug
Reporter: Hoss Man
Attachments: LUCENE-6529.patch, LUCENE-6529.patch, LUCENE-6529.patch

Digging into SOLR-7631 and SOLR-7605 I became fairly confident that the only explanation of the behavior I was seeing was some sort of bug in either the randomized codec/postings-format or the UninvertedReader, one that was only evident when the two were combined and used on a multivalued numeric field using precision steps. But since I couldn't find any -Dtests.codec or -Dtests.postings.format options that would cause the bug 100% regardless of seed, I switched tactics and focused on reproducing the problem using UninvertedReader directly and checking the SortedSetDocValues.getValueCount().

I now have a test that fails frequently (and consistently for any seed I find), but only with -Dtests.codec=random -- override it with -Dtests.codec=default and everything works fine. (Based on the exhaustive testing I did in the linked issues, I suspect every named codec works fine, but I didn't re-do that testing here.) The failures only seem to happen when checking the SortedSetDocValues.getValueCount() of a SlowCompositeReaderWrapper around the UninvertedReader -- which suggests the root bug may actually be in SlowCompositeReaderWrapper? (But it still has some dependency on the random codec.)
[jira] [Updated] (LUCENE-6538) Improve per-segment diagnostics for IBM J9 JVM
[ https://issues.apache.org/jira/browse/LUCENE-6538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael McCandless updated LUCENE-6538: --- Attachment: LUCENE-6538.patch Here's a patch, I just blindly add those two sysprops, but I don't yet have a working J9 to confirm these produce better results than what we add to diagnostics now ... Improve per-segment diagnostics for IBM J9 JVM -- Key: LUCENE-6538 URL: https://issues.apache.org/jira/browse/LUCENE-6538 Project: Lucene - Core Issue Type: Bug Reporter: Michael McCandless Assignee: Michael McCandless Fix For: Trunk, 5.3 Attachments: LUCENE-6538.patch Spinoff from http://lucene.markmail.org/thread/dq4wioomu4o346ej where I noticed that the per-segment diagnostics (seen from CheckIndex) only report 1.7.0 as the JVM version, without any update level. Talking to [~rcmuir] it looks like we just need to add java.vm.version and java.runtime.version sysprops into the diagnostics. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-7512) SolrOutputFormat creates an invalid solr.xml in the solr home zip for MapReduceIndexerTool
[ https://issues.apache.org/jira/browse/SOLR-7512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-7512: -- Attachment: SOLR-7512.patch SolrOutputFormat creates an invalid solr.xml in the solr home zip for MapReduceIndexerTool -- Key: SOLR-7512 URL: https://issues.apache.org/jira/browse/SOLR-7512 Project: Solr Issue Type: Bug Components: contrib - MapReduce Affects Versions: 5.1 Reporter: Adam McElwee Assignee: Mark Miller Priority: Blocker Fix For: Trunk, 5.2.1 Attachments: SOLR-7512.patch, SOLR-7512.patch, SOLR-7512.patch, SOLR-7512.patch Sometime after Solr 4.9, the `MapReduceIndexerTool` got busted because invalid `solr.xml` contents were being written to the solr home dir zip. My guess is that a 5.0 change made the invalid file start to matter. The error manifests as: {code:java} Error: java.lang.IllegalStateException: Failed to initialize record writer for org.apache.solr.hadoop.MapReduceIndexerTool/SolrMapper, attempt_1430953999892_0012_r_01_1 at org.apache.solr.hadoop.SolrRecordWriter.init(SolrRecordWriter.java:126) at org.apache.solr.hadoop.SolrOutputFormat.getRecordWriter(SolrOutputFormat.java:163) at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.init(ReduceTask.java:569) at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:643) at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:394) at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:175) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548) at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:170) Caused by: org.apache.solr.common.SolrException: org.xml.sax.SAXParseException; Premature end of file. 
at org.apache.solr.core.Config.init(Config.java:156) at org.apache.solr.core.SolrXmlConfig.fromInputStream(SolrXmlConfig.java:127) at org.apache.solr.core.SolrXmlConfig.fromFile(SolrXmlConfig.java:110) at org.apache.solr.core.SolrXmlConfig.fromSolrHome(SolrXmlConfig.java:138) at org.apache.solr.core.CoreContainer.init(CoreContainer.java:142) at org.apache.solr.hadoop.SolrRecordWriter.createEmbeddedSolrServer(SolrRecordWriter.java:162) at org.apache.solr.hadoop.SolrRecordWriter.init(SolrRecordWriter.java:119) ... 9 more Caused by: org.xml.sax.SAXParseException; Premature end of file. at org.apache.xerces.util.ErrorHandlerWrapper.createSAXParseException(Unknown Source) at org.apache.xerces.util.ErrorHandlerWrapper.fatalError(Unknown Source) at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source) at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source) at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source) at org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source) at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source) at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source) at org.apache.xerces.parsers.XMLParser.parse(Unknown Source) at org.apache.xerces.parsers.DOMParser.parse(Unknown Source) at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source) at org.apache.solr.core.Config.init(Config.java:145) ... 15 more {code} The last version that I've successfully used `MapReduceIndexerTool` was 4.9, and I verified that this patch resolves the issue for me (testing on 5.1). I spent a couple hours trying to write a simple test case to exhibit the error, but I haven't quite figured out how to deal with the {noformat}java.security.AccessControlException: java.io.FilePermission ...{noformat} errors. 
Pull request for bugfix [here|https://github.com/apache/lucene-solr/pull/147]
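The "Premature end of file" SAXParseException in the trace above is what the JDK's built-in XML parser raises when handed an empty document, which is consistent with the report that an invalid (empty) solr.xml ended up in the solr home zip. A minimal reproduction, independent of Solr (the class and method names here are illustrative, not from the patch):

```java
import javax.xml.parsers.DocumentBuilderFactory;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class EmptyXmlRepro {
    /** Parses the bytes as XML; returns "ok" or the failing exception's simple class name. */
    static String parseResult(byte[] xml) {
        try {
            DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml));
            return "ok";
        } catch (Exception e) {
            // An empty stream fails inside the parser with a SAXParseException
            // whose message is "Premature end of file." -- the same error seen
            // when CoreContainer tries to load the zero-byte solr.xml.
            return e.getClass().getSimpleName();
        }
    }

    public static void main(String[] args) {
        System.out.println(parseResult(new byte[0]));                              // SAXParseException
        System.out.println(parseResult("<solr/>".getBytes(StandardCharsets.UTF_8))); // ok
    }
}
```

This is why the bug only "started to matter" once something actually parsed the file: the zip entry could be written empty for a long time without complaint until SolrXmlConfig began reading it at startup.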
[jira] [Commented] (LUCENE-6537) Make NearSpansOrdered use lazy iteration
[ https://issues.apache.org/jira/browse/LUCENE-6537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579282#comment-14579282 ] Michael McCandless commented on LUCENE-6537: If it's expected that scores would have changed, since we don't test for that but luceneutil does catch it, then we can ignore it when benching (I think there is an option per competition to do so). bq. I'm not sure what the point of doing the score-grouping is though? It seems a pretty arbitrary thing to be checking? I think the idea here was not to fail if the docIDs different in sort order when they had identical scores, maybe ... Make NearSpansOrdered use lazy iteration Key: LUCENE-6537 URL: https://issues.apache.org/jira/browse/LUCENE-6537 Project: Lucene - Core Issue Type: Improvement Reporter: Alan Woodward Priority: Minor Attachments: LUCENE-6537.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6520) Geo3D GeoPath: co-linear end-points result in NPE
[ https://issues.apache.org/jira/browse/LUCENE-6520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579306#comment-14579306 ] ASF subversion and git services commented on LUCENE-6520: - Commit 1684485 from [~dsmiley] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1684485 ] LUCENE-6520: woops; revert attempted back-port (only CHANGES.txt) Geo3D GeoPath: co-linear end-points result in NPE - Key: LUCENE-6520 URL: https://issues.apache.org/jira/browse/LUCENE-6520 Project: Lucene - Core Issue Type: Bug Components: modules/spatial Affects Versions: 5.2 Reporter: David Smiley Assignee: David Smiley Fix For: 5.3 Attachments: LUCENE-6520.patch FAILED: org.apache.lucene.spatial.spatial4j.Geo3dRptTest.testOperations {#2 seed=[4AB0FA45EF43F0C3:2240DF3E6EDF83C]} {noformat} Stack Trace: java.lang.NullPointerException at __randomizedtesting.SeedInfo.seed([4AB0FA45EF43F0C3:2240DF3E6EDF83C]:0) at org.apache.lucene.spatial.spatial4j.geo3d.GeoPath$SegmentEndpoint.init(GeoPath.java:480) at org.apache.lucene.spatial.spatial4j.geo3d.GeoPath.done(GeoPath.java:121) at org.apache.lucene.spatial.spatial4j.Geo3dRptTest.randomQueryShape(Geo3dRptTest.java:195) at org.apache.lucene.spatial.prefix.RandomSpatialOpStrategyTestCase.testOperationRandomShapes(RandomSpatialOpStrategyTestCase.java:53) at org.apache.lucene.spatial.spatial4j.Geo3dRptTest.testOperations(Geo3dRptTest.java:100) {noformat} [~daddywri] says: bq. This is happening because the endpoints that define two path segments are co-linear. There's a check for that too, but clearly it's not firing properly in this case for some reason. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 2354 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2354/ Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseG1GC 1 tests failed. FAILED: org.apache.solr.TestDistributedSearch.test Error Message: Error from server at http://127.0.0.1:61558//collection1: java.lang.NullPointerException at org.apache.solr.search.grouping.distributed.responseprocessor.TopGroupsShardResponseProcessor.process(TopGroupsShardResponseProcessor.java:102) at org.apache.solr.handler.component.QueryComponent.handleGroupedResponses(QueryComponent.java:744) at org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:727) at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:388) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143) at org.apache.solr.core.SolrCore.execute(SolrCore.java:2057) at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:648) at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:452) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) at org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:106) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) at org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:83) at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:364) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127) at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) at org.eclipse.jetty.server.Server.handle(Server.java:497) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257) at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635) at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555) at java.lang.Thread.run(Thread.java:745) Stack Trace: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:61558//collection1: java.lang.NullPointerException at org.apache.solr.search.grouping.distributed.responseprocessor.TopGroupsShardResponseProcessor.process(TopGroupsShardResponseProcessor.java:102) at org.apache.solr.handler.component.QueryComponent.handleGroupedResponses(QueryComponent.java:744) at org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:727) at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:388) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143) at org.apache.solr.core.SolrCore.execute(SolrCore.java:2057) at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:648) at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:452) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196) at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) at org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:106) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) at org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:83) at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:364) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127) at
[jira] [Commented] (LUCENE-6535) Geo3D test failure, June 6th
[ https://issues.apache.org/jira/browse/LUCENE-6535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579337#comment-14579337 ] ASF subversion and git services commented on LUCENE-6535: - Commit 1684492 from [~dsmiley] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1684492 ] LUCENE-6535: Fix Geo3D bug in LUCENE-6520 Geo3D test failure, June 6th Key: LUCENE-6535 URL: https://issues.apache.org/jira/browse/LUCENE-6535 Project: Lucene - Core Issue Type: Bug Components: modules/spatial Affects Versions: 5.2 Reporter: David Smiley Assignee: David Smiley Fix For: 5.3 Attachments: LUCENE-6535.patch This reproduces: {noformat} Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12789/ Java: 64bit/jdk1.7.0_80 -XX:-UseCompressedOops -XX:+UseG1GC 1 tests failed. FAILED: org.apache.lucene.spatial.spatial4j.Geo3dRptTest.testOperations {#5 seed=[ADFCC7193C72FA89:9BDCDB8859624E4]} Error Message: [Intersects] qIdx:34 Shouldn't match I#1:Rect(minX=131.0,maxX=143.0,minY=39.0,maxY=54.0) Q:Geo3dShape{planetmodel=PlanetModel.SPHERE, shape=GeoPath: {planetmodel=PlanetModel.SPHERE, width=0.5061454830783556(29.0), points={[[X=0.5155270860898133, Y=-0.25143936017440033, Z=0.8191520442889918], [X=-6.047846824324981E-17, Y=9.57884834439237E-18, Z=-1.0], [X=-0.5677569555011356, Y=0.1521300177236823, Z=0.8090169943749475], [X=5.716531405282095E-17, Y=2.1943708116382607E-17, Z=-1.0]]}}} Stack Trace: java.lang.AssertionError: [Intersects] qIdx:34 Shouldn't match I#1:Rect(minX=131.0,maxX=143.0,minY=39.0,maxY=54.0) Q:Geo3dShape{planetmodel=PlanetModel.SPHERE, shape=GeoPath: {planetmodel=PlanetModel.SPHERE, width=0.5061454830783556(29.0), points={[[X=0.5155270860898133, Y=-0.25143936017440033, Z=0.8191520442889918], [X=-6.047846824324981E-17, Y=9.57884834439237E-18, Z=-1.0], [X=-0.5677569555011356, Y=0.1521300177236823, Z=0.8090169943749475], [X=5.716531405282095E-17, Y=2.1943708116382607E-17, Z=-1.0]]}}} at 
__randomizedtesting.SeedInfo.seed([ADFCC7193C72FA89:9BDCDB8859624E4]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.lucene.spatial.prefix.RandomSpatialOpStrategyTestCase.fail(RandomSpatialOpStrategyTestCase.java:127) at org.apache.lucene.spatial.prefix.RandomSpatialOpStrategyTestCase.testOperation(RandomSpatialOpStrategyTestCase.java:116) at org.apache.lucene.spatial.prefix.RandomSpatialOpStrategyTestCase.testOperationRandomShapes(RandomSpatialOpStrategyTestCase.java:56) at org.apache.lucene.spatial.spatial4j.Geo3dRptTest.testOperations(Geo3dRptTest.java:100) {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7638) Angular UI cloud pane broken
[ https://issues.apache.org/jira/browse/SOLR-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579335#comment-14579335 ] Upayavira commented on SOLR-7638: - [~erikhatcher] has already volunteered. And, I can't assign tickets :-( Angular UI cloud pane broken Key: SOLR-7638 URL: https://issues.apache.org/jira/browse/SOLR-7638 Project: Solr Issue Type: Bug Affects Versions: 5.2 Reporter: Upayavira Priority: Minor Attachments: SOLR-7638-simple.patch, SOLR-7638.patch I suspect the backend behind the Cloud pane changed, meaning the cloud tab in Angular doesn't work. Patch will come soon. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (LUCENE-6535) Geo3D test failure, June 6th
[ https://issues.apache.org/jira/browse/LUCENE-6535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Smiley resolved LUCENE-6535. -- Resolution: Fixed Fix Version/s: (was: 5.2.1) 5.3 Thanks for the fix Karl. Geo3D test failure, June 6th Key: LUCENE-6535 URL: https://issues.apache.org/jira/browse/LUCENE-6535 Project: Lucene - Core Issue Type: Bug Components: modules/spatial Affects Versions: 5.2 Reporter: David Smiley Assignee: David Smiley Fix For: 5.3 Attachments: LUCENE-6535.patch This reproduces: {noformat} Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12789/ Java: 64bit/jdk1.7.0_80 -XX:-UseCompressedOops -XX:+UseG1GC 1 tests failed. FAILED: org.apache.lucene.spatial.spatial4j.Geo3dRptTest.testOperations {#5 seed=[ADFCC7193C72FA89:9BDCDB8859624E4]} Error Message: [Intersects] qIdx:34 Shouldn't match I#1:Rect(minX=131.0,maxX=143.0,minY=39.0,maxY=54.0) Q:Geo3dShape{planetmodel=PlanetModel.SPHERE, shape=GeoPath: {planetmodel=PlanetModel.SPHERE, width=0.5061454830783556(29.0), points={[[X=0.5155270860898133, Y=-0.25143936017440033, Z=0.8191520442889918], [X=-6.047846824324981E-17, Y=9.57884834439237E-18, Z=-1.0], [X=-0.5677569555011356, Y=0.1521300177236823, Z=0.8090169943749475], [X=5.716531405282095E-17, Y=2.1943708116382607E-17, Z=-1.0]]}}} Stack Trace: java.lang.AssertionError: [Intersects] qIdx:34 Shouldn't match I#1:Rect(minX=131.0,maxX=143.0,minY=39.0,maxY=54.0) Q:Geo3dShape{planetmodel=PlanetModel.SPHERE, shape=GeoPath: {planetmodel=PlanetModel.SPHERE, width=0.5061454830783556(29.0), points={[[X=0.5155270860898133, Y=-0.25143936017440033, Z=0.8191520442889918], [X=-6.047846824324981E-17, Y=9.57884834439237E-18, Z=-1.0], [X=-0.5677569555011356, Y=0.1521300177236823, Z=0.8090169943749475], [X=5.716531405282095E-17, Y=2.1943708116382607E-17, Z=-1.0]]}}} at __randomizedtesting.SeedInfo.seed([ADFCC7193C72FA89:9BDCDB8859624E4]:0) at org.junit.Assert.fail(Assert.java:93) at 
org.apache.lucene.spatial.prefix.RandomSpatialOpStrategyTestCase.fail(RandomSpatialOpStrategyTestCase.java:127) at org.apache.lucene.spatial.prefix.RandomSpatialOpStrategyTestCase.testOperation(RandomSpatialOpStrategyTestCase.java:116) at org.apache.lucene.spatial.prefix.RandomSpatialOpStrategyTestCase.testOperationRandomShapes(RandomSpatialOpStrategyTestCase.java:56) at org.apache.lucene.spatial.spatial4j.Geo3dRptTest.testOperations(Geo3dRptTest.java:100) {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6536) Migrate HDFSDirectory from solr to lucene-hadoop
[ https://issues.apache.org/jira/browse/LUCENE-6536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579338#comment-14579338 ] Mark Miller commented on LUCENE-6536: - bq. is it the directory or the blockcache that is the source of most of the corruptions There are two issues: * The write side of the block cache is buggy and can corrupt indexes - I don't think it provides any value anyway, so it should just be cut out - currently it's turned off. * The hdfs directory doesn't do a classic fsync - to get this kind of behavior you have to write files to hdfs in some really slow mode, I believe - it doesn't have an API compatible with how Lucene fsyncs. All in all, the block cache performance is good enough for a ton of use cases, but the overall approach and management of it is not great. The Apache Blur project has made a better version that is better for even more use cases, but it requires Unsafe usage for direct memory access. Migrate HDFSDirectory from solr to lucene-hadoop Key: LUCENE-6536 URL: https://issues.apache.org/jira/browse/LUCENE-6536 Project: Lucene - Core Issue Type: Improvement Reporter: Greg Bowyer Labels: hadoop, hdfs, lucene, solr Attachments: LUCENE-6536.patch I am currently working on a search engine that is throughput-oriented and works entirely in Apache Spark. As part of this, I need a Directory implementation that can operate on HDFS directly. This got me thinking: can I take the one that was worked on so hard for Solr on Hadoop? As such, I migrated the HDFS and block cache directories out to a lucene-hadoop module. Having done this work, I am not sure if it is actually a good change; it feels a bit messy, and I don't like how the Metrics class gets extended and abused. Thoughts, anyone? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
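To make the corruption risk on the write side concrete, here is a minimal sketch (not Solr's actual BlockDirectory code; the class and method names are hypothetical, and plain maps stand in for the cache and for HDFS) of the invariant a safe write path must maintain: the cached bytes and the bytes that reach the backing store must be identical.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Hypothetical write-through block cache: writes go to the backing store
// first, and the cache is populated with exactly the same bytes. If the
// write path ever caches data that differs from what was persisted,
// readers see a corrupt index.
public class WriteThroughBlockCache {
    final Map<Long, byte[]> cache = new HashMap<>();        // blockId -> cached bytes
    final Map<Long, byte[]> backingStore = new HashMap<>(); // stands in for HDFS

    void writeBlock(long blockId, byte[] data) {
        byte[] copy = Arrays.copyOf(data, data.length);
        backingStore.put(blockId, copy); // persist first
        cache.put(blockId, copy);        // then cache the identical bytes
    }

    byte[] readBlock(long blockId) {
        byte[] cached = cache.get(blockId);
        return cached != null ? cached : backingStore.get(blockId);
    }

    public static void main(String[] args) {
        WriteThroughBlockCache c = new WriteThroughBlockCache();
        c.writeBlock(1L, new byte[]{1, 2, 3});
        // Read-after-write returns exactly what was written.
        System.out.println(Arrays.equals(c.readBlock(1L), new byte[]{1, 2, 3}));
    }
}
```

Turning the buggy write side off, as is currently done, trades this risk for a cold cache on freshly written segments.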
[jira] [Commented] (LUCENE-6520) Geo3D GeoPath: co-linear end-points result in NPE
[ https://issues.apache.org/jira/browse/LUCENE-6520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579339#comment-14579339 ] ASF subversion and git services commented on LUCENE-6520: - Commit 1684492 from [~dsmiley] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1684492 ] LUCENE-6535: Fix Geo3D bug in LUCENE-6520 Geo3D GeoPath: co-linear end-points result in NPE - Key: LUCENE-6520 URL: https://issues.apache.org/jira/browse/LUCENE-6520 Project: Lucene - Core Issue Type: Bug Components: modules/spatial Affects Versions: 5.2 Reporter: David Smiley Assignee: David Smiley Fix For: 5.3 Attachments: LUCENE-6520.patch FAILED: org.apache.lucene.spatial.spatial4j.Geo3dRptTest.testOperations {#2 seed=[4AB0FA45EF43F0C3:2240DF3E6EDF83C]} {noformat} Stack Trace: java.lang.NullPointerException at __randomizedtesting.SeedInfo.seed([4AB0FA45EF43F0C3:2240DF3E6EDF83C]:0) at org.apache.lucene.spatial.spatial4j.geo3d.GeoPath$SegmentEndpoint.init(GeoPath.java:480) at org.apache.lucene.spatial.spatial4j.geo3d.GeoPath.done(GeoPath.java:121) at org.apache.lucene.spatial.spatial4j.Geo3dRptTest.randomQueryShape(Geo3dRptTest.java:195) at org.apache.lucene.spatial.prefix.RandomSpatialOpStrategyTestCase.testOperationRandomShapes(RandomSpatialOpStrategyTestCase.java:53) at org.apache.lucene.spatial.spatial4j.Geo3dRptTest.testOperations(Geo3dRptTest.java:100) {noformat} [~daddywri] says: bq. This is happening because the endpoints that define two path segments are co-linear. There's a check for that too, but clearly it's not firing properly in this case for some reason. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-6531) Make PhraseQuery immutable
[ https://issues.apache.org/jira/browse/LUCENE-6531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adrien Grand updated LUCENE-6531: - Attachment: LUCENE-6531.patch Make PhraseQuery immutable -- Key: LUCENE-6531 URL: https://issues.apache.org/jira/browse/LUCENE-6531 Project: Lucene - Core Issue Type: Improvement Reporter: Adrien Grand Assignee: Adrien Grand Priority: Minor Fix For: 6.0 Attachments: LUCENE-6531.patch, LUCENE-6531.patch Mutable queries are an issue for automatic filter caching since modifying a query after it has been put into the cache will corrupt the cache. We should make all queries immutable (up to the boost) to avoid this issue. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7512) SolrOutputFormat creates an invalid solr.xml in the solr home zip for MapReduceIndexerTool
[ https://issues.apache.org/jira/browse/SOLR-7512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579362#comment-14579362 ] ASF subversion and git services commented on SOLR-7512: --- Commit 1684495 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1684495 ] SOLR-7512: SolrOutputFormat creates an invalid solr.xml in the solr home zip for MapReduceIndexerTool. SolrOutputFormat creates an invalid solr.xml in the solr home zip for MapReduceIndexerTool -- Key: SOLR-7512 URL: https://issues.apache.org/jira/browse/SOLR-7512 Project: Solr Issue Type: Bug Components: contrib - MapReduce Affects Versions: 5.1 Reporter: Adam McElwee Assignee: Mark Miller Priority: Blocker Fix For: Trunk, 5.2.1 Attachments: SOLR-7512.patch, SOLR-7512.patch, SOLR-7512.patch, SOLR-7512.patch Sometime after Solr 4.9, the `MapReduceIndexerTool` got busted because invalid `solr.xml` contents were being written to the solr home dir zip. My guess is that a 5.0 change made the invalid file start to matter. 
The error manifests as: {code:java} Error: java.lang.IllegalStateException: Failed to initialize record writer for org.apache.solr.hadoop.MapReduceIndexerTool/SolrMapper, attempt_1430953999892_0012_r_01_1 at org.apache.solr.hadoop.SolrRecordWriter.init(SolrRecordWriter.java:126) at org.apache.solr.hadoop.SolrOutputFormat.getRecordWriter(SolrOutputFormat.java:163) at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.init(ReduceTask.java:569) at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:643) at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:394) at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:175) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548) at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:170) Caused by: org.apache.solr.common.SolrException: org.xml.sax.SAXParseException; Premature end of file. at org.apache.solr.core.Config.init(Config.java:156) at org.apache.solr.core.SolrXmlConfig.fromInputStream(SolrXmlConfig.java:127) at org.apache.solr.core.SolrXmlConfig.fromFile(SolrXmlConfig.java:110) at org.apache.solr.core.SolrXmlConfig.fromSolrHome(SolrXmlConfig.java:138) at org.apache.solr.core.CoreContainer.init(CoreContainer.java:142) at org.apache.solr.hadoop.SolrRecordWriter.createEmbeddedSolrServer(SolrRecordWriter.java:162) at org.apache.solr.hadoop.SolrRecordWriter.init(SolrRecordWriter.java:119) ... 9 more Caused by: org.xml.sax.SAXParseException; Premature end of file. 
at org.apache.xerces.util.ErrorHandlerWrapper.createSAXParseException(Unknown Source) at org.apache.xerces.util.ErrorHandlerWrapper.fatalError(Unknown Source) at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source) at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source) at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source) at org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source) at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source) at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source) at org.apache.xerces.parsers.XMLParser.parse(Unknown Source) at org.apache.xerces.parsers.DOMParser.parse(Unknown Source) at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source) at org.apache.solr.core.Config.init(Config.java:145) ... 15 more {code} The last version that I've successfully used `MapReduceIndexerTool` was 4.9, and I verified that this patch resolves the issue for me (testing on 5.1). I spent a couple hours trying to write a simple test case to exhibit the error, but I haven't quite figured out how to deal with the {noformat}java.security.AccessControlException: java.io.FilePermission ...{noformat} errors. Pull request for bugfix [here|https://github.com/apache/lucene-solr/pull/147] -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-5855) re-use document term-vector Fields instance across fields in the DefaultSolrHighlighter
[ https://issues.apache.org/jira/browse/SOLR-5855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579443#comment-14579443 ] David Smiley commented on SOLR-5855: I was initially skeptical the stack traces would show anything of interest, but I am pleasantly mistaken. Apparently, getting the FieldInfos from SlowCompositeReaderWrapper is a bottleneck. We look this up to determine whether there are payloads, so that we can then tell MemoryIndex to capture them as well. FYI, the call to get this was added recently in SOLR-6916 (highlighting using payloads); it's not related to term vectors, the subject of this issue. Can you please download the 5x branch, comment out the {{scorer.getUsePayloads(...}} line (or set it to true if you want), and see how it performs? re-use document term-vector Fields instance across fields in the DefaultSolrHighlighter --- Key: SOLR-5855 URL: https://issues.apache.org/jira/browse/SOLR-5855 Project: Solr Issue Type: Improvement Components: highlighter Affects Versions: Trunk Reporter: Daniel Debray Assignee: David Smiley Fix For: 5.2 Attachments: SOLR-5855-without-cache.patch, SOLR-5855_with_FVH_support.patch, SOLR-5855_with_FVH_support.patch, highlight.patch Hi folks, while investigating possible performance bottlenecks in the highlight component I discovered two places where we can save some CPU cycles. Both are in the class org.apache.solr.highlight.DefaultSolrHighlighter. First, in method doHighlighting (lines 411-417): In the loop we try to highlight every field that has been resolved from the params on each document. Ok, but why not skip those fields that are not present on the current document?
So I changed the code from:
{code:java}
for (String fieldName : fieldNames) {
  fieldName = fieldName.trim();
  if( useFastVectorHighlighter( params, schema, fieldName ) )
    doHighlightingByFastVectorHighlighter( fvh, fieldQuery, req, docSummaries, docId, doc, fieldName );
  else
    doHighlightingByHighlighter( query, req, docSummaries, docId, doc, fieldName );
}
{code}
to:
{code:java}
for (String fieldName : fieldNames) {
  fieldName = fieldName.trim();
  if (doc.get(fieldName) != null) {
    if( useFastVectorHighlighter( params, schema, fieldName ) )
      doHighlightingByFastVectorHighlighter( fvh, fieldQuery, req, docSummaries, docId, doc, fieldName );
    else
      doHighlightingByHighlighter( query, req, docSummaries, docId, doc, fieldName );
  }
}
{code}
The second place is where we try to retrieve the TokenStream from the document for a specific field, line 472:
{code:java}
TokenStream tvStream = TokenSources.getTokenStreamWithOffsets(searcher.getIndexReader(), docId, fieldName);
{code}
where:
{code:java}
public static TokenStream getTokenStreamWithOffsets(IndexReader reader, int docId, String field) throws IOException {
  Fields vectors = reader.getTermVectors(docId);
  if (vectors == null) {
    return null;
  }
  Terms vector = vectors.terms(field);
  if (vector == null) {
    return null;
  }
  if (!vector.hasPositions() || !vector.hasOffsets()) {
    return null;
  }
  return getTokenStream(vector);
}
{code}
Keep in mind that we currently hit the IndexReader n times, where n = requested rows (documents) * requested number of highlight fields. In my use case, reader.getTermVectors(docId) takes around 150,000~250,000ns on a warm Solr and 1,100,000ns on a cold Solr. If we store the returned Fields instance in a cache, the lookup only takes 25,000ns.
I would suggest something like the following code in the doHighlightingByHighlighter method in the DefaultSolrHighlighter class (line 472):
{code:java}
Fields vectors = null;
SolrCache termVectorCache = searcher.getCache("termVectorCache");
if (termVectorCache != null) {
  vectors = (Fields) termVectorCache.get(Integer.valueOf(docId));
  if (vectors == null) {
    vectors = searcher.getIndexReader().getTermVectors(docId);
    if (vectors != null) {
      termVectorCache.put(Integer.valueOf(docId), vectors);
    }
  }
} else {
  vectors = searcher.getIndexReader().getTermVectors(docId);
}
TokenStream tvStream = TokenSources.getTokenStreamWithOffsets(vectors, fieldName);
{code}
and in the TokenSources class:
{code:java}
public static TokenStream getTokenStreamWithOffsets(Fields vectors, String field) throws IOException {
  if (vectors == null) {
    return null;
  }
  Terms vector = vectors.terms(field);
  if (vector == null) {
    return null;
  }
  if (!vector.hasPositions() || !vector.hasOffsets()) {
    return null;
  }
  return getTokenStream(vector);
}
{code}
Timings: 4000ms on 1000 docs without the cache vs. 639ms with it; 102ms on 30 docs without the cache vs. 22ms with it, on an index with 190,000 docs, with a numFound of 32000 and 80 different
[jira] [Assigned] (SOLR-7638) Angular UI cloud pane broken
[ https://issues.apache.org/jira/browse/SOLR-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Hatcher reassigned SOLR-7638: -- Assignee: Erik Hatcher Angular UI cloud pane broken Key: SOLR-7638 URL: https://issues.apache.org/jira/browse/SOLR-7638 Project: Solr Issue Type: Bug Affects Versions: 5.2 Reporter: Upayavira Assignee: Erik Hatcher Priority: Minor Attachments: SOLR-7638-simple.patch, SOLR-7638.patch I suspect the backend behind the Cloud pane changed, meaning the cloud tab in Angular doesn't work. Patch will come soon. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7655) Perf bug- DefaultSolrHighlighter.getSpanQueryScorer triggers MultiFields.getMergedFieldInfos
[ https://issues.apache.org/jira/browse/SOLR-7655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579476#comment-14579476 ] David Smiley commented on SOLR-7655: Suggested fix: {code:java} try { scorer.setUsePayloads(request.getParams().getFieldBool(fieldName, HighlightParams.PAYLOADS, request.getSearcher().getLeafReader().fields().terms(fieldName).hasPayloads())); // It'd be nice to know if payloads are on the tokenStream but the presence of the attribute isn't a good // indicator. } catch (IOException e) { throw new RuntimeException(e); } {code} I'm going to try this now with Solr's tests, then post a patch. Perf bug- DefaultSolrHighlighter.getSpanQueryScorer triggers MultiFields.getMergedFieldInfos Key: SOLR-7655 URL: https://issues.apache.org/jira/browse/SOLR-7655 Project: Solr Issue Type: Bug Components: highlighter Affects Versions: 5.0 Reporter: David Smiley Assignee: David Smiley It appears grabbing the FieldInfos from the SlowCompositeReaderWrapper is slow. It isn't cached. The DefaultSolrHighligher in SOLR-6196 (v5.0) uses it to ascertain if there are payloads. Instead it can grab it from the Terms instance, which is cached. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: Looks like I broke Solr 5.2.0 - do we need a 5.2.1?
A user found a nasty DefaultSolrHighlighter performance bug with an easy fix — https://issues.apache.org/jira/browse/SOLR-7655 I’m running tests now. On Tue, Jun 9, 2015 at 2:48 PM david.w.smi...@gmail.com david.w.smi...@gmail.com wrote: I’m +1 to this. We can consider changing our mind once 5.3 is knocking on the door if there are serious problems. On Tue, Jun 9, 2015 at 2:36 PM Erick Erickson erickerick...@gmail.com wrote: Upayavira: I'm a little reluctant to try to port the simpler patch to 5.2.1 as this is all new functionality. I can be argued into it, though. It seems that the goal here is to get mileage out of the Angular JS port before making it the default. What do you (and others) think about changing 5.3 to use the Angular JS version by default for the admin UI? That'll drive lots of usage and help us solidify it for official release without trying to shoehorn a patch into 5.2.1 that's not critical to normal Solr/Lucene functioning. On Tue, Jun 9, 2015 at 9:42 AM, david.w.smi...@gmail.com david.w.smi...@gmail.com wrote: Yeah, I’ll port that too. On Tue, Jun 9, 2015 at 12:36 PM Karl Wright daddy...@gmail.com wrote: There may be a prerequisite ticket fix that needs pulling up too? r1683532 | dsmiley | 2015-06-04 08:32:45 -0400 (Thu, 04 Jun 2015) | 1 line LUCENE-6520: Geo3D GeoPath.done() would throw an NPE if adjacent path segments were co-linear Karl On Tue, Jun 9, 2015 at 12:30 PM, david.w.smi...@gmail.com david.w.smi...@gmail.com wrote: LUCENE-6535 is another one. On Tue, Jun 9, 2015 at 10:57 AM Shalin Shekhar Mangar shalinman...@gmail.com wrote: Thanks Steve! On Tue, Jun 9, 2015 at 7:25 PM, Steve Rowe sar...@gmail.com wrote: On Jun 9, 2015, at 8:57 AM, Shalin Shekhar Mangar shalinman...@gmail.com wrote: Looks like there are several small fixes that need to be added. I'll cut an RC tomorrow morning India time so that we have enough time to back-port these items. I'll also set up a local Jenkins build for 5.2 I’ll re-enable the ASF Jenkins 5.2 jobs now. 
Steve www.lucidworks.com - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org -- Regards, Shalin Shekhar Mangar. - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6527) TermWeight should not load norms when needsScores is false
[ https://issues.apache.org/jira/browse/LUCENE-6527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579545#comment-14579545 ] ASF subversion and git services commented on LUCENE-6527: - Commit 1684528 from [~jpountz] in branch 'dev/trunk' [ https://svn.apache.org/r1684528 ] LUCENE-6527: Fix rare test bug. TermWeight should not load norms when needsScores is false -- Key: LUCENE-6527 URL: https://issues.apache.org/jira/browse/LUCENE-6527 Project: Lucene - Core Issue Type: Bug Reporter: Adrien Grand Assignee: Adrien Grand Fix For: Trunk, 5.2.1 Attachments: LUCENE-6527.patch, LUCENE-6527.patch, LUCENE-6527.patch TermWeight currently loads norms all the time, even when needsScores is false. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7570) Config APIs should not modify the ConfigSet
[ https://issues.apache.org/jira/browse/SOLR-7570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579558#comment-14579558 ] Gregory Chanan commented on SOLR-7570: -- bq. I guess you guys are missing the point here. What I'm suggesting is make the mutable conf location configurable on a per-collection basis. By default (if no extra param is passed) it will be /collections/$COLLECTION_NAME/conf . This will enable users to reuse mutable conf too. For example, when I create a collection I can specify mutableconfdir=/collections/commonConfDir/conf and every collection which has the same property will share the same node for mutable configs As far as I understand it, this suggestion is addressing Tomás' bullet above: {code} Changes to configsets need a different API, or file upload. If I remember correctly, collections are watching the configset znode, and may be reloaded after a watch is triggered. We should keep this as a way to edit shared configsets, users would for example, upload a new solrconfig.xml and then touch the configset. This should reload all collections using that configset as we do now. {code} I.e., you need a place to share mutable configs. It seems cleaner to have a separate ConfigSet API, i.e. REST calls to, say, /configs/MySharedConfig, rather than to alias collection-specific APIs. The latter just gets us back to the case we are in now, where collection-specific APIs can result in changes outside the collection. That is confusing, IMO. Config APIs should not modify the ConfigSet --- Key: SOLR-7570 URL: https://issues.apache.org/jira/browse/SOLR-7570 Project: Solr Issue Type: Improvement Reporter: Tomás Fernández Löbbe Attachments: SOLR-7570.patch Originally discussed here: http://mail-archives.apache.org/mod_mbox/lucene-dev/201505.mbox/%3CCAMJgJxSXCHxDzJs5-C-pKFDEBQD6JbgxB=-xp7u143ekmgp...@mail.gmail.com%3E The ConfigSet used to create a collection should be read-only. 
Changes made via any of the Config APIs should only be applied to the collection where the operation is done, and not to other collections that may be using the same ConfigSet. As discussed in the dev list: When a collection is created we should have two things, an immutable part (the ConfigSet) and a mutable part (configoverlay, generated schema, etc). The ConfigSet will still be placed in ZooKeeper under /configs but the mutable part should be placed under /collections/$COLLECTION_NAME/… [~romseygeek] suggested: {quote} A nice way of doing it would be to make it part of the SolrResourceLoader interface. The ZK resource loader could check in the collection-specific zknode first, and then under configs/, and we could add a writeResource() method that writes to the collection-specific node as well. Then all config I/O goes via the resource loader, and we have a way of keeping certain parts immutable. {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
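The suggested resource-loader layering can be sketched roughly as follows — a hypothetical illustration only (class and method names are made up, and plain maps stand in for the ZooKeeper nodes), showing reads falling through from the collection-specific mutable node to the shared immutable ConfigSet, while writes only ever touch the collection node:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the proposed lookup order: the mutable,
// collection-specific location is consulted first; the shared ConfigSet
// is a read-only fallback that writes can never modify.
public class LayeredConfigLookup {
    final Map<String, String> collectionNode = new HashMap<>(); // /collections/$NAME/... (mutable)
    final Map<String, String> configSetNode = new HashMap<>();  // /configs/$CONFIGSET/... (immutable)

    String openResource(String name) {
        String overridden = collectionNode.get(name);
        return overridden != null ? overridden : configSetNode.get(name);
    }

    void writeResource(String name, String contents) {
        // Writes always land in the collection-specific node;
        // the shared ConfigSet stays read-only.
        collectionNode.put(name, contents);
    }

    public static void main(String[] args) {
        LayeredConfigLookup loader = new LayeredConfigLookup();
        loader.configSetNode.put("solrconfig.xml", "shared");
        System.out.println(loader.openResource("solrconfig.xml")); // falls through to the ConfigSet
        loader.writeResource("solrconfig.xml", "overridden");
        System.out.println(loader.openResource("solrconfig.xml")); // now served from the collection node
    }
}
```

With this shape, two collections sharing one ConfigSet can each carry their own configoverlay without affecting the other.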
[jira] [Created] (SOLR-7656) maxTotalChars param not respected in LangDetect language detection implementation
Derek Wood created SOLR-7656: Summary: maxTotalChars param not respected in LangDetect language detection implementation Key: SOLR-7656 URL: https://issues.apache.org/jira/browse/SOLR-7656 Project: Solr Issue Type: Bug Components: contrib - LangId Affects Versions: 5.2, Trunk Reporter: Derek Wood Priority: Minor The LangDetect wrapper code incorrectly uses the maxTotalChars param [1] to configure the max field length in the LangDetect library [2] [1] https://svn.apache.org/viewvc/lucene/dev/trunk/solr/contrib/langid/src/java/org/apache/solr/update/processor/LangDetectLanguageIdentifierUpdateProcessor.java?annotate=1643377#l51 [2] https://github.com/shuyo/language-detection/blob/master/src/com/cybozu/labs/langdetect/Detector.java#L170 -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
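For reference, the intended meaning of maxTotalChars — as I understand the langid params, so treat this as an assumption — is a cap on the total number of characters fed to the detector across all mapped fields, as opposed to the LangDetect library's internal per-text max length. A hypothetical helper (not the actual contrib code) illustrating that semantics:

```java
// Hypothetical illustration: maxTotalChars should bound the total text
// collected across all fields before it is handed to the detector.
public class LangIdInput {
    static String concatFields(String[] fieldValues, int maxTotalChars) {
        StringBuilder sb = new StringBuilder();
        for (String v : fieldValues) {
            if (sb.length() >= maxTotalChars) {
                break; // budget exhausted: ignore remaining fields
            }
            int remaining = maxTotalChars - sb.length();
            sb.append(v, 0, Math.min(v.length(), remaining));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Two fields, but only 8 chars total reach the detector.
        System.out.println(concatFields(new String[]{"hello ", "world"}, 8));
    }
}
```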
[jira] [Commented] (SOLR-7652) example/files update-script.js does not work on Java7
[ https://issues.apache.org/jira/browse/SOLR-7652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579489#comment-14579489 ] ASF subversion and git services commented on SOLR-7652: --- Commit 1684511 from [~ehatcher] in branch 'dev/branches/lucene_solr_5_2' [ https://svn.apache.org/r1684511 ] SOLR-7652: Fix example/files update-script.js to work with Java 7 (merged from branch_5x r1684510) example/files update-script.js does not work on Java7 - Key: SOLR-7652 URL: https://issues.apache.org/jira/browse/SOLR-7652 Project: Solr Issue Type: Bug Affects Versions: 5.2 Reporter: Erik Hatcher Assignee: Erik Hatcher Fix For: 5.2.1 Attachments: SOLR-7652.patch A colleague reported that example/files does not work with Java 7, but did with Java 8. {code} $ bin/solr create -c files -d example/files/conf/ Setup new core instance directory: /Users/erikhatcher/dev/clean-branch_5x/solr/server/solr/files Creating new core 'files' using command: http://localhost:8983/solr/admin/cores?action=CREATEname=filesinstanceDir=files Failed to create core 'files' due to: Error CREATEing SolrCore 'files': Unable to create core [files] Caused by: missing name after . operator (Unknown source#73) {code} with this in solr.log: {code} Caused by: org.apache.solr.common.SolrException: Unable to evaluate script: update-script.js at org.apache.solr.update.processor.StatelessScriptUpdateProcessorFactory.initEngines(StatelessScriptUpdateProcessorFactory.java:313) at org.apache.solr.update.processor.StatelessScriptUpdateProcessorFactory.inform(StatelessScriptUpdateProcessorFactory.java:227) ... 33 more Caused by: javax.script.ScriptException: sun.org.mozilla.javascript.internal.EvaluatorException: missing name after . 
operator (Unknown source#73) in Unknown source at line number 73 at com.sun.script.javascript.RhinoScriptEngine.eval(RhinoScriptEngine.java:224) at javax.script.AbstractScriptEngine.eval(AbstractScriptEngine.java:249) at org.apache.solr.update.processor.StatelessScriptUpdateProcessorFactory.initEngines(StatelessScriptUpdateProcessorFactory.java:311) {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-6537) Make NearSpansOrdered use lazy iteration
[ https://issues.apache.org/jira/browse/LUCENE-6537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paul Elschot updated LUCENE-6537: - Attachment: LUCENE-6537.patch Same patch with two extra test methods showing that repeated matches occur only when the first term repeats and there is enough slop: For ordered span near t1 t2 with slop 1: t1 t1 t2 matches twice, t1 t2 t2 matches once. Make NearSpansOrdered use lazy iteration Key: LUCENE-6537 URL: https://issues.apache.org/jira/browse/LUCENE-6537 Project: Lucene - Core Issue Type: Improvement Reporter: Alan Woodward Priority: Minor Attachments: LUCENE-6537.patch, LUCENE-6537.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
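The match counts in Paul's test cases can be reproduced with a rough model of the two-term ordered-near rule: each occurrence of the first term yields at most one match, completed by the nearest following occurrence of the second term, provided the gap fits within the slop. This is a simplification for illustration, not Lucene's NearSpansOrdered implementation.

```java
public class OrderedNearModel {
    /** Count ordered near-matches of (first, second) with the given slop. */
    public static int countMatches(String[] tokens, String first, String second, int slop) {
        int matches = 0;
        for (int i = 0; i < tokens.length; i++) {
            if (!tokens[i].equals(first)) continue;
            // Complete with the nearest following occurrence of the second term.
            for (int j = i + 1; j < tokens.length; j++) {
                if (tokens[j].equals(second)) {
                    if (j - i - 1 <= slop) matches++;  // gap between the two terms
                    break;
                }
            }
        }
        return matches;
    }
}
```

Under this model, for "t1 t2" with slop 1: "t1 t1 t2" matches twice (both t1 starts complete within slop) and "t1 t2 t2" matches once, agreeing with the test methods described in the comment.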
[jira] [Commented] (SOLR-7638) Angular UI cloud pane broken
[ https://issues.apache.org/jira/browse/SOLR-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579564#comment-14579564 ] Erik Hatcher commented on SOLR-7638: [~upayavira] So the aforementioned one bug away from adding paging to the radial graph also is a separate issue not making it to 5.2.1? Angular UI cloud pane broken Key: SOLR-7638 URL: https://issues.apache.org/jira/browse/SOLR-7638 Project: Solr Issue Type: Bug Affects Versions: 5.2 Reporter: Upayavira Assignee: Erik Hatcher Priority: Minor Attachments: Cloud Dump.png, SOLR-7638-simple.patch, SOLR-7638.patch I suspect the backend behind the Cloud pane changed meaning the cloud tab in angular doesn't work. Patch will come soon, -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-7655) Perf bug- DefaultSolrHighlighter.getSpanQueryScorer triggers MultiFields.getMergedFieldInfos
David Smiley created SOLR-7655: -- Summary: Perf bug- DefaultSolrHighlighter.getSpanQueryScorer triggers MultiFields.getMergedFieldInfos Key: SOLR-7655 URL: https://issues.apache.org/jira/browse/SOLR-7655 Project: Solr Issue Type: Bug Components: highlighter Affects Versions: 5.0 Reporter: David Smiley Assignee: David Smiley It appears grabbing the FieldInfos from the SlowCompositeReaderWrapper is slow. It isn't cached. The DefaultSolrHighligher in SOLR-6196 (v5.0) uses it to ascertain if there are payloads. Instead it can grab it from the Terms instance, which is cached. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7638) Angular UI cloud pane broken
[ https://issues.apache.org/jira/browse/SOLR-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579475#comment-14579475 ] Erik Hatcher commented on SOLR-7638: [~upayavira] - I'll stay tuned here and test and commit whatever you can get posted by tonight. Angular UI cloud pane broken Key: SOLR-7638 URL: https://issues.apache.org/jira/browse/SOLR-7638 Project: Solr Issue Type: Bug Affects Versions: 5.2 Reporter: Upayavira Assignee: Erik Hatcher Priority: Minor Attachments: Cloud Dump.png, SOLR-7638-simple.patch, SOLR-7638.patch I suspect the backend behind the Cloud pane changed meaning the cloud tab in angular doesn't work. Patch will come soon, -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-5855) re-use document term-vector Fields instance across fields in the DefaultSolrHighlighter
[ https://issues.apache.org/jira/browse/SOLR-5855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14579480#comment-14579480 ] David Smiley commented on SOLR-5855: [~emaijala] I created an issue for this; please discuss further there: SOLR-7655

re-use document term-vector Fields instance across fields in the DefaultSolrHighlighter --- Key: SOLR-5855 URL: https://issues.apache.org/jira/browse/SOLR-5855 Project: Solr Issue Type: Improvement Components: highlighter Affects Versions: Trunk Reporter: Daniel Debray Assignee: David Smiley Fix For: 5.2 Attachments: SOLR-5855-without-cache.patch, SOLR-5855_with_FVH_support.patch, SOLR-5855_with_FVH_support.patch, highlight.patch

Hi folks, while investigating possible performance bottlenecks in the highlight component I discovered two places where we can save some CPU cycles. Both are in the class org.apache.solr.highlight.DefaultSolrHighlighter.

First, in method doHighlighting (lines 411-417): in the loop we try to highlight every field that has been resolved from the params on each document. OK, but why not skip those fields that are not present on the current document? So I changed the code from:
{code}
for (String fieldName : fieldNames) {
  fieldName = fieldName.trim();
  if( useFastVectorHighlighter( params, schema, fieldName ) )
    doHighlightingByFastVectorHighlighter( fvh, fieldQuery, req, docSummaries, docId, doc, fieldName );
  else
    doHighlightingByHighlighter( query, req, docSummaries, docId, doc, fieldName );
}
{code}
to:
{code}
for (String fieldName : fieldNames) {
  fieldName = fieldName.trim();
  if (doc.get(fieldName) != null) {
    if( useFastVectorHighlighter( params, schema, fieldName ) )
      doHighlightingByFastVectorHighlighter( fvh, fieldQuery, req, docSummaries, docId, doc, fieldName );
    else
      doHighlightingByHighlighter( query, req, docSummaries, docId, doc, fieldName );
  }
}
{code}
The second place is where we try to retrieve the TokenStream from the document for a specific field, line 472:
{code}
TokenStream tvStream = TokenSources.getTokenStreamWithOffsets(searcher.getIndexReader(), docId, fieldName);
{code}
where:
{code}
public static TokenStream getTokenStreamWithOffsets(IndexReader reader, int docId, String field) throws IOException {
  Fields vectors = reader.getTermVectors(docId);
  if (vectors == null) {
    return null;
  }
  Terms vector = vectors.terms(field);
  if (vector == null) {
    return null;
  }
  if (!vector.hasPositions() || !vector.hasOffsets()) {
    return null;
  }
  return getTokenStream(vector);
}
{code}
Keep in mind that we currently hit the IndexReader n times, where n = requested rows (documents) * requested amount of highlight fields. In my use case reader.getTermVectors(docId) takes around 150.000~250.000ns on a warm Solr and 1.100.000ns on a cold Solr. If we store the returned Fields vectors in a cache, these lookups only take 25000ns.

I would suggest something like the following code in the doHighlightingByHighlighter method in the DefaultSolrHighlighter class (line 472):
{code}
Fields vectors = null;
SolrCache termVectorCache = searcher.getCache("termVectorCache");
if (termVectorCache != null) {
  vectors = (Fields) termVectorCache.get(Integer.valueOf(docId));
  if (vectors == null) {
    vectors = searcher.getIndexReader().getTermVectors(docId);
    if (vectors != null) termVectorCache.put(Integer.valueOf(docId), vectors);
  }
} else {
  vectors = searcher.getIndexReader().getTermVectors(docId);
}
TokenStream tvStream = TokenSources.getTokenStreamWithOffsets(vectors, fieldName);
{code}
and in the TokenSources class:
{code}
public static TokenStream getTokenStreamWithOffsets(Fields vectors, String field) throws IOException {
  if (vectors == null) {
    return null;
  }
  Terms vector = vectors.terms(field);
  if (vector == null) {
    return null;
  }
  if (!vector.hasPositions() || !vector.hasOffsets()) {
    return null;
  }
  return getTokenStream(vector);
}
{code}
Measurements:
4000ms on 1000 docs without cache
639ms on 1000 docs with cache
102ms on 30 docs without cache
22ms on 30 docs with cache
on an index with 190.000 docs, with a numFound of 32000 and 80 different highlight fields.

I think queries with only one field to highlight per document do not benefit that much from a cache like this; that's why I think an optional cache would be the best solution there. As I saw, the FastVectorHighlighter uses more or less the same approach and could also benefit from this cache. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
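The lookup-then-populate pattern proposed in the comment above can be sketched in plain Java (the class and method names here are illustrative, not Solr's actual API): check the per-document cache first, and only fall back to the slow reader call on a miss.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.IntFunction;

public class TermVectorCacheSketch {
    private final Map<Integer, String> cache = new HashMap<>();
    public int loads = 0;  // counts how many times the expensive loader actually ran

    /** Return cached term vectors for docId, invoking slowLoader only on a miss. */
    public String getTermVectors(int docId, IntFunction<String> slowLoader) {
        return cache.computeIfAbsent(docId, id -> {
            loads++;                     // simulate the costly reader.getTermVectors call
            return slowLoader.apply(id);
        });
    }
}
```

This is why the cache pays off precisely when several highlight fields are requested per document: every field after the first hits the cache instead of the reader.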
[jira] [Commented] (SOLR-7652) example/files update-script.js does not work on Java7
[ https://issues.apache.org/jira/browse/SOLR-7652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579483#comment-14579483 ] ASF subversion and git services commented on SOLR-7652: --- Commit 1684510 from [~ehatcher] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1684510 ] SOLR-7652: Fix example/files update-script.js to work with Java 7 example/files update-script.js does not work on Java7 - Key: SOLR-7652 URL: https://issues.apache.org/jira/browse/SOLR-7652 Project: Solr Issue Type: Bug Affects Versions: 5.2 Reporter: Erik Hatcher Assignee: Erik Hatcher Fix For: 5.2.1 Attachments: SOLR-7652.patch A colleague reported that example/files does not work with Java 7, but did with Java 8. {code} $ bin/solr create -c files -d example/files/conf/ Setup new core instance directory: /Users/erikhatcher/dev/clean-branch_5x/solr/server/solr/files Creating new core 'files' using command: http://localhost:8983/solr/admin/cores?action=CREATEname=filesinstanceDir=files Failed to create core 'files' due to: Error CREATEing SolrCore 'files': Unable to create core [files] Caused by: missing name after . operator (Unknown source#73) {code} with this in solr.log: {code} Caused by: org.apache.solr.common.SolrException: Unable to evaluate script: update-script.js at org.apache.solr.update.processor.StatelessScriptUpdateProcessorFactory.initEngines(StatelessScriptUpdateProcessorFactory.java:313) at org.apache.solr.update.processor.StatelessScriptUpdateProcessorFactory.inform(StatelessScriptUpdateProcessorFactory.java:227) ... 33 more Caused by: javax.script.ScriptException: sun.org.mozilla.javascript.internal.EvaluatorException: missing name after . 
operator (Unknown source#73) in Unknown source at line number 73 at com.sun.script.javascript.RhinoScriptEngine.eval(RhinoScriptEngine.java:224) at javax.script.AbstractScriptEngine.eval(AbstractScriptEngine.java:249) at org.apache.solr.update.processor.StatelessScriptUpdateProcessorFactory.initEngines(StatelessScriptUpdateProcessorFactory.java:311) {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-7652) example/files update-script.js does not work on Java7
[ https://issues.apache.org/jira/browse/SOLR-7652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Hatcher updated SOLR-7652: --- Fix Version/s: 5.3 example/files update-script.js does not work on Java7 - Key: SOLR-7652 URL: https://issues.apache.org/jira/browse/SOLR-7652 Project: Solr Issue Type: Bug Affects Versions: 5.2 Reporter: Erik Hatcher Assignee: Erik Hatcher Fix For: 5.3, 5.2.1 Attachments: SOLR-7652.patch A colleague reported that example/files does not work with Java 7, but did with Java 8. {code} $ bin/solr create -c files -d example/files/conf/ Setup new core instance directory: /Users/erikhatcher/dev/clean-branch_5x/solr/server/solr/files Creating new core 'files' using command: http://localhost:8983/solr/admin/cores?action=CREATEname=filesinstanceDir=files Failed to create core 'files' due to: Error CREATEing SolrCore 'files': Unable to create core [files] Caused by: missing name after . operator (Unknown source#73) {code} with this in solr.log: {code} Caused by: org.apache.solr.common.SolrException: Unable to evaluate script: update-script.js at org.apache.solr.update.processor.StatelessScriptUpdateProcessorFactory.initEngines(StatelessScriptUpdateProcessorFactory.java:313) at org.apache.solr.update.processor.StatelessScriptUpdateProcessorFactory.inform(StatelessScriptUpdateProcessorFactory.java:227) ... 33 more Caused by: javax.script.ScriptException: sun.org.mozilla.javascript.internal.EvaluatorException: missing name after . 
operator (Unknown source#73) in Unknown source at line number 73 at com.sun.script.javascript.RhinoScriptEngine.eval(RhinoScriptEngine.java:224) at javax.script.AbstractScriptEngine.eval(AbstractScriptEngine.java:249) at org.apache.solr.update.processor.StatelessScriptUpdateProcessorFactory.initEngines(StatelessScriptUpdateProcessorFactory.java:311) {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-6539) Add DocValuesNumbersQuery, like DocValuesTermsQuery but works only with long values
[ https://issues.apache.org/jira/browse/LUCENE-6539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael McCandless updated LUCENE-6539: --- Attachment: LUCENE-6539.patch Initial rough patch ... test is passing. Add DocValuesNumbersQuery, like DocValuesTermsQuery but works only with long values --- Key: LUCENE-6539 URL: https://issues.apache.org/jira/browse/LUCENE-6539 Project: Lucene - Core Issue Type: New Feature Reporter: Michael McCandless Assignee: Michael McCandless Fix For: Trunk, 5.3 Attachments: LUCENE-6539.patch This query accepts any document where any of the provided set of longs was indexed into the specified field as a numeric DV field (NumericDocValuesField or SortedNumericDocValuesField). You can use it instead of DocValuesTermsQuery when you have field values that can be represented as longs. Like DocValuesTermsQuery, this is slowish in general, since it doesn't use an inverted data structure, but in certain cases (many terms/numbers and fewish matching hits) it should be faster than using TermsQuery because it's done as a post filter when other (faster) query clauses are MUST'd with it. In such cases it should also be faster than DocValuesTermsQuery since it skips having to resolve terms - ords. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6539) Add DocValuesNumbersQuery, like DocValuesTermsQuery but works only with long values
[ https://issues.apache.org/jira/browse/LUCENE-6539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14579566#comment-14579566 ] Adrien Grand commented on LUCENE-6539: -- This new query looks good to me. However, instead of keep adding such queries to core, I think we should consider moving all our doc values queries to misc since they have complicated trade-offs and are only useful in expert use-cases?
{code}
+ private static Set<Long> toSet(Long[] array) {
+   Set<Long> numbers = new HashSet<>();
+   for (Long number : array) {
+     numbers.add(number);
+   }
+   return numbers;
+ }
{code}
FYI you don't need this helper and could do just: {{new HashSet<Long>(Arrays.asList(array))}}.

bq. in certain cases (many terms/numbers and fewish matching hits) it should be faster than using TermsQuery

This comment got me confused: I think in general these queries are more efficient when they match _many_ documents, ie. even when an equivalent TermsQuery would not be used as a lead iterator in a conjunction? I think the only case when such a query matching few documents would be useful would be in a prohibited clause, since these prohibited clauses can never be used to lead iteration anyway and are only used in a random-access fashion?

Add DocValuesNumbersQuery, like DocValuesTermsQuery but works only with long values --- Key: LUCENE-6539 URL: https://issues.apache.org/jira/browse/LUCENE-6539 Project: Lucene - Core Issue Type: New Feature Reporter: Michael McCandless Assignee: Michael McCandless Fix For: Trunk, 5.3 Attachments: LUCENE-6539.patch This query accepts any document where any of the provided set of longs was indexed into the specified field as a numeric DV field (NumericDocValuesField or SortedNumericDocValuesField). You can use it instead of DocValuesTermsQuery when you have field values that can be represented as longs. Like DocValuesTermsQuery, this is slowish in general, since it doesn't use an inverted data structure, but in certain cases (many terms/numbers and fewish matching hits) it should be faster than using TermsQuery because it's done as a post filter when other (faster) query clauses are MUST'd with it. In such cases it should also be faster than DocValuesTermsQuery since it skips having to resolve terms -> ords. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
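Adrien's simplification in the comment above is plain JDK: the hand-rolled helper from the patch and the one-line HashSet constructor call produce the same set. A self-contained illustration:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class ToSetExample {
    /** The helper as written in the patch. */
    static Set<Long> toSet(Long[] array) {
        Set<Long> numbers = new HashSet<>();
        for (Long number : array) {
            numbers.add(number);
        }
        return numbers;
    }

    /** The suggested one-liner: the HashSet copy constructor does the same work. */
    static Set<Long> toSetOneLiner(Long[] array) {
        return new HashSet<>(Arrays.asList(array));
    }
}
```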
Re: BadApple Nightly?
Thanks! On Tue, Jun 9, 2015 at 2:28 PM Dawid Weiss dawid.we...@gmail.com wrote: don't have such persistent issues on non nightly runs. Is it possible to ignore specified tests as BadApples or AwaitsFix but only for nightly runs? Sure, that's what test group filtering was added for. It's interesting that test-help doesn't show anything... ES has a more verbose info: You can also filter tests by certain annotations ie: * `@Slow` - tests that are know to take a long time to execute * `@Nightly` - tests that only run in nightly builds (disabled by default) * `@Integration` - integration tests * `@Backwards` - backwards compatibility tests (disabled by default) * `@AwaitsFix` - tests that are waiting for a bugfix (disabled by default) * `@BadApple` - tests that are known to fail randomly (disabled by default) Those annotation names can be combined into a filter expression like: mvn test -Dtests.filter=@nightly and not @slow to run all nightly test but not the ones that are slow. `tests.filter` supports the boolean operators `and, or, not` and grouping ie: --- mvn test -Dtests.filter=@nightly and not(@slow or @backwards) --- The same works for Lucene (-Dtests.filter=...), try it. Dawid - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org -- - Mark about.me/markrmiller
[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.8.0_45) - Build # 4789 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4789/ Java: 32bit/jdk1.8.0_45 -client -XX:+UseSerialGC 3 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.core.TestLazyCores Error Message: ERROR: SolrIndexSearcher opens=51 closes=50 Stack Trace: java.lang.AssertionError: ERROR: SolrIndexSearcher opens=51 closes=50 at __randomizedtesting.SeedInfo.seed([33E05A004348F1C5]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.SolrTestCaseJ4.endTrackingSearchers(SolrTestCaseJ4.java:472) at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:232) at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:799) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at java.lang.Thread.run(Thread.java:745) FAILED: junit.framework.TestSuite.org.apache.solr.core.TestLazyCores Error Message: 1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 1) Thread[id=12284, name=searcherExecutor-5490-thread-1, state=WAITING, group=TGRP-TestLazyCores] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 1) Thread[id=12284, name=searcherExecutor-5490-thread-1, state=WAITING, group=TGRP-TestLazyCores] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) at __randomizedtesting.SeedInfo.seed([33E05A004348F1C5]:0) FAILED: junit.framework.TestSuite.org.apache.solr.core.TestLazyCores Error Message: There are still zombie threads that couldn't be terminated:1) Thread[id=12284,
[jira] [Resolved] (LUCENE-5954) Store lucene version in segment_N
[ https://issues.apache.org/jira/browse/LUCENE-5954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael McCandless resolved LUCENE-5954. Resolution: Fixed Store lucene version in segment_N - Key: LUCENE-5954 URL: https://issues.apache.org/jira/browse/LUCENE-5954 Project: Lucene - Core Issue Type: Bug Reporter: Ryan Ernst Assignee: Michael McCandless Fix For: Trunk, 5.3 Attachments: LUCENE-5954.patch, LUCENE-5954.patch It would be nice to have the version of lucene that wrote segments_N, so that we can use this to determine which major version an index was written with (for upgrading across major versions). I think this could be squeezed in just after the segments_N header. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6539) Add DocValuesNumbersQuery, like DocValuesTermsQuery but works only with long values
[ https://issues.apache.org/jira/browse/LUCENE-6539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579568#comment-14579568 ] Robert Muir commented on LUCENE-6539: - I don't think this query should be a standalone one. It forces users to decide which one to use, and they will fuck this up. every time. Its ok in current form to go to sandbox, but i think this needs to be integrated into the inverted approach so that based on index stats, lucene can just do the right thing. Add DocValuesNumbersQuery, like DocValuesTermsQuery but works only with long values --- Key: LUCENE-6539 URL: https://issues.apache.org/jira/browse/LUCENE-6539 Project: Lucene - Core Issue Type: New Feature Reporter: Michael McCandless Assignee: Michael McCandless Fix For: Trunk, 5.3 Attachments: LUCENE-6539.patch This query accepts any document where any of the provided set of longs was indexed into the specified field as a numeric DV field (NumericDocValuesField or SortedNumericDocValuesField). You can use it instead of DocValuesTermsQuery when you have field values that can be represented as longs. Like DocValuesTermsQuery, this is slowish in general, since it doesn't use an inverted data structure, but in certain cases (many terms/numbers and fewish matching hits) it should be faster than using TermsQuery because it's done as a post filter when other (faster) query clauses are MUST'd with it. In such cases it should also be faster than DocValuesTermsQuery since it skips having to resolve terms - ords. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6527) TermWeight should not load norms when needsScores is false
[ https://issues.apache.org/jira/browse/LUCENE-6527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579447#comment-14579447 ] ASF subversion and git services commented on LUCENE-6527: - Commit 1684506 from [~jpountz] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1684506 ] LUCENE-6527: Queries now get a dummy Similarity when scores are not needed in order to not load unnecessary information like norms. TermWeight should not load norms when needsScores is false -- Key: LUCENE-6527 URL: https://issues.apache.org/jira/browse/LUCENE-6527 Project: Lucene - Core Issue Type: Bug Reporter: Adrien Grand Assignee: Adrien Grand Fix For: Trunk, 5.2.1 Attachments: LUCENE-6527.patch, LUCENE-6527.patch, LUCENE-6527.patch TermWeight currently loads norms all the time, even when needsScores is false. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
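The idea in the commit message can be sketched without Lucene (this is a simplified illustration, not Lucene's actual classes): when the caller does not need scores, hand out a no-op scoring collaborator so the expensive per-document data (norms) is never loaded.

```java
public class LazyNormsSketch {
    interface Norms { long get(int doc); }

    /** Stand-in for a reader where loading norms is expensive and observable. */
    static class Reader {
        boolean normsLoaded = false;
        Norms loadNorms() { normsLoaded = true; return doc -> 1L; }
    }

    /** Only touch the reader's norms when scores are actually needed. */
    static Norms normsFor(Reader reader, boolean needsScores) {
        return needsScores ? reader.loadNorms()
                           : doc -> 1L;  // dummy norms; nothing loaded from the reader
    }
}
```

The dummy branch is what the "dummy Similarity" in the commit achieves at a larger scale: the scoring path is still well-typed and callable, but it carries no index data.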
[jira] [Updated] (SOLR-7638) Angular UI cloud pane broken
[ https://issues.apache.org/jira/browse/SOLR-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Hatcher updated SOLR-7638: --- Attachment: Cloud Dump.png Angular UI cloud pane broken Key: SOLR-7638 URL: https://issues.apache.org/jira/browse/SOLR-7638 Project: Solr Issue Type: Bug Affects Versions: 5.2 Reporter: Upayavira Assignee: Erik Hatcher Priority: Minor Attachments: Cloud Dump.png, SOLR-7638-simple.patch, SOLR-7638.patch I suspect the backend behind the Cloud pane changed meaning the cloud tab in angular doesn't work. Patch will come soon, -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7638) Angular UI cloud pane broken
[ https://issues.apache.org/jira/browse/SOLR-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579474#comment-14579474 ] Erik Hatcher commented on SOLR-7638: I've applied this patch locally on trunk and the cloud view is working now whereas it didn't before, yay! One thing is broken in the new admin Cloud view, the Dump tab. Screenshot attached. No Solr requests are logged when going to the Dump tab. I'm running `bin/solr start -e cloud -noprompt` Angular UI cloud pane broken Key: SOLR-7638 URL: https://issues.apache.org/jira/browse/SOLR-7638 Project: Solr Issue Type: Bug Affects Versions: 5.2 Reporter: Upayavira Assignee: Erik Hatcher Priority: Minor Attachments: Cloud Dump.png, SOLR-7638-simple.patch, SOLR-7638.patch I suspect the backend behind the Cloud pane changed meaning the cloud tab in angular doesn't work. Patch will come soon, -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-5954) Store lucene version in segment_N
[ https://issues.apache.org/jira/browse/LUCENE-5954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579499#comment-14579499 ] ASF subversion and git services commented on LUCENE-5954: - Commit 1684514 from [~mikemccand] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1684514 ] LUCENE-5954: write oldest segment version, and segments_N version, in the segments file Store lucene version in segment_N - Key: LUCENE-5954 URL: https://issues.apache.org/jira/browse/LUCENE-5954 Project: Lucene - Core Issue Type: Bug Reporter: Ryan Ernst Assignee: Michael McCandless Fix For: Trunk, 5.3 Attachments: LUCENE-5954.patch, LUCENE-5954.patch It would be nice to have the version of lucene that wrote segments_N, so that we can use this to determine which major version an index was written with (for upgrading across major versions). I think this could be squeezed in just after the segments_N header. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6538) Improve per-segment diagnostics for IBM J9 JVM
[ https://issues.apache.org/jira/browse/LUCENE-6538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579501#comment-14579501 ] Michael McCandless commented on LUCENE-6538: bq. Why not just this: Oh, duh, much better, I'll fix :) Improve per-segment diagnostics for IBM J9 JVM -- Key: LUCENE-6538 URL: https://issues.apache.org/jira/browse/LUCENE-6538 Project: Lucene - Core Issue Type: Bug Reporter: Michael McCandless Assignee: Michael McCandless Fix For: Trunk, 5.3 Attachments: LUCENE-6538.patch Spinoff from http://lucene.markmail.org/thread/dq4wioomu4o346ej where I noticed that the per-segment diagnostics (seen from CheckIndex) only report 1.7.0 as the JVM version, without any update level. Talking to [~rcmuir] it looks like we just need to add java.vm.version and java.runtime.version sysprops into the diagnostics. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (LUCENE-6539) Add DocValuesNumbersQuery, like DocValuesTermsQuery but works only with long values
Michael McCandless created LUCENE-6539: -- Summary: Add DocValuesNumbersQuery, like DocValuesTermsQuery but works only with long values Key: LUCENE-6539 URL: https://issues.apache.org/jira/browse/LUCENE-6539 Project: Lucene - Core Issue Type: New Feature Reporter: Michael McCandless Assignee: Michael McCandless Fix For: Trunk, 5.3 This query accepts any document where any of the provided set of longs was indexed into the specified field as a numeric DV field (NumericDocValuesField or SortedNumericDocValuesField). You can use it instead of DocValuesTermsQuery when you have field values that can be represented as longs. Like DocValuesTermsQuery, this is slowish in general, since it doesn't use an inverted data structure, but in certain cases (many terms/numbers and fewish matching hits) it should be faster than using TermsQuery because it's done as a post filter when other (faster) query clauses are MUST'd with it. In such cases it should also be faster than DocValuesTermsQuery since it skips having to resolve terms - ords. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6527) TermWeight should not load norms when needsScores is false
[ https://issues.apache.org/jira/browse/LUCENE-6527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579547#comment-14579547 ] ASF subversion and git services commented on LUCENE-6527: - Commit 1684530 from [~jpountz] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1684530 ] LUCENE-6527: Fix rare test bug. TermWeight should not load norms when needsScores is false -- Key: LUCENE-6527 URL: https://issues.apache.org/jira/browse/LUCENE-6527 Project: Lucene - Core Issue Type: Bug Reporter: Adrien Grand Assignee: Adrien Grand Fix For: Trunk, 5.2.1 Attachments: LUCENE-6527.patch, LUCENE-6527.patch, LUCENE-6527.patch TermWeight currently loads norms all the time, even when needsScores is false. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6527) TermWeight should not load norms when needsScores is false
[ https://issues.apache.org/jira/browse/LUCENE-6527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579548#comment-14579548 ] ASF subversion and git services commented on LUCENE-6527: - Commit 1684531 from [~jpountz] in branch 'dev/branches/lucene_solr_5_2' [ https://svn.apache.org/r1684531 ] LUCENE-6527: Queries now get a dummy Similarity when scores are not needed in order to not load unnecessary information like norms. TermWeight should not load norms when needsScores is false -- Key: LUCENE-6527 URL: https://issues.apache.org/jira/browse/LUCENE-6527 Project: Lucene - Core Issue Type: Bug Reporter: Adrien Grand Assignee: Adrien Grand Fix For: Trunk, 5.2.1 Attachments: LUCENE-6527.patch, LUCENE-6527.patch, LUCENE-6527.patch TermWeight currently loads norms all the time, even when needsScores is false. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (LUCENE-6527) TermWeight should not load norms when needsScores is false
[ https://issues.apache.org/jira/browse/LUCENE-6527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adrien Grand resolved LUCENE-6527. -- Resolution: Fixed TermWeight should not load norms when needsScores is false -- Key: LUCENE-6527 URL: https://issues.apache.org/jira/browse/LUCENE-6527 Project: Lucene - Core Issue Type: Bug Reporter: Adrien Grand Assignee: Adrien Grand Fix For: Trunk, 5.2.1 Attachments: LUCENE-6527.patch, LUCENE-6527.patch, LUCENE-6527.patch TermWeight currently loads norms all the time, even when needsScores is false.
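The shape of the fix, per the commit message ("Queries now get a dummy Similarity when scores are not needed"), is to swap in a no-op scoring strategy so the expensive norms are never touched. A generic stdlib sketch of that pattern (all names here are illustrative, not Lucene's actual classes):

```java
public class NormsLoading {

    interface SimilarityLike {
        boolean needsNorms();
    }

    static final SimilarityLike REAL = () -> true;   // real scoring: norms required
    static final SimilarityLike DUMMY = () -> false; // needsScores == false: skip norms

    static int normsLoads = 0;

    // Stand-in for TermWeight: only touches the (expensive) norms
    // when the active similarity says scoring actually needs them.
    static void createWeight(SimilarityLike sim) {
        if (sim.needsNorms()) {
            normsLoads++; // simulate loading norms from the index
        }
    }

    public static void main(String[] args) {
        createWeight(DUMMY);
        createWeight(REAL);
        System.out.println(normsLoads); // only the scoring path loaded norms
    }
}
```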
[jira] [Comment Edited] (SOLR-7570) Config APIs should not modify the ConfigSet
[ https://issues.apache.org/jira/browse/SOLR-7570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579558#comment-14579558 ] Gregory Chanan edited comment on SOLR-7570 at 6/9/15 8:59 PM: -- bq. I guess you guys are missing the point here. What I'm suggesting is make the mutable conf location configurable on a per collection basis. By default (if no extra param is passed) it will be /collections/$COLLECTION_NAME/conf . This will enable users to reuse mutable conf too. For example , when I create a collection I can specify mutableconfdir=/collections/commonConfDir/conf and every collection which has the same property will share the same node for mutable configs as far as I understand it, this suggestion is addressing tomas' bullet above: {code} Changes to configsets need a different API, or file upload. If I remember correctly, collections are watching the configset znode, and may be reloaded after a watch is triggered. We should keep this as a way to edit shared configsets, users would for example, upload a new solrconfig.xml and then touch the configset. This should reload all collections using that configset as we do now. {code} i.e. you need a place to share mutable configs. It seems cleaner to have a separate ConfigSet API, i.e. REST calls to, say, /configs/MySharedConfig rather than to alias collection-specific APIs. The later just gets us back to the case we are in now, where collection-specific APIs can result in changes outside the collection. That is confusing IMO. edit: and the configs under /configs/xxx can be mutable or not, as described in SOLR-5955. was (Author: gchanan): bq. I guess you guys are missing the point here. What I'm suggesting is make the mutable conf location configurable on a per collection basis. By default (if no extra param is passed) it will be /collections/$COLLECTION_NAME/conf . This will enable users to reuse mutable conf too. 
For example , when I create a collection I can specify mutableconfdir=/collections/commonConfDir/conf and every collection which has the same property will share the same node for mutable configs as far as I understand it, this suggestion is addressing tomas' bullet above: {code} Changes to configsets need a different API, or file upload. If I remember correctly, collections are watching the configset znode, and may be reloaded after a watch is triggered. We should keep this as a way to edit shared configsets, users would for example, upload a new solrconfig.xml and then touch the configset. This should reload all collections using that configset as we do now. {code} i.e. you need a place to share mutable configs. It seems cleaner to have a separate ConfigSet API, i.e. REST calls to, say, /configs/MySharedConfig rather than to alias collection-specific APIs. The later just gets us back to the case we are in now, where collection-specific APIs can result in changes outside the collection. That is confusing IMO. Config APIs should not modify the ConfigSet --- Key: SOLR-7570 URL: https://issues.apache.org/jira/browse/SOLR-7570 Project: Solr Issue Type: Improvement Reporter: Tomás Fernández Löbbe Attachments: SOLR-7570.patch Originally discussed here: http://mail-archives.apache.org/mod_mbox/lucene-dev/201505.mbox/%3CCAMJgJxSXCHxDzJs5-C-pKFDEBQD6JbgxB=-xp7u143ekmgp...@mail.gmail.com%3E The ConfigSet used to create a collection should be read-only. Changes made via any of the Config APIs should only be applied to the collection where the operation is done and no to other collections that may be using the same ConfigSet. As discussed in the dev list: When a collection is created we should have two things, an immutable part (the ConfigSet) and a mutable part (configoverlay, generated schema, etc). 
The ConfigSet will still be placed in ZooKeeper under /configs but the mutable part should be placed under /collections/$COLLECTION_NAME/… [~romseygeek] suggested: {quote} A nice way of doing it would be to make it part of the SolrResourceLoader interface. The ZK resource loader could check in the collection-specific zknode first, and then under configs/, and we could add a writeResource() method that writes to the collection-specific node as well. Then all config I/O goes via the resource loader, and we have a way of keeping certain parts immutable. {quote}
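The resource-loader suggestion quoted above boils down to a two-level lookup: reads try the collection-specific location first and fall back to the shared ConfigSet, while writes always land in the collection-specific node. A stdlib sketch of that resolution order, with maps standing in for the two ZooKeeper nodes (names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class LayeredConfigLookup {

    // Stand-ins for /collections/$NAME/... (mutable) and /configs/... (immutable).
    final Map<String, String> collectionNode = new HashMap<>();
    final Map<String, String> configSetNode = new HashMap<>();

    // Reads check the collection-specific node first, then the shared ConfigSet.
    String openResource(String name) {
        String mutable = collectionNode.get(name);
        return mutable != null ? mutable : configSetNode.get(name);
    }

    // Writes always land in the collection-specific node, so the
    // shared ConfigSet is never modified by per-collection API calls.
    void writeResource(String name, String contents) {
        collectionNode.put(name, contents);
    }

    public static void main(String[] args) {
        LayeredConfigLookup loader = new LayeredConfigLookup();
        loader.configSetNode.put("solrconfig.xml", "<config/>");
        System.out.println(loader.openResource("solrconfig.xml"));
        loader.writeResource("solrconfig.xml", "<config><overlay/></config>");
        System.out.println(loader.openResource("solrconfig.xml"));
    }
}
```

With this split, two collections sharing one ConfigSet see the same immutable base, but each collection's config-API edits shadow it only for that collection.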
[jira] [Updated] (SOLR-7656) maxTotalChars param not respected in LangDetect language detection implementation
[ https://issues.apache.org/jira/browse/SOLR-7656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Derek Wood updated SOLR-7656: - Attachment: SOLR-7656.patch maxTotalChars param not respected in LangDetect language detection implementation - Key: SOLR-7656 URL: https://issues.apache.org/jira/browse/SOLR-7656 Project: Solr Issue Type: Bug Components: contrib - LangId Affects Versions: Trunk, 5.2 Reporter: Derek Wood Priority: Minor Attachments: SOLR-7656.patch The LangDetect wrapper code incorrectly uses the maxTotalChars param [1] to configure the max field length in the LangDetect library [2] [1] https://svn.apache.org/viewvc/lucene/dev/trunk/solr/contrib/langid/src/java/org/apache/solr/update/processor/LangDetectLanguageIdentifierUpdateProcessor.java?annotate=1643377#l51 [2] https://github.com/shuyo/language-detection/blob/master/src/com/cybozu/labs/langdetect/Detector.java#L170
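The distinction at the heart of the bug: maxTotalChars is meant to cap the total amount of text accumulated across all fields fed to the detector, which is a different knob from the LangDetect library's own max-text-length setting. A stdlib sketch of the total-budget accumulation (the helper and its name are hypothetical, not the actual Solr code):

```java
public class LangIdText {

    // Hypothetical helper: concatenate field values for language detection,
    // stopping once maxTotalChars characters have been collected in total
    // (across all fields, separators included).
    static String concatForDetection(String[] fieldValues, int maxTotalChars) {
        StringBuilder sb = new StringBuilder();
        for (String value : fieldValues) {
            if (sb.length() >= maxTotalChars) {
                break;                               // total budget exhausted
            }
            if (sb.length() > 0) {
                sb.append(' ');                      // field separator
            }
            int remaining = maxTotalChars - sb.length();
            if (remaining <= 0) {
                break;
            }
            sb.append(value, 0, Math.min(value.length(), remaining));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // prints "hello wo": the 8-char total budget truncates mid-field
        System.out.println(concatForDetection(new String[] {"hello world", "goodbye"}, 8));
    }
}
```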
[jira] [Commented] (SOLR-7512) SolrOutputFormat creates an invalid solr.xml in the solr home zip for MapReduceIndexerTool
[ https://issues.apache.org/jira/browse/SOLR-7512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579473#comment-14579473 ] ASF subversion and git services commented on SOLR-7512: --- Commit 1684509 from [~markrmil...@gmail.com] in branch 'dev/branches/lucene_solr_5_2' [ https://svn.apache.org/r1684509 ] SOLR-7512: SolrOutputFormat creates an invalid solr.xml in the solr home zip for MapReduceIndexerTool. SolrOutputFormat creates an invalid solr.xml in the solr home zip for MapReduceIndexerTool -- Key: SOLR-7512 URL: https://issues.apache.org/jira/browse/SOLR-7512 Project: Solr Issue Type: Bug Components: contrib - MapReduce Affects Versions: 5.1 Reporter: Adam McElwee Assignee: Mark Miller Priority: Blocker Fix For: Trunk, 5.2.1 Attachments: SOLR-7512.patch, SOLR-7512.patch, SOLR-7512.patch, SOLR-7512.patch Sometime after Solr 4.9, the `MapReduceIndexerTool` got busted because invalid `solr.xml` contents were being written to the solr home dir zip. My guess is that a 5.0 change made the invalid file start to matter. 
The error manifests as: {code:java} Error: java.lang.IllegalStateException: Failed to initialize record writer for org.apache.solr.hadoop.MapReduceIndexerTool/SolrMapper, attempt_1430953999892_0012_r_01_1 at org.apache.solr.hadoop.SolrRecordWriter.init(SolrRecordWriter.java:126) at org.apache.solr.hadoop.SolrOutputFormat.getRecordWriter(SolrOutputFormat.java:163) at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.init(ReduceTask.java:569) at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:643) at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:394) at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:175) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548) at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:170) Caused by: org.apache.solr.common.SolrException: org.xml.sax.SAXParseException; Premature end of file. at org.apache.solr.core.Config.init(Config.java:156) at org.apache.solr.core.SolrXmlConfig.fromInputStream(SolrXmlConfig.java:127) at org.apache.solr.core.SolrXmlConfig.fromFile(SolrXmlConfig.java:110) at org.apache.solr.core.SolrXmlConfig.fromSolrHome(SolrXmlConfig.java:138) at org.apache.solr.core.CoreContainer.init(CoreContainer.java:142) at org.apache.solr.hadoop.SolrRecordWriter.createEmbeddedSolrServer(SolrRecordWriter.java:162) at org.apache.solr.hadoop.SolrRecordWriter.init(SolrRecordWriter.java:119) ... 9 more Caused by: org.xml.sax.SAXParseException; Premature end of file. 
at org.apache.xerces.util.ErrorHandlerWrapper.createSAXParseException(Unknown Source) at org.apache.xerces.util.ErrorHandlerWrapper.fatalError(Unknown Source) at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source) at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source) at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source) at org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source) at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source) at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source) at org.apache.xerces.parsers.XMLParser.parse(Unknown Source) at org.apache.xerces.parsers.DOMParser.parse(Unknown Source) at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source) at org.apache.solr.core.Config.init(Config.java:145) ... 15 more {code} The last version that I've successfully used `MapReduceIndexerTool` was 4.9, and I verified that this patch resolves the issue for me (testing on 5.1). I spent a couple hours trying to write a simple test case to exhibit the error, but I haven't quite figured out how to deal with the {noformat}java.security.AccessControlException: java.io.FilePermission ...{noformat} errors. Pull request for bugfix [here|https://github.com/apache/lucene-solr/pull/147] -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
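The {noformat}Premature end of file{noformat} SAXParseException is what the XML parser reports when handed an empty document, which fits the description of invalid solr.xml contents in the zip that 4.x tolerated but 5.x does not. For reference, a minimal well-formed solr.xml of the kind the 5.x loader generally accepts looks like this (illustrative, not the exact contents written by the patch):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<solr>
  <!-- An empty <solr/> element parses cleanly and falls back to defaults;
       a zero-byte file instead fails with "Premature end of file". -->
</solr>
```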
[jira] [Commented] (SOLR-7655) Perf bug- DefaultSolrHighlighter.getSpanQueryScorer triggers MultiFields.getMergedFieldInfos
[ https://issues.apache.org/jira/browse/SOLR-7655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14579478#comment-14579478 ] David Smiley commented on SOLR-7655: This was discovered via a commenter here: https://issues.apache.org/jira/browse/SOLR-5855?focusedCommentId=14578437&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14578437 (the purpose of that issue is unrelated to the discovery in the stack traces posted) Perf bug- DefaultSolrHighlighter.getSpanQueryScorer triggers MultiFields.getMergedFieldInfos Key: SOLR-7655 URL: https://issues.apache.org/jira/browse/SOLR-7655 Project: Solr Issue Type: Bug Components: highlighter Affects Versions: 5.0 Reporter: David Smiley Assignee: David Smiley It appears grabbing the FieldInfos from the SlowCompositeReaderWrapper is slow. It isn't cached. The DefaultSolrHighligher in SOLR-6196 (v5.0) uses it to ascertain if there are payloads. Instead it can grab it from the Terms instance, which is cached.
[jira] [Commented] (LUCENE-6538) Improve per-segment diagnostics for IBM J9 JVM
[ https://issues.apache.org/jira/browse/LUCENE-6538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14579486#comment-14579486 ] Robert Muir commented on LUCENE-6538: - do we really need addSysPropIfNotNull(diagnostics, "java.vm.version") ? Why not just this: {code} diagnostics.add(.., System.getProperty("java.vm.version", "undefined")); {code} Improve per-segment diagnostics for IBM J9 JVM -- Key: LUCENE-6538 URL: https://issues.apache.org/jira/browse/LUCENE-6538 Project: Lucene - Core Issue Type: Bug Reporter: Michael McCandless Assignee: Michael McCandless Fix For: Trunk, 5.3 Attachments: LUCENE-6538.patch Spinoff from http://lucene.markmail.org/thread/dq4wioomu4o346ej where I noticed that the per-segment diagnostics (seen from CheckIndex) only report 1.7.0 as the JVM version, without any update level. Talking to [~rcmuir] it looks like we just need to add java.vm.version and java.runtime.version sysprops into the diagnostics.
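The one-liner Robert suggests works because getProperty's two-argument overload returns the fallback instead of null, so no null check is needed. A minimal stdlib sketch of the idea (the diagnostics map and key names mirror the discussion but are illustrative, not the actual patch):

```java
import java.util.HashMap;
import java.util.Map;

public class SegmentDiagnostics {

    // Record each JVM sysprop in the diagnostics map, falling back to
    // "undefined" when the JVM does not define it.
    static Map<String, String> buildDiagnostics() {
        Map<String, String> diagnostics = new HashMap<>();
        for (String key : new String[] {"java.vm.version", "java.runtime.version"}) {
            // getProperty(key, default) never returns null, so the map
            // always carries an entry for every key.
            diagnostics.put(key, System.getProperty(key, "undefined"));
        }
        return diagnostics;
    }

    public static void main(String[] args) {
        System.out.println(buildDiagnostics());
    }
}
```

On IBM J9, java.vm.version and java.runtime.version carry the update level that the bare java.version (reported as just 1.7.0) lacks, which is the whole point of the issue.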
[jira] [Commented] (SOLR-7652) example/files update-script.js does not work on Java7
[ https://issues.apache.org/jira/browse/SOLR-7652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579506#comment-14579506 ] Erik Hatcher commented on SOLR-7652: Except, this now fails on Java 8, ARG, 5x and trunk: {code} 2015-06-09 20:13:48.485 ERROR (qtp434176574-13) [ x:files] o.a.s.c.SolrCore java.lang.ClassCastException: Cannot cast jdk.internal.dynalink.beans.StaticClass to java.lang.Class at java.lang.invoke.MethodHandleImpl.newClassCastException(MethodHandleImpl.java:312) at java.lang.invoke.MethodHandleImpl.castReference(MethodHandleImpl.java:307) at jdk.nashorn.internal.scripts.Script$\^eval\_.processAdd(eval:74) at jdk.nashorn.internal.runtime.ScriptFunctionData.invoke(ScriptFunctionData.java:537) at jdk.nashorn.internal.runtime.ScriptFunction.invoke(ScriptFunction.java:209) at jdk.nashorn.internal.runtime.ScriptRuntime.apply(ScriptRuntime.java:378) {code} Time to research if there's a way to get this to work in both versions of the built-in Java 7 and 8 JavaScript engines. For now I'll leave it with the fix for Java7 on 5x, and Java8 on trunk. example/files update-script.js does not work on Java7 - Key: SOLR-7652 URL: https://issues.apache.org/jira/browse/SOLR-7652 Project: Solr Issue Type: Bug Affects Versions: 5.2 Reporter: Erik Hatcher Assignee: Erik Hatcher Fix For: 5.3, 5.2.1 Attachments: SOLR-7652.patch A colleague reported that example/files does not work with Java 7, but did with Java 8. {code} $ bin/solr create -c files -d example/files/conf/ Setup new core instance directory: /Users/erikhatcher/dev/clean-branch_5x/solr/server/solr/files Creating new core 'files' using command: http://localhost:8983/solr/admin/cores?action=CREATEname=filesinstanceDir=files Failed to create core 'files' due to: Error CREATEing SolrCore 'files': Unable to create core [files] Caused by: missing name after . 
operator (Unknown source#73) {code} with this in solr.log: {code} Caused by: org.apache.solr.common.SolrException: Unable to evaluate script: update-script.js at org.apache.solr.update.processor.StatelessScriptUpdateProcessorFactory.initEngines(StatelessScriptUpdateProcessorFactory.java:313) at org.apache.solr.update.processor.StatelessScriptUpdateProcessorFactory.inform(StatelessScriptUpdateProcessorFactory.java:227) ... 33 more Caused by: javax.script.ScriptException: sun.org.mozilla.javascript.internal.EvaluatorException: missing name after . operator (Unknown source#73) in Unknown source at line number 73 at com.sun.script.javascript.RhinoScriptEngine.eval(RhinoScriptEngine.java:224) at javax.script.AbstractScriptEngine.eval(AbstractScriptEngine.java:249) at org.apache.solr.update.processor.StatelessScriptUpdateProcessorFactory.initEngines(StatelessScriptUpdateProcessorFactory.java:311) {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
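The Java 7 failure comes from Rhino (sun.org.mozilla.javascript.internal) while the Java 8 failure comes from Nashorn, so update-script.js is effectively being parsed by two different engines. A small stdlib probe shows which engine a given JVM binds to the "JavaScript" name (on JVMs that bundle no JavaScript engine at all, e.g. recent JDKs, it reports none):

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineFactory;
import javax.script.ScriptEngineManager;

public class JsEngineProbe {

    // Returns the name of the engine registered for "JavaScript",
    // or "none" when the JVM ships no JavaScript engine.
    static String javascriptEngineName() {
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("JavaScript");
        return engine == null ? "none" : engine.getFactory().getEngineName();
    }

    public static void main(String[] args) {
        // Typically Rhino on Java 7 and Nashorn on Java 8; "none" on JVMs
        // that dropped the bundled engine.
        System.out.println(javascriptEngineName());
        for (ScriptEngineFactory f : new ScriptEngineManager().getEngineFactories()) {
            System.out.println(f.getEngineName() + " / " + f.getLanguageName());
        }
    }
}
```

This is the kind of check a script (or its packaging) would need before deciding which engine-specific Java interop syntax update-script.js can safely use.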
[jira] [Commented] (SOLR-7638) Angular UI cloud pane broken
[ https://issues.apache.org/jira/browse/SOLR-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14579519#comment-14579519 ] Upayavira commented on SOLR-7638: - [~ehatcher] ahh yes, I haven't done the dump tab. Commit away, I won't get to do dump before tomorrow so we should roll with what we have. Thanks! Angular UI cloud pane broken Key: SOLR-7638 URL: https://issues.apache.org/jira/browse/SOLR-7638 Project: Solr Issue Type: Bug Affects Versions: 5.2 Reporter: Upayavira Assignee: Erik Hatcher Priority: Minor Attachments: Cloud Dump.png, SOLR-7638-simple.patch, SOLR-7638.patch I suspect the backend behind the Cloud pane changed, meaning the cloud tab in angular doesn't work. Patch will come soon.
[jira] [Commented] (LUCENE-6541) Geo3d WGS84 parameters not quite right
[ https://issues.apache.org/jira/browse/LUCENE-6541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579729#comment-14579729 ] Karl Wright commented on LUCENE-6541: - [~nknize], did you have a chance to evaluate geo3d with WGS84 support? Geo3d WGS84 parameters not quite right -- Key: LUCENE-6541 URL: https://issues.apache.org/jira/browse/LUCENE-6541 Project: Lucene - Core Issue Type: Bug Components: modules/spatial Reporter: Karl Wright Attachments: LUCENE-6541.patch The PlanetModel parameters for WGS84 are correct only to within 7 significant digits. In particular, the polar radius is not quite the WGS84 value.
[jira] [Commented] (LUCENE-6480) Extend Simple GeoPointField Type to 3d
[ https://issues.apache.org/jira/browse/LUCENE-6480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579742#comment-14579742 ] Karl Wright commented on LUCENE-6480: - [~nknize]: Now that you've got geo3d with a WGS84 planet model, any interest in this ticket? Extend Simple GeoPointField Type to 3d --- Key: LUCENE-6480 URL: https://issues.apache.org/jira/browse/LUCENE-6480 Project: Lucene - Core Issue Type: New Feature Components: core/index Reporter: Nicholas Knize [LUCENE-6450 | https://issues.apache.org/jira/browse/LUCENE-6450] proposes a simple GeoPointField type to lucene core. This field uses 64bit encoding of 2 dimensional points to construct sorted term representations of GeoPoints (aka: GeoHashing). This feature investigates adding support for encoding 3 dimensional GeoPoints, either by extending GeoPointField to a Geo3DPointField or adding an additional 3d constructor.
[JENKINS] Lucene-Solr-NightlyTests-5.2 - Build # 13 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.2/13/ 1 tests failed. FAILED: org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test Error Message: Captured an uncaught exception in thread: Thread[id=5187, name=collection3, state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest] Stack Trace: com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=5187, name=collection3, state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest] Caused by: java.lang.RuntimeException: org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[https://127.0.0.1:56121, https://127.0.0.1:57203, https://127.0.0.1:53722, https://127.0.0.1:60521, https://127.0.0.1:42767] at __randomizedtesting.SeedInfo.seed([47EA7E51B7880699]:0) at org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:887) Caused by: org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[https://127.0.0.1:56121, https://127.0.0.1:57203, https://127.0.0.1:53722, https://127.0.0.1:60521, https://127.0.0.1:42767] at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:355) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1086) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:856) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:799) at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220) at org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:884) Caused by: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at https://127.0.0.1:53722: KeeperErrorCode = Session expired for /overseer/collection-queue-work/qn- at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:560) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:235) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:227) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:376) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:328) ... 5 more Build Log: [...truncated 21210 lines...] [junit4] Suite: org.apache.solr.cloud.CollectionsAPIDistributedZkTest [junit4] 2 Creating dataDir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.2/solr/build/solr-core/test/J2/temp/solr.cloud.CollectionsAPIDistributedZkTest_47EA7E51B7880699-001/init-core-data-001 [junit4] 2 869111 T4831 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl (true) and clientAuth (false) [junit4] 2 869112 T4831 oas.BaseDistributedSearchTestCase.initHostContext Setting hostContext system property: / [junit4] 2 869116 T4831 oasc.ZkTestServer.run STARTING ZK TEST SERVER [junit4] 2 869116 T4832 oasc.ZkTestServer$2$1.setClientPort client port:0.0.0.0/0.0.0.0:0 [junit4] 2 869116 T4832 oasc.ZkTestServer$ZKServerMain.runFromConfig Starting server [junit4] 2 869216 T4831 oasc.ZkTestServer.run start zk server on port:46516 [junit4] 2 869217 T4831 oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default ZkCredentialsProvider [junit4] 2 869217 T4831 oascc.ConnectionManager.waitForConnected Waiting for client to connect to ZooKeeper [junit4] 2 869220 T4839 oascc.ConnectionManager.process Watcher org.apache.solr.common.cloud.ConnectionManager@294b5be8 name:ZooKeeperConnection Watcher:127.0.0.1:46516 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None [junit4] 2 869220 T4831 oascc.ConnectionManager.waitForConnected Client is connected to ZooKeeper [junit4] 2 869220 T4831 oascc.SolrZkClient.createZkACLProvider Using default ZkACLProvider [junit4] 2 869220 T4831 
oascc.SolrZkClient.makePath makePath: /solr [junit4] 2 869223 T4831 oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default ZkCredentialsProvider [junit4] 2 869223 T4831 oascc.ConnectionManager.waitForConnected Waiting for client to connect to ZooKeeper [junit4] 2 869224 T4842 oascc.ConnectionManager.process Watcher org.apache.solr.common.cloud.ConnectionManager@b4cfbfa name:ZooKeeperConnection Watcher:127.0.0.1:46516/solr got event WatchedEvent state:SyncConnected type:None path:null path:null type:None [junit4] 2 869225 T4831 oascc.ConnectionManager.waitForConnected Client is connected to ZooKeeper [junit4] 2 869225 T4831 oascc.SolrZkClient.createZkACLProvider Using default ZkACLProvider [junit4] 2 869225 T4831
[jira] [Commented] (SOLR-7468) Kerberos authentication module
[ https://issues.apache.org/jira/browse/SOLR-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579704#comment-14579704 ] Gregory Chanan commented on SOLR-7468: -- bq. About #4: Without the ticket caching support, minikdc has issues when multiple clients try to get tickets for the same principal (from the same host). What is a client? A thread? I looked into upgrading the hadoop minikdc dependency a month or so back but a release wasn't ready. When I have some time I'll look again. bq. Also, the link to your codebase for the SolrHadoopAuthenticationFilter seems internal as I can't get to it. Whoops my apology! I meant: https://github.com/cloudera/lucene-solr/blob/cdh5-4.10.3_5.4.2/solr/core/src/java/org/apache/solr/servlet/SolrHadoopAuthenticationFilter.java#L106 Kerberos authentication module -- Key: SOLR-7468 URL: https://issues.apache.org/jira/browse/SOLR-7468 Project: Solr Issue Type: New Feature Components: security Reporter: Ishan Chattopadhyaya Assignee: Anshum Gupta Fix For: 5.2 Attachments: SOLR-7468-alt-test.patch, SOLR-7468-alt-test.patch, SOLR-7468-alt-test.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, hoss_trunk_r1681791_TEST-org.apache.solr.cloud.TestSolrCloudWithKerberosAlt.xml, hoss_trunk_r1681791_tests-failures.txt SOLR-7274 introduces a pluggable authentication framework. This issue provides a Kerberos plugin implementation. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-7468) Kerberos authentication module
[ https://issues.apache.org/jira/browse/SOLR-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14579636#comment-14579636 ] Gregory Chanan edited comment on SOLR-7468 at 6/9/15 10:47 PM: --- Sorry for the delay, I took a look at this. Some notes below: 1) Great work [~ichattopadhyaya]! So glad to see this in Apache Solr. 2) The KerberosFilter should either check that kerberos is actually enabled (via type) or be a private nested class of the KerberosPlugin, to ensure it is only used with Kerberos. That can be handled as a separate jira. 3) I'm a little concerned with the NoContext code in KerberosPlugin moving forward (I understand this is more a generic auth question than kerberos specific). For example, in the latest version of the filter we are using at Cloudera, we play around with the ServletContext in order to pass information around (https://github.com/cloudera/lucene-solr/blob/cdh5-4.10.3_5.4.2/solr/core/src/java/org/apache/solr/servlet/SolrHadoopAuthenticationFilter.java#L106). Is there any way we can get the actual ServletContext in a plugin? Again, this doesn't need to change right now. 4) {code}
/**
 * Test 5 nodes Solr cluster with Kerberos plugin enabled.
 * This test is Ignored right now as Mini KDC has a known bug that
 * doesn't allow us to run multiple nodes on the same host.
 * https://issues.apache.org/jira/browse/HADOOP-9893
 */
{code} This description is a little confusing -- it sounds like you can't run multiple MiniKDC nodes on one host, but you typically wouldn't want to do that so I doubt that is the issue. What exactly is the issue?
5) {code}
String jaas = "Client {\n"
    + " com.sun.security.auth.module.Krb5LoginModule required\n"
    + " useKeyTab=true\n"
    + " keyTab=\"" + keytabFile.getAbsolutePath() + "\"\n"
    + " storeKey=true\n"
    + " useTicketCache=false\n"
    + " doNotPrompt=true\n"
    + " debug=true\n"
    + " principal=\"" + principal + "\";\n"
    + "};\n"
    + "Server {\n"
    + " com.sun.security.auth.module.Krb5LoginModule required\n"
    + " useKeyTab=true\n"
    + " keyTab=\"" + keytabFile.getAbsolutePath() + "\"\n"
    + " storeKey=true\n"
    + " doNotPrompt=true\n"
    + " useTicketCache=false\n"
    + " debug=true\n"
    + " principal=\"" + zkServerPrincipal + "\";\n"
    + "};\n";
{code} It would be nice if we could just create a jaas configuration and pass it to the client, like we do in SOLR-6915. Again, nothing that needs to change now, but having the jaas configuration management in one place (the KerberosTestUtil) is ideal, because that code is known to be fragile, i.e. different JVMs require different parameters, capitalization, etc. If we have that sort of code around in different tests we won't be able to handle that. 6) {code}httpClient.addRequestInterceptor(bufferedEntityInterceptor);{code} I think I mentioned this in a previous JIRA, but it would be nice to do some more investigation to figure out if we can avoid this. The hadoop auth filter has some code where you can use a cookie to avoid re-doing the negotiate...obviously you'd only want to do that if ssl was enabled. was (Author: gchanan): Sorry for the delay, I took a look at this. Some notes below: 1) Great work [~ichattopadhyaya]! So glad to see this in Apache Solr. 2) The KerberosFilter should either check that kerberos is actually enabled (via type) or be a private nested class of the KerberosPlugin, to ensure it is only used with Kerberos. That can be handled as a separate jira. 3) I'm a little concerned with the NoContext code in KerberosPlugin moving forward (I understand this is more a generic auth question than kerberos specific). 
For example, in the latest version of the filter we are using at Cloudera, we play around with the ServletContext in order to pass information around (http://github.mtv.cloudera.com/CDH/lucene-solr/blob/1df8c1041fda00c82df08c03e2c07c6f346c5671/solr/core/src/java/org/apache/solr/servlet/SolrHadoopAuthenticationFilter.java#L122-L123). Is there any way we can get the actual ServletContext in a plugin? Again, this doesn't need to change right now.
4) {code}
/**
 * Test 5 nodes Solr cluster with Kerberos plugin enabled.
 * This test is Ignored right now as Mini KDC has a known bug that
 * doesn't allow us to run multiple nodes on the same host.
 * https://issues.apache.org/jira/browse/HADOOP-9893
 */
{code} This description is a little confusing -- it sounds like you can't run multiple MiniKDC nodes on one host, but you typically wouldn't want to do that so I doubt that is the issue. What exactly is the issue?
5) {code} String jaas = Client {\n 102
[jira] [Commented] (LUCENE-6522) Reproducible fieldcache AIOOBE only on J9
[ https://issues.apache.org/jira/browse/LUCENE-6522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579708#comment-14579708 ] Kevin Langman commented on LUCENE-6522: --- Found the problem. Now I am just looking for an appropriate fix. Reproducible fieldcache AIOOBE only on J9 - Key: LUCENE-6522 URL: https://issues.apache.org/jira/browse/LUCENE-6522 Project: Lucene - Core Issue Type: Bug Reporter: Robert Muir Haven't dug in yet, just: * reproduces easily on J9 * does not happen on Oracle JVM {noformat} [junit4] Suite: org.apache.lucene.uninverting.TestFieldCacheVsDocValues [junit4] IGNOR/A 0.51s J2 | TestFieldCacheVsDocValues.testHugeBinaryValueLimit [junit4] Assumption #1: test requires codec with limits on max binary field length [junit4] 2 NOTE: reproduce with: ant test -Dtestcase=TestFieldCacheVsDocValues -Dtests.method=testSortedSetFixedLengthVsUninvertedField -Dtests.seed=831619B333C362E6 -Dtests.locale=es_UY -Dtests.timezone=Atlantic/Bermuda -Dtests.asserts=true -Dtests.file.encoding=US-ASCII [junit4] ERROR 0.54s J2 | TestFieldCacheVsDocValues.testSortedSetFixedLengthVsUninvertedField [junit4] Throwable #1: java.lang.ArrayIndexOutOfBoundsException [junit4] at __randomizedtesting.SeedInfo.seed([831619B333C362E6:B6EC641493EA4AD3]:0) [junit4] at org.apache.lucene.uninverting.DocTermOrds$OrdWrappedTermsEnum.seekCeil(DocTermOrds.java:692) [junit4] at org.apache.lucene.uninverting.TestFieldCacheVsDocValues.assertEquals(TestFieldCacheVsDocValues.java:570) [junit4] at org.apache.lucene.uninverting.TestFieldCacheVsDocValues.assertEquals(TestFieldCacheVsDocValues.java:511) [junit4] at org.apache.lucene.uninverting.TestFieldCacheVsDocValues.doTestSortedSetVsUninvertedField(TestFieldCacheVsDocValues.java:385) [junit4] at org.apache.lucene.uninverting.TestFieldCacheVsDocValues.testSortedSetFixedLengthVsUninvertedField(TestFieldCacheVsDocValues.java:105) [junit4] at java.lang.Thread.run(Thread.java:785) [junit4] 2 NOTE: reproduce 
with: ant test -Dtestcase=TestFieldCacheVsDocValues -Dtests.method=testSortedSetVariableLengthVsUninvertedField -Dtests.seed=831619B333C362E6 -Dtests.locale=es_UY -Dtests.timezone=Atlantic/Bermuda -Dtests.asserts=true -Dtests.file.encoding=US-ASCII [junit4] ERROR 0.42s J2 | TestFieldCacheVsDocValues.testSortedSetVariableLengthVsUninvertedField [junit4] Throwable #1: java.lang.ArrayIndexOutOfBoundsException [junit4] at __randomizedtesting.SeedInfo.seed([831619B333C362E6:2AB51ED6D324E426]:0) [junit4] at org.apache.lucene.uninverting.DocTermOrds$OrdWrappedTermsEnum.seekCeil(DocTermOrds.java:692) [junit4] at org.apache.lucene.uninverting.TestFieldCacheVsDocValues.assertEquals(TestFieldCacheVsDocValues.java:570) [junit4] at org.apache.lucene.uninverting.TestFieldCacheVsDocValues.assertEquals(TestFieldCacheVsDocValues.java:511) [junit4] at org.apache.lucene.uninverting.TestFieldCacheVsDocValues.doTestSortedSetVsUninvertedField(TestFieldCacheVsDocValues.java:385) [junit4] at org.apache.lucene.uninverting.TestFieldCacheVsDocValues.testSortedSetVariableLengthVsUninvertedField(TestFieldCacheVsDocValues.java:112) [junit4] at java.lang.Thread.run(Thread.java:785) [junit4] 2 NOTE: leaving temporary files on disk at: /home/rmuir/workspace/trunk-ibm/lucene/build/misc/test/J2/temp/lucene.uninverting.TestFieldCacheVsDocValues 831619B333C362E6-001 [junit4] 2 NOTE: test params are: codec=Asserting(Lucene50): {indexed=FSTOrd50, id=Lucene50(blocksize=128)}, docValues:{dv=DocValuesFormat(name=Asserting), field=DocValuesFormat(name=Asserting)}, sim=DefaultSimilarity, locale=es_UY, timezone=Atlantic/Bermuda [junit4] 2 NOTE: Linux 3.13.0-49-generic amd64/IBM Corporation 1.8.0 (64-bit)/cpus=8,threads=1,free=10179616,total=32243712 [junit4] 2 NOTE: All tests run in this JVM: [TestDocTermOrds, TestNumericTerms32, TestFieldCacheVsDocValues] [junit4] Completed [21/25] on J2 in 4.50s, 10 tests, 2 errors, 1 skipped FAILURES! 
{noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.9.0-ea-b60) - Build # 12829 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12829/ Java: 32bit/jdk1.9.0-ea-b60 -client -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.cloud.TestCloudPivotFacet.test Error Message: init query failed: {main(facet=truefacet.pivot=dense_pivot_ti1%2Cpivot_z_sfacet.pivot=%7B%21stats%3Dst1%7Dpivot_z_s%2Cpivot_x_s%2Cpivot_f1facet.limit=15facet.offset=3facet.missing=false),extra(rows=0q=*%3A*fq=id%3A%5B*+TO+1578%5Dstats=truestats.field=%7B%21key%3Dsk1+tag%3Dst1%2Cst2%7Dpivot_istats.field=%7B%21key%3Dsk2+tag%3Dst2%2Cst3%7Dpivot_tfstats.field=%7B%21key%3Dsk3+tag%3Dst3%2Cst4%7Ddense_pivot_i1_test_miss=false)}: No live SolrServers available to handle this request:[http://127.0.0.1:45696/he_bn/f/collection1] Stack Trace: java.lang.RuntimeException: init query failed: {main(facet=truefacet.pivot=dense_pivot_ti1%2Cpivot_z_sfacet.pivot=%7B%21stats%3Dst1%7Dpivot_z_s%2Cpivot_x_s%2Cpivot_f1facet.limit=15facet.offset=3facet.missing=false),extra(rows=0q=*%3A*fq=id%3A%5B*+TO+1578%5Dstats=truestats.field=%7B%21key%3Dsk1+tag%3Dst1%2Cst2%7Dpivot_istats.field=%7B%21key%3Dsk2+tag%3Dst2%2Cst3%7Dpivot_tfstats.field=%7B%21key%3Dsk3+tag%3Dst3%2Cst4%7Ddense_pivot_i1_test_miss=false)}: No live SolrServers available to handle this request:[http://127.0.0.1:45696/he_bn/f/collection1] at __randomizedtesting.SeedInfo.seed([3769E7A01105148C:BF3DD87ABFF97974]:0) at org.apache.solr.cloud.TestCloudPivotFacet.assertPivotCountsAreCorrect(TestCloudPivotFacet.java:255) at org.apache.solr.cloud.TestCloudPivotFacet.test(TestCloudPivotFacet.java:228) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:502) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at
[jira] [Updated] (LUCENE-6529) NumericFields + SlowCompositeReaderWrapper + UninvertedReader + -Dtests.codec=random can result in incorrect SortedSetDocValues
[ https://issues.apache.org/jira/browse/LUCENE-6529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hoss Man updated LUCENE-6529: - Attachment: LUCENE-6529.patch Some of Solr's faceting tests uncovered an AIOOBE due to my last patch when dealing with empty indexes - so I updated TestDocTermOrds and TestUninvertingReader to have similar checks to catch things like this, and then updated the changes in DocTermOrds to better account for this. Patch also updated to resolve the nocommits about increasing randomization. Still hammering... NumericFields + SlowCompositeReaderWrapper + UninvertedReader + -Dtests.codec=random can result in incorrect SortedSetDocValues - Key: LUCENE-6529 URL: https://issues.apache.org/jira/browse/LUCENE-6529 Project: Lucene - Core Issue Type: Bug Reporter: Hoss Man Attachments: LUCENE-6529.patch, LUCENE-6529.patch, LUCENE-6529.patch, LUCENE-6529.patch Digging into SOLR-7631 and SOLR-7605 I became fairly confident that the only explanation of the behavior I was seeing was some sort of bug in either the randomized codec/postings-format or the UninvertedReader, that was only evident when the two were combined and used on a multivalued Numeric Field using precision steps. But since I couldn't find any -Dtests.codec or -Dtests.postings.format options that would cause the bug 100% regardless of seed, I switched tactics and focused on reproducing the problem using UninvertedReader directly and checking the SortedSetDocValues.getValueCount().
I now have a test that fails frequently (and consistently for any seed I find), but only with -Dtests.codec=random -- override it with -Dtests.codec=default and everything works fine (based on the exhaustive testing I did in the linked issues, I suspect every named codec works fine - but I didn't re-do that testing here). The failures only seem to happen when checking the SortedSetDocValues.getValueCount() of a SlowCompositeReaderWrapper around the UninvertedReader -- which suggests the root bug may actually be in SlowCompositeReaderWrapper? (but still has some dependency on the random codec) -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-6532) Add GeoPointDistanceQuery for GeoPointField type
[ https://issues.apache.org/jira/browse/LUCENE-6532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicholas Knize updated LUCENE-6532: --- Attachment: LUCENE-6532.patch Initial work in progress. This first cut is super super slow, but gets the math foundation in the open for review. Notable additions: * Initial GeoPointDistanceQuery class as an extension to GeoPointInPolygonQuery * Converts a point radius to a polygon approximated by tangential line segments using Vincenty's Direct and Inverse Solutions of Geodesics on the Ellipsoid... * Adds ECEF to ENU and LLA to ENU coordinate conversions for local coordinate system calculations (supports 3D) * Randomized test support for PointDistanceQuery testing Separately I've started adding these computations to BKDTree. Point radius queries on BKDTree should be far faster than simply using the Terms Dictionary and Postings list as the KD-Tree structure is naturally organized by location. This should enable us to stop traversing the tree once we've found an inner node that matches the distance query. I'll file a separate issue for this feature and work it in tandem. Add GeoPointDistanceQuery for GeoPointField type Key: LUCENE-6532 URL: https://issues.apache.org/jira/browse/LUCENE-6532 Project: Lucene - Core Issue Type: New Feature Components: core/search Reporter: Nicholas Knize Attachments: LUCENE-6532.patch [https://issues.apache.org/jira/browse/LUCENE-6481 | LUCENE-6481] adds GeoPointField w/ GeoPointInBBox and GeoPointInPolygon queries. This feature adds GeoPointDistanceQuery to support point radius queries. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
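The comment above mentions LLA to ENU conversions; that chain is conventionally done via an intermediate geodetic-to-ECEF step using the standard WGS84 formulas. A minimal self-contained sketch of that first leg -- the class and method names here are illustrative, not the patch's actual API:

```java
// Geodetic (lat/lon/alt, "LLA") to Earth-Centered-Earth-Fixed (ECEF)
// conversion using the standard WGS84 constants. Illustrative sketch only;
// the LUCENE-6532 patch code may differ.
public class LlaToEcef {
    static final double WGS84_A = 6378137.0;                  // semi-major axis (m)
    static final double WGS84_F = 1.0 / 298.257223563;        // flattening
    static final double WGS84_E2 = WGS84_F * (2.0 - WGS84_F); // first eccentricity squared

    /** Returns {x, y, z} in meters for geodetic lat/lon (degrees) and altitude (m). */
    static double[] toEcef(double latDeg, double lonDeg, double alt) {
        double lat = Math.toRadians(latDeg);
        double lon = Math.toRadians(lonDeg);
        double sinLat = Math.sin(lat);
        // N: prime vertical radius of curvature at this latitude
        double n = WGS84_A / Math.sqrt(1.0 - WGS84_E2 * sinLat * sinLat);
        double x = (n + alt) * Math.cos(lat) * Math.cos(lon);
        double y = (n + alt) * Math.cos(lat) * Math.sin(lon);
        double z = (n * (1.0 - WGS84_E2) + alt) * sinLat;
        return new double[] {x, y, z};
    }

    public static void main(String[] args) {
        double[] equator = toEcef(0, 0, 0); // x should be the semi-major axis
        double[] pole = toEcef(90, 0, 0);   // z should be the semi-minor axis
        System.out.printf("equator x=%.3f  pole z=%.3f%n", equator[0], pole[2]);
    }
}
```

The ENU step that follows is just a rotation of the ECEF delta vector into the local tangent plane at a reference point.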
[jira] [Created] (LUCENE-6540) Add BKDPointDistanceQuery
Nicholas Knize created LUCENE-6540: -- Summary: Add BKDPointDistanceQuery Key: LUCENE-6540 URL: https://issues.apache.org/jira/browse/LUCENE-6540 Project: Lucene - Core Issue Type: New Feature Components: core/search Reporter: Nicholas Knize LUCENE-6532 adds the supporting mathematics for point-distance computation based on the ellipsoid (using Vincenty's Direct and Inverse solutions). This feature adds a BKDPointDistance query function for finding all documents that match the distance criteria from a provided geo point. This should outperform other solutions since we can stop traversing the BKD tree once we've found an internal node that matches the given criteria. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
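The early exit described above -- stop recursing once an internal node's cell lies wholly inside the query region -- can be illustrated with a toy one-dimensional analogue. Everything here (the class, the 1-D simplification, the names) is invented for illustration and is not the BKD tree code:

```java
import java.util.ArrayList;
import java.util.List;

// Toy 1-D illustration of BKD-style pruning for a distance query:
// each node covers a value range [min, max]. If the whole cell lies
// inside the query interval we collect every doc in it without any
// per-document checks; if it is disjoint we skip it entirely; only
// cells that cross the query boundary are recursed into.
public class BkdPruneSketch {
    static List<Integer> collect(double[] values, int lo, int hi,
                                 double center, double radius, List<Integer> out) {
        double min = values[lo], max = values[hi];     // values are sorted
        if (min > center + radius || max < center - radius) {
            return out;                                // cell outside query: skip
        }
        if (min >= center - radius && max <= center + radius) {
            for (int i = lo; i <= hi; i++) out.add(i); // cell inside query: take all
            return out;
        }
        if (lo == hi) {                                // leaf: per-value check
            if (Math.abs(values[lo] - center) <= radius) out.add(lo);
            return out;
        }
        int mid = (lo + hi) >>> 1;                     // cell crosses query: recurse
        collect(values, lo, mid, center, radius, out);
        collect(values, mid + 1, hi, center, radius, out);
        return out;
    }

    public static void main(String[] args) {
        double[] values = {1, 2, 3, 5, 8, 13, 21};
        // query interval [1, 9]: matches indices 0..4
        System.out.println(collect(values, 0, values.length - 1, 5, 4, new ArrayList<>()));
    }
}
```

The real tree additionally packs leaves on disk and works in two (or more) dimensions, but the inside/outside/crosses trichotomy is the part that makes distance queries cheap.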
[jira] [Commented] (LUCENE-6539) Add DocValuesNumbersQuery, like DocValuesTermsQuery but works only with long values
[ https://issues.apache.org/jira/browse/LUCENE-6539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579637#comment-14579637 ] Michael McCandless commented on LUCENE-6539: bq. new HashSet<Long>(Arrays.asList(array)). Good, I'll fix. bq. However instead of keeping on adding such queries to core, I think we should consider moving all our doc values queries to misc since they have complicated trade-offs and are only useful in expert use-cases? +1, I can move them here. {quote} bq. in certain cases (many terms/numbers and fewish matching hits) it should be faster than using TermsQuery This comment got me confused: I think in general these queries are more efficient when they match many documents, i.e. even when an equivalent TermsQuery would not be used as a lead iterator in a conjunction? I think the only case when such a query matching few documents would be useful would be in a prohibited clause, since these prohibited clauses can never be used to lead iteration anyway and are only used in a random-access fashion? {quote} Hmm, this is hard to think about, but yes, I was thinking about the "there is some other MUST'd clause as the primary, and then this query is a MUST_NOT of a big list of numeric IDs" use case. The per-hit cost is higher with these DocValuesXXX queries (the forward lookup + check) vs visiting postings and ORing bitsets as TermsQuery does (when there are enough terms), but the setup cost is higher with TermsQuery since it must look up many terms across N segments, which is why I thought not matching too many total hits would favor DocValuesXXXQuery with a large number of terms. E.g. in the extreme case where you pass a single term to your TermsQuery or DocValuesTermsQuery, matching many docs, and it's the primary (only) clause in the query, TermsQuery should be much faster. bq.
It's ok in current form to go to sandbox, but I think this needs to be integrated into the inverted approach so that, based on index stats, Lucene can just do the right thing.
OK, or I can just WONTFIX this ... I just thought there are use cases where this post-filter approach would be much faster than the choices we have today, e.g. when an app has numeric IDs and wants to make big NOT-in-list clauses. I agree it would be better if we had only TermsQuery, and then it would figure out which strategy to take (use doc values, use numeric doc values if the ids are really numeric, use postings) depending on index stats, whether the clause is primary or not, etc... but this seems very tricky: I can't even properly think about the cases, see Adrien's comment above ;) Add DocValuesNumbersQuery, like DocValuesTermsQuery but works only with long values --- Key: LUCENE-6539 URL: https://issues.apache.org/jira/browse/LUCENE-6539 Project: Lucene - Core Issue Type: New Feature Reporter: Michael McCandless Assignee: Michael McCandless Fix For: Trunk, 5.3 Attachments: LUCENE-6539.patch This query accepts any document where any of the provided set of longs was indexed into the specified field as a numeric DV field (NumericDocValuesField or SortedNumericDocValuesField). You can use it instead of DocValuesTermsQuery when you have field values that can be represented as longs. Like DocValuesTermsQuery, this is slowish in general, since it doesn't use an inverted data structure, but in certain cases (many terms/numbers and fewish matching hits) it should be faster than using TermsQuery because it's done as a post filter when other (faster) query clauses are MUST'd with it. In such cases it should also be faster than DocValuesTermsQuery since it skips having to resolve terms -> ords. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
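The "new HashSet<Long>(Arrays.asList(array))" remark quoted earlier in this thread is worth spelling out; one plausible reading is the classic primitive-array autoboxing pitfall, sketched below with illustrative names (this is not the patch code):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// The Arrays.asList pitfall: applied to a primitive long[], asList
// infers T = long[] and produces a ONE-element List<long[]> rather than
// a List<Long>, so a set built from it never contains the long values.
public class LongSetPitfall {
    public static void main(String[] args) {
        long[] ids = {3L, 1L, 4L};

        List<long[]> wrong = Arrays.asList(ids);   // one element: the array itself
        System.out.println(wrong.size());          // prints 1, not 3

        // Correct: box each value explicitly before collecting into a set.
        Set<Long> right = Arrays.stream(ids).boxed().collect(Collectors.toSet());
        System.out.println(right.contains(3L));    // prints true
    }
}
```

Using a `Long[]` (already boxed) with `Arrays.asList` would also work, but boxing a `long[]` via `Arrays.stream(...).boxed()` avoids allocating a second array up front.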
[jira] [Commented] (SOLR-7638) Angular UI cloud pane broken
[ https://issues.apache.org/jira/browse/SOLR-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579646#comment-14579646 ] Upayavira commented on SOLR-7638: - Correct. The patch as-is is what there is for now. Can do more later in the week. Angular UI cloud pane broken Key: SOLR-7638 URL: https://issues.apache.org/jira/browse/SOLR-7638 Project: Solr Issue Type: Bug Affects Versions: 5.2 Reporter: Upayavira Assignee: Erik Hatcher Priority: Minor Attachments: Cloud Dump.png, SOLR-7638-simple.patch, SOLR-7638.patch I suspect the backend behind the Cloud pane changed, meaning the cloud tab in Angular doesn't work. Patch will come soon. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (LUCENE-6541) Geo3d WGS84 parameters not quite right
Karl Wright created LUCENE-6541: --- Summary: Geo3d WGS84 parameters not quite right Key: LUCENE-6541 URL: https://issues.apache.org/jira/browse/LUCENE-6541 Project: Lucene - Core Issue Type: Bug Components: modules/spatial Reporter: Karl Wright The PlanetModel parameters for WGS84 are correct only to within 7 significant digits. In particular, the polar radius is not quite the WGS84 value. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
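For reference, WGS84 is defined by the semi-major axis a = 6378137 m and inverse flattening 1/f = 298.257223563, from which the polar (semi-minor) radius follows as b = a(1 - f) ≈ 6356752.314245 m. A quick check of the rounding issue Karl describes (the class and constant names below are illustrative, not Geo3d's):

```java
// Derive the WGS84 polar (semi-minor) radius from the two defining
// constants and compare it against a 7-significant-digit approximation,
// showing the centimeter-scale error a truncated constant introduces.
public class Wgs84PolarRadius {
    public static void main(String[] args) {
        double a = 6378137.0;                 // semi-major axis (m), defining constant
        double invF = 298.257223563;          // inverse flattening, defining constant
        double b = a * (1.0 - 1.0 / invF);    // derived semi-minor (polar) radius
        System.out.printf("b = %.6f m%n", b); // ~6356752.314245

        double sevenDigits = 6356752.3;       // only 7 significant digits
        System.out.printf("error = %.6f m%n", Math.abs(b - sevenDigits));
    }
}
```

Deriving b from a and 1/f at full double precision, rather than storing a rounded literal, is presumably what the attached patch does.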
[jira] [Commented] (SOLR-6234) Scoring modes for query time join
[ https://issues.apache.org/jira/browse/SOLR-6234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579608#comment-14579608 ] Ryan Josal commented on SOLR-6234: -- This is awesome, the normal !join qparser is mainly only good for fq, but with scoring, this is good as a q or subquery. Scoring modes for query time join -- Key: SOLR-6234 URL: https://issues.apache.org/jira/browse/SOLR-6234 Project: Solr Issue Type: New Feature Components: query parsers Affects Versions: 4.10.3, Trunk Reporter: Mikhail Khludnev Labels: features, patch, test Fix For: 5.0, Trunk Attachments: SOLR-6234.patch, SOLR-6234.patch it adds {{scorejoin}} query parser which calls Lucene's JoinUtil underneath. It supports: - {{score=none|avg|max|total}} local param (passed as ScoreMode to JoinUtil) - {{score=none}} is *default*, eg if you *omit* this localparam - supports {{b=100}} param to pass {{Query.setBoost()}}. - {{multiVals=true|false}} is introduced - there is a test coverage for cross core join case. - so far it joins string and multivalue string fields (Sorted, SortedSet, Binary), but not Numerics DVs. follow-up LUCENE-5868 -there was a bug in cross core join, however there is a workaround for it- it's fixed in Dec'14 patch. Note: the development of this patch was sponsored by an anonymous contributor and approved for release under Apache License. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6539) Add DocValuesNumbersQuery, like DocValuesTermsQuery but works only with long values
[ https://issues.apache.org/jira/browse/LUCENE-6539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579661#comment-14579661 ] Adrien Grand commented on LUCENE-6539: -- bq. OK, or I can just WONTFIX this
I think you should commit it; it is a missing piece today, since you can do this on SORTED or SORTED_SET but not NUMERIC or SORTED_NUMERIC, while this new query is cheaper. Let's put it into sandbox if we want to be safe? Agreed that integration with TermsQuery would be wonderful, but I also see challenges on the way. :) Add DocValuesNumbersQuery, like DocValuesTermsQuery but works only with long values --- Key: LUCENE-6539 URL: https://issues.apache.org/jira/browse/LUCENE-6539 Project: Lucene - Core Issue Type: New Feature Reporter: Michael McCandless Assignee: Michael McCandless Fix For: Trunk, 5.3 Attachments: LUCENE-6539.patch This query accepts any document where any of the provided set of longs was indexed into the specified field as a numeric DV field (NumericDocValuesField or SortedNumericDocValuesField). You can use it instead of DocValuesTermsQuery when you have field values that can be represented as longs. Like DocValuesTermsQuery, this is slowish in general, since it doesn't use an inverted data structure, but in certain cases (many terms/numbers and fewish matching hits) it should be faster than using TermsQuery because it's done as a post filter when other (faster) query clauses are MUST'd with it. In such cases it should also be faster than DocValuesTermsQuery since it skips having to resolve terms -> ords. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7468) Kerberos authentication module
[ https://issues.apache.org/jira/browse/SOLR-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579666#comment-14579666 ] Anshum Gupta commented on SOLR-7468: Also, the link to your codebase for the SolrHadoopAuthenticationFilter seems internal as I can't get to it. Kerberos authentication module -- Key: SOLR-7468 URL: https://issues.apache.org/jira/browse/SOLR-7468 Project: Solr Issue Type: New Feature Components: security Reporter: Ishan Chattopadhyaya Assignee: Anshum Gupta Fix For: 5.2 Attachments: SOLR-7468-alt-test.patch, SOLR-7468-alt-test.patch, SOLR-7468-alt-test.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, hoss_trunk_r1681791_TEST-org.apache.solr.cloud.TestSolrCloudWithKerberosAlt.xml, hoss_trunk_r1681791_tests-failures.txt SOLR-7274 introduces a pluggable authentication framework. This issue provides a Kerberos plugin implementation. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-6541) Geo3d WGS84 parameters not quite right
[ https://issues.apache.org/jira/browse/LUCENE-6541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Karl Wright updated LUCENE-6541: Attachment: LUCENE-6541.patch Geo3d WGS84 parameters not quite right -- Key: LUCENE-6541 URL: https://issues.apache.org/jira/browse/LUCENE-6541 Project: Lucene - Core Issue Type: Bug Components: modules/spatial Reporter: Karl Wright Attachments: LUCENE-6541.patch The PlanetModel parameters for WGS84 are correct only to within 7 significant digits. In particular, the polar radius is not quite the WGS84 value. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7468) Kerberos authentication module
[ https://issues.apache.org/jira/browse/SOLR-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579662#comment-14579662 ] Anshum Gupta commented on SOLR-7468: About #4: Without the ticket caching support, minikdc has issues when multiple clients try to get tickets for the same principal (from the same host). Kerberos authentication module -- Key: SOLR-7468 URL: https://issues.apache.org/jira/browse/SOLR-7468 Project: Solr Issue Type: New Feature Components: security Reporter: Ishan Chattopadhyaya Assignee: Anshum Gupta Fix For: 5.2 Attachments: SOLR-7468-alt-test.patch, SOLR-7468-alt-test.patch, SOLR-7468-alt-test.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, hoss_trunk_r1681791_TEST-org.apache.solr.cloud.TestSolrCloudWithKerberosAlt.xml, hoss_trunk_r1681791_tests-failures.txt SOLR-7274 introduces a pluggable authentication framework. This issue provides a Kerberos plugin implementation. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-6540) Add BKDPointDistanceQuery
[ https://issues.apache.org/jira/browse/LUCENE-6540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicholas Knize updated LUCENE-6540: --- Description: LUCENE-6532 adds the supporting mathematics for point-distance computation based on the ellipsoid (using Vincenty's Direct and Inverse solutions). This feature adds a BKDPointDistance query function to LUCENE-6477 for finding all documents that match the provided distance criteria from a given geo point. This should outperform other solutions since we can stop traversing the BKD tree once we've found an internal node that matches the given criteria. (was: LUCENE-6532 adds the supporting mathematics for point-distance computation based on the ellipsoid (using Vincenty's Direct and Inverse solutions). This feature adds a BKDPointDistance query function for finding all documents that match the distance criteria from a provided geo point. This should outperform other solutions since we can stop traversing the BKD tree once we've found an internal node that matches the given criteria.) Add BKDPointDistanceQuery - Key: LUCENE-6540 URL: https://issues.apache.org/jira/browse/LUCENE-6540 Project: Lucene - Core Issue Type: New Feature Components: core/search Reporter: Nicholas Knize LUCENE-6532 adds the supporting mathematics for point-distance computation based on the ellipsoid (using Vincenty's Direct and Inverse solutions). This feature adds a BKDPointDistance query function to LUCENE-6477 for finding all documents that match the provided distance criteria from a given geo point. This should outperform other solutions since we can stop traversing the BKD tree once we've found an internal node that matches the given criteria. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7468) Kerberos authentication module
[ https://issues.apache.org/jira/browse/SOLR-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14579636#comment-14579636 ] Gregory Chanan commented on SOLR-7468: -- Sorry for the delay, I took a look at this. Some notes below:
1) Great work [~ichattopadhyaya]! So glad to see this in Apache Solr.
2) The KerberosFilter should either check that kerberos is actually enabled (via type) or be a private nested class of the KerberosPlugin, to ensure it is only used with Kerberos. That can be handled as a separate jira.
3) I'm a little concerned with the NoContext code in KerberosPlugin moving forward (I understand this is more a generic auth question than kerberos specific). For example, in the latest version of the filter we are using at Cloudera, we play around with the ServletContext in order to pass information around (http://github.mtv.cloudera.com/CDH/lucene-solr/blob/1df8c1041fda00c82df08c03e2c07c6f346c5671/solr/core/src/java/org/apache/solr/servlet/SolrHadoopAuthenticationFilter.java#L122-L123). Is there any way we can get the actual ServletContext in a plugin? Again, this doesn't need to change right now.
4) {code}
/**
 * Test 5 nodes Solr cluster with Kerberos plugin enabled.
 * This test is Ignored right now as Mini KDC has a known bug that
 * doesn't allow us to run multiple nodes on the same host.
 * https://issues.apache.org/jira/browse/HADOOP-9893
 */
{code} This description is a little confusing -- it sounds like you can't run multiple MiniKDC nodes on one host, but you typically wouldn't want to do that so I doubt that is the issue. What exactly is the issue?
5) {code}
String jaas = "Client {\n"
    + " com.sun.security.auth.module.Krb5LoginModule required\n"
    + " useKeyTab=true\n"
    + " keyTab=\"" + keytabFile.getAbsolutePath() + "\"\n"
    + " storeKey=true\n"
    + " useTicketCache=false\n"
    + " doNotPrompt=true\n"
    + " debug=true\n"
    + " principal=\"" + principal + "\";\n"
    + "};\n"
    + "Server {\n"
    + " com.sun.security.auth.module.Krb5LoginModule required\n"
    + " useKeyTab=true\n"
    + " keyTab=\"" + keytabFile.getAbsolutePath() + "\"\n"
    + " storeKey=true\n"
    + " doNotPrompt=true\n"
    + " useTicketCache=false\n"
    + " debug=true\n"
    + " principal=\"" + zkServerPrincipal + "\";\n"
    + "};\n";
{code} It would be nice if we could just create a jaas configuration and pass it to the client, like we do in SOLR-6915. Again, nothing that needs to change now, but having the jaas configuration management in one place (the KerberosTestUtil) is ideal, because that code is known to be fragile, i.e. different JVMs require different parameters, capitalization, etc. If we have that sort of code around in different tests we won't be able to handle that.
6) {code}httpClient.addRequestInterceptor(bufferedEntityInterceptor);{code} I think I mentioned this in a previous JIRA, but it would be nice to do some more investigation to figure out if we can avoid this. The hadoop auth filter has some code where you can use a cookie to avoid re-doing the negotiate...obviously you'd only want to do that if SSL was enabled.
Kerberos authentication module -- Key: SOLR-7468 URL: https://issues.apache.org/jira/browse/SOLR-7468 Project: Solr Issue Type: New Feature Components: security Reporter: Ishan Chattopadhyaya Assignee: Anshum Gupta Fix For: 5.2 Attachments: SOLR-7468-alt-test.patch, SOLR-7468-alt-test.patch, SOLR-7468-alt-test.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, hoss_trunk_r1681791_TEST-org.apache.solr.cloud.TestSolrCloudWithKerberosAlt.xml, hoss_trunk_r1681791_tests-failures.txt SOLR-7274 introduces a pluggable authentication framework. This issue provides a Kerberos plugin implementation. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Assigned] (SOLR-7652) example/files update-script.js does not work on Java7
[ https://issues.apache.org/jira/browse/SOLR-7652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Hatcher reassigned SOLR-7652: -- Assignee: Erik Hatcher example/files update-script.js does not work on Java7 - Key: SOLR-7652 URL: https://issues.apache.org/jira/browse/SOLR-7652 Project: Solr Issue Type: Bug Affects Versions: 5.2 Reporter: Erik Hatcher Assignee: Erik Hatcher Fix For: 5.2.1 A colleague reported that example/files does not work with Java 7, but did with Java 8. There is something in the update-script.js that is Java 8 specific. Details not yet available... forthcoming. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-7638) Angular UI cloud pane broken
[ https://issues.apache.org/jira/browse/SOLR-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Upayavira updated SOLR-7638: Attachment: SOLR-7638.patch This patch: * fixes the displaying of the cloud tab * fixes the displaying of the tree tab * ports the 'paging' functionality from the original UI Angular UI cloud pane broken Key: SOLR-7638 URL: https://issues.apache.org/jira/browse/SOLR-7638 Project: Solr Issue Type: Bug Affects Versions: 5.2 Reporter: Upayavira Priority: Minor Attachments: SOLR-7638.patch I suspect the backend behind the Cloud pane changed, meaning the cloud tab in Angular doesn't work. A patch will come soon. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-7652) example/files update-script.js does not work on Java7
[ https://issues.apache.org/jira/browse/SOLR-7652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Hatcher updated SOLR-7652: --- Attachment: SOLR-7652.patch Here's a patch that fixes example/files update-script.js for Java7 example/files update-script.js does not work on Java7 - Key: SOLR-7652 URL: https://issues.apache.org/jira/browse/SOLR-7652 Project: Solr Issue Type: Bug Affects Versions: 5.2 Reporter: Erik Hatcher Assignee: Erik Hatcher Fix For: 5.2.1 Attachments: SOLR-7652.patch A colleague reported that example/files does not work with Java 7, but did with Java 8. {code} $ bin/solr create -c files -d example/files/conf/ Setup new core instance directory: /Users/erikhatcher/dev/clean-branch_5x/solr/server/solr/files Creating new core 'files' using command: http://localhost:8983/solr/admin/cores?action=CREATEname=filesinstanceDir=files Failed to create core 'files' due to: Error CREATEing SolrCore 'files': Unable to create core [files] Caused by: missing name after . operator (Unknown source#73) {code} with this in solr.log: {code} Caused by: org.apache.solr.common.SolrException: Unable to evaluate script: update-script.js at org.apache.solr.update.processor.StatelessScriptUpdateProcessorFactory.initEngines(StatelessScriptUpdateProcessorFactory.java:313) at org.apache.solr.update.processor.StatelessScriptUpdateProcessorFactory.inform(StatelessScriptUpdateProcessorFactory.java:227) ... 33 more Caused by: javax.script.ScriptException: sun.org.mozilla.javascript.internal.EvaluatorException: missing name after . 
operator (Unknown source#73) in Unknown source at line number 73 at com.sun.script.javascript.RhinoScriptEngine.eval(RhinoScriptEngine.java:224) at javax.script.AbstractScriptEngine.eval(AbstractScriptEngine.java:249) at org.apache.solr.update.processor.StatelessScriptUpdateProcessorFactory.initEngines(StatelessScriptUpdateProcessorFactory.java:311) {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
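The patch itself isn't shown here, but "missing name after . operator" is the parse error Rhino (the JavaScript engine bundled with Java 7) raises when a reserved word appears after a dot, whereas Nashorn (Java 8) tolerates it. A minimal illustration of the engine difference -- the object and property names below are hypothetical, not taken from update-script.js:

```javascript
// Rhino treats reserved words (delete, new, in, ...) after '.' as a
// syntax error at parse time; Nashorn and modern engines accept them.
var doc = { "delete": false, "in": "index" };

// var flag = doc.delete;   // Rhino: "missing name after . operator"
var flag = doc["delete"];   // bracket notation parses on both engines
var dest = doc["in"];
```

Rewriting such dot accesses with bracket notation is one way to keep a script portable across both engines.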
[jira] [Issue Comment Deleted] (SOLR-7636) CLUSTERSTATUS Api should not go to OCP to fetch data
[ https://issues.apache.org/jira/browse/SOLR-7636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul updated SOLR-7636: - Comment: was deleted (was: Commit 1684395 from [~noble.paul] in branch 'dev/trunk' [ https://svn.apache.org/r1684395 ] SOLR-7636: Update from ZK before returning the status) CLUSTERSTATUS Api should not go to OCP to fetch data Key: SOLR-7636 URL: https://issues.apache.org/jira/browse/SOLR-7636 Project: Solr Issue Type: Improvement Reporter: Noble Paul Assignee: Noble Paul Priority: Minor Fix For: Trunk, 5.3 Attachments: SOLR-7636.patch Currently it does multiple ZK operations which is not required. It should just read the status from ZK and return from the CollectionsHandler -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-6534) Analysis tests are angry at JDK9 B67
[ https://issues.apache.org/jira/browse/LUCENE-6534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler updated LUCENE-6534: -- Labels: Java9 (was: ) Analysis tests are angry at JDK9 B67 Key: LUCENE-6534 URL: https://issues.apache.org/jira/browse/LUCENE-6534 Project: Lucene - Core Issue Type: Bug Reporter: Robert Muir Labels: Java9 These tests are failing in crazy ways even with -Xint. Just grab B67 and run 'ant test' from lucene/analyzers. {noformat} [junit4] Tests with failures (first 10 out of 26): [junit4] - org.apache.lucene.analysis.core.TestAnalyzers.testRandomHugeStrings [junit4] - org.apache.lucene.analysis.standard.TestClassicAnalyzer.testRandomHugeStrings [junit4] - org.apache.lucene.analysis.reverse.TestReverseStringFilter.testRandomStrings [junit4] - org.apache.lucene.analysis.cjk.TestCJKWidthFilter.testRandomData [junit4] - org.apache.lucene.analysis.de.TestGermanLightStemFilter.testRandomStrings [junit4] - org.apache.lucene.analysis.charfilter.TestMappingCharFilter.testRandomMaps [junit4] - org.apache.lucene.analysis.charfilter.TestMappingCharFilter.testRandom [junit4] - org.apache.lucene.analysis.en.TestEnglishMinimalStemFilter.testRandomStrings [junit4] - org.apache.lucene.analysis.no.TestNorwegianMinimalStemFilter.testRandomStrings [junit4] - org.apache.lucene.analysis.ngram.EdgeNGramTokenFilterTest.testRandomStrings {noformat} Maybe one of the charset changes or similar? I haven't tried to boil any of these down yet. They do not reproduce; it's like there is some 'internal state' involved... -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-7638) Angular UI cloud pane broken
[ https://issues.apache.org/jira/browse/SOLR-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Upayavira updated SOLR-7638: Attachment: SOLR-7638-simple.patch This simpler patch (SOLR-7638-simple.patch) keeps itself to just fixing the cloud tab, and excludes the work to get paging working. Angular UI cloud pane broken Key: SOLR-7638 URL: https://issues.apache.org/jira/browse/SOLR-7638 Project: Solr Issue Type: Bug Affects Versions: 5.2 Reporter: Upayavira Priority: Minor Attachments: SOLR-7638-simple.patch, SOLR-7638.patch I suspect the backend behind the Cloud pane changed, meaning the cloud tab in Angular doesn't work. A patch will come soon. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6535) Geo3D test failure, June 6th
[ https://issues.apache.org/jira/browse/LUCENE-6535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14578842#comment-14578842 ] Karl Wright commented on LUCENE-6535: - Ok, I was able to reproduce it and I should have a fix shortly. Geo3D test failure, June 6th Key: LUCENE-6535 URL: https://issues.apache.org/jira/browse/LUCENE-6535 Project: Lucene - Core Issue Type: Bug Components: modules/spatial Affects Versions: 5.2 Reporter: David Smiley Assignee: David Smiley This reproduces: {noformat} Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12789/ Java: 64bit/jdk1.7.0_80 -XX:-UseCompressedOops -XX:+UseG1GC 1 tests failed. FAILED: org.apache.lucene.spatial.spatial4j.Geo3dRptTest.testOperations {#5 seed=[ADFCC7193C72FA89:9BDCDB8859624E4]} Error Message: [Intersects] qIdx:34 Shouldn't match I#1:Rect(minX=131.0,maxX=143.0,minY=39.0,maxY=54.0) Q:Geo3dShape{planetmodel=PlanetModel.SPHERE, shape=GeoPath: {planetmodel=PlanetModel.SPHERE, width=0.5061454830783556(29.0), points={[[X=0.5155270860898133, Y=-0.25143936017440033, Z=0.8191520442889918], [X=-6.047846824324981E-17, Y=9.57884834439237E-18, Z=-1.0], [X=-0.5677569555011356, Y=0.1521300177236823, Z=0.8090169943749475], [X=5.716531405282095E-17, Y=2.1943708116382607E-17, Z=-1.0]]}}} Stack Trace: java.lang.AssertionError: [Intersects] qIdx:34 Shouldn't match I#1:Rect(minX=131.0,maxX=143.0,minY=39.0,maxY=54.0) Q:Geo3dShape{planetmodel=PlanetModel.SPHERE, shape=GeoPath: {planetmodel=PlanetModel.SPHERE, width=0.5061454830783556(29.0), points={[[X=0.5155270860898133, Y=-0.25143936017440033, Z=0.8191520442889918], [X=-6.047846824324981E-17, Y=9.57884834439237E-18, Z=-1.0], [X=-0.5677569555011356, Y=0.1521300177236823, Z=0.8090169943749475], [X=5.716531405282095E-17, Y=2.1943708116382607E-17, Z=-1.0]]}}} at __randomizedtesting.SeedInfo.seed([ADFCC7193C72FA89:9BDCDB8859624E4]:0) at org.junit.Assert.fail(Assert.java:93) at 
org.apache.lucene.spatial.prefix.RandomSpatialOpStrategyTestCase.fail(RandomSpatialOpStrategyTestCase.java:127) at org.apache.lucene.spatial.prefix.RandomSpatialOpStrategyTestCase.testOperation(RandomSpatialOpStrategyTestCase.java:116) at org.apache.lucene.spatial.prefix.RandomSpatialOpStrategyTestCase.testOperationRandomShapes(RandomSpatialOpStrategyTestCase.java:56) at org.apache.lucene.spatial.spatial4j.Geo3dRptTest.testOperations(Geo3dRptTest.java:100) {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: Looks like I broke Solr 5.2.0 - do we need a 5.2.1?
Looks like there are several small fixes that need to be added. I'll cut an RC tomorrow morning India time so that we have enough time to back port these items. I'll also setup a local Jenkins build for 5.2 On 09-Jun-2015 6:10 pm, Erik Hatcher erik.hatc...@gmail.com wrote: And here’s a fix worth getting into 5.2.1 while we’re at it as well: https://issues.apache.org/jira/browse/SOLR-7652 — Erik Hatcher, Senior Solutions Architect http://www.lucidworks.com On Jun 8, 2015, at 3:15 PM, Anshum Gupta ans...@anshumgupta.net wrote: +1 for bug fix releases. Though I'd say we can wait for a couple of days and give some time to others to fix other bugs that they'd want to get into 5.2.1, if there are any. P.S.: Who's the RM? :-) On Mon, Jun 8, 2015 at 10:06 AM, Shawn Heisey apa...@elyograg.org wrote: I broke it. SOLR-7588 fixes it. It hasn't been committed yet. The dataimport section of the admin UI doesn't work because I apparently put coffeescript into the admin UI instead of javascript. Is this a bad enough problem to warrant a bugfix release? Thanks, Shawn - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org -- Anshum Gupta
[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_60-ea-b12) - Build # 12823 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12823/ Java: 64bit/jdk1.8.0_60-ea-b12 -XX:-UseCompressedOops -XX:+UseG1GC 1 tests failed. FAILED: org.apache.solr.cloud.CollectionsAPIAsyncDistributedZkTest.testSolrJAPICalls Error Message: Shard split did not complete. Last recorded state: running expected:[completed] but was:[running] Stack Trace: org.junit.ComparisonFailure: Shard split did not complete. Last recorded state: running expected:[completed] but was:[running] at __randomizedtesting.SeedInfo.seed([1400AAAE9C00B9F9:4C6426CF9A6A112D]:0) at org.junit.Assert.assertEquals(Assert.java:125) at org.apache.solr.cloud.CollectionsAPIAsyncDistributedZkTest.testSolrJAPICalls(CollectionsAPIAsyncDistributedZkTest.java:90) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (SOLR-7636) CLUSTERSTATUS Api should not go to OCP to fetch data
[ https://issues.apache.org/jira/browse/SOLR-7636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14578791#comment-14578791 ] ASF subversion and git services commented on SOLR-7636: --- Commit 1684395 from [~noble.paul] in branch 'dev/trunk' [ https://svn.apache.org/r1684395 ] SOLR-7636: Update from ZK before returning the status CLUSTERSTATUS Api should not go to OCP to fetch data Key: SOLR-7636 URL: https://issues.apache.org/jira/browse/SOLR-7636 Project: Solr Issue Type: Improvement Reporter: Noble Paul Assignee: Noble Paul Priority: Minor Fix For: Trunk, 5.3 Attachments: SOLR-7636.patch Currently it does multiple ZK operations which is not required. It should just read the status from ZK and return from the CollectionsHandler -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (LUCENE-6534) Analysis tests are angry at JDK9 B67
[ https://issues.apache.org/jira/browse/LUCENE-6534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Muir resolved LUCENE-6534. - Resolution: Not A Problem Closing this one as there is a bug filed at openjdk: https://bugs.openjdk.java.net/browse/JDK-8086046 Analysis tests are angry at JDK9 B67 Key: LUCENE-6534 URL: https://issues.apache.org/jira/browse/LUCENE-6534 Project: Lucene - Core Issue Type: Bug Reporter: Robert Muir These tests are failing in crazy ways even with -Xint. Just grab B67 and run 'ant test' from lucene/analyzers. {noformat} [junit4] Tests with failures (first 10 out of 26): [junit4] - org.apache.lucene.analysis.core.TestAnalyzers.testRandomHugeStrings [junit4] - org.apache.lucene.analysis.standard.TestClassicAnalyzer.testRandomHugeStrings [junit4] - org.apache.lucene.analysis.reverse.TestReverseStringFilter.testRandomStrings [junit4] - org.apache.lucene.analysis.cjk.TestCJKWidthFilter.testRandomData [junit4] - org.apache.lucene.analysis.de.TestGermanLightStemFilter.testRandomStrings [junit4] - org.apache.lucene.analysis.charfilter.TestMappingCharFilter.testRandomMaps [junit4] - org.apache.lucene.analysis.charfilter.TestMappingCharFilter.testRandom [junit4] - org.apache.lucene.analysis.en.TestEnglishMinimalStemFilter.testRandomStrings [junit4] - org.apache.lucene.analysis.no.TestNorwegianMinimalStemFilter.testRandomStrings [junit4] - org.apache.lucene.analysis.ngram.EdgeNGramTokenFilterTest.testRandomStrings {noformat} Maybe one of the charset changes or similar? I haven't tried to boil any of these down yet. They do not reproduce; it's like there is some 'internal state' involved... -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-7652) example/files update-script.js does not work on Java7
[ https://issues.apache.org/jira/browse/SOLR-7652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Hatcher updated SOLR-7652: --- Description: A colleague reported that example/files does not work with Java 7, but did with Java 8. {code} $ bin/solr create -c files -d example/files/conf/ Setup new core instance directory: /Users/erikhatcher/dev/clean-branch_5x/solr/server/solr/files Creating new core 'files' using command: http://localhost:8983/solr/admin/cores?action=CREATEname=filesinstanceDir=files Failed to create core 'files' due to: Error CREATEing SolrCore 'files': Unable to create core [files] Caused by: missing name after . operator (Unknown source#73) {code} with this in solr.log: {code} Caused by: org.apache.solr.common.SolrException: Unable to evaluate script: update-script.js at org.apache.solr.update.processor.StatelessScriptUpdateProcessorFactory.initEngines(StatelessScriptUpdateProcessorFactory.java:313) at org.apache.solr.update.processor.StatelessScriptUpdateProcessorFactory.inform(StatelessScriptUpdateProcessorFactory.java:227) ... 33 more Caused by: javax.script.ScriptException: sun.org.mozilla.javascript.internal.EvaluatorException: missing name after . operator (Unknown source#73) in Unknown source at line number 73 at com.sun.script.javascript.RhinoScriptEngine.eval(RhinoScriptEngine.java:224) at javax.script.AbstractScriptEngine.eval(AbstractScriptEngine.java:249) at org.apache.solr.update.processor.StatelessScriptUpdateProcessorFactory.initEngines(StatelessScriptUpdateProcessorFactory.java:311) {code} was: A colleague reported that example/files does not work with Java 7, but did with Java 8. There is something in the update-script.js that is Java 8 specific. Details not yet available... forthcoming. 
example/files update-script.js does not work on Java7 - Key: SOLR-7652 URL: https://issues.apache.org/jira/browse/SOLR-7652 Project: Solr Issue Type: Bug Affects Versions: 5.2 Reporter: Erik Hatcher Assignee: Erik Hatcher Fix For: 5.2.1 A colleague reported that example/files does not work with Java 7, but did with Java 8. {code} $ bin/solr create -c files -d example/files/conf/ Setup new core instance directory: /Users/erikhatcher/dev/clean-branch_5x/solr/server/solr/files Creating new core 'files' using command: http://localhost:8983/solr/admin/cores?action=CREATEname=filesinstanceDir=files Failed to create core 'files' due to: Error CREATEing SolrCore 'files': Unable to create core [files] Caused by: missing name after . operator (Unknown source#73) {code} with this in solr.log: {code} Caused by: org.apache.solr.common.SolrException: Unable to evaluate script: update-script.js at org.apache.solr.update.processor.StatelessScriptUpdateProcessorFactory.initEngines(StatelessScriptUpdateProcessorFactory.java:313) at org.apache.solr.update.processor.StatelessScriptUpdateProcessorFactory.inform(StatelessScriptUpdateProcessorFactory.java:227) ... 33 more Caused by: javax.script.ScriptException: sun.org.mozilla.javascript.internal.EvaluatorException: missing name after . operator (Unknown source#73) in Unknown source at line number 73 at com.sun.script.javascript.RhinoScriptEngine.eval(RhinoScriptEngine.java:224) at javax.script.AbstractScriptEngine.eval(AbstractScriptEngine.java:249) at org.apache.solr.update.processor.StatelessScriptUpdateProcessorFactory.initEngines(StatelessScriptUpdateProcessorFactory.java:311) {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: Looks like I broke Solr 5.2.0 - do we need a 5.2.1?
See https://issues.apache.org/jira/browse/SOLR-7638 Whilst I'd love to see the paging functionality in the cloud tab, I'd suggest we leave that out of a point release, and just apply the SOLR-7638- simple.patch file which makes three small, simple changes to get the angular cloud tab back functioning. Upayavira On Tue, Jun 9, 2015, at 12:20 PM, Upayavira wrote: I've completed work on the cloud tab for the AngularJS UI. I'm now making final checks and preparing a patch that I hope someone will commit for me :-) Thanks! Upayavira On Tue, Jun 9, 2015, at 11:55 AM, Shalin Shekhar Mangar wrote: Go ahead Adrien. I'll cut the RC when you're done. On Tue, Jun 9, 2015 at 3:24 PM, Adrien Grand jpou...@gmail.com wrote: I would like to get https://issues.apache.org/jira/browse/LUCENE-6527 in 5.2.1 too. It is quite a bad performance bug if you use a filtering TermQuery as it will load norms for nothing. On Mon, Jun 8, 2015 at 9:39 PM, Shalin Shekhar Mangar shalinman...@gmail.com wrote: I can volunteer as RM unless someone else wants to. On Tue, Jun 9, 2015 at 12:45 AM, Anshum Gupta ans...@anshumgupta.net wrote: +1 for bug fix releases. Though I'd say we can wait for a couple of days and give some time to others to fix other bugs that they'd want to get into 5.2.1, if there are any. P.S.: Who's the RM? :-) On Mon, Jun 8, 2015 at 10:06 AM, Shawn Heisey apa...@elyograg.org wrote: I broke it. SOLR-7588 fixes it. It hasn't been committed yet. The dataimport section of the admin UI doesn't work because I apparently put coffeescript into the admin UI instead of javascript. Is this a bad enough problem to warrant a bugfix release? Thanks, Shawn - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org -- Anshum Gupta -- Regards, Shalin Shekhar Mangar. -- Adrien --- -- To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org -- Regards, Shalin Shekhar Mangar.
[jira] [Commented] (SOLR-7560) Parallel SQL Support
[ https://issues.apache.org/jira/browse/SOLR-7560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14578827#comment-14578827 ] Joel Bernstein commented on SOLR-7560: -- Looks great! A little background on this discussion. [~dpgove] is describing the Streaming Expression (SOLR-7377) syntax for the RollupStream which is being added in this ticket to support the SQL aggregate functions. The RollupStream does Map/Reduce style aggregations where group by fields are sorted first and the aggregates are rolled up one group at a time. This technique will be very strong for time series rollups and aggregating high cardinality fields. There will be other aggregation streams added in the future that tap into Solr faceting directly. Parallel SQL Support Key: SOLR-7560 URL: https://issues.apache.org/jira/browse/SOLR-7560 Project: Solr Issue Type: New Feature Components: clients - java, search Reporter: Joel Bernstein Fix For: 5.3 Attachments: SOLR-7560.patch This ticket provides support for executing *Parallel SQL* queries across SolrCloud collections. The SQL engine will be built on top of the Streaming API (SOLR-7082), which provides support for *parallel relational algebra* and *real-time map-reduce*. Basic design: 1) A new SQLHandler will be added to process SQL requests. The SQL statements will be compiled to live Streaming API objects for parallel execution across SolrCloud worker nodes. 2) SolrCloud collections will be abstracted as *Relational Tables*. 3) The Presto SQL parser will be used to parse the SQL statements. 4) A JDBC thin client will be added as a Solrj client. This ticket will focus on putting the framework in place and providing basic SELECT support and GROUP BY aggregate support. Future releases will build on this framework to provide additional SQL features. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
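The RollupStream behavior Joel describes -- group-by fields sorted first, with aggregates rolled up one group at a time -- can be sketched as follows. This is a simplified illustration of the map/reduce-style technique, not the actual RollupStream API; because the input is sorted on the group key, a group is complete as soon as the key changes, so memory stays constant per group even for high-cardinality fields:

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Streaming sum over (key, value) tuples already sorted by key.
class SortedRollup {
  static List<Map.Entry<String, Double>> rollup(List<Map.Entry<String, Double>> sorted) {
    List<Map.Entry<String, Double>> out = new ArrayList<>();
    String currentKey = null;
    double sum = 0;
    for (Map.Entry<String, Double> tuple : sorted) {
      // Key changed: the previous group is finished, so emit it now.
      if (currentKey != null && !currentKey.equals(tuple.getKey())) {
        out.add(new SimpleEntry<>(currentKey, sum));
        sum = 0;
      }
      currentKey = tuple.getKey();
      sum += tuple.getValue();
    }
    if (currentKey != null) {
      out.add(new SimpleEntry<>(currentKey, sum)); // flush the last group
    }
    return out;
  }
}
```

The same shape generalizes to other aggregates (count, min, max) by swapping the accumulator; the sort-first requirement is what lets the rollup run in a single streaming pass.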
Re: Looks like I broke Solr 5.2.0 - do we need a 5.2.1?
And here’s a fix worth getting into 5.2.1 while we’re at it as well: https://issues.apache.org/jira/browse/SOLR-7652 https://issues.apache.org/jira/browse/SOLR-7652 — Erik Hatcher, Senior Solutions Architect http://www.lucidworks.com http://www.lucidworks.com/ On Jun 8, 2015, at 3:15 PM, Anshum Gupta ans...@anshumgupta.net wrote: +1 for bug fix releases. Though I'd say we can wait for a couple of days and give some time to others to fix other bugs that they'd want to get into 5.2.1, if there are any. P.S.: Who's the RM? :-) On Mon, Jun 8, 2015 at 10:06 AM, Shawn Heisey apa...@elyograg.org mailto:apa...@elyograg.org wrote: I broke it. SOLR-7588 fixes it. It hasn't been committed yet. The dataimport section of the admin UI doesn't work because I apparently put coffeescript into the admin UI instead of javascript. Is this a bad enough problem to warrant a bugfix release? Thanks, Shawn - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org mailto:dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org mailto:dev-h...@lucene.apache.org -- Anshum Gupta
[JENKINS] Lucene-Solr-SmokeRelease-5.2 - Build # 14 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.2/14/ No tests ran. Build Log: [...truncated 62946 lines...] prepare-release-no-sign: [mkdir] Created dir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.2/lucene/build/smokeTestRelease/dist [copy] Copying 446 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.2/lucene/build/smokeTestRelease/dist/lucene [copy] Copying 245 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.2/lucene/build/smokeTestRelease/dist/solr [smoker] Java 1.7 JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.7 [smoker] NOTE: output encoding is UTF-8 [smoker] [smoker] Load release URL file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.2/lucene/build/smokeTestRelease/dist/... [smoker] [smoker] Test Lucene... [smoker] test basics... [smoker] get KEYS [smoker] 0.1 MB in 0.01 sec (8.9 MB/sec) [smoker] check changes HTML... [smoker] download lucene-5.2.0-src.tgz... [smoker] 28.3 MB in 0.04 sec (739.9 MB/sec) [smoker] verify md5/sha1 digests [smoker] download lucene-5.2.0.tgz... [smoker] 65.2 MB in 0.09 sec (698.7 MB/sec) [smoker] verify md5/sha1 digests [smoker] download lucene-5.2.0.zip... [smoker] 75.1 MB in 0.10 sec (722.5 MB/sec) [smoker] verify md5/sha1 digests [smoker] unpack lucene-5.2.0.tgz... [smoker] verify JAR metadata/identity/no javax.* or java.* classes... [smoker] test demo with 1.7... [smoker] got 5885 hits for query lucene [smoker] checkindex with 1.7... [smoker] check Lucene's javadoc JAR [smoker] unpack lucene-5.2.0.zip... [smoker] verify JAR metadata/identity/no javax.* or java.* classes... [smoker] test demo with 1.7... [smoker] got 5885 hits for query lucene [smoker] checkindex with 1.7... [smoker] check Lucene's javadoc JAR [smoker] unpack lucene-5.2.0-src.tgz... [smoker] make sure no JARs/WARs in src dist... [smoker] run ant validate [smoker] run tests w/ Java 7 and testArgs='-Dtests.slow=false'... 
[smoker] test demo with 1.7... [smoker] got 209 hits for query lucene [smoker] checkindex with 1.7... [smoker] generate javadocs w/ Java 7... [smoker] [smoker] Crawl/parse... [smoker] [smoker] Verify... [smoker] confirm all releases have coverage in TestBackwardsCompatibility [smoker] find all past Lucene releases... [smoker] run TestBackwardsCompatibility.. [smoker] Releases that don't seem to be tested: [smoker] 5.2.0 [smoker] Traceback (most recent call last): [smoker] File /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.2/dev-tools/scripts/smokeTestRelease.py, line 1535, in module [smoker] main() [smoker] File /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.2/dev-tools/scripts/smokeTestRelease.py, line 1480, in main [smoker] smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, c.is_signed, ' '.join(c.test_args)) [smoker] File /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.2/dev-tools/scripts/smokeTestRelease.py, line 1518, in smokeTest [smoker] unpackAndVerify(java, 'lucene', tmpDir, 'lucene-%s-src.tgz' % version, svnRevision, version, testArgs, baseURL) [smoker] File /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.2/dev-tools/scripts/smokeTestRelease.py, line 628, in unpackAndVerify [smoker] verifyUnpacked(java, project, artifact, unpackPath, svnRevision, version, testArgs, tmpDir, baseURL) [smoker] File /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.2/dev-tools/scripts/smokeTestRelease.py, line 809, in verifyUnpacked [smoker] confirmAllReleasesAreTestedForBackCompat(unpackPath) [smoker] File /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.2/dev-tools/scripts/smokeTestRelease.py, line 1473, in confirmAllReleasesAreTestedForBackCompat [smoker] raise RuntimeError('some releases are not tested by TestBackwardsCompatibility?') [smoker] RuntimeError: some releases are not tested by TestBackwardsCompatibility? 
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.2/build.xml:421: exec returned: 1

Total time: 37 minutes 46 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
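The RuntimeError above is the smoke tester's back-compat coverage gate: it fails the release candidate when a past release (here 5.2.0) has no back-compat index covered by TestBackwardsCompatibility. A minimal sketch of that kind of check, with hypothetical version lists (the real logic lives in dev-tools/scripts/smokeTestRelease.py and parses the test sources):

```python
def find_untested_releases(past_releases, tested_versions):
    """Return past releases that TestBackwardsCompatibility does not cover."""
    return sorted(set(past_releases) - set(tested_versions))

def confirm_all_releases_tested(past_releases, tested_versions):
    """Mimic the smoke tester's gate: print the gap and raise on any miss."""
    missing = find_untested_releases(past_releases, tested_versions)
    if missing:
        print("Releases that don't seem to be tested:")
        for version in missing:
            print(" ", version)
        raise RuntimeError('some releases are not tested by TestBackwardsCompatibility?')

# Hypothetical example: 5.2.0 was just released but no back-compat
# index for it has been committed yet, so the check trips.
past = ['5.0.0', '5.1.0', '5.2.0']
tested = ['5.0.0', '5.1.0']
assert find_untested_releases(past, tested) == ['5.2.0']
```

The fix in such cases is usually to add the new release's back-compat index to the test, not to change the gate.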
[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_45) - Build # 4914 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4914/
Java: 64bit/jdk1.8.0_45 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:
Could not get expected value 'X val changed' for path 'x' full output: {
  responseHeader:{
    status:0,
    QTime:0},
  params:{wt:json},
  context:{
    webapp:/oub/x,
    path:/test1,
    httpMethod:GET},
  class:org.apache.solr.core.BlobStoreTestRequestHandler,
  x:X val}

Stack Trace:
java.lang.AssertionError: Could not get expected value 'X val changed' for path 'x' full output: {
  responseHeader:{
    status:0,
    QTime:0},
  params:{wt:json},
  context:{
    webapp:/oub/x,
    path:/test1,
    httpMethod:GET},
  class:org.apache.solr.core.BlobStoreTestRequestHandler,
  x:X val}
	at __randomizedtesting.SeedInfo.seed([7ED096144855CDAD:A69DBB43BF88680D]:0)
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.assertTrue(Assert.java:43)
	at org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:410)
	at org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:260)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at
[jira] [Updated] (LUCENE-6529) NumericFields + SlowCompositeReaderWrapper + UninvertedReader + -Dtests.codec=random can results in incorrect SortedSetDocValues
[ https://issues.apache.org/jira/browse/LUCENE-6529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hoss Man updated LUCENE-6529:
-----------------------------
    Attachment: LUCENE-6529.patch

Fix a stupid bug in TestUninvertingReader that showed up after I increased the randomization. Still hammering.

NumericFields + SlowCompositeReaderWrapper + UninvertedReader + -Dtests.codec=random can results in incorrect SortedSetDocValues
--------------------------------------------------------------------------------------------------------------------------------

            Key: LUCENE-6529
            URL: https://issues.apache.org/jira/browse/LUCENE-6529
        Project: Lucene - Core
     Issue Type: Bug
       Reporter: Hoss Man
    Attachments: LUCENE-6529.patch, LUCENE-6529.patch, LUCENE-6529.patch, LUCENE-6529.patch, LUCENE-6529.patch

Digging into SOLR-7631 and SOLR-7605 I became fairly confident that the only explanation for the behavior I was seeing was some sort of bug in either the randomized codec/postings-format or the UninvertedReader, one that was only evident when the two were combined and used on a multivalued numeric field with precision steps.

But since I couldn't find any -Dtests.codec or -Dtests.postings.format options that would cause the bug 100% of the time regardless of seed, I switched tactics and focused on reproducing the problem using UninvertedReader directly and checking the SortedSetDocValues.getValueCount(). I now have a test that fails frequently (and consistently for any seed I find), but only with -Dtests.codec=random -- override it with -Dtests.codec=default and everything works fine (based on the exhaustive testing I did in the linked issues, I suspect every named codec works fine, but I didn't re-do that testing here).

The failures only seem to happen when checking the SortedSetDocValues.getValueCount() of a SlowCompositeReaderWrapper around the UninvertedReader -- which suggests the root bug may actually be in SlowCompositeReaderWrapper? (but still has some dependency on the random codec)

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
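The invariant being hammered here is that the composite view's ordinal space is the union of the distinct values in every segment, so getValueCount() on the SlowCompositeReaderWrapper must equal the size of that union. A toy model of that merge, in Python rather than Lucene's Java, just to make the expected relationship concrete (the segment contents are made up):

```python
def composite_value_count(per_segment_values):
    """Model of a SlowCompositeReaderWrapper-style merge: the composite
    ordinal space is the union of each segment's distinct values, so the
    composite's value count must equal the size of that union."""
    distinct = set()
    for segment in per_segment_values:
        distinct.update(segment)
    return len(distinct)

# Two segments sharing one value: the correct composite count is 3, not 4.
# A bug like the one described would surface as a count that disagrees
# with this union under certain codecs.
segments = [{'4', '64'}, {'64', '128'}]
assert composite_value_count(segments) == 3
```

If the merged count disagrees with this union only under the random codec, that points at the merge/mapping of per-segment ordinals rather than at the per-segment data itself, which is why suspicion falls on SlowCompositeReaderWrapper.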
[jira] [Commented] (SOLR-7638) Angular UI cloud pane broken
[ https://issues.apache.org/jira/browse/SOLR-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14579883#comment-14579883 ]

ASF subversion and git services commented on SOLR-7638:
-------------------------------------------------------

Commit 1684553 from [~ehatcher] in branch 'dev/trunk'
[ https://svn.apache.org/r1684553 ]

SOLR-7638: Fix new (Angular-based) admin UI Cloud pane

Angular UI cloud pane broken

            Key: SOLR-7638
            URL: https://issues.apache.org/jira/browse/SOLR-7638
        Project: Solr
     Issue Type: Bug
Affects Versions: 5.2
        Reporter: Upayavira
        Assignee: Erik Hatcher
        Priority: Minor
    Attachments: Cloud Dump.png, SOLR-7638-simple.patch, SOLR-7638.patch

I suspect the backend behind the Cloud pane changed, meaning the Cloud tab in the Angular UI doesn't work. Patch will come soon.