[jira] [Commented] (LUCENE-5339) Simplify the facet module APIs
[ https://issues.apache.org/jira/browse/LUCENE-5339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13859394#comment-13859394 ]

Shai Erera commented on LUCENE-5339:
------------------------------------

Committed the packages changes.

> Simplify the facet module APIs
> ------------------------------
>
>                 Key: LUCENE-5339
>                 URL: https://issues.apache.org/jira/browse/LUCENE-5339
>             Project: Lucene - Core
>          Issue Type: Improvement
>          Components: modules/facet
>            Reporter: Michael McCandless
>            Assignee: Michael McCandless
>         Attachments: LUCENE-5339.patch, LUCENE-5339.patch, LUCENE-5339.patch
>
> I'd like to explore simplifications to the facet module's APIs: I think the
> current APIs are complex, and the addition of a new feature (sparse
> faceting, LUCENE-5333) threatens to add even more classes (e.g.,
> FacetRequestBuilder). I think we can do better.
>
> So, I've been prototyping some drastic changes; this is very
> early/exploratory and I'm not sure where it'll wind up, but I think the new
> approach shows promise. The big changes are:
>
> * Instead of *FacetRequest/Params/Result, you directly instantiate the
>   classes that do facet counting (currently TaxonomyFacetCounts,
>   RangeFacetCounts or SortedSetDVFacetCounts), passing in the
>   SimpleFacetsCollector, and then you interact with those classes to pull
>   labels + values (topN under a path, sparse, specific labels).
>
> * At index time, no more FacetIndexingParams/CategoryListParams; instead,
>   you make a new SimpleFacetFields and pass it the field it should store
>   facets + drill downs under. If you want more than one CLI you create more
>   than one instance of SimpleFacetFields.
>
> * I added a simple schema, where you state which dimensions are
>   hierarchical or multi-valued. From this we decide how to index the
>   ordinals (no more OrdinalPolicy).
>
> Sparse faceting is just another method (getAllDims), on both taxonomy &
> ssdv facet classes. I haven't created a common base class / interface for
> all of the search-time facet classes, but I think this may be
> possible/clean, and perhaps useful for drill sideways.
>
> All the new classes are under oal.facet.simple.*.
>
> Lots of things that don't work yet: drill sideways, complements,
> associations, sampling, partitions, etc. This is just a start ...

--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
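The flow the proposal describes - collect matches once, then directly instantiate a counting class and query it for labels and values - can be sketched with a small self-contained model. Every class and method name below is an illustrative stand-in, not the actual code on the branch; only the overall shape (no FacetRequest/Params/Result objects, a `getTopChildren`-style query, and a `getAllDims`-style sparse view) mirrors the description above:

```java
import java.util.*;
import java.util.stream.*;

// Toy model of the proposed search-time flow: a collector gathers per-document
// facet labels, then a counting class (instantiated directly, per request)
// computes counts from it. Names are stand-ins for the branch's classes.
public class FacetSketch {

    // Stand-in for SimpleFacetsCollector: just accumulates dim/label pairs.
    static class Collector {
        final List<String[]> hits = new ArrayList<>();
        void collect(String dim, String label) { hits.add(new String[]{dim, label}); }
    }

    // Stand-in for TaxonomyFacetCounts: counts are derived from the collector
    // handed to the constructor; no request/params/result indirection.
    static class Counts {
        final Map<String, Map<String, Integer>> byDim = new HashMap<>();
        Counts(Collector c) {
            for (String[] h : c.hits) {
                byDim.computeIfAbsent(h[0], k -> new HashMap<>())
                     .merge(h[1], 1, Integer::sum);
            }
        }
        // Top-N labels under one dimension, by descending count.
        List<String> getTopChildren(int n, String dim) {
            return byDim.getOrDefault(dim, Map.of()).entrySet().stream()
                .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
                .limit(n).map(Map.Entry::getKey).collect(Collectors.toList());
        }
        // "Sparse" view: only dimensions that actually received counts exist.
        Set<String> getAllDims() { return byDim.keySet(); }
    }

    public static void main(String[] args) {
        Collector c = new Collector();
        c.collect("Author", "Lisa");
        c.collect("Author", "Lisa");
        c.collect("Author", "Bob");
        c.collect("Year", "2013");
        Counts counts = new Counts(c);  // direct instantiation, per request
        System.out.println(counts.getTopChildren(2, "Author")); // [Lisa, Bob]
        System.out.println(counts.getAllDims().contains("Year")); // true
    }
}
```

The point of the sketch is the construction order: the collector exists first, and each counting class is a cheap, throwaway object built from it, which is what makes adding another counting strategy (such as sparse faceting) a method rather than a new request class.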
[jira] [Commented] (LUCENE-5339) Simplify the facet module APIs
[ https://issues.apache.org/jira/browse/LUCENE-5339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13859393#comment-13859393 ]

ASF subversion and git services commented on LUCENE-5339:
---------------------------------------------------------

Commit 1554379 from [~shaie] in branch 'dev/branches/lucene5339' [ https://svn.apache.org/r1554379 ]

LUCENE-5339: organize packages
[jira] [Commented] (LUCENE-5339) Simplify the facet module APIs
[ https://issues.apache.org/jira/browse/LUCENE-5339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13859373#comment-13859373 ]

ASF subversion and git services commented on LUCENE-5339:
---------------------------------------------------------

Commit 1554372 from [~shaie] in branch 'dev/branches/lucene5339' [ https://svn.apache.org/r1554372 ]

LUCENE-5339: handle warnings and javadoc errors
[jira] [Comment Edited] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13859338#comment-13859338 ]

Noble Paul edited comment on SOLR-5580 at 12/31/13 5:29 AM:
-----------------------------------------------------------

Mark, this issue is about an NPE. If we fix this NPE we should be good to go. The wholesale reverting of SOLR-5311 is beyond the scope of this issue. SOLR-5311 has already been released in 4.6, and reverting that code would be a regression in 4.6.1. We can fix the NPE right away and make the core admin API work, and this can be closed. I can take care of that right away. If we need a discussion over how the implementation of SOLR-5311 should be, it can happen after reopening SOLR-5311.

was (Author: noble.paul):
Mark, this issue is about an NPE. If we fix this NPE we should be good to go. The wholesale reverting of SOLR-5311 is beyond the scope of this issue. We can fix the NPE right away and make the core admin API work, and this can be closed. I can take care of that right away.

> NPE when create a core with both explicite shard and coreNodeName
> -----------------------------------------------------------------
>
>                 Key: SOLR-5580
>                 URL: https://issues.apache.org/jira/browse/SOLR-5580
>             Project: Solr
>          Issue Type: Bug
>    Affects Versions: 4.6
>         Environment: OS: Red Hat Enterprise Linux Server release 6.4 (Santiago)
>                      Software: Solr 4.6
>                      JDK: OpenJDK Runtime Environment (rhel-2.3.4.1.el6_3-x86_64)
>                           OpenJDK 64-Bit Server VM (build 23.2-b09, mixed mode)
>            Reporter: YouPeng Yang
>            Assignee: Mark Miller
>              Labels: core
>             Fix For: 5.0, 4.7, 4.6.1
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> In class org.apache.solr.cloud.Overseer, line 360:
>
>   if (sliceName != null && collectionExists &&
>       !"true".equals(state.getCollection(collection).getStr("autoCreated"))) {
>     Slice slice = state.getSlice(collection, sliceName);
>     if (slice.getReplica(coreNodeName) == null) {
>       log.info("core_deleted . Just return");
>       return state;
>     }
>   }
>
> The slice needs to be checked for null, because when creating a new core
> with both an explicit shard and coreNodeName, state.getSlice(collection,
> sliceName) may return null. If it is not checked, there will be a
> NullPointerException:
>
>   if (sliceName != null && collectionExists &&
>       !"true".equals(state.getCollection(collection).getStr("autoCreated"))) {
>     Slice slice = state.getSlice(collection, sliceName);
>     if (slice != null && slice.getReplica(coreNodeName) == null) {
>       log.info("core_deleted . Just return");
>       return state;
>     }
>   }
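The fix quoted in the description is a plain null guard. The behavior can be shown with a self-contained sketch in which the cluster state is modeled as nested maps; `getSlice` and the replica set here are illustrative stand-ins for Solr's ClusterState/Slice/Replica, not the real classes:

```java
import java.util.*;

// Map-backed stand-in for Solr's cluster state: collection -> slice name ->
// set of replica core-node names. Only the null-guard pattern from the issue
// is illustrated; none of this is Solr's actual implementation.
public class SliceGuard {
    static Map<String, Map<String, Set<String>>> state = new HashMap<>();

    // Mirrors the ClusterState.getSlice behavior the reporter describes:
    // returns null when the requested slice does not exist yet, which is what
    // happens when a core is created with an explicit, not-yet-known shard.
    static Set<String> getSlice(String collection, String sliceName) {
        return state.getOrDefault(collection, Collections.emptyMap()).get(sliceName);
    }

    // The patched check: guard against a null slice before dereferencing it.
    // Returns true only when the slice exists but the replica is gone.
    static boolean coreDeleted(String collection, String sliceName, String coreNodeName) {
        Set<String> slice = getSlice(collection, sliceName);
        return slice != null && !slice.contains(coreNodeName);
    }

    public static void main(String[] args) {
        state.put("collection1", Map.of("shard1", Set.of("core_node1")));
        System.out.println(coreDeleted("collection1", "shard1", "core_node1")); // false
        System.out.println(coreDeleted("collection1", "shard1", "core_node2")); // true
        // The unpatched check would NPE on the next call: shard2 does not
        // exist, so getSlice returns null before .contains is ever reached.
        System.out.println(coreDeleted("collection1", "shard2", "core_node1")); // false
    }
}
```

The short-circuit `slice != null && ...` is the whole fix: the expensive question ("was this replica deleted?") is only asked when there is a slice to ask it of.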
[jira] [Commented] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13859338#comment-13859338 ]

Noble Paul commented on SOLR-5580:
----------------------------------

Mark, this issue is about an NPE. If we fix this NPE we should be good to go. The wholesale reverting of SOLR-5311 is beyond the scope of this issue. We can fix the NPE right away and make the core admin API work, and this can be closed. I can take care of that right away.
[jira] [Comment Edited] (SOLR-5311) Avoid registering replicas which are removed
[ https://issues.apache.org/jira/browse/SOLR-5311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13859292#comment-13859292 ]

Mark Miller edited comment on SOLR-5311 at 12/31/13 3:42 AM:
------------------------------------------------------------

This is part of a larger direction we have been working towards, which is essentially making ZooKeeper the truth. With the current SolrCloud, you cannot do this like you have. The Core Admin API, pre-configured cores, and the Collections API all need to work in concert. That makes this approach a complete no-go right now.

The path I have been working towards with the Collections API is a new mode where everything is handled by the Collections API. In this mode, it will not be valid for users to try and mess with things at a per-core level. In this way, the cluster can truly match the truth in ZooKeeper, because both the nodes and the Overseer can work together to keep the truth enforced. This includes things like being able to easily change the replication factor for a collection, or add a new host that automatically gets used depending on your settings. You should not have to address individual nodes to manage collections, nor should your replicationFactor stop mattering simply because you added another core via core admin. To me, this is the ultimate cloud situation. The system needs full control. We can add the ability to override or prefer certain things, but in general, we want to get to the point of optionally having the cluster mostly managed for you given some simple directives. Of course, I think it should be implemented as a bunch of optional features. It should also be easy to really lock things down unless you manage things manually.

All of this requires that we have a mode a user can decide to use (the Collections API, perhaps with an option for back compat) so that we are in control of everything. We know when a collection is created and deleted - it won't be able to just pop back into existence.

Until we have this special mode, because of the way we had to build this (lots of historical reasons), we currently have to support what we have: pre-configured cores, core admin, and the Collections API together. This is silly from a user perspective though. It can all be done much nicer with just a Collections API that doesn't have to be directed to any single node. Doing what you want to do in a back-compat way is not some simple fix. We have been working towards this for a long time now - if you could just slap in a band-aid and make it work like this, I would have done it a long time ago.
[jira] [Assigned] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mark Miller reassigned SOLR-5580:
---------------------------------

    Assignee: Mark Miller  (was: Noble Paul)
[jira] [Commented] (SOLR-5311) Avoid registering replicas which are removed
[ https://issues.apache.org/jira/browse/SOLR-5311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13859292#comment-13859292 ]

Mark Miller commented on SOLR-5311:
-----------------------------------

This is part of a larger direction we have been working towards, which is essentially making ZooKeeper the truth. With the current SolrCloud, you cannot do this like you have. The Core Admin API, pre-configured cores, and the Collections API all need to work in concert. That makes this approach a complete no-go right now.

The path I have been working towards with the Collections API is a new mode where everything is handled by the Collections API. In this mode, it will not be valid for users to try and mess with things at a per-core level. In this way, the cluster can truly match the truth in ZooKeeper, because both the nodes and the Overseer can work together to keep the truth enforced. This includes things like being able to easily change the replication factor for a collection, or add a new host that automatically gets used depending on your settings. You should not have to address individual nodes to manage collections, nor should your replicationFactor stop mattering simply because you added another core via core admin. To me, this is the ultimate cloud situation. The system needs full control. We can add the ability to override or prefer certain things, but in general, we want to get to the point of optionally having the cluster mostly managed for you given some simple directives. Of course, I think it should be implemented as a bunch of optional features. It should also be easy to really lock things down unless you manage things manually.

All of this requires that we have a mode a user can decide to use (the Collections API, perhaps with an option for back compat) so that we are in control of everything. We know when a collection is created and deleted - it won't be able to just pop back into existence.

Until we have this special mode, because of the way we had to build this (lots of historical reasons), we currently have to support what we have: pre-configured cores, core admin, and the Collections API together. This is silly from a user perspective though. It can all be done much nicer with just a Collections API that doesn't have to be directed to any single node. Doing what you want to do in a back-compat way is not some simple fix. We have been working towards this for a long time now - if you could just slap in a band-aid and make it work like this, I would have done it a long time ago.

> Avoid registering replicas which are removed
> --------------------------------------------
>
>                 Key: SOLR-5311
>                 URL: https://issues.apache.org/jira/browse/SOLR-5311
>             Project: Solr
>          Issue Type: Improvement
>          Components: SolrCloud
>            Reporter: Noble Paul
>            Assignee: Noble Paul
>             Fix For: 4.6, 5.0
>
>         Attachments: SOLR-5311.patch, SOLR-5311.patch, SOLR-5311.patch, SOLR-5311.patch, SOLR-5311.patch, SOLR-5311.patch, SOLR-5311.patch
>
> If a replica is removed from the clusterstate and it comes back up, it
> should not be allowed to register. Each core, when it comes up, checks
> whether it was already registered and, if yes, whether it is still there.
> If not, it throws an error and does an unregister. If such a request comes
> to the Overseer, it should ignore such a core.
[jira] [Commented] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13859288#comment-13859288 ]

Mark Miller commented on SOLR-5580:
-----------------------------------

bq. What needs to be done before I can get this in?

I'm happy to discuss that in the reopened JIRA issue around this feature.
[jira] [Commented] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13859287#comment-13859287 ]

Mark Miller commented on SOLR-5580:
-----------------------------------

Make another issue please. This is a bug that is assigned to me that I have fixed. Either work on the reopened original issue or make a new one. The work necessary to do this feature properly is fairly substantial, and I'll be releasing this in 4.6.1 any time now.
[jira] [Assigned] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Noble Paul reassigned SOLR-5580:
--------------------------------

    Assignee: Noble Paul  (was: Mark Miller)
[jira] [Commented] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13859285#comment-13859285 ]

Noble Paul commented on SOLR-5580:
----------------------------------

My take on this. Let's fix this in a backcompat way so that deletereplica is implemented consistently.
[jira] [Commented] (SOLR-3583) Percentiles for facets, pivot facets, and distributed pivot facets
[ https://issues.apache.org/jira/browse/SOLR-3583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13859258#comment-13859258 ]

Terrance A. Snyder commented on SOLR-3583:
------------------------------------------

Raising this from the dead again - I decided to go with the option of separating this out as a search component, and am writing a brand-new cut that also does block join support. Current tests are checked in and open sourced for Apache. It's a simple search component, so registering it in any Solr 4.5.x and above should work. Examples, JUnit tests, etc. are provided here:

https://github.com/terrancesnyder/solr-groupby-component

With block join and this, we start to get to a nice place - but the FacetComponent is far too big for my liking to mess with, and it also requires a complete rebuild of Solr to enhance. My vote is to move this into a more modular component until things settle down. Please let me know if this is way off - also what the process would be to re-integrate. Since this is a module like carrot and others, I'd imagine adoption would (at least in the beginning) be people modifying their Solr config and downloading the JAR. In any case, the roadmap is to support range, distributed, and more block join fun. I'm on Google Plus; just add me to circles and we can chat.

> Percentiles for facets, pivot facets, and distributed pivot facets
> ------------------------------------------------------------------
>
>                 Key: SOLR-3583
>                 URL: https://issues.apache.org/jira/browse/SOLR-3583
>             Project: Solr
>          Issue Type: Improvement
>            Reporter: Chris Russell
>            Priority: Minor
>              Labels: newbie, patch
>             Fix For: 4.6
>
>         Attachments: SOLR-3583.patch, SOLR-3583.patch, SOLR-3583.patch, SOLR-3583.patch
>
> Built on top of SOLR-2894, this patch adds percentiles and averages to
> facets, pivot facets, and distributed pivot facets by making use of range
> facet internals.
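For readers unfamiliar with the statistic the issue adds, "percentiles per facet" just means computing a percentile of a numeric field within each facet bucket. A self-contained toy model of that computation (nearest-rank percentile over map-backed buckets; nothing here reflects the patch's or the linked component's actual internals):

```java
import java.util.*;

// Toy illustration of per-facet-bucket percentiles: documents are grouped by
// a field value (the facet bucket), then a nearest-rank percentile of a
// numeric field is taken within each group. Names and structure are stand-ins.
public class FacetPercentiles {

    // Nearest-rank percentile: the value at 1-based rank ceil(p/100 * N)
    // of the sorted list.
    static double percentile(List<Double> values, double p) {
        List<Double> sorted = new ArrayList<>(values);
        Collections.sort(sorted);
        int rank = (int) Math.ceil(p / 100.0 * sorted.size());
        return sorted.get(Math.max(rank, 1) - 1);
    }

    // One percentile per bucket, e.g. the median price per category facet.
    static Map<String, Double> perBucket(Map<String, List<Double>> buckets, double p) {
        Map<String, Double> out = new TreeMap<>();
        buckets.forEach((bucket, values) -> out.put(bucket, percentile(values, p)));
        return out;
    }

    public static void main(String[] args) {
        Map<String, List<Double>> priceByCategory = Map.of(
            "books", List.of(5.0, 7.0, 9.0, 11.0),
            "games", List.of(20.0, 40.0));
        System.out.println(perBucket(priceByCategory, 50)); // median per bucket
    }
}
```

The distributed case the issue title mentions is harder than this sketch suggests: exact percentiles cannot be merged from per-shard percentiles, which is part of why the work is non-trivial.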
[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 1172 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1172/ Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseParallelGC All tests passed Build Log: [...truncated 10386 lines...] [junit4] JVM J0: stdout was not empty, see: /Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test/temp/junit4-J0-20131231_001145_807.sysout [junit4] >>> JVM J0: stdout (verbatim) [junit4] # [junit4] # A fatal error has been detected by the Java Runtime Environment: [junit4] # [junit4] # SIGSEGV (0xb) at pc=0x00012fb9259f, pid=212, tid=180063 [junit4] # [junit4] # JRE version: Java(TM) SE Runtime Environment (7.0_45-b18) (build 1.7.0_45-b18) [junit4] # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.45-b08 mixed mode bsd-amd64 ) [junit4] # Problematic frame: [junit4] # C 0x00012fb9259f [junit4] # [junit4] # Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again [junit4] # [junit4] # An error report file with more information is saved as: [junit4] # /Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test/J0/hs_err_pid212.log [junit4] # [junit4] # If you would like to submit a bug report, please visit: [junit4] # http://bugreport.sun.com/bugreport/crash.jsp [junit4] # [junit4] <<< JVM J0: EOF [...truncated 1 lines...] 
[junit4] ERROR: JVM J0 ended with an exception, command line: /Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/bin/java -XX:-UseCompressedOops -XX:+UseParallelGC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/heapdumps -Dtests.prefix=tests -Dtests.seed=52FE3B5E51CDFFB5 -Xmx512M -Dtests.iters= -Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random -Dtests.postingsformat=random -Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random -Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=5.0 -Dtests.cleanthreads=perClass -Djava.util.logging.config.file=/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/tools/junit4/logging.properties -Dtests.nightly=false -Dtests.weekly=false -Dtests.slow=true -Dtests.asserts.gracious=false -Dtests.multiplier=1 -DtempDir=. -Djava.io.tmpdir=. -Djunit4.tempDir=/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test/temp -Dclover.db.dir=/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/clover/db -Djava.security.manager=org.apache.lucene.util.TestSecurityManager -Djava.security.policy=/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/tools/junit4/tests.policy -Dlucene.version=5.0-SNAPSHOT -Djetty.testMode=1 -Djetty.insecurerandom=1 -Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory -Djava.awt.headless=true -Djdk.map.althashing.threshold=0 -Dtests.disableHdfs=true -Dfile.encoding=ISO-8859-1 -classpath 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/classes/test:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-test-framework/classes/java:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/test-framework/lib/junit4-ant-2.0.13.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test-files:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/test-framework/classes/java:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/codecs/classes/java:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-solrj/classes/java:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/classes/java:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/analysis/common/lucene-analyzers-common-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/analysis/kuromoji/lucene-analyzers-kuromoji-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/analysis/phonetic/lucene-analyzers-phonetic-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/codecs/lucene-codecs-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/highlighter/lucene-highlighter-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/memory/lucene-memory-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/misc/lucene-misc-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/spatial/lucene-spatial-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/expressions/lucene-expressions-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/suggest/lucene-suggest-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/grouping/lucene-grouping-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/queries/lucene-querie
[jira] [Commented] (SOLR-5592) Possible deadlock on startup if using warming queries and certain components and jmx.
[ https://issues.apache.org/jira/browse/SOLR-5592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13859127#comment-13859127 ] Mark Miller commented on SOLR-5592: --- Stack traces from Solr 4.4 with some backports on it. It is deadlocked in getSearcher.
{noformat}
"coreLoadExecutor-4-thread-1" prio=10 tid=0x7fd0a84cc000 nid=0x5991 in Object.wait() [0x7fd0ac9e2000]
   java.lang.Thread.State: WAITING (on object monitor)
	at java.lang.Object.wait(Native Method)
	- waiting on <0xd0772c60> (a java.lang.Object)
	at java.lang.Object.wait(Object.java:485)
	at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1596)
	- locked <0xd0772c60> (a java.lang.Object)
	at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1405)
	at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1340)
	at org.apache.solr.handler.ReplicationHandler.getIndexVersion(ReplicationHandler.java:546)
	at org.apache.solr.handler.ReplicationHandler.getStatistics(ReplicationHandler.java:563)
	at org.apache.solr.core.JmxMonitoredMap$SolrDynamicMBean.getMBeanInfo(JmxMonitoredMap.java:231)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:321)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:307)
	at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:482)
	at org.apache.solr.core.JmxMonitoredMap.put(JmxMonitoredMap.java:140)
	at org.apache.solr.core.JmxMonitoredMap.put(JmxMonitoredMap.java:51)
	at org.apache.solr.core.SolrResourceLoader.inform(SolrResourceLoader.java:647)
	at org.apache.solr.core.SolrCore.<init>(SolrCore.java:855)
	at org.apache.solr.core.SolrCore.<init>(SolrCore.java:634)
	at org.apache.solr.core.ZkContainer.createFromZk(ZkContainer.java:270)
	at org.apache.solr.core.CoreContainer.create(CoreContainer.java:678)
	at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:387)
	at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:379)
	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
	at java.util.concurrent.FutureTask.run(FutureTask.java:138)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
	at java.util.concurrent.FutureTask.run(FutureTask.java:138)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:662)

"searcherExecutor-5-thread-1" prio=10 tid=0x7fd098138800 nid=0x599a in Object.wait() [0x7fd073ffb000]
   java.lang.Thread.State: WAITING (on object monitor)
	at java.lang.Object.wait(Native Method)
	- waiting on <0xd0772c60> (a java.lang.Object)
	at java.lang.Object.wait(Object.java:485)
	at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1596)
	- locked <0xd0772c60> (a java.lang.Object)
	at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1405)
	at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1340)
	at org.apache.solr.request.SolrQueryRequestBase.getSearcher(SolrQueryRequestBase.java:96)
	at org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:237)
	at org.apache.solr.spelling.SpellCheckCollator.collate(SpellCheckCollator.java:146)
	at org.apache.solr.handler.component.SpellCheckComponent.addCollationsToResponse(SpellCheckComponent.java:229)
	at org.apache.solr.handler.component.SpellCheckComponent.process(SpellCheckComponent.java:196)
	at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:208)
	at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
	at org.apache.solr.core.SolrCore.execute(SolrCore.java:1909)
	at org.apache.solr.core.QuerySenderListener.newSearcher(QuerySenderListener.java:64)
	at org.apache.solr.core.SolrCore$5.call(SolrCore.java:1698)
	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
	at java.util.concurrent.FutureTask.run(FutureTask.java:138)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:662)
{noformat}
> Possible deadlock on startup if using warming queries and certain components > and jmx. > - > >
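A note on diagnosing dumps like the one above: both stalled threads are in `Object.wait()`, i.e. state WAITING rather than BLOCKED, so the JVM's built-in monitor-deadlock detector will not flag them — such wait/notify stalls have to be spotted by reading the dump. For contrast, here is a self-contained sketch (class and helper names are mine, not from Solr) of a true lock-ordering deadlock that `ThreadMXBean` *does* detect:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;
import java.util.concurrent.CountDownLatch;

public class DeadlockDetectDemo {
    static final Object lockA = new Object();
    static final Object lockB = new Object();
    static final CountDownLatch bothHeld = new CountDownLatch(2);

    // Each thread takes its first lock, waits until both first locks are held,
    // then blocks forever trying to take the other thread's lock.
    static void holdThenGrab(Object first, Object second) {
        synchronized (first) {
            bothHeld.countDown();
            try {
                bothHeld.await();
            } catch (InterruptedException ignored) {}
            synchronized (second) {}  // classic A->B vs B->A lock-order cycle
        }
    }

    public static void main(String[] args) throws Exception {
        Thread t1 = new Thread(() -> holdThenGrab(lockA, lockB));
        Thread t2 = new Thread(() -> holdThenGrab(lockB, lockA));
        t1.setDaemon(true);  // daemon threads so the JVM can still exit
        t2.setDaemon(true);
        t1.start();
        t2.start();

        // Poll until the monitor cycle becomes visible to the MXBean.
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        long[] ids = null;
        for (int i = 0; i < 50 && ids == null; i++) {
            Thread.sleep(100);
            ids = mx.findMonitorDeadlockedThreads();
        }
        System.out.println("deadlocked threads: " + (ids == null ? 0 : ids.length));
    }
}
```

Running it prints `deadlocked threads: 2`; the SOLR-5592 stall would report none, which is exactly why the thread dump itself is the evidence here.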
[jira] [Commented] (SOLR-5592) Possible deadlock on startup if using warming queries and certain components and jmx.
[ https://issues.apache.org/jira/browse/SOLR-5592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13859125#comment-13859125 ] Mark Miller commented on SOLR-5592: --- You can have:
{noformat}
SolrCore Constructor
  getSearcher
    trigger first searcher listener asynchronously
      warming queries, components, perhaps getSearcher
  register JMX, perhaps getSearcher
{noformat}
> Possible deadlock on startup if using warming queries and certain components > and jmx. > - > > Key: SOLR-5592 > URL: https://issues.apache.org/jira/browse/SOLR-5592 > Project: Solr > Issue Type: Bug >Reporter: Mark Miller >Assignee: Mark Miller > Fix For: 5.0, 4.7, 4.6.1 > > > Seeing the case with a spellcheck component. > We attempt to register JMX properties after we open the first searcher in > core init, but we can still have a race for the first searcher open because > first we call getSearcher and it can trigger concurrent warming queries that > can trigger components that call getSearcher.
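The shape of that startup race can be modeled in a few lines. The sketch below is a hypothetical stand-in, not Solr code: `getSearcher` is reduced to "block until the first searcher is published", and publication never happens because both the warming work and the JMX registration are themselves waiting on it — so both calls time out:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Minimal model of the SOLR-5592 startup stall (all names invented).
public class StartupDeadlockSketch {
    // Latch standing in for "the first searcher has been registered".
    static final CountDownLatch firstSearcherReady = new CountDownLatch(1);

    // Stand-in for SolrCore.getSearcher(): wait for the first searcher,
    // with a timeout so this demo terminates instead of hanging.
    static boolean getSearcher(long timeoutMs) throws InterruptedException {
        return firstSearcherReady.await(timeoutMs, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws Exception {
        ExecutorService warming = Executors.newSingleThreadExecutor();

        // Warming query / component kicked off during core init: it
        // re-enters getSearcher() before the first searcher is published.
        Future<Boolean> warm = warming.submit(() -> getSearcher(200));

        // Core-load thread: registering JMX stats also calls getSearcher().
        boolean jmxGotSearcher = getSearcher(200);

        // Publishing the searcher is gated on the warming work finishing,
        // so neither side can proceed -- both waits fail.
        System.out.println("jmx=" + jmxGotSearcher + " warming=" + warm.get());
        warming.shutdown();
    }
}
```

Output is `jmx=false warming=false`: nobody ever counts the latch down, which is the one-sentence version of the deadlock.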
[jira] [Updated] (SOLR-5592) Possible deadlock on startup if using warming queries and certain components and jmx.
[ https://issues.apache.org/jira/browse/SOLR-5592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-5592: -- Summary: Possible deadlock on startup if using warming queries and certain components and jmx. (was: Possible deadlock on startup if using certain components and jmx.) > Possible deadlock on startup if using warming queries and certain components > and jmx. > - > > Key: SOLR-5592 > URL: https://issues.apache.org/jira/browse/SOLR-5592 > Project: Solr > Issue Type: Bug >Reporter: Mark Miller >Assignee: Mark Miller > Fix For: 5.0, 4.7, 4.6.1 > > > Seeing the case with a spellcheck component. > We attempt to register JMX properties after we open the first searcher in > core init, but we can still have a race for the first searcher open because > first we call getSearcher and it can trigger concurrent warming queries that > can trigger components that call getSearcher. -- This message was sent by Atlassian JIRA (v6.1.5#6160) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-5592) Possible deadlock on startup if using certain components and jmx.
Mark Miller created SOLR-5592: - Summary: Possible deadlock on startup if using certain components and jmx. Key: SOLR-5592 URL: https://issues.apache.org/jira/browse/SOLR-5592 Project: Solr Issue Type: Bug Reporter: Mark Miller Assignee: Mark Miller Fix For: 5.0, 4.7, 4.6.1 Seeing the case with a spellcheck component. We attempt to register JMX properties after we open the first searcher in core init, but we can still have a race for the first searcher open because first we call getSearcher and it can trigger concurrent warming queries that can trigger components that call getSearcher. -- This message was sent by Atlassian JIRA (v6.1.5#6160) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-5581) Give ZkCLI the ability to get files
[ https://issues.apache.org/jira/browse/SOLR-5581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13859059#comment-13859059 ] ASF subversion and git services commented on SOLR-5581: --- Commit 1554311 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_4x' [ https://svn.apache.org/r1554311 ] SOLR-5581: Give ZkCLI the ability to get files. > Give ZkCLI the ability to get files > --- > > Key: SOLR-5581 > URL: https://issues.apache.org/jira/browse/SOLR-5581 > Project: Solr > Issue Type: Improvement > Components: SolrCloud >Affects Versions: 5.0, 4.7 >Reporter: Gregory Chanan >Assignee: Mark Miller >Priority: Minor > Fix For: 5.0, 4.7 > > Attachments: SOLR-5581.patch > > > Today, the ZkCli has the ability to put files to Zk (via put or putfile), but > not get files. This would be useful for me along with SOLR-5556, i.e. I > could save the old solr.xml and replace it with a new one. -- This message was sent by Atlassian JIRA (v6.1.5#6160) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-5581) Give ZkCLI the ability to get files
[ https://issues.apache.org/jira/browse/SOLR-5581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13859035#comment-13859035 ] ASF subversion and git services commented on SOLR-5581: --- Commit 1554304 from [~markrmil...@gmail.com] in branch 'dev/trunk' [ https://svn.apache.org/r1554304 ] SOLR-5581: Give ZkCLI the ability to get files. > Give ZkCLI the ability to get files > --- > > Key: SOLR-5581 > URL: https://issues.apache.org/jira/browse/SOLR-5581 > Project: Solr > Issue Type: Improvement > Components: SolrCloud >Affects Versions: 5.0, 4.7 >Reporter: Gregory Chanan >Assignee: Mark Miller >Priority: Minor > Fix For: 5.0, 4.7 > > Attachments: SOLR-5581.patch > > > Today, the ZkCli has the ability to put files to Zk (via put or putfile), but > not get files. This would be useful for me along with SOLR-5556, i.e. I > could save the old solr.xml and replace it with a new one. -- This message was sent by Atlassian JIRA (v6.1.5#6160) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-1604) Wildcards, ORs etc inside Phrase Queries
[ https://issues.apache.org/jira/browse/SOLR-1604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13859022#comment-13859022 ] suren commented on SOLR-1604: - Ahmet, I am using Solr 4.3.1. Do I still need to apply this patch? If yes, please tell me which patch to apply; I see a lot of patches here and am not sure which patch goes with which version of Solr. > Wildcards, ORs etc inside Phrase Queries > > > Key: SOLR-1604 > URL: https://issues.apache.org/jira/browse/SOLR-1604 > Project: Solr > Issue Type: Improvement > Components: query parsers, search >Affects Versions: 1.4 >Reporter: Ahmet Arslan >Priority: Minor > Attachments: ASF.LICENSE.NOT.GRANTED--ComplexPhrase.zip, > ComplexPhrase-4.2.1.zip, ComplexPhrase.zip, ComplexPhrase.zip, > ComplexPhrase.zip, ComplexPhrase.zip, ComplexPhrase.zip, ComplexPhrase.zip, > ComplexPhraseQueryParser.java, ComplexPhrase_solr_3.4.zip, > SOLR-1604-alternative.patch, SOLR-1604.patch, SOLR-1604.patch > > > Solr Plugin for ComplexPhraseQueryParser (LUCENE-1486) which supports > wildcards, ORs, ranges, fuzzies inside phrase queries.
Re: Problem using generic types?
Andi, great to hear that you could reproduce it. I'd be very thankful if you could have a look at it; I've been struggling to understand how the machinery behind this works, but am far from it still.
Best Regards
/Petrus

On Mon, Dec 30, 2013 at 10:36 AM, Andi Vajda wrote:
>
> On Sun, 29 Dec 2013, Petrus Hyvönen wrote:
>
>> I have distilled the library that I have some trouble with and I think I
>> have an example that is failing due to the same problem. I am not good
>> in Java, but have tried to follow the logic from the library I'm wrapping.
>> The function of the example does not make sense in itself.
>>
>> public class SimpleClass {
>>     public SimpleClass() {
>>         System.out.println("Created SimpleClass");
>>     }
>>     public T return_null() {
>>         return null;
>>     }
>> }
>>
>> public class SimpleClass2 extends SimpleClass {
>>     public SimpleClass2() {}
>>     public void testInJava() {
>>         System.out.println(this.return_null());
>>     }
>> }
>>
>> It seems to me that there is some problem with inherited methods that
>> return a generic type, failing in wrapType when this is to be wrapped.
>>
>> The python script that fails:
>>
>> a = SimpleClass()
>> print a.return_null()
>>
>> b = SimpleClass2()
>> b.testInJava()
>>
>> print b.return_null()  # Fails in wrapType
>>
>> I don't know if returning null is a bad thing to do in Java, but the error
>> seems very similar to what I experience in the larger library. I have a
>> skeleton of this that is slightly larger, but not returning null; I'm
>> trying to keep the length of the example low :)
>>
>> Any comments highly appreciated :)
>
> I've been able to reproduce the problem.
> Thank you for providing an isolated test case !
>
> Andi..
>
>> best Regards
>> /Petrus

On Fri, Dec 27, 2013 at 6:25 PM, Andi Vajda wrote:
>>> On Dec 27, 2013, at 17:36, Petrus Hyvönen wrote:
>>> Dear Andi, I am working on debugging the failure and try to understand a bit how JCC works internally.
I haven't gone very far but in case you have some pointers from these early debugging sessions I would be very thankful. I know it's complex, and I should try to make some smaller test cases, but >>> I >>> don't really have a grasp yet where the problem might be. Writing this might help me also to get some structure in my thinking :) The main crash seems to be in the last line, wrapType(), of __wrap__.cpp: static PyObject *t_AbstractReconfigurableDetector_withHandler(t_ >>> AbstractReconfigurableDetector >>> *self, PyObject *arg) { ::org::orekit::propagation::events::handlers::EventHandler a0((jobject) NULL); PyTypeObject **p0; ::org::orekit::propagation::events::EventDetector result((jobject) NULL); if (!parseArg(arg, "K", ::org::orekit::propagation::events::handlers:: >>> EventHandler::initializeClass, >>> &a0, &p0, ::org::orekit::propagation::events::handlers::t_ >>> EventHandler::parameters_)) >>> { OBJ_CALL(result = self->object.withHandler(a0)); return self->parameters[0] != NULL ? wrapType(self->parameters[0], result.this$) : ::org::orekit::propagation::events::t_EventDetector::wrap_ Object(result); } The parameters[0] does not seem to be null, but neither is it a valid object, in my debugger it says 0xbaadf00d {ob_refcnt=??? ob_type=??? ob_size=??? ...} _typeobject *, wrapType is called and when trying to access the wrapfn it crashes. The main python lines are: tmp1 = ElevationDetector(sta1Frame)# >>> ElevationDetector >>> is a java public class ElevationDetector extends AbstractReconfigurableDetector hand = ContinueOnEvent().of_(ElevationDetector) # a java ContinueOnEvent object elDetector = tmp1.withHandler(hand) #Crash. withHandler is a method that is inherited from AbstractReconfigurableDetector to >>> ElevationDetector >>> This crashes when interactively entered on the python prompt (or in other interactive consoles), but seems to work if executed directly without interactivity. 
This difference makes me think that it might be something with garbage collection, but I don't know. Any comments appreciated; I know this is likely very difficult to comment on as it's not very encapsulated. >>> >>> Right, so unless you can isolate this into something I can reproduce, I'm >>> afraid there isn't much I can comment. >>> It is quite likely that by the time you have that reproducible test case >>> ready, you also have the solution to the problem. Or I might be able to >>> help then... >>> >>> Andi.. >>> >>> Regards /Petrus >>
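The quoted `SimpleClass` example appears to have lost its generic angle brackets in transit (the thread subject is "Problem using generic types?", and `return_null()` is said to return a generic type). A plausible reconstruction — the placement of `<T>` as a class-level type parameter is an assumption on my part — that compiles and runs on the Java side is:

```java
// Hypothetical reconstruction of the mailing-list example; the <T> parameter
// was stripped by the list archive and its placement here is an assumption.
class SimpleClass<T> {
    public SimpleClass() {
        System.out.println("Created SimpleClass");
    }

    // Returning null for an unbounded generic type is legal Java; the crash
    // reported in the thread happens on the JCC/Python side when wrapping it.
    public T return_null() {
        return null;
    }
}

class SimpleClass2 extends SimpleClass<Object> {
    public SimpleClass2() {}

    public void testInJava() {
        System.out.println(this.return_null());
    }
}

public class GenericReturnDemo {
    public static void main(String[] args) {
        SimpleClass<Object> a = new SimpleClass<>();
        System.out.println(a.return_null());   // prints "null"

        SimpleClass2 b = new SimpleClass2();   // constructor chains to SimpleClass()
        b.testInJava();                        // prints "null"
        System.out.println(b.return_null());   // the call that fails in wrapType via JCC
    }
}
```

Run as pure Java this completes normally, which supports Petrus's point that the failure is in JCC's wrapping of the inherited generic return, not in the Java code itself.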
[jira] [Commented] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13859012#comment-13859012 ] Noble Paul commented on SOLR-5580: -- bq. the problem is that it is ahead of its time What needs to be done before I can get this in? bq. Yes, we need this - but we have to do it right. If we do it right, what should it look like? We can't keep it an open item. > NPE when create a core with both explicite shard and coreNodeName > -- > > Key: SOLR-5580 > URL: https://issues.apache.org/jira/browse/SOLR-5580 > Project: Solr > Issue Type: Bug >Affects Versions: 4.6 > Environment: OS:Red Hat Enterprise Linux Server release 6.4 (Santiago) > Software:solr 4.6, >jdk:OpenJDK Runtime Environment (rhel-2.3.4.1.el6_3-x86_64) > OpenJDK 64-Bit Server VM (build 23.2-b09, mixed mode) >Reporter: YouPeng Yang >Assignee: Mark Miller > Labels: core > Fix For: 5.0, 4.7, 4.6.1 > > Original Estimate: 0.5h > Remaining Estimate: 0.5h > > In class org.apache.solr.cloud.Overseer the Line 360: > - > if (sliceName !=null && collectionExists && > !"true".equals(state.getCollection(collection).getStr("autoCreated"))) { > Slice slice = state.getSlice(collection, sliceName); > if (slice.getReplica(coreNodeName) == null) { > log.info("core_deleted . Just return"); > return state; > } > } > - > the slice needs to be checked null .because when create a new core with both > explicite shard and coreNodeName, the state.getSlice(collection, sliceName) > may return a null.So it needs to be checked ,or there will be an > NullpointException > - > if (sliceName !=null && collectionExists && > !"true".equals(state.getCollection(collection).getStr("autoCreated"))) { > Slice slice = state.getSlice(collection, sliceName); > if (slice != null && slice.getReplica(coreNodeName) == null) { > log.info("core_deleted . 
Just return"); > return state; > } > } > - -- This message was sent by Atlassian JIRA (v6.1.5#6160) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
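The fix quoted in the issue is a plain null guard: `state.getSlice(collection, sliceName)` can return null for a brand-new explicit shard, so dereferencing the result NPEs. A self-contained model of before and after (plain maps stand in for Solr's `ClusterState`/`Slice`; none of the real classes are used):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal model of the SOLR-5580 guard; the check mirrors the shape of the
// snippet quoted in the issue, but the types here are stand-ins.
public class NullSliceGuard {
    // Collection state with no slices registered yet, as when a core is
    // created with an explicit shard name that does not exist.
    static final Map<String, Map<String, String>> slices = new HashMap<>();

    static Map<String, String> getSlice(String name) {
        return slices.get(name);   // may return null, like state.getSlice(...)
    }

    public static void main(String[] args) {
        Map<String, String> slice = getSlice("shard1");

        // Unguarded version (Overseer line 360): dereferencing a null slice.
        try {
            if (slice.get("core_node1") == null) {
                System.out.println("core_deleted . Just return");
            }
        } catch (NullPointerException e) {
            System.out.println("unguarded check: NullPointerException");
        }

        // Patched version: test the slice itself before dereferencing it.
        if (slice != null && slice.get("core_node1") == null) {
            System.out.println("core_deleted . Just return");
        } else {
            System.out.println("guarded check: slice is null, continue creating the core");
        }
    }
}
```

With the guard in place, a missing slice simply falls through to core creation instead of throwing, which is the behavior the patch in the issue description restores.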
[jira] [Commented] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13859008#comment-13859008 ] Mark Miller commented on SOLR-5580: --- The Core Admin also lets you set an initial coreNodeName! And it has the same use case. bq. But here the question is not whether we should use core admin or not. I feel that the API to add a replica on a particular node would be pretty ugly on collections API and it looks more elegant on core admin API. That needs a discussion in its own issue. A lot of things make more sense from a collection perspective (e.g., I want to change the replicationFactor). And we need a way to easily distinguish between operations meant for a Collections API collection and those that are not. How we implement that is still open, but I would initially lean towards making the Collections API powerful enough not to need the Core Admin and then ban it on Collections API collections. bq. I really didn't want to have a half-broken deletereplica API We both want the same functionality - the problem is that it is ahead of its time. The way it has been implemented does not jive with the current system. Yes, we need this - but we have to do it right. 
> NPE when create a core with both explicite shard and coreNodeName > -- > > Key: SOLR-5580 > URL: https://issues.apache.org/jira/browse/SOLR-5580 > Project: Solr > Issue Type: Bug >Affects Versions: 4.6 > Environment: OS:Red Hat Enterprise Linux Server release 6.4 (Santiago) > Software:solr 4.6, >jdk:OpenJDK Runtime Environment (rhel-2.3.4.1.el6_3-x86_64) > OpenJDK 64-Bit Server VM (build 23.2-b09, mixed mode) >Reporter: YouPeng Yang >Assignee: Mark Miller > Labels: core > Fix For: 5.0, 4.7, 4.6.1 > > Original Estimate: 0.5h > Remaining Estimate: 0.5h > > In class org.apache.solr.cloud.Overseer the Line 360: > - > if (sliceName !=null && collectionExists && > !"true".equals(state.getCollection(collection).getStr("autoCreated"))) { > Slice slice = state.getSlice(collection, sliceName); > if (slice.getReplica(coreNodeName) == null) { > log.info("core_deleted . Just return"); > return state; > } > } > - > the slice needs to be checked null .because when create a new core with both > explicite shard and coreNodeName, the state.getSlice(collection, sliceName) > may return a null.So it needs to be checked ,or there will be an > NullpointException > - > if (sliceName !=null && collectionExists && > !"true".equals(state.getCollection(collection).getStr("autoCreated"))) { > Slice slice = state.getSlice(collection, sliceName); > if (slice != null && slice.getReplica(coreNodeName) == null) { > log.info("core_deleted . Just return"); > return state; > } > } > - -- This message was sent by Atlassian JIRA (v6.1.5#6160) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-2899) Add OpenNLP Analysis capabilities as a module
[ https://issues.apache.org/jira/browse/LUCENE-2899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13859009#comment-13859009 ] Lance Norskog commented on LUCENE-2899: --- JWNL is WordNet. Lucene has a WordNet parser for use as a synonym filter. http://lucene.apache.org/core/4_0_0/analyzers-common/index.html?org/apache/lucene/analysis/synonym/SynonymMap.html I don't know how to use this from a Solr filter factory. Please ask this on the Solr mailing list. > Add OpenNLP Analysis capabilities as a module > - > > Key: LUCENE-2899 > URL: https://issues.apache.org/jira/browse/LUCENE-2899 > Project: Lucene - Core > Issue Type: New Feature > Components: modules/analysis >Reporter: Grant Ingersoll >Assignee: Grant Ingersoll >Priority: Minor > Fix For: 4.7 > > Attachments: LUCENE-2899-RJN.patch, LUCENE-2899.patch, > OpenNLPFilter.java, OpenNLPTokenizer.java > > > Now that OpenNLP is an ASF project and has a nice license, it would be nice > to have a submodule (under analysis) that exposed capabilities for it. Drew > Farris, Tom Morton and I have code that does: > * Sentence Detection as a Tokenizer (could also be a TokenFilter, although it > would have to change slightly to buffer tokens) > * NamedEntity recognition as a TokenFilter > We are also planning a Tokenizer/TokenFilter that can put parts of speech as > either payloads (PartOfSpeechAttribute?) on a token or at the same position. > I'd propose it go under: > modules/analysis/opennlp -- This message was sent by Atlassian JIRA (v6.1.5#6160) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (LUCENE-2899) Add OpenNLP Analysis capabilities as a module
[ https://issues.apache.org/jira/browse/LUCENE-2899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13859006#comment-13859006 ] Lance Norskog edited comment on LUCENE-2899 at 12/30/13 7:19 PM: - All fair criticisms. About UIMA: clearly it is much more advanced than this design, but I'm not smart enough to use it :). I've tried to put together something useful (a few times) and each time was completely confused. I learn by example, and the examples are limited. Also there is very little traffic on the mailing lists etc. about UIMA. About payloads v.s. internal attributes: the examples don't use this feature, but payloads are stored in the index. This supports a question-answering system. Add PERSON payloads with all records, then search for "word X AND 'payload PERSON anywhere'" when someone says "who is X". This does the tagging during indexing, but not searching. A better design would be to add PERSON as a synonym rather than a payload. I also don't see much traffic about payloads. About doing this in the analysis pipeline v.s. upstream: yes, upstream request processors are the right place for this. In Solr. URPs don't exist in ES or just plain Lucene coding. was (Author: lancenorskog): All fair criticisms. About UIMA: clearly it is much more advanced than this design, but I'm not smart enough to use it :) I've tried to put together something useful (a few times) and each time was completely confused. I learn by example, and the examples are limited. Also there is very little traffic on the mailing lists etc. about UIMA. About payloads v.s. internal attributes: the examples don't use this feature, but payloads are stored in the index. This supports a question-answering system. Add PERSON payloads with all records, then search for "word X AND 'payload PERSON anywhere'" when someone says "who is X". This does the tagging during indexing, but not searching. 
A better design would be to add PERSON as a synonym rather than a payload. I also don't see much traffic about payloads. About doing this in the analysis pipeline v.s. upstream: yes, upstream request processors are the right place for this. In Solr. URPs don't exist in ES or just plain Lucene coding. > Add OpenNLP Analysis capabilities as a module > - > > Key: LUCENE-2899 > URL: https://issues.apache.org/jira/browse/LUCENE-2899 > Project: Lucene - Core > Issue Type: New Feature > Components: modules/analysis >Reporter: Grant Ingersoll >Assignee: Grant Ingersoll >Priority: Minor > Fix For: 4.7 > > Attachments: LUCENE-2899-RJN.patch, LUCENE-2899.patch, > OpenNLPFilter.java, OpenNLPTokenizer.java > > > Now that OpenNLP is an ASF project and has a nice license, it would be nice > to have a submodule (under analysis) that exposed capabilities for it. Drew > Farris, Tom Morton and I have code that does: > * Sentence Detection as a Tokenizer (could also be a TokenFilter, although it > would have to change slightly to buffer tokens) > * NamedEntity recognition as a TokenFilter > We are also planning a Tokenizer/TokenFilter that can put parts of speech as > either payloads (PartOfSpeechAttribute?) on a token or at the same position. > I'd propose it go under: > modules/analysis/opennlp -- This message was sent by Atlassian JIRA (v6.1.5#6160) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-2899) Add OpenNLP Analysis capabilities as a module
[ https://issues.apache.org/jira/browse/LUCENE-2899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13859006#comment-13859006 ] Lance Norskog commented on LUCENE-2899: --- All fair criticisms. About UIMA: clearly it is much more advanced than this design, but I'm not smart enough to use it :) I've tried to put together something useful (a few times) and each time was completely confused. I learn by example, and the examples are limited. Also there is very little traffic on the mailing lists etc. about UIMA. About payloads v.s. internal attributes: the examples don't use this feature, but payloads are stored in the index. This supports a question-answering system. Add PERSON payloads with all records, then search for "word X AND 'payload PERSON anywhere'" when someone says "who is X". This does the tagging during indexing, but not searching. A better design would be to add PERSON as a synonym rather than a payload. I also don't see much traffic about payloads. About doing this in the analysis pipeline v.s. upstream: yes, upstream request processors are the right place for this. In Solr. URPs don't exist in ES or just plain Lucene coding. > Add OpenNLP Analysis capabilities as a module > - > > Key: LUCENE-2899 > URL: https://issues.apache.org/jira/browse/LUCENE-2899 > Project: Lucene - Core > Issue Type: New Feature > Components: modules/analysis >Reporter: Grant Ingersoll >Assignee: Grant Ingersoll >Priority: Minor > Fix For: 4.7 > > Attachments: LUCENE-2899-RJN.patch, LUCENE-2899.patch, > OpenNLPFilter.java, OpenNLPTokenizer.java > > > Now that OpenNLP is an ASF project and has a nice license, it would be nice > to have a submodule (under analysis) that exposed capabilities for it. 
Drew > Farris, Tom Morton and I have code that does: > * Sentence Detection as a Tokenizer (could also be a TokenFilter, although it > would have to change slightly to buffer tokens) > * NamedEntity recognition as a TokenFilter > We are also planning a Tokenizer/TokenFilter that can put parts of speech as > either payloads (PartOfSpeechAttribute?) on a token or at the same position. > I'd propose it go under: > modules/analysis/opennlp -- This message was sent by Atlassian JIRA (v6.1.5#6160) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
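The "PERSON as a synonym" idea discussed above can be illustrated with a toy sketch. This involves no OpenNLP or Lucene code; the `SynonymTagger` class and its hard-coded name list are hypothetical stand-ins for a real NER model feeding a token stream. The point is that emitting the entity label as an extra token at the same position lets a query like (X AND PERSON) match without any payload support:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Toy sketch: inject an entity tag as a same-position "synonym" token,
// instead of attaching it as a payload.
public class SynonymTagger {
    // Names we pretend an NER model recognized (hypothetical stand-in).
    static final Set<String> PEOPLE = Set.of("alice", "bob");

    // Returns position -> tokens at that position; synonyms share a position.
    public static List<List<String>> tag(String[] tokens) {
        List<List<String>> out = new ArrayList<>();
        for (String t : tokens) {
            List<String> pos = new ArrayList<>();
            pos.add(t);
            if (PEOPLE.contains(t)) {
                pos.add("PERSON"); // zero-position-increment "synonym"
            }
            out.add(pos);
        }
        return out;
    }
}
```

In Lucene terms this corresponds to a token emitted with a position increment of zero, the same trick the synonym filters use, so the index sees PERSON as just another term at the name's position.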
[jira] [Commented] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858999#comment-13858999 ] Noble Paul commented on SOLR-5580: -- It is OK to use the core admin API sometimes; that is fine. But editing solr.xml or adding system properties at node startup is something we should actively strive to avoid. But here the question is not whether we should use the core admin API or not. I feel that the API to add a replica on a particular node would be pretty ugly in the collections API, and it looks more elegant in the core admin API. I really didn't want to have a half-broken deletereplica API. > NPE when create a core with both explicite shard and coreNodeName > -- > > Key: SOLR-5580 > URL: https://issues.apache.org/jira/browse/SOLR-5580 > Project: Solr > Issue Type: Bug >Affects Versions: 4.6 > Environment: OS: Red Hat Enterprise Linux Server release 6.4 (Santiago) > Software: solr 4.6, > jdk: OpenJDK Runtime Environment (rhel-2.3.4.1.el6_3-x86_64) > OpenJDK 64-Bit Server VM (build 23.2-b09, mixed mode) >Reporter: YouPeng Yang >Assignee: Mark Miller > Labels: core > Fix For: 5.0, 4.7, 4.6.1 > > Original Estimate: 0.5h > Remaining Estimate: 0.5h > > In class org.apache.solr.cloud.Overseer, at line 360: > - > if (sliceName != null && collectionExists && > !"true".equals(state.getCollection(collection).getStr("autoCreated"))) { > Slice slice = state.getSlice(collection, sliceName); > if (slice.getReplica(coreNodeName) == null) { > log.info("core_deleted . Just return"); > return state; > } > } > - > The slice needs to be checked for null, because when creating a new core with both an explicit shard and coreNodeName, state.getSlice(collection, sliceName) may return null, causing a NullPointerException. The fix: > - > if (sliceName != null && collectionExists && > !"true".equals(state.getCollection(collection).getStr("autoCreated"))) { > Slice slice = state.getSlice(collection, sliceName); > if (slice != null && slice.getReplica(coreNodeName) == null) { > log.info("core_deleted . Just return"); > return state; > } > } > -
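The proposed fix boils down to a null-safe lookup. Here is a minimal, self-contained sketch of the same guard; the `SliceGuard` class and the plain `Map`s standing in for Solr's ClusterState/Slice types are hypothetical, not the actual Overseer code:

```java
import java.util.Map;

// Sketch of the defensive pattern from the report: treat a missing slice
// the same as a present replica (i.e. not "deleted"), instead of
// dereferencing a possibly-null slice.
public class SliceGuard {
    // slices: sliceName -> (coreNodeName -> state); both maps are toy stand-ins.
    public static boolean replicaMissing(Map<String, Map<String, String>> slices,
                                         String sliceName, String coreNodeName) {
        Map<String, String> slice = slices.get(sliceName);        // may be null
        return slice != null && !slice.containsKey(coreNodeName); // null-safe
    }
}
```

With the short-circuiting `&&`, the replica lookup is never attempted when the slice itself is absent, which is exactly what the added `slice != null` check in the patch achieves.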
[jira] [Commented] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858996#comment-13858996 ] Mark Miller commented on SOLR-5580: --- bq. this feature is a problem. Or at least the way it was implemented.
[jira] [Commented] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858992#comment-13858992 ] Mark Miller commented on SOLR-5580: --- The collections API is not ready for that functionality - you still have to use the collections API in concert with the core admin API to do many things. Until the Collections API can do everything without the core admin API, this feature is a problem.
[jira] [Commented] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858986#comment-13858986 ] Noble Paul commented on SOLR-5580: -- I guess you missed something. If you added the core through solr.xml, it is for a collection that is 'autoCreated'. So I enabled this feature only for collections created through the collections API. For others, the legacy behavior is not altered. So they are not really editable if the cores are created through the API.
[jira] [Updated] (LUCENE-5380) PagingFieldCollector should track previous page hits
[ https://issues.apache.org/jira/browse/LUCENE-5380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Rowe updated LUCENE-5380: --- Attachment: LUCENE-5380.patch Simple patch, no tests yet > PagingFieldCollector should track previous page hits > > > Key: LUCENE-5380 > URL: https://issues.apache.org/jira/browse/LUCENE-5380 > Project: Lucene - Core > Issue Type: Improvement > Components: core/search >Reporter: Steve Rowe >Priority: Minor > Attachments: LUCENE-5380.patch > > > PagingFieldCollector partitions all hits into three buckets: previous page > hits, collected (current page) hits, and non-competitive (following page) > hits. Total hits and collected hits are tracked, but neither non-competitive > hits nor previous page hits are tracked, so previous page hits can't be > derived from the total and collected hits.
[jira] [Created] (LUCENE-5380) PagingFieldCollector should track previous page hits
Steve Rowe created LUCENE-5380: -- Summary: PagingFieldCollector should track previous page hits Key: LUCENE-5380 URL: https://issues.apache.org/jira/browse/LUCENE-5380 Project: Lucene - Core Issue Type: Improvement Components: core/search Reporter: Steve Rowe Priority: Minor
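The three-bucket partition described in this issue can be sketched with a toy counter. The `PageBuckets` class below is hypothetical, not Lucene's PagingFieldCollector (which works against a real Sort and an after-FieldDoc); it just shows that once hits above the previous-page cutoff are counted separately, the previous-page total is tracked directly rather than derived:

```java
// Toy illustration of the three buckets for a page of descending-score hits:
// hits better than the "after" cutoff (previous pages), hits collected for
// the current page, and the non-competitive remainder.
public class PageBuckets {
    // scoresDesc must be sorted descending; afterScore is the last score
    // served on the previous page. Returns {previous, collected, following}.
    public static int[] count(double[] scoresDesc, double afterScore, int pageSize) {
        int previous = 0, collected = 0, following = 0;
        for (double s : scoresDesc) {
            if (s > afterScore) previous++;             // already served earlier
            else if (collected < pageSize) collected++; // current page
            else following++;                           // non-competitive
        }
        return new int[]{previous, collected, following};
    }
}
```

All three counts are visible to the loop for free; the issue's point is that the real collector only reports the total and the collected bucket, discarding the other two.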
[jira] [Comment Edited] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858970#comment-13858970 ] Mark Miller edited comment on SOLR-5580 at 12/30/13 6:27 PM: - bq. I agree with you. But what is the harm in making the coreNodeName editable? If he is sure about what he is doing, it will work. If he is fiddling with stuff, any of those properties can screw up his system. I really can't see the difference between someone editing collection, shard, or coreNodeName It *is* currently editable. The way you did things, you would need it to not be editable. So no way it should be in solr.xml. You can configure a new core in solr.xml and set a shard id and a collection - but now you are going to say you can't set some of those settings (coreNodeName), we are just storing them there internally and it's not for you to preconfigure or edit? Now we have some special config in solr.xml that is not for users when everything else is? No way, -1. bq. If and when we are ready, how do you plan to make the switch? Can we introduce the switch right away, so that the users who want the new way can go that way. We will need to support the non-Collections API for some time. The Collections API is just going to become more capable - if it requires it, some of those capabilities will require turning on new options if you are using old config. It will all be pretty easy, other than when and if we drop the non-Collections API support entirely.
[jira] [Commented] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858968#comment-13858968 ] Noble Paul commented on SOLR-5580: -- bq. No...those are user configurable and should be...a user can always enter bad settings... I agree with you. But what is the harm in making the coreNodeName editable? If he is sure about what he is doing, it will work. If he is fiddling with stuff, any of those properties can screw up his system. I really can't see the difference between someone editing collection, shard, or coreNodeName. bq. No. That is the goal of finishing the Collections API. If and when we are ready, how do you plan to make the switch? Can we introduce the switch right away, so that the users who want the new way can go that way?
[jira] [Assigned] (SOLR-5581) Give ZkCLI the ability to get files
[ https://issues.apache.org/jira/browse/SOLR-5581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller reassigned SOLR-5581: - Assignee: Mark Miller > Give ZkCLI the ability to get files > --- > > Key: SOLR-5581 > URL: https://issues.apache.org/jira/browse/SOLR-5581 > Project: Solr > Issue Type: Improvement > Components: SolrCloud >Affects Versions: 5.0, 4.7 >Reporter: Gregory Chanan >Assignee: Mark Miller >Priority: Minor > Fix For: 5.0, 4.7 > > Attachments: SOLR-5581.patch > > > Today, the ZkCli has the ability to put files to Zk (via put or putfile), but > not get files. This would be useful for me along with SOLR-5556, i.e. I > could save the old solr.xml and replace it with a new one.
[jira] [Commented] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858965#comment-13858965 ] Mark Miller commented on SOLR-5580: --- bq. Isn't this same? No...those are user configurable and should be...a user can always enter bad settings... bq. But we would expect the data in ZK to be the truth, right? No. That is the goal of finishing the Collections API. For the non-Collections API, the truth is not currently ZK and it's not easy to make it so 100% - which is why I keep mentioning the Collections API ...
[jira] [Comment Edited] (SOLR-5311) Avoid registering replicas which are removed
[ https://issues.apache.org/jira/browse/SOLR-5311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858958#comment-13858958 ] Noble Paul edited comment on SOLR-5311 at 12/30/13 6:04 PM: I'm not sure I understand you; I'm missing something, I guess. This property is persisted to solr.xml or core.properties, and that is how it works. Without that it can't work. I made the change so that it is persisted to solr.xml/core.properties. > Avoid registering replicas which are removed > - > > Key: SOLR-5311 > URL: https://issues.apache.org/jira/browse/SOLR-5311 > Project: Solr > Issue Type: Improvement > Components: SolrCloud >Reporter: Noble Paul >Assignee: Noble Paul > Fix For: 4.6, 5.0 > > Attachments: SOLR-5311.patch, SOLR-5311.patch, SOLR-5311.patch, > SOLR-5311.patch, SOLR-5311.patch, SOLR-5311.patch, SOLR-5311.patch > > > If a replica is removed from the clusterstate and it comes back up, it > should not be allowed to register. > Each core, when it comes up, checks if it was already registered and, if yes, whether it is > still there. If not, throw an error and do an unregister. If such a > request comes to the overseer, it should ignore such a core
[jira] [Commented] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858956#comment-13858956 ] Noble Paul commented on SOLR-5580: -- bq. A setting in solr.xml is user configurable by definition. Yes, but that is the same for every other property - collection, shard, etc. Isn't this the same? Anyone editing those properties would end up screwing up the cluster itself. But we would expect the data in ZK to be the truth, right?
[jira] [Commented] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858947#comment-13858947 ] Mark Miller commented on SOLR-5580: --- A setting in solr.xml is user configurable by definition.
[jira] [Commented] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858945#comment-13858945 ] Noble Paul commented on SOLR-5580: --- I didn't quite get the solr.xml part. It is persisted, right? I was essentially trying to have two modes: one is a collection which got 'autoCreated', and the other is one which got created through the API. Are you saying we need a 3rd mode?
[jira] [Commented] (SOLR-5311) Avoid registering replicas which are removed
[ https://issues.apache.org/jira/browse/SOLR-5311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858944#comment-13858944 ] Mark Miller commented on SOLR-5311: --- And as I mention in SOLR-5580, your code actually relies on coreNodeName being a configurable setting in solr.xml! It wouldn't work without it. This just was not implemented correctly.

> Avoid registering replicas which are removed
> --------------------------------------------
>
>                 Key: SOLR-5311
>                 URL: https://issues.apache.org/jira/browse/SOLR-5311
>             Project: Solr
>          Issue Type: Improvement
>          Components: SolrCloud
>            Reporter: Noble Paul
>            Assignee: Noble Paul
>             Fix For: 4.6, 5.0
>
>         Attachments: SOLR-5311.patch, SOLR-5311.patch, SOLR-5311.patch,
>                      SOLR-5311.patch, SOLR-5311.patch, SOLR-5311.patch,
>                      SOLR-5311.patch
>
> If a replica is removed from the clusterstate and then comes back up, it
> should not be allowed to register. Each core, when it comes up, checks
> whether it was already registered and, if so, whether it is still there.
> If not, it throws an error and unregisters itself. If such a request
> comes to the Overseer, it should ignore the core.
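The rule quoted in the issue description can be sketched in a few lines. This is a hypothetical simplification of the idea only (the set names and method are invented for illustration, not Solr's actual registration code):

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of the SOLR-5311 rule: a replica that was registered before but has
// since been removed from the cluster state must not be allowed to re-register.
public class ReplicaRegistrationSketch {
    // Replica names currently present in the (simplified) cluster state.
    static final Set<String> clusterState = new HashSet<>();
    // Replica names this node remembers having registered before.
    static final Set<String> previouslyRegistered = new HashSet<>();

    static boolean mayRegister(String coreNodeName) {
        // First-time registration is always allowed.
        if (!previouslyRegistered.contains(coreNodeName)) {
            return true;
        }
        // Re-registration is allowed only while the replica still exists in
        // the cluster state; otherwise it was explicitly removed.
        return clusterState.contains(coreNodeName);
    }

    public static void main(String[] args) {
        previouslyRegistered.add("core_node1");
        clusterState.add("core_node1");
        System.out.println(mayRegister("core_node1")); // prints "true"
        previouslyRegistered.add("core_node2"); // registered once, then removed
        System.out.println(mayRegister("core_node2")); // prints "false"
    }
}
```

The thread's dispute is exactly about where `previouslyRegistered` should live: the reverted implementation persisted it via coreNodeName in solr.xml, a user-editable location.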
[jira] [Commented] (SOLR-5311) Avoid registering replicas which are removed
[ https://issues.apache.org/jira/browse/SOLR-5311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858940#comment-13858940 ] Mark Miller commented on SOLR-5311: --- It's in CHANGES, it's been noted in user list emails, it had a JIRA issue - it's a fully supported feature. Features are not determined by what has been documented or not. Take a look at the code and the test code - this is an explicit, supported, released feature.
[jira] [Comment Edited] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858937#comment-13858937 ] Mark Miller edited comment on SOLR-5580 at 12/30/13 5:23 PM: --- bq. was down was to avoid having another replica

Yes, I know why you did it; I'm saying there are many problems with how you went about it. The entire reliance on the coreNodeName is incorrect. Like I said, even if you had said users can't specify it and ignored back compat, it can't be in solr.xml then. Your code only worked because it is a user setting that is persisted in solr.xml. Your goal is fine; the implementation is all wrong. While it could be corrected, I think it's much better to push on the Collections API, rather than complicate what is now a simple mode that will eventually either become second class or be removed. We should not spend a lot of time making it do what it was not designed for from the start. The plan has always been the Collections API for this type of behavior.
[jira] [Commented] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858937#comment-13858937 ] Mark Miller commented on SOLR-5580: --- bq. was down was to avoid having another replica

Yes, I know why you did it; I'm saying there are many problems with how you went about it. The entire reliance on the coreNodeName is incorrect. Like I said, even if you had said users can't specify it and ignored back compat, it can't be in solr.xml then. Your code only worked because it is a user setting that is persisted in solr.xml. Your goal is fine; the implementation is all wrong. While it could be corrected, I think it's much better to push on the Collections API, rather than complicate what is now a simple mode that will eventually either become second class or be removed. We should not spend a lot of time making it do what it was not designed for from the start. The plan has always been the Collections API for this type of behavior.
[jira] [Commented] (SOLR-5311) Avoid registering replicas which are removed
[ https://issues.apache.org/jira/browse/SOLR-5311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858935#comment-13858935 ] Noble Paul commented on SOLR-5311: --- Actually it was an undocumented feature. https://cwiki.apache.org/confluence/display/solr/CoreAdminHandler+Parameters+and+Usage
[jira] [Comment Edited] (SOLR-5311) Avoid registering replicas which are removed
[ https://issues.apache.org/jira/browse/SOLR-5311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858935#comment-13858935 ] Noble Paul edited comment on SOLR-5311 at 12/30/13 5:18 PM: --- Actually it was an undocumented feature. https://cwiki.apache.org/confluence/display/solr/CoreAdminHandler+Parameters+and+Usage#CoreAdminHandlerParametersandUsage-{{CREATE}}
[jira] [Commented] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858906#comment-13858906 ] Noble Paul commented on SOLR-5580: --- bq. Register the core. If you don't use the Collections API, the behavior is simple and straightforward.

The problem is, the deletereplica did not help. The Collections API should get more importance than cores going up and down. The reason why I called deletereplica when the core was down was to avoid having another replica (and to clean up the clusterstate). One of my purposes is defeated.

bq. For a much better experience, we should finish the collections api

I completely agree with you. We are pursuing them one by one. One day I want the Collections API to be the definitive way to achieve almost anything on SolrCloud, so I want Collections API operations to take precedence over others. I think the user has a problem because we didn't document this new behavior; mea culpa.
[jira] [Commented] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858902#comment-13858902 ] Mark Miller commented on SOLR-5580: --- bq. The problem is that, if I removed a replica from the clusterstate and then the core came up, what is the desired behavior? register the core or unload the core?

Register the core. If you don't use the Collections API, the behavior is simple and straightforward. For a much better experience, we should finish the collections api, so we can deprecate dealing with individual cores.

bq. The use case is, a node went down and I don't need to replace it with another node because I have enough replicas. Now I need to clean up the clusterstate. Currently there is no way to achieve it

That's why SOLR-5310 still makes sense and should still work fine for this case...
[jira] [Commented] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858901#comment-13858901 ] Mark Miller commented on SOLR-5580: --- If you do want to create some marker that gets saved out so that a SolrCore can track whether it had been removed or not, you would need to put it in a spot that is not a user-editable param... The only reason the previous scheme worked at all is because the coreNodeName is user editable and is saved out to solr.xml - into a user-overrideable field. You would need to save that information to a system-only storage location.
[jira] [Comment Edited] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858900#comment-13858900 ] Noble Paul edited comment on SOLR-5580 at 12/30/13 4:17 PM: --- bq. It's just a matter of opinion

Yes, you are right. The point is, I think people don't really have to think that replicas have a name; they just need to have enough replicas for a given slice.

bq. I don't believe that - it should be fine to still have a command that removes a replica from the clusterstate.

The problem is that, if I removed a replica from the clusterstate and then the core came up, what is the desired behavior? Register the core or unload the core?

bq. Then perhaps they should not be implemented and this energy is better spent working towards a fully functional Collections API.

SOLR-5310 is a step towards a fully functional Collections API. The use case is: a node went down and I don't need to replace it with another node because I have enough replicas. Now I need to clean up the clusterstate. Currently there is no way to achieve it.

bq. uh...yes it is used...

I'm sorry, I meant it is not used anywhere BY THE USER. My intent was not to break backcompat, but it happened because I didn't know this particular use case. Let us see what is the best solution for this. Let us answer a few questions to ourselves:
* If we are designing the system today, which way would we choose? A deletereplica API, or a create core API to 'replace' a core? So what is the way forward?
* Implement a deletereplica API, but make the clusterstate slightly ugly for backcompat.
[jira] [Commented] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858900#comment-13858900 ] Noble Paul commented on SOLR-5580: --- bq. It's just a matter of opinion

Yes, you are right. The point is, I think people don't really have to think that replicas have a name; they just need to have enough replicas for a given slice.

bq. I don't believe that - it should be fine to still have a command that removes a replica from the clusterstate.

The problem is that, if I removed a replica from the clusterstate and then the core came up, what is the desired behavior? Register the core or unload the core?

bq. Then perhaps they should not be implemented and this energy is better spent working towards a fully functional Collections API.

SOLR-5310 is a step towards a fully functional Collections API. The use case is: a node went down and I don't need to replace it with another node because I have enough replicas. Now I need to clean up the clusterstate. Currently there is no way to achieve it.

bq. uh...yes it is used...

I'm sorry, I meant it is not used anywhere BY THE USER. My intent was not to break backcompat, but it happened because I didn't know this particular use case. Let us see what is the best solution for this. Let us answer a few questions to ourselves:
* If we are designing the system today, which way would we choose? A deletereplica API, or a create core API to 'replace' a core?
* Implement a deletereplica API, but make the clusterstate slightly ugly for backward compatibility.
[jira] [Commented] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1385#comment-1385 ] Mark Miller commented on SOLR-5580: --- I also think the approach is not correct in general - even if you didn't allow a user to specify the coreNodeName, you can't 100% safely use that information to determine if a core should exist or not. The correct approach is to finish the Collections API, which can know if a collection should exist or not.
[jira] [Commented] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858881#comment-13858881 ] Mark Miller commented on SOLR-5580:
---
bq. Do we need a way to 'replace' a replica.
It's just a matter of opinion whether it's a hack to have to remove a replica and then add it, or to let something take over for it. Either method should reasonably work if you want it to. In the longer term, the Collections API should be responsible for all of this stuff, and eventually we won't necessarily support manual core manipulation. Until then, I think this is a good feature.
bq. SOLR-5310 and SOLR-5311 both must be removed together. you can't remove one and leave the other one
I don't believe that - it should be fine to still have a command that removes a replica from the clusterstate.
bq. I don't see a clean way to implement SOLR-5310 and SOLR-5311 without making the clusterstate ugly
Then perhaps they should not be implemented, and this energy is better spent working towards a fully functional Collections API.
bq. The coreNodeName is not used anywhere , so is it important (or even desirable ) to have custom coreNodeName ?
uh...yes, it is used...
[jira] [Commented] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858877#comment-13858877 ] Noble Paul commented on SOLR-5580:
---
I didn't know people were hacking the system this way; I thought nobody used it. Having said that, let us see what the solution is:
* Do we need a way to 'replace' a replica? It is not really replacing anything, just creating a new node. It was a hack for not having the ability to delete replicas from the clusterstate.
* I think the proper way is to create new replicas; users can then choose to clean up old ones using the API.
* SOLR-5310 and SOLR-5311 both must be removed together. You can't remove one and leave the other.
* I don't see a clean way to implement SOLR-5310 and SOLR-5311 without making the clusterstate ugly.
* The coreNodeName is not used anywhere, so is it important (or even desirable) to have a custom coreNodeName?
[jira] [Commented] (SOLR-5311) Avoid registering replicas which are removed
[ https://issues.apache.org/jira/browse/SOLR-5311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858872#comment-13858872 ] Mark Miller commented on SOLR-5311:
---
bq. People managing the clusterstate explicitly is not really a requirement.
Dude, that means nothing in this context. It's a released, supported feature; right now it *is* a requirement.

> Avoid registering replicas which are removed
> -
>
> Key: SOLR-5311
> URL: https://issues.apache.org/jira/browse/SOLR-5311
> Project: Solr
> Issue Type: Improvement
> Components: SolrCloud
> Reporter: Noble Paul
> Assignee: Noble Paul
> Fix For: 4.6, 5.0
>
> Attachments: SOLR-5311.patch, SOLR-5311.patch, SOLR-5311.patch, SOLR-5311.patch, SOLR-5311.patch, SOLR-5311.patch, SOLR-5311.patch
>
> If a replica is removed from the clusterstate and then comes back up, it should not be allowed to register.
> Each core, when it comes up, checks whether it was already registered and, if so, whether it is still there. If not, it throws an error and unregisters. If such a request comes to the overseer, it should ignore that core.
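The startup check that SOLR-5311 describes can be sketched as plain decision logic. The method and parameter names below are hypothetical (the real implementation lives in Solr's ZkController/Overseer code, which is not shown here); this only illustrates the rule "previously registered but now absent from the clusterstate means refuse to register":

```java
import java.util.HashSet;
import java.util.Set;

public class RegistrationCheck {
    /**
     * Decide whether a core coming up may register.
     *
     * @param wasRegistered the core believes it registered before
     * @param liveCoreNodes coreNodeNames currently present in the clusterstate
     * @param coreNodeName  this core's coreNodeName
     */
    static boolean mayRegister(boolean wasRegistered, Set<String> liveCoreNodes,
                               String coreNodeName) {
        // Never registered before: fine, this is a fresh core.
        if (!wasRegistered) return true;
        // Registered before but since removed from the clusterstate: refuse.
        return liveCoreNodes.contains(coreNodeName);
    }

    public static void main(String[] args) {
        Set<String> state = new HashSet<>();
        state.add("core_node1");
        System.out.println(mayRegister(true, state, "core_node1"));  // still present: allowed
        System.out.println(mayRegister(true, state, "core_node2"));  // was removed: refused
        System.out.println(mayRegister(false, state, "core_node2")); // brand new core: allowed
    }
}
```

Mark's objection in the surrounding comments is precisely that the second branch is unreliable when users assign coreNodeNames themselves: absence from the clusterstate does not prove the core was deliberately removed.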
[jira] [Comment Edited] (SOLR-5311) Avoid registering replicas which are removed
[ https://issues.apache.org/jira/browse/SOLR-5311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858870#comment-13858870 ] Noble Paul edited comment on SOLR-5311 at 12/30/13 3:30 PM:
People managing the clusterstate explicitly is not really a requirement. They just need to create cores and the system should automatically assign coreNodeName.
[jira] [Commented] (SOLR-5311) Avoid registering replicas which are removed
[ https://issues.apache.org/jira/browse/SOLR-5311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858870#comment-13858870 ] Noble Paul commented on SOLR-5311:
---
People managing the clusterstate explicitly is not really a requirement. They just need to create cores and the system should automatically assign coreNodeName.
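The automatic assignment Noble describes can be sketched in a few lines. The `core_nodeN` naming matches the names Solr generates, but the counter below is purely illustrative; the real assignment happens inside the Overseer against the clusterstate, not via a local counter:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch: hand out sequential coreNodeNames (core_node1, core_node2, ...).
public class CoreNodeNames {
    private final AtomicInteger counter = new AtomicInteger();

    String next() {
        return "core_node" + counter.incrementAndGet();
    }

    public static void main(String[] args) {
        CoreNodeNames names = new CoreNodeNames();
        System.out.println(names.next()); // core_node1
        System.out.println(names.next()); // core_node2
    }
}
```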
[jira] [Commented] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858868#comment-13858868 ] Mark Miller commented on SOLR-5580:
---
You can currently accomplish two things with this feature:
1. coreNodeNames that suit your taste rather than the generic ones we make up. Not very important, but something users can already be doing.
2. coreNodeName is the identity in the clusterstate - so you can make a new SolrCore take over for an existing state, like I described above.
If we need to do something that conflicts with this feature, we either need to write code to let both things coexist, or we need to deprecate it and wait until 5.0 to remove it.
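The second point - coreNodeName as the identity in the clusterstate - can be illustrated with a map keyed by coreNodeName. This is a simplified model, not Solr's actual data structures: registering a new core under an existing coreNodeName replaces the dead replica's entry instead of adding a second one.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ReplicaTakeover {
    // A slice of the clusterstate modeled as coreNodeName -> node address.
    private final Map<String, String> replicas = new LinkedHashMap<>();

    // Registering under an existing coreNodeName takes over that identity.
    void register(String coreNodeName, String nodeAddress) {
        replicas.put(coreNodeName, nodeAddress);
    }

    int size() { return replicas.size(); }
    String nodeOf(String coreNodeName) { return replicas.get(coreNodeName); }

    public static void main(String[] args) {
        ReplicaTakeover slice = new ReplicaTakeover();
        slice.register("core_node1", "host-a:8983");
        slice.register("core_node2", "host-b:8983"); // host-b later blows up
        // A replacement machine registers with the SAME coreNodeName:
        slice.register("core_node2", "host-c:8983");
        System.out.println(slice.size());               // 2 -- no orphaned replica
        System.out.println(slice.nodeOf("core_node2")); // host-c:8983
    }
}
```

This is the takeover scenario from the thread: without identity-based registration, the dead replica would linger in the state forever or have to be removed by hand.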
[jira] [Commented] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858864#comment-13858864 ] Noble Paul commented on SOLR-5580:
---
Mark, can you explain the difference between creating a core with the combination of collection/slice versus collection/slice/coreNodeName? What is the difference in behavior?
[jira] [Commented] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858863#comment-13858863 ] Mark Miller commented on SOLR-5580:
---
If you have 2 replicas and one machine blows up, you now have two replicas registered and one running. If you buy a new machine, you can tell it to take over for the machine that blew up, rather than having a replica in the state that will never come back or having to remove it manually.
This is a feature that cannot simply be removed unceremoniously in a point release. User-specified coreNodeNames are a current, supported feature...
[jira] [Commented] (SOLR-5311) Avoid registering replicas which are removed
[ https://issues.apache.org/jira/browse/SOLR-5311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858860#comment-13858860 ] Mark Miller commented on SOLR-5311:
---
You can't count on the coreNodeName to determine whether a core was removed or not. The whole thing is much trickier than this anyway - when cores are controlled by the user, you can't yet tell what should exist, just what states are published. Doing something better is more difficult than what is done with this patch.
[jira] [Commented] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858858#comment-13858858 ] Noble Paul commented on SOLR-5580:
---
I don't understand where coreNodeName is used. If a new core is created with the same collection/slice it will join that slice; the coreNodeName is not used internally.
[jira] [Reopened] (SOLR-5311) Avoid registering replicas which are removed
[ https://issues.apache.org/jira/browse/SOLR-5311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller reopened SOLR-5311:
---
[jira] [Commented] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858857#comment-13858857 ] Mark Miller commented on SOLR-5580:
---
SOLR-5311 is the one to reopen.
[jira] [Commented] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858854#comment-13858854 ] Mark Miller commented on SOLR-5580:
---
1. Even if we need to eliminate this, you cannot just eliminate a large feature by introducing buggy code that doesn't work with it!
2. -1 on eliminating this! Custom coreNodeName is an explicit and important feature! This is how users can have a SolrCore take over for a replica that has gone away, or move it to a new machine.
The feature you added does not make sense with the current system. I suggest trying to implement something else correctly in another issue, but as it is, it's just one big bug with the current system design.
[jira] [Commented] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858852#comment-13858852 ] Noble Paul commented on SOLR-5580:
---
We need to eliminate this. Otherwise there is no other way to implement SOLR-5311. Why do we need this particular use case?
[jira] [Commented] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858850#comment-13858850 ] Mark Miller commented on SOLR-5580:
---
If you want to try and introduce this feature, go back to the original issue. But please don't break this yet again. You can't do what you did, not even close.
[jira] [Resolved] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller resolved SOLR-5580.
---
Resolution: Fixed
Assignee: Mark Miller (was: Noble Paul)
[jira] [Assigned] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul reassigned SOLR-5580: Assignee: Noble Paul (was: Mark Miller)
[jira] [Commented] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858844#comment-13858844 ] Mark Miller commented on SOLR-5580: --- bq. But why would anyone create a core with explicit coreNodeName It's a supported feature and you can't just eliminate it.
[jira] [Comment Edited] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858841#comment-13858841 ] Noble Paul edited comment on SOLR-5580 at 12/30/13 2:51 PM: Yes, this is expected to fail. But why would anyone create a core with an explicit coreNodeName? I wanted that case to be eliminated. The idea is to only create the coreNodeName at the Overseer. was (Author: noble.paul): Yes, this is expected to fail. But why would anyone create a core with an explicit coreNodeName? I wanted that case to be eliminated.
[jira] [Commented] (SOLR-5580) NPE when create a core with both explicite shard and coreNodeName
[ https://issues.apache.org/jira/browse/SOLR-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858841#comment-13858841 ] Noble Paul commented on SOLR-5580: -- Yes, this is expected to fail. But why would anyone create a core with an explicit coreNodeName? I wanted that case to be eliminated.
[jira] [Commented] (SOLR-5591) SolrJ should use multipart forms for Solr Cloud
[ https://issues.apache.org/jira/browse/SOLR-5591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858814#comment-13858814 ] Uwe Schindler commented on SOLR-5591: - Hi. I am on mobile only. Please look in CHANGES.txt; I think it changed during this year. But the solrconfig defaults for the formdata and multipart limits are different, too. And the web server is now out of the picture. > SolrJ should use multipart forms for Solr Cloud > --- > > Key: SOLR-5591 > URL: https://issues.apache.org/jira/browse/SOLR-5591 > Project: Solr > Issue Type: Improvement > Affects Versions: 4.5.1 > Reporter: Karl Wright > > Once SOLR-5590 is fixed, and the proper content-encoding is specified in SolrJ's HttpSolrServer class, SolrJ should completely support multipart forms once more. When that is done, SolrCloud should also use multipart forms, since otherwise the HTTP GET/POST headers may exceed web server limits. See CONNECTORS-839 for a description of the issue. (Once again, the ManifoldCF project overrode SolrJ classes to make the right thing happen, but we'd like to remove our hack eventually.)
[jira] [Commented] (SOLR-5591) SolrJ should use multipart forms for Solr Cloud
[ https://issues.apache.org/jira/browse/SOLR-5591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858810#comment-13858810 ] Karl Wright commented on SOLR-5591: --- Hi Uwe, Solr behavior differs when multipart post is used vs. other methods, specifically when Solr Cell is in use. If this could be resolved so that there were indeed no differences, I would be happy to use multipart only conditionally. See CONNECTORS-623.
[jira] [Commented] (SOLR-5591) SolrJ should use multipart forms for Solr Cloud
[ https://issues.apache.org/jira/browse/SOLR-5591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858806#comment-13858806 ] Uwe Schindler commented on SOLR-5591: - In addition, multipart parsing has a temporary-file overhead, so I would only use it for huge POST data. But take care: it is also limited by default.
[jira] [Commented] (SOLR-5591) SolrJ should use multipart forms for Solr Cloud
[ https://issues.apache.org/jira/browse/SOLR-5591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858805#comment-13858805 ] Karl Wright commented on SOLR-5591: --- Hi Uwe, Thanks for the clarification. When did this change? I'd like to make sure the person who filed CONNECTORS-839 is aware of it.
[jira] [Commented] (SOLR-5591) SolrJ should use multipart forms for Solr Cloud
[ https://issues.apache.org/jira/browse/SOLR-5591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858798#comment-13858798 ] Uwe Schindler commented on SOLR-5591: - This is no longer a web server limitation. Solr completely parses the POST content on its own; the multipart and formdata limits are solely solrconfig settings. The servlet container no longer imposes any limits; Solr just parses the input stream.
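The limits Uwe refers to are the request-parser settings in the requestDispatcher section of solrconfig.xml. A typical fragment looks like the following; the values shown here are illustrative, not recommendations:

```xml
<requestDispatcher handleSelect="false">
  <!-- Both limits are enforced by Solr itself, not by the servlet container. -->
  <requestParsers enableRemoteStreaming="false"
                  multipartUploadLimitInKB="2048"
                  formdataUploadLimitInKB="2048"/>
</requestDispatcher>
```

Because Solr parses the POST body itself, raising either limit is a Solr configuration change only; no container tuning is involved.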
[jira] [Created] (SOLR-5591) SolrJ should use multipart forms for Solr Cloud
Karl Wright created SOLR-5591: - Summary: SolrJ should use multipart forms for Solr Cloud Key: SOLR-5591 URL: https://issues.apache.org/jira/browse/SOLR-5591 Project: Solr Issue Type: Improvement Affects Versions: 4.5.1 Reporter: Karl Wright Once SOLR-5590 is fixed, and the proper content-encoding is specified in SolrJ's HttpSolrServer class, SolrJ should completely support multipart forms once more. When that is done, SolrCloud should also use multipart forms, since otherwise the http GET/POST headers may exceed web server limits. See CONNECTORS-839 for a description of the issue. (Once again, the ManifoldCF project overrode SolrJ classes to make the right thing happen, but we'd like to remove our hack eventually.)
[jira] [Commented] (LUCENE-5251) New Dictionary Implementation for Suggester consumption
[ https://issues.apache.org/jira/browse/LUCENE-5251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858785#comment-13858785 ] ASF subversion and git services commented on LUCENE-5251: - Commit 1554207 from [~mikemccand] in branch 'dev/branches/lucene5376' [ https://svn.apache.org/r1554207 ] LUCENE-5376, LUCENE-5251: expose DocumentDictionary (to build suggestor from stored documents) in demo server > New Dictionary Implementation for Suggester consumption > --- > > Key: LUCENE-5251 > URL: https://issues.apache.org/jira/browse/LUCENE-5251 > Project: Lucene - Core > Issue Type: New Feature > Components: core/search > Reporter: Areek Zillur > Fix For: 4.6, 5.0 > > Attachments: LUCENE-5251.patch, LUCENE-5251.patch, LUCENE-5251.patch, LUCENE-5251.patch > > > With the vast array of new suggesters, it would be nice to have a dictionary implementation that could feed the suggesters terms, weights and (optionally) payloads from the Lucene index. > The idea of this dictionary implementation is to grab stored documents from the index and use user-configured fields for terms, weights and payloads. > Use case: if you have a document with three fields > - product_id > - product_name > - product_popularity_score > then this implementation would let you build a suggester for product_name, weighted by product_popularity_score, that returns product_id as the payload, with which you can do further processing (for example, construct a URL).
[jira] [Commented] (LUCENE-5376) Add a demo search server
[ https://issues.apache.org/jira/browse/LUCENE-5376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858783#comment-13858783 ] ASF subversion and git services commented on LUCENE-5376: - Commit 1554207 from [~mikemccand] in branch 'dev/branches/lucene5376' [ https://svn.apache.org/r1554207 ] LUCENE-5376, LUCENE-5251: expose DocumentDictionary (to build suggestor from stored documents) in demo server > Add a demo search server > > > Key: LUCENE-5376 > URL: https://issues.apache.org/jira/browse/LUCENE-5376 > Project: Lucene - Core > Issue Type: Improvement > Reporter: Michael McCandless > Assignee: Michael McCandless > Attachments: lucene-demo-server.tgz > > > I think it'd be useful to have a "demo" search server for Lucene. > Rather than being fully featured, like Solr, it would be minimal, just wrapping the existing Lucene modules to show how you can make use of these features in a server setting. > The purpose is to demonstrate how one can build a minimal search server on top of APIs like SearcherManager, SearcherLifetimeManager, etc. > This is also useful for finding rough edges / issues in Lucene's APIs that make building a server unnecessarily hard. > I don't think it should have back compatibility promises (except Lucene's index back compatibility), so it's free to improve as Lucene's APIs change. > As a starting point, I'll post what I built for the "eating your own dog food" search app for Lucene's & Solr's jira issues http://jirasearch.mikemccandless.com (blog: http://blog.mikemccandless.com/2013/05/eating-dog-food-with-lucene.html ). It uses Netty to expose basic indexing & searching APIs via JSON, but it's very rough (lots of nocommits).
[jira] [Created] (SOLR-5590) SolrJ is still on httpcomponents/httpclient version 4.2.x, which has some problems
Karl Wright created SOLR-5590: - Summary: SolrJ is still on httpcomponents/httpclient version 4.2.x, which has some problems Key: SOLR-5590 URL: https://issues.apache.org/jira/browse/SOLR-5590 Project: Solr Issue Type: Improvement Affects Versions: 4.5.1 Reporter: Karl Wright SolrJ depends on HttpClient 4.2.x right now, but HttpClient 4.2.x has issues that the ManifoldCF team encountered with handling of form data encoding - issues which are addressed in HttpClient 4.3.x. We developed a local patch, but Solr will eventually need to go to the new client. (ManifoldCF would plan to follow shortly thereafter). I tried to get Oleg (PMC chair of HttpComponents) to agree to port the fixed code to the 4.2.x stream but he did not want to do that. So I believe that that avenue is closed. See CONNECTORS-623 for a detailed description of the problem.
[jira] [Updated] (SOLR-5589) Disabled replication in config is ignored
[ https://issues.apache.org/jira/browse/SOLR-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] alexey updated SOLR-5589: - Attachment: SOLR-5589.patch > Disabled replication in config is ignored > - > > Key: SOLR-5589 > URL: https://issues.apache.org/jira/browse/SOLR-5589 > Project: Solr > Issue Type: Bug > Components: replication (java) > Affects Versions: 4.5 > Reporter: alexey > Fix For: 4.6 > > Attachments: SOLR-5589.patch > > > When replication on the master node is explicitly disabled in the config, it is still enabled after startup. This is because when both the master and slave configurations are written with enabled=false, the replication handler considers the node a master and enables it. With the proposed patch the handler will still consider this a master node, but will disable replication on startup if it is disabled in the config (equivalent to the disablereplication command).
[jira] [Created] (SOLR-5589) Disabled replication in config is ignored
alexey created SOLR-5589: Summary: Disabled replication in config is ignored Key: SOLR-5589 URL: https://issues.apache.org/jira/browse/SOLR-5589 Project: Solr Issue Type: Bug Components: replication (java) Affects Versions: 4.5 Reporter: alexey Fix For: 4.6
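For context, a replication handler configured the way the report describes (both sections present but explicitly disabled) would look roughly like this in solrconfig.xml. This is a sketch; the masterUrl host and core name are placeholders:

```xml
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <!-- Explicitly disabled, yet without the patch the node still
         comes up with replication enabled. -->
    <str name="enable">false</str>
    <str name="replicateAfter">commit</str>
  </lst>
  <lst name="slave">
    <str name="enable">false</str>
    <str name="masterUrl">http://master-host:8983/solr/collection1</str>
  </lst>
</requestHandler>
```

The patch makes startup honor enable=false on the master section, equivalent to issuing the disablereplication command by hand after startup.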
[jira] [Updated] (SOLR-5476) Overseer Role for nodes
[ https://issues.apache.org/jira/browse/SOLR-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul updated SOLR-5476: - Description: In a very large cluster the Overseer is likely to be overloaded. If the same node is also serving a few other shards, the Overseer can get slowed down by GC pauses or simply by too much work. If the cluster is really large, it is possible to dedicate high-end hardware to Overseers. It works as a new collection admin command: command=addrole&role=overseer&node=192.168.1.5:8983_solr This results in the creation of an entry in /roles.json in ZK which would look like the following {code:javascript} { "overseer" : ["192.168.1.5:8983_solr"] } {code} If a node is designated for overseer it gets preference over others when an overseer election takes place. If no designated servers are available, another random node becomes the Overseer. Later on, if one of the designated nodes is brought up, it takes over the Overseer role from the current Overseer. was: In a very large cluster the Overseer is likely to be overloaded. If the same node is also serving a few other shards, the Overseer can get slowed down by GC pauses or simply by too much work. If the cluster is really large, it is possible to dedicate high-end hardware to Overseers. It works as a new collection admin command: command=assignRole&whitelist=overseer&node=192.168.1.5:8983_solr&node=192.168.1.6:8983_solr This results in the creation of an entry in /roles.json in ZK which would look like the following {code:javascript} { "overseer" : ["192.168.1.5:8983_solr", "192.168.1.6:8983_solr"] } {code} If a node is whitelisted for overseer it gets preference over others when an overseer election takes place. If no whitelisted servers are available, another random node becomes the Overseer. Later on, if one of the whitelisted nodes is brought up, it takes over the Overseer role from the current Overseer.
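Assuming the command is exposed through the usual collection admin endpoint, invoking it would look something like the following. The host, port and node name are placeholders, and the parameter names simply follow the description above:

```
curl "http://localhost:8983/solr/admin/collections?command=addrole&role=overseer&node=192.168.1.5:8983_solr"
```

On success the node name should then appear under the "overseer" key in /roles.json in ZooKeeper, as shown in the description.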
[jira] [Updated] (SOLR-5476) Overseer Role for nodes
[ https://issues.apache.org/jira/browse/SOLR-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul updated SOLR-5476: - Attachment: SOLR-5476.patch New commands implemented: addrole and removerole. The only supported role for now is overseer.
[jira] [Commented] (SOLR-5584) Update to Guava 15.0
[ https://issues.apache.org/jira/browse/SOLR-5584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858699#comment-13858699 ] wolfgang hoschek commented on SOLR-5584: What exactly is failing for you? morphlines was designed to run fine with any guava version >= 11.0.2. At least it did last I checked... > Update to Guava 15.0 > > > Key: SOLR-5584 > URL: https://issues.apache.org/jira/browse/SOLR-5584 > Project: Solr > Issue Type: Improvement > Reporter: Mark Miller > Assignee: Mark Miller > Priority: Minor > Fix For: 5.0, 4.7 >
[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1065: POMs out of sync
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1065/ 1 tests failed. REGRESSION: org.apache.solr.handler.TestReplicationHandler.doTestStressReplication Error Message: timed out waiting for collection1 startAt time to exceed: Mon Dec 30 09:48:41 CAT 2013 Stack Trace: java.lang.AssertionError: timed out waiting for collection1 startAt time to exceed: Mon Dec 30 09:48:41 CAT 2013 at __randomizedtesting.SeedInfo.seed([5031C97C7DD16415:8B9AC9BA78F90DA6]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.handler.TestReplicationHandler.watchCoreStartAt(TestReplicationHandler.java:1515) at org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:819) Build Log: [...truncated 52909 lines...] BUILD FAILED /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:476: The following error occurred while executing this line: /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:176: The following error occurred while executing this line: /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/extra-targets.xml:77: Java returned: 1 Total time: 137 minutes 14 seconds Build step 'Invoke Ant' marked build as failure Recording test results Email was triggered for: Failure Sending email for trigger: Failure