Re: [openstack-dev] [all][tc] Let's keep our community open, let's fight for it
Flavio Percoco wrote:

During the last two cycles, I've had the feeling that some of the things I love the most about this community are degrading and moving to a state that I personally disagree with. With the hope of seeing these things improve, I'm taking the time today to share one of my concerns. Since I believe we all work in good faith, and we *all* should assume such when it comes to things happening in our community, I won't name names and I won't point fingers - yes, I don't have enough fingers to point based on the info I have. People who fall into the groups I'll mention below know that I'm talking to them.

This email is dedicated to the openness of our community/project.

## Keep discussions open

I don't believe there's anything wrong with kicking off some discussions about specs/bugs in private channels. I don't believe there's anything wrong with having calls to speed up some discussions. HOWEVER, I believe it's *completely* wrong to consider those private discussions sufficient. If you have had that kind of private discussion - if you've discussed a spec privately and right afterwards went upstream and said "This has been discussed in a call and it's good to go" - I beg you to stop for 2 seconds and reconsider. I don't believe you were able to fit the whole community into that call, or that you had enough consensus.

Furthermore, you should consider that having private conversations doesn't, in the end, help with speeding up discussions. We have a community of people who *care* about the project they're working on. This means that whenever they see something that doesn't make much sense, they'll chime in and ask for clarification. If there was a private discussion on that topic, you'll have to provide the details of that discussion and bring that person up to date, which means the discussion will basically start again... from scratch.
+100

## Mailing List vs IRC Channel

I get it, our mailing list is freaking busy; keeping up with it is hard and time-consuming, and that leads to lots of IRC discussions. I don't think there's anything wrong with that, but I believe it's wrong to expect *EVERYONE* to be in the IRC channel when those discussions happen. If you are discussing something on IRC that requires the attention of most of your project's community, I highly recommend you use the mailing list as opposed to pinging everyone independently and fighting with time zones. Using IRC bouncers as a replacement for something that should go to the mailing list is absurd. Please, use the mailing list and don't be afraid of having a bigger community chiming in on your discussion. *THAT'S A GOOD THING* Changes, specs, APIs, etc. Everything is good for the mailing list. We've fought hard to make this community grow, why shouldn't we take advantage of it?

+1

## Cores are *NOT* special

At some point, for some reason that is unknown to me, this message changed and the feeling of cores being some kind of superheroes became a thing. It's gotten far enough that I've come to know that some projects even have private (flagged with +s), password-protected IRC channels for core reviewers.

If those exist, I think they should die in a fire. I'm fine with the TC passing a resolution so that such channels are opened. Private channels where real decisions are made are pretty contrary to our ideal of open development.

This is the point where my good-faith assumption skill falls short. Seriously, don't get me wrong, but: WHAT IN THE ACTUAL F**K? THERE IS ABSOLUTELY NOTHING PRIVATE FOR CORE REVIEWERS TO DISCUSS. If anything, core reviewers should be the ones *FORCING* - it seems that *encouraging* doesn't have the same effect anymore - *OPENNESS* in order to include other non-core members in those discussions.
Remember that the core flag is granted because of the reviews that person has provided and because that individual *WANTS* to be part of it. It's not a prize for people. In fact, I consider core reviewers to be volunteers, and their work is infinitely appreciated.

+1000

Core reviewing has always been designed to be a duty, not a badge. There has been a trend toward making it a badge, with some companies giving bonuses to core reviewers, and HP making +2 pins and throwing +2 parties. I think that's a significant mistake and I complained about it, but my influence only goes so far.

The problem with special rights (like +2) is that if you don't actively resist it, they naturally turn into an aristocracy (especially when only existing cores vote on new cores). That aristocracy then usually turns into a clique that excludes new blood and new opinions, and then that project slowly dies.

Thanks Flavio for this timely reminder.

--
Thierry Carrez (ttx)
Re: [openstack-dev] [blazar] current status
Hi,

I agree with Sylvain. Also, if you want to contribute to Blazar, ping me on IRC and I can tell you where to start with it.

Nikolay Starodubtsev
Software Engineer
Mirantis Inc.
Skype: dark_harlequine1

2015-02-11 11:18 GMT+03:00 Sylvain Bauza sba...@redhat.com:

On 11/02/2015 04:24, Jin, Yuntong wrote:

Hello, may I ask about the current status of the Blazar project? It's been very quiet there for the past half year; could part of the reason be related to the Gantt project? The way I see it, this project is very useful for the NFV use case, and I would really like to know its status, and maybe also the status of the Gantt project. Thanks

Hi,

Thanks for your interest in Blazar. The existing core team has been reallocated to various other projects, so we have not been making regular updates to the repository for around 6 months. That said, as it is an open-source project, anybody can contribute, and I would be glad to review some changes, provided they are not time-consuming.

The last discussion with TC members in Atlanta (at the Juno summit) showed that there are benefits to having a reservation system in OpenStack, but the thought was that it would probably be something related to the Compute program, i.e. something that Nova could leverage. As the current Nova scheduler is about to be spun out into a separate project called Gantt, I'm thinking (IMHO, and that's my sole opinion) that Blazar could maybe merge with Gantt, so that the existing backend would allow new APIs for Gantt by asking to select a destination later in time than now.

That said, Gantt is far from being a separate repository now, as we're struggling to reduce the technical debt in Nova before splitting out the scheduler, so I wouldn't expect any immediate benefit for Gantt or Blazar now. As many people are looking at Blazar and Gantt, I think it would be interesting to set up a BoF session during the Vancouver Summit about reservations and SLAs in OpenStack, so that we could see how to move forward.
-Sylvain

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [CINDER] discussion about Volume Delete API
In http://developer.openstack.org/api-ref-blockstorage-v2.html, the delete volume preconditions are:

Preconditions
· Volume status must be available, in-use, error, or error_restoring.

I think we should change this to the following statement:

Preconditions
· Volume status must be creating, available, error, or error_restoring. (add creating and remove in-use)

1) If a LUN is attached (in-use), we cannot delete the LUN.
2) Even if a LUN is still creating (hung in create), we may delete the LUN.
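The proposed change to the precondition list can be sketched as a simple status check. This is an illustrative snippet only, not actual Cinder code; the function and constant names are made up for the example.

```python
# Hypothetical sketch of the proposed delete precondition -- names are
# illustrative, not taken from the Cinder codebase.

# Proposal: 'creating' becomes deletable, 'in-use' is removed.
DELETABLE_STATUSES = {"creating", "available", "error", "error_restoring"}

def check_delete_preconditions(volume_status):
    """Return True if a volume in this status may be deleted.

    Under the proposal, a hung create ('creating') can be cleaned up,
    while an attached LUN ('in-use') is rejected.
    """
    return volume_status in DELETABLE_STATUSES

assert check_delete_preconditions("creating")    # newly deletable
assert not check_delete_preconditions("in-use")  # attached: refuse
```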
Re: [openstack-dev] [NOVA] security group fails to attach to an instance if port-id is specified during boot.
You could raise an exception if ports are specified for all NICs. [1] I'm not sure that logging this case is helpful, because only admins can access the logs. Probably a better way to warn the user is to do it on the client side, in the nova CLI (i.e. no modification of the Nova server is needed).

[1] This brings us back to Matt's original question. I suppose that most people who tried to specify security groups together with ports have already found that it doesn't work properly, and now they don't use them together. The others, who haven't found this out yet, have the error in their scripts (because they start instances without the expected groups). So raising an error in this case could be useful.

For my part, I used these parameters together (https://github.com/stackforge/ec2-api/blob/master/ec2api/api/instance.py#L770) as a workaround for a strange bug (https://bugs.launchpad.net/nova/+bug/1384347), which is no longer relevant since Juno. I believe OpenStack need not preserve compatibility for such workarounds. Finally, my only concern is the case with two NICs, mentioned at the beginning.

On Wed, Feb 11, 2015 at 10:39 AM, Oleg Bondarev obonda...@mirantis.com wrote:

On Tue, Feb 10, 2015 at 5:26 PM, Feodor Tersin fter...@cloudscaling.com wrote:

I definitely don't expect any change to the existing port in the case with two NICs. However, in the case of a single NIC, a question like "what is the impact of the security-groups parameter?" arises. A similar question arises from the '--nic port-id=xxx,v4-fixed-ip=yyy' combination. Moreover, if we assume that, for example, the security-groups parameter affects the specified port, the next question is "what is the resulting group set?". Does it replace the port's groups, or just update them? Thus I agree with you that this part of the Nova API is not clear now. But the case with two NICs makes sense, works now, and could be used by someone. Do you really want to break it?
I don't want to break anything :) I guess the only option then is to just log a warning that security groups are ignored when a port_id is provided on boot - but this still leaves a chance of broken user expectations.

On Tue, Feb 10, 2015 at 10:29 AM, Oleg Bondarev obonda...@mirantis.com wrote:

On Mon, Feb 9, 2015 at 8:50 PM, Feodor Tersin fter...@cloudscaling.com wrote:

nova boot ... --nic port-id=xxx --nic net-id=yyy

This case is valid, right? I.e. I want to boot an instance with two ports. The first port is specified, but the second one is created at the network mapping stage. If I specify a security group as well, it will be used for the second port (if not, the default group will be):

nova boot ... --nic port-id=xxx --nic net-id=yyy --security-groups sg-1

Thus a port and a security group can be specified together. The question here is what you expect for the existing port - are its security groups updated or not? Would it be OK to silently (or with a warning in the logs) ignore security groups for it? If so, is it also OK to do the same for:

nova boot ... --nic port-id=xxx --security-groups sg-1

where the intention is clear enough?

On Mon, Feb 9, 2015 at 7:14 PM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:

On 9/26/2014 3:19 AM, Christopher Yeoh wrote:

On Fri, 26 Sep 2014 11:25:49 +0400 Oleg Bondarev obonda...@mirantis.com wrote:

On Fri, Sep 26, 2014 at 3:30 AM, Day, Phil philip@hp.com wrote:

I think the expectation is that if a user is already interacting with Neutron to create ports, then they should do the security group assignment in Neutron as well.

Agree. However, what do you think a user expects when he/she boots a VM (no matter whether providing a port_id or just a net_id) and specifies security_groups? I think the expectation should be that the instance will become a member of the specified groups. Ignoring the security_groups parameter when a port is provided (as it is now) seems completely unfair to me.
One option would be to return a 400 if both port id and security_groups are supplied.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Coming back to this, we now have a change from Oleg [1] after an initial attempt that was reverted because it would break server creates if you specified a port (the original change would blow up when the compute API added the 'default' security group to the request). The new change doesn't add the 'default' security group to the request, so if you specify a security group and a port on the request, you'll now get a 400 error response. Does this break API compatibility? It seems this falls under the first bullet here [2]: "A change such that a request which was successful before now results in an error response (unless the success reported previously was hiding an existing error condition)." Does that caveat in parentheses make this OK? It seems like we've had a lot
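The 400-on-conflict option discussed in this thread can be sketched as a small request validation. This is an illustration only, not Nova's actual code path; the class and function names are invented for the example.

```python
# Illustrative sketch (not actual Nova code) of rejecting a boot
# request that supplies both an explicit port id and security groups.

class BadRequest(Exception):
    """Stand-in for an HTTP 400 error response."""

def validate_boot_request(requested_networks, security_groups):
    """requested_networks: list of dicts like {"port": ...} or {"net": ...}."""
    has_port = any("port" in net for net in requested_networks)
    if has_port and security_groups:
        raise BadRequest(
            "security groups cannot be applied when an existing "
            "port is supplied; set them on the port via Neutron")

# nova boot ... --nic port-id=xxx --security-groups sg-1  -> 400
try:
    validate_boot_request([{"port": "xxx"}], ["sg-1"])
    raise AssertionError("expected BadRequest")
except BadRequest:
    pass

# nova boot ... --nic net-id=yyy --security-groups sg-1   -> accepted,
# the group is applied to the port Nova creates on that network.
validate_boot_request([{"net": "yyy"}], ["sg-1"])
```

The open question from the thread remains: whether silently ignoring the groups (with a log warning) or failing fast with a 400 better matches user expectations.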
[openstack-dev] [all][tc] Let's keep our community open, let's fight for it
Greetings all,

During the last two cycles, I've had the feeling that some of the things I love the most about this community are degrading and moving to a state that I personally disagree with. With the hope of seeing these things improve, I'm taking the time today to share one of my concerns. Since I believe we all work in good faith, and we *all* should assume such when it comes to things happening in our community, I won't name names and I won't point fingers - yes, I don't have enough fingers to point based on the info I have. People who fall into the groups I'll mention below know that I'm talking to them.

This email is dedicated to the openness of our community/project.

## Keep discussions open

I don't believe there's anything wrong with kicking off some discussions about specs/bugs in private channels. I don't believe there's anything wrong with having calls to speed up some discussions. HOWEVER, I believe it's *completely* wrong to consider those private discussions sufficient. If you have had that kind of private discussion - if you've discussed a spec privately and right afterwards went upstream and said "This has been discussed in a call and it's good to go" - I beg you to stop for 2 seconds and reconsider. I don't believe you were able to fit the whole community into that call, or that you had enough consensus.

Furthermore, you should consider that having private conversations doesn't, in the end, help with speeding up discussions. We have a community of people who *care* about the project they're working on. This means that whenever they see something that doesn't make much sense, they'll chime in and ask for clarification. If there was a private discussion on that topic, you'll have to provide the details of that discussion and bring that person up to date, which means the discussion will basically start again... from scratch.
## Mailing List vs IRC Channel

I get it, our mailing list is freaking busy; keeping up with it is hard and time-consuming, and that leads to lots of IRC discussions. I don't think there's anything wrong with that, but I believe it's wrong to expect *EVERYONE* to be in the IRC channel when those discussions happen. If you are discussing something on IRC that requires the attention of most of your project's community, I highly recommend you use the mailing list as opposed to pinging everyone independently and fighting with time zones. Using IRC bouncers as a replacement for something that should go to the mailing list is absurd. Please, use the mailing list and don't be afraid of having a bigger community chiming in on your discussion. *THAT'S A GOOD THING* Changes, specs, APIs, etc. Everything is good for the mailing list. We've fought hard to make this community grow, why shouldn't we take advantage of it?

## Cores are *NOT* special

At some point, for some reason that is unknown to me, this message changed and the feeling of cores being some kind of superheroes became a thing. It's gotten far enough that I've come to know that some projects even have private (flagged with +s), password-protected IRC channels for core reviewers. This is the point where my good-faith assumption skill falls short. Seriously, don't get me wrong, but: WHAT IN THE ACTUAL F**K? THERE IS ABSOLUTELY NOTHING PRIVATE FOR CORE REVIEWERS TO DISCUSS. If anything, core reviewers should be the ones *FORCING* - it seems that *encouraging* doesn't have the same effect anymore - *OPENNESS* in order to include other non-core members in those discussions.

Remember that the core flag is granted because of the reviews that person has provided and because that individual *WANTS* to be part of it. It's not a prize for people. In fact, I consider core reviewers to be volunteers, and their work is infinitely appreciated.

Since "All generalizations are false, including this one."
- Mark Twain

I'm pretty sure there are folks who disagree with the above. If you do, I care about your thoughts. This is worth discussing and fighting for.

All that being said, I'd like to thank everyone who fights for the openness of our community, and to encourage everyone to make that a must-have in each sub-community. You don't need to be a core reviewer or PTL to do so. Speak up and help keep the community as open as possible.

Cheers,
Flavio

--
@flaper87
Flavio Percoco
Re: [openstack-dev] Cross-Repo Dependencies in Zuul
OH MY GOD! I'll throw a party! Thanks to everyone involved in making this happen.

Flavio

On 10/02/15 14:26 -0800, James E. Blair wrote:

Hi,

We have added support for cross-repo dependencies (CRD) in Zuul. The important bits:

* To use them, include Depends-On: gerrit-change-id in the footer of your commit message. Use the full Change-Id ('I' + 40 characters).
* These are one-way dependencies only -- do not create a cycle.
* This is what all the grey dots and lines are in the check pipeline.

Cross-Repo Dependencies Explained

There are two behaviors that might go by the name cross-repo dependencies. We call them one-way and multi-way.

Multi-way CRD allows for bidirectional links between changes. For instance, A depends on B and B depends on A. The theory there is that both would be tested together and merged as a unit. This is _not_ what we have implemented in Zuul. Discussions over the past two years have revealed that this type of behavior could cause problems for continuous deployments, as it means that two components must be upgraded simultaneously. Not supporting this behavior is a choice we have made.

One-way CRD behaves like a directed acyclic graph (DAG), like git itself, to indicate a one-way dependency relationship between changes in different git repos. Change A may depend on B, but B may not depend on A. This is what we have implemented in Zuul.

Gate Pipeline

When Zuul sees CRD changes, it serializes them in the usual manner when enqueuing them into a pipeline. This means that if change A depends on B, then when they are added to the gate pipeline, B will appear first and A will follow. If tests for B fail, both B and A will be removed from the pipeline, and it will not be possible for A to merge until B does.

Note that if changes with CRD do not share a change queue (such as the integrated gate), then Zuul is unable to enqueue them together, and the first will be required to merge before the second is enqueued.
Check Pipeline

When changes are enqueued into the check pipeline, all of the related dependencies (both normal git dependencies that come from parent commits, and CRD changes) appear in a dependency graph, as in the gate. This means that even in the check pipeline, your change will be tested with its dependencies. So changes that were previously unable to be fully tested until a related change landed in a different repo may now be tested together from the start.

All of the changes are still independent (so you will note that the whole pipeline does not share a graph as in the gate), but for each change tested, all of its dependencies are visually connected to it, and they are used to construct the git references that Zuul uses when testing.

When looking at this graph on the status page, you will note that the dependencies show up as grey dots, while the actual change tested shows up as red or green. This is to indicate that the grey changes are only there to establish dependencies. Even if one of the dependencies is also being tested, it will show up as a grey dot when used as a dependency, but will separately and additionally appear as its own red or green dot for its own test. (If you don't see grey dots on the status page, reload the page to get the latest version.)

Multiple Changes

A Gerrit change ID may refer to multiple changes (on multiple branches of the same project, or even in multiple projects). In these cases, Zuul will treat all of the changes with that change ID as dependencies. So if you say that a Tempest change Depends-On a change ID that has changes in nova master and nova stable/juno, then when testing the Tempest change, both nova changes will be applied, and when deciding whether the Tempest change can merge, both changes must merge ahead of it.

A change may depend on more than one Gerrit change ID as well. So it is possible for a change in tempest to depend on a change in devstack and a change in nova. Simply add more Depends-On: lines to the footer.
Cycles

If a cycle is created by use of CRD, Zuul will abort its work very early. There will be no message in Gerrit, and no changes that are part of the cycle will be enqueued into any pipeline. This is to protect Zuul from infinite loops. I hope that we can improve this to at least leave a message in Gerrit in the future, but in the meantime, please be cognizant of this and do not create dependency cycles with Depends-On lines.

Examples

The following two infra changes have been tested together because of the Depends-On: line in the commit message of the first:

https://review.openstack.org/#/c/152508/
https://review.openstack.org/#/c/152504/

In fact, you can see earlier test results failing until it was rechecked after CRD went into production (around 2015-02-10 15:20 UTC). This devstack change depended on a grenade change:

https://review.openstack.org/#/c/154575/
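The one-way DAG model and early-abort-on-cycle behavior described above can be sketched with a small depth-first traversal. This is an illustration of the concept only, not Zuul's actual implementation; the function name and data shapes are invented for the example.

```python
# Sketch of the one-way dependency model: Depends-On edges form a
# directed graph; dependencies must be enqueued (and merge) first, and
# any cycle aborts the whole operation. Not Zuul's real code.

def find_order(deps):
    """deps maps a change to the list of changes it Depends-On.

    Returns an enqueue order with dependencies first, or None if a
    dependency cycle exists (mirroring Zuul's early abort).
    """
    order, state = [], {}  # state: 1 = visiting, 2 = done

    def visit(change):
        if state.get(change) == 1:
            return False           # back-edge: dependency cycle
        if state.get(change) == 2:
            return True            # already placed in the order
        state[change] = 1
        for dep in deps.get(change, []):
            if not visit(dep):
                return False
        state[change] = 2
        order.append(change)
        return True

    for change in deps:
        if not visit(change):
            return None
    return order

# A depends on B: B is enqueued first and must merge ahead of A.
assert find_order({"A": ["B"], "B": []}) == ["B", "A"]
# A <-> B is a cycle, so nothing is enqueued at all.
assert find_order({"A": ["B"], "B": ["A"]}) is None
```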
Re: [openstack-dev] [nova] Flavor extra-spec and image metadata documentation
Pasquale Porreca,

The flexibility/freedom to create metadata tags for images and Nova flavor extra specs can be confusing. It even allows one to make typographical errors that may be hard to detect. As Daniel mentions, some tags have definite meaning/semantics, others can be totally random. Tags with semantic significance will typically be handled by special-purpose filters; look in the Nova filters directory, /opt/stack/nova/nova/scheduler/filters. The filters documentation in the Nova admin guide may help too. All the rest are just matched as strings. Hope that helps.

Regards
Malini

-----Original Message-----
From: Kashyap Chamarthy [mailto:kcham...@redhat.com]
Sent: Wednesday, February 11, 2015 2:14 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Flavor extra-spec and image metadata documentation

On Wed, Feb 11, 2015 at 10:23:54AM +0100, Pasquale Porreca wrote:

Hello, I am working on a little patch that introduces a new flavor extra-spec and image metadata key-value pair: https://review.openstack.org/#/c/153607/ I am wondering how an OpenStack admin can become aware that a specific value of a flavor extra-spec or image metadata key provides a feature he may desire. In other words, is there a place where the flavor extra-specs and/or image metadata key-value pairs are documented?

Unfortunately, there's none as of now. I found out the hard way that you cannot trivially find all possible 'extra_spec' key values that can be set by `nova flavor-key`. I did gross things like this:

$ grep hw\: nova/virt/hardware.py nova/tests/unit/virt/test_hardware.py | sort | uniq

And, obviously, the above will only find you 'hw' properties. Daniel Berrangé once suggested that image properties and flavor extra specs need to be 'objectified' to alleviate this.

I found plenty of documentation on how to list, create, delete, etc. flavor extra-specs and image metadata, but the only place where I found a list (is that complete?) of the accepted (i.e.
that trigger specific features in Nova) key-value pairs is in the Horizon dashboard, when logged in with admin credentials. I am a bit confused about how someone working to add a new key/value pair should proceed.

--
/kashyap
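Malini's point above is that only keys in known namespaces carry scheduler semantics, while everything else is plain string matching. A rough sketch of that partitioning follows; the namespace set and function names here are illustrative assumptions, not an authoritative list from Nova.

```python
# Hedged illustration of the extra-spec namespace convention discussed
# above: keys with a recognized prefix (e.g. "hw:") are interpreted by
# specific scheduler filters; anything else is free-form and matched as
# a string. The namespace list below is an example, NOT exhaustive.

KNOWN_NAMESPACES = {"hw", "capabilities", "aggregate_instance_extra_specs"}

def split_extra_specs(extra_specs):
    """Partition flavor extra specs into namespaced and free-form keys."""
    namespaced, freeform = {}, {}
    for key, value in extra_specs.items():
        prefix = key.split(":", 1)[0]
        target = namespaced if prefix in KNOWN_NAMESPACES else freeform
        target[key] = value
    return namespaced, freeform

specs = {"hw:cpu_policy": "dedicated", "my_random_tag": "foo"}
named, free = split_extra_specs(specs)
assert "hw:cpu_policy" in named   # handled by a filter with semantics
assert "my_random_tag" in free    # plain string matching, typo-prone
```

A check like this could also catch the hard-to-detect typos Malini mentions, by warning when a key looks almost like a known namespace.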
Re: [openstack-dev] [Keystone] Proposing Marek Denis for the Keystone Core Team
+1 Well done!

On Tue, Feb 10, 2015 at 09:51:14AM -0800, Morgan Fainberg wrote:

Hi everyone!

I wanted to propose Marek Denis (marekd on IRC) as a new member of the Keystone core team. Marek has been instrumental in the implementation of Federated Identity. His work on Keystone and first-hand knowledge of the issues with extremely large OpenStack deployments have been a significant asset to the development team. Not only is Marek a strong developer working on key features being introduced to Keystone, but he has continued to set a high bar for any code being introduced / proposed against Keystone. I know that the entire team really values Marek's opinion on what is going into Keystone.

Please respond with a +1 or -1 for adding Marek to the Keystone core team. This poll will remain open until Feb 13.

--
Morgan Fainberg

--
Eng. Marco Fargetta, PhD
Istituto Nazionale di Fisica Nucleare (INFN)
Catania, Italy
EMail: marco.farge...@ct.infn.it
Re: [openstack-dev] [Keystone] Proposing Marek Denis for the Keystone Core Team
+1

On 10 Feb 2015, at 18:04, Dolph Mathews dolph.math...@gmail.com wrote:

+1

On Tue, Feb 10, 2015 at 11:51 AM, Morgan Fainberg morgan.fainb...@gmail.com wrote:

Hi everyone!

I wanted to propose Marek Denis (marekd on IRC) as a new member of the Keystone core team. Marek has been instrumental in the implementation of Federated Identity. His work on Keystone and first-hand knowledge of the issues with extremely large OpenStack deployments have been a significant asset to the development team. Not only is Marek a strong developer working on key features being introduced to Keystone, but he has continued to set a high bar for any code being introduced / proposed against Keystone. I know that the entire team really values Marek's opinion on what is going into Keystone.

Please respond with a +1 or -1 for adding Marek to the Keystone core team. This poll will remain open until Feb 13.

--
Morgan Fainberg
Re: [openstack-dev] [blazar] current status
On 11/02/2015 04:24, Jin, Yuntong wrote:

Hello, may I ask about the current status of the Blazar project? It's been very quiet there for the past half year; could part of the reason be related to the Gantt project? The way I see it, this project is very useful for the NFV use case, and I would really like to know its status, and maybe also the status of the Gantt project. Thanks

Hi,

Thanks for your interest in Blazar. The existing core team has been reallocated to various other projects, so we have not been making regular updates to the repository for around 6 months. That said, as it is an open-source project, anybody can contribute, and I would be glad to review some changes, provided they are not time-consuming.

The last discussion with TC members in Atlanta (at the Juno summit) showed that there are benefits to having a reservation system in OpenStack, but the thought was that it would probably be something related to the Compute program, i.e. something that Nova could leverage. As the current Nova scheduler is about to be spun out into a separate project called Gantt, I'm thinking (IMHO, and that's my sole opinion) that Blazar could maybe merge with Gantt, so that the existing backend would allow new APIs for Gantt by asking to select a destination later in time than now.

That said, Gantt is far from being a separate repository now, as we're struggling to reduce the technical debt in Nova before splitting out the scheduler, so I wouldn't expect any immediate benefit for Gantt or Blazar now. As many people are looking at Blazar and Gantt, I think it would be interesting to set up a BoF session during the Vancouver Summit about reservations and SLAs in OpenStack, so that we could see how to move forward.
-Sylvain
Re: [openstack-dev] [CINDER] discussion about Volume Delete API
Depending on what progress the backend has made during the create, this can race with the create such that you end up with:

a) No volume on the backend, but a volume in Cinder (this is OK).
b) No volume on the backend, but the volume manager putting the volume in state 'available' (the delete flag would still be set, so this is OK).
c) A volume on the backend, but no volume in Cinder.

This third case is not OK, and probably leaves open feasible DoS attacks via filling up backends with orphan volumes, without taking up Cinder quota (and without appearing in Cinder at all, so being quite confusing for the admin).

On 11 February 2015 at 10:23, Guo, Ruijing ruijing@intel.com wrote:

In http://developer.openstack.org/api-ref-blockstorage-v2.html, the delete volume preconditions are:

Preconditions
· Volume status must be available, in-use, error, or error_restoring.

I think we may change this to the following statement:

Preconditions
· Volume status must be creating, available, error, or error_restoring. (add creating and remove in-use)

1) If a LUN is attached (in-use), we cannot delete the LUN.
2) Even if a LUN is still creating (hung in create), we may delete the LUN.

--
Duncan Thomas
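One way to reason about Duncan's race is that whichever of the two operations (create completion or delete) transitions the volume's status first must "win", so the other cannot resurrect or orphan the volume. The sketch below illustrates that with an atomic compare-and-set on the status; all names are assumptions for illustration, not Cinder's actual mechanism.

```python
# Sketch (assumed names, not Cinder code) of guarding the create/delete
# race with an atomic conditional status update: the delete only
# proceeds if the row is still in an expected status, so the backend
# create and the delete cannot both "own" the volume.

import threading

class Volume:
    def __init__(self):
        self.status = "creating"
        self._lock = threading.Lock()

    def compare_and_set(self, expected, new):
        """Atomically move status from `expected` to `new`."""
        with self._lock:
            if self.status == expected:
                self.status = new
                return True
            return False

v = Volume()
# A delete request arrives while the create is still running:
assert v.compare_and_set("creating", "deleting")
# The create finishing afterwards loses the race: it cannot flip the
# volume to "available", which is what would produce orphan case (c).
assert not v.compare_and_set("creating", "available")
assert v.status == "deleting"
```

The losing create path would then be responsible for cleaning up whatever the backend already allocated, which is exactly the part Duncan points out is hard to get right.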
[openstack-dev] [nova] Flavor extra-spec and image metadata documentation
Hello, I am working on a little patch that introduces a new flavor extra-spec and image metadata key-value pair: https://review.openstack.org/#/c/153607/ I am wondering how an openstack admin can be aware that a specific value of a flavor extra-spec or image metadata provides a feature he may desire; in other words, is there a place where the flavor extra-specs and/or image metadata key-value pairs are documented? I found plenty of documentation on how to list, create, delete, etc. flavor extra-specs and image metadata, but the only place where I found a list (is that complete?) of the accepted key-value pairs (i.e. those that trigger a specific feature in nova) is the horizon dashboard, when logged in with admin credentials. I am a bit confused about how someone working to add a new key/value pair should proceed.

-- Pasquale Porreca DEK Technologies Via dei Castelli Romani, 22 00040 Pomezia (Roma) Mobile +39 3394823805 Skype paskporr
Re: [openstack-dev] [all][tc] Lets keep our community open, lets fight for it
On Wed, Feb 11, 2015 at 10:55:18AM +0100, Flavio Percoco wrote: Greetings all, During the last two cycles, I've had the feeling that some of the things I love the most about this community are degrading and moving to a state that I personally disagree with. With the hope of seeing these things improve, I'm taking the time today to share one of my concerns. Since I believe we all work with good faith and we *all* should assume such when it comes to things happening in our community, I won't make names and I won't point fingers - yes, I don't have enough fingers to point based on the info I have. People that fall into the groups I'll mention below know that I'm talking to them. This email is dedicated to the openness of our community/project. ## Keep discussions open I don't believe there's anything wrong about kicking off some discussions in private channels about specs/bugs. I don't believe there's anything wrong in having calls to speed up some discussions. HOWEVER, I believe it's *completely* wrong to consider those private discussions sufficient. If you have had that kind of private discussions, if you've discussed a spec privately and right after you went upstream and said: This has been discussed in a call and it's good to go, I beg you to stop for 2 seconds and reconsider that. I don't believe you were able to fit all the community in that call and that you had enough consensus. With the timezones of our worldwide contributors it is pretty much guaranteed that any realtime phone call will have excluded a part of our community. Furthermore, you should consider that having private conversations, at the very end, doesn't help with speeding up discussions. We've a community of people who *care* about the project they're working on. This means that whenever they see something that doesn't make much sense, they'll chime in and ask for clarification. 
If there was a private discussion on that topic, you'll have to provide the details of such discussion and bring that person up to date, which means the discussion will basically start again... from scratch. I can see that if people have reached an impasse in discussions via email or IRC, it is sometimes helpful to have a call to break a roadblock. I absolutely agree though that the results of any such calls should not be presented as a final decision. At the very least it is necessary to describe the rationale for the POV obtained as a result of the call, and give the broader community the opportunity to put forward counterpoints if required. We should certainly not just say 'it's good to go' and +A something based on a private call. ## Mailing List vs IRC Channel I get it, our mailing list is freaking busy, keeping up with it is hard and time consuming and that leads to lots of IRC discussions. I don't think there's anything wrong with that but I believe it's wrong to expect *EVERYONE* to be in the IRC channel when those discussions happen. Again, timezones. It is a physical impossibility for most people to be on IRC for more than 8 hours a day, so that's only 1/3 of the day that any single person will likely be on IRC. And no, expecting people to have a permanently connected IRC proxy and then read the other 16 hours of logs each morning is not a solution. Personally I've stopped joining IRC most of the time regardless, because I feel I am far more productive when I'm not being interrupted with IRC pings every 20 minutes. There should be few things so urgent that they can't be dealt with over email. Again, because of our timezone differences we should be wary of making important decisions in a rush - anything remotely non-trivial should have at least a 24 hour window to allow people in all timezones a chance to see the point and join in the discussion.
If you are discussing something on IRC that requires the attention of most of your project's community, I highly recommend you to use the mailing list as opposed to pinging everyone independently and fighting with time zones. Using IRC bouncers as a replacement for something that should go to the mailing list is absurd. Please, use the mailing list and don't be afraid of having a bigger community chiming in in your discussion. *THAT'S A GOOD THING* Changes, specs, APIs, etc. Everything is good for the mailing list. We've fought hard to make this community grow, why shouldn't we take advantage of it? There are a lot of IRC meetings that take place in the project: https://wiki.openstack.org/wiki/Meetings and a lot of decisions get made in these meetings. Very rarely do the decisions ever get disseminated to the mailing lists. We seem to rely on the fact that we have IRC logs of the meetings as a way to communicate what took place. If you have ever tried to regularly read through IRC logs of meetings that last an hour or more, it should be clear that this is an awful way to communicate info with people who weren't there. I think communication
Re: [openstack-dev] [nova] Flavor extra-spec and image metadata documentation
Thank you for all the answers. I know that metadata tags are free to use with any key/value; still, there are some specific values that trigger pieces of code in nova (or maybe even in other components). In particular I am working on one of these key/values: in my case it should enable the <bios rebootTimeout='value'/> element in libvirt. Anyway, even if my code is merged, no one will know about it (except myself and the reviewers) if it is not documented somewhere. A DocImpact flag was added to the commit message, but I still don't know how to properly document it. I may create/update existing wiki pages, but I would have preferred official documentation: I was not able to find the wiki page proposed by Daniel Berrange, even though I had been searching for exactly something similar :( It is very good that there is work to objectify image meta; anyway, is there any recommendation on how to document it in the meanwhile? I would also like to know if there is any naming convention for image meta and flavor extra-spec keys: in my case I used hw_reboot_timeout and hw:reboot_timeout respectively, but it is more a bios than a hardware feature and they are handled in nova/virt/libvirt/driver.py rather than nova/virt/hardware.py, so maybe the name choice was not so good. On 02/11/15 11:25, Bhandaru, Malini K wrote: Pasquale Porreca, The flexibility/freedom to create metadata tags for images and Nova flavor extra specs can be confusing. It even allows one to make typographical errors that may be hard to detect. As Daniel mentions, some tags have a definite meaning/semantics, others can be totally random. Tags with semantic significance will typically be handled by special purpose filters; look in the Nova filters directory, /opt/stack/nova/nova/scheduler/filters The filters documentation in the Nova admin guide may help too. All the rest are just matched as strings. Hope that helps.
Regards Malini -Original Message- From: Kashyap Chamarthy [mailto:kcham...@redhat.com] Sent: Wednesday, February 11, 2015 2:14 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [nova] Flavor extra-spec and image metadata documentation On Wed, Feb 11, 2015 at 10:23:54AM +0100, Pasquale Porreca wrote: Hello, I am working on a little patch that introduces a new flavor extra-spec and image metadata key-value pair https://review.openstack.org/#/c/153607/ I am wondering how an openstack admin can be aware that a specific value of a flavor extra-spec or image metadata provides a feature he may desire; in other words, is there a place where the flavor extra-specs and/or image metadata key-value pairs are documented? Unfortunately, there's none as of now. I found out the hard way that you cannot trivially find all possible 'extra_spec' key values that can be set by `nova flavor-key`. I did gross things like this: $ grep hw\: nova/virt/hardware.py nova/tests/unit/virt/test_hardware.py | sort | uniq And, obviously, the above will only find you 'hw' properties. Daniel Berrangé once suggested that image properties and flavor extra specs need to be 'objectified' to alleviate this. I found plenty of documentation on how to list, create, delete, etc. flavor extra-specs and image metadata, but the only place where I found a list (is that complete?) of the accepted key-value pairs (i.e. those that trigger a specific feature in nova) is in the horizon dashboard, when logged in with admin credentials. I am a bit confused about how someone working to add a new key/value pair should proceed.
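Kashyap's grep can also be expressed in Python, which makes it easy to de-duplicate and extend to other prefixes. A minimal sketch; the sample string below stands in for the real contents of nova/virt/hardware.py:

```python
# Rough Python equivalent of the grep above: scan source text for
# 'hw:'-prefixed extra-spec keys. The sample_source string is an
# illustrative stand-in for reading the actual nova source files.
import re

sample_source = '''
flavor.extra_specs.get("hw:cpu_max_sockets")
flavor.extra_specs.get("hw:numa_nodes")
flavor.extra_specs.get("hw:numa_nodes")
'''

# findall + set + sorted mirrors `grep ... | sort | uniq`
keys = sorted(set(re.findall(r"hw:[a-z_]+", sample_source)))
print(keys)  # ['hw:cpu_max_sockets', 'hw:numa_nodes']
```

As Kashyap notes, a prefix-based scan like this only finds one family of properties at a time, which is exactly why he suggests the keys should be objectified.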
-- /kashyap

-- Pasquale Porreca DEK Technologies Via dei Castelli Romani, 22 00040 Pomezia (Roma) Mobile +39 3394823805 Skype paskporr
[openstack-dev] [QA] Question about EC2 Tempest tests
Hello everyone, I have some questions about EC2 Tempest tests. When I run these tests, I regularly get the same error for all tests:
EC2ResponseError: EC2ResponseError: 400 Bad Request
<?xml version="1.0"?>
<Response><Errors><Error><Code>AuthFailure</Code><Message>Signature not provided</Message></Error></Errors>
My environment is OpenStack (the Juno release) deployed by Fuel 6.0. Tempest is from the master branch. I found that the issue was related to boto (Tempest installs it into a virtual environment as a dependency). The last available release of boto is 2.36.0. When this version of boto is installed, EC2 tests don't work. But if I install boto 2.34.0 instead of 2.36.0, all EC2 tests succeed. Any thoughts? Regards, Yaroslav Lobankov.
Re: [openstack-dev] [all][oslo.db][nova] TL; DR Things everybody should know about Galera
- Original Message - From: Jay Pipes jaypi...@gmail.com To: Attila Fazekas afaze...@redhat.com Cc: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org, Pavel Kholkin pkhol...@mirantis.com Sent: Tuesday, February 10, 2015 7:32:11 PM Subject: Re: [openstack-dev] [all][oslo.db][nova] TL; DR Things everybody should know about Galera On 02/10/2015 06:28 AM, Attila Fazekas wrote: - Original Message - From: Jay Pipes jaypi...@gmail.com To: Attila Fazekas afaze...@redhat.com, OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Cc: Pavel Kholkin pkhol...@mirantis.com Sent: Monday, February 9, 2015 7:15:10 PM Subject: Re: [openstack-dev] [all][oslo.db][nova] TL; DR Things everybody should know about Galera On 02/09/2015 01:02 PM, Attila Fazekas wrote: I do not see why not to use `FOR UPDATE` even with multi-writer, or whether the retry/swap way really solves anything here. snip Am I missing something? Yes. Galera does not replicate the (internal to InnoDB) row-level locks that are needed to support SELECT FOR UPDATE statements across multiple cluster nodes. Galera does not replicate the row-level locks created by UPDATE/INSERT either... so what to do with the UPDATE? No, Galera replicates the write sets (binary log segments) for UPDATE/INSERT/DELETE statements -- the things that actually change/add/remove records in DB tables. No locks are replicated, ever. Galera does not do any replication at UPDATE/INSERT/DELETE time.
$ mysql
use test;
CREATE TABLE test (id integer PRIMARY KEY AUTO_INCREMENT, data CHAR(64));
$ (echo 'use test; BEGIN;'; while true ; do echo 'INSERT INTO test(data) VALUES ("test");'; done ) | mysql
Writer1 is busy, but the other nodes have not noticed anything about the above pending transaction; for them this transaction does not exist as long as you do not issue a COMMIT. Any kind of DML/DQL you issue without a COMMIT has not happened from the other nodes' perspective.
Replication happens at COMMIT time, if the `write set` is not empty. When a transaction wins the voting, the other nodes roll back any transaction which held a conflicting local row lock. Why should I handle FOR UPDATE differently? Because SELECT FOR UPDATE doesn't change any rows, and therefore does not trigger any replication event in Galera. What matters is whether the full transaction changed any row at COMMIT time or not. The DML statements themselves do not start replication, just as `SELECT FOR UPDATE` does not. See here: http://www.percona.com/blog/2014/09/11/openstack-users-shed-light-on-percona-xtradb-cluster-deadlock-issues/ -jay https://groups.google.com/forum/#!msg/codership-team/Au1jVFKQv8o/QYV_Z_t5YAEJ Best, -jay
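The retry/swap alternative to SELECT FOR UPDATE that the thread keeps referring to is a compare-and-swap UPDATE: the expected old value goes into the WHERE clause, and a rowcount of 0 tells the caller to re-read and retry. A minimal sketch using sqlite3 purely as an illustrative stand-in for a Galera-backed database (table and column names are made up):

```python
# Compare-and-swap state transition: the UPDATE only matches if the row
# still holds the expected status, so no row-level lock needs to be held
# (or replicated) across the read and the write.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE volumes (id INTEGER PRIMARY KEY, status TEXT)")
conn.execute("INSERT INTO volumes (id, status) VALUES (1, 'available')")

def compare_and_swap(conn, vol_id, expected, new):
    """Atomically move vol_id from `expected` to `new`; True on success."""
    cur = conn.execute(
        "UPDATE volumes SET status = ? WHERE id = ? AND status = ?",
        (new, vol_id, expected))
    return cur.rowcount == 1  # 0 rows touched => someone else won the race

print(compare_and_swap(conn, 1, "available", "deleting"))  # True
print(compare_and_swap(conn, 1, "available", "deleting"))  # False: already moved
```

On a real Galera cluster the losing transaction would instead surface as a deadlock error at COMMIT, which the caller handles the same way: re-read and retry.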
Re: [openstack-dev] [all][tc] Lets keep our community open, lets fight for it
-Original Message- From: Flavio Percoco [mailto:fla...@redhat.com] Sent: Wednesday, February 11, 2015 9:55 AM To: openstack-dev@lists.openstack.org Subject: [openstack-dev] [all][tc] Lets keep our community open, lets fight for it Greetings all, During the last two cycles, I've had the feeling that some of the things I love the most about this community are degrading and moving to a state that I personally disagree with. With the hope of seeing these things improve, I'm taking the time today to share one of my concerns. Since I believe we all work with good faith and we *all* should assume such when it comes to things happening in our community, I won't make names and I won't point fingers - yes, I don't have enough fingers to point based on the info I have. People that fall into the groups I'll mention below know that I'm talking to them. This email is dedicated to the openness of our community/project. ## Keep discussions open I don't believe there's anything wrong about kicking off some discussions in private channels about specs/bugs. I don't believe there's anything wrong in having calls to speed up some discussions. HOWEVER, I believe it's *completely* wrong to consider those private discussions sufficient. If you have had that kind of private discussions, if you've discussed a spec privately and right after you went upstream and said: This has been discussed in a call and it's good to go, I beg you to stop for 2 seconds and reconsider that. I don't believe you were able to fit all the community in that call and that you had enough consensus. ++ Furthermore, you should consider that having private conversations, at the very end, doesn't help with speeding up discussions. We've a community of people who *care* about the project they're working on. This means that whenever they see something that doesn't make much sense, they'll chime in and ask for clarification. 
If there was a private discussion on that topic, you'll have to provide the details of such discussion and bring that person up to date, which means the discussion will basically start again... from scratch. And when they do come and ask for clarification, do not just state that this was discussed and agreed already. ## Mailing List vs IRC Channel I get it, our mailing list is freaking busy, keeping up with it is hard and time consuming and that leads to lots of IRC discussions. I don't think there's anything wrong with that but I believe it's wrong to expect *EVERYONE* to be in the IRC channel when those discussions happen. If you are discussing something on IRC that requires the attention of most of your project's community, I highly recommend you to use the mailing list as opposed to pinging everyone independently and fighting with time zones. Using IRC bouncers as a replacement for something that should go to the mailing list is absurd. Please, use the mailing list and don't be afraid of having a bigger community chiming in in your discussion. *THAT'S A GOOD THING* Changes, specs, APIs, etc. Everything is good for the mailing list. We've fought hard to make this community grow, why shouldn't we take advantage of it? This is a tough call ... real-time communication is just so much more efficient. You can get things done in minutes that would take hours or days to deal with over e-mail. It also does not help that the -dev mailing list is really crowded and the tags are not consistent (sorry for finger pointing, but oslo seems to be especially inconsistent, with some tagging [oslo], some tagging [oslo.something], etc. Please keep that [oslo] there ;D ). I would not discourage people from using IRC or other communication means, just be prepared to answer those questions again. ## Cores are *NOT* special At some point, for some reason that is unknown to me, this message changed and the feeling of cores being some kind of superheroes became a thing.
It's gotten far enough to the point that I've come to know that some projects even have private (flagged with +s), password protected, IRC channels for core reviewers. This is the point where my good faith assumption skill falls short. Seriously, don't get me wrong but: WHAT IN THE ACTUAL F**K? THERE IS ABSOLUTELY NOTHING PRIVATE FOR CORE REVIEWERS* TO DISCUSS. Here I do disagree. There is stuff, like private bugs for security issues, that _should_ be kept private. Again, it speeds up progress hugely when the discussion does not need to happen in Launchpad, and it keeps the bug itself cleaner as well. I do agree that there should not be a secret society making common decisions behind closed doors, but there are reasons to keep some details initially within a closed group only. And most commonly that closed group seems to be cores. If anything core reviewers should be the ones *FORCING* - it seems that *encouraging* doesn't have the same effect anymore - *OPENNESS* in order to include other
Re: [openstack-dev] [nova] Flavor extra-spec and image metadata documentation
Thank you very much for the clarification :) On 02/11/15 12:15, Daniel P. Berrange wrote: On Wed, Feb 11, 2015 at 12:03:58PM +0100, Pasquale Porreca wrote: Thank you for all the answers. I know that metadata tags are free to use with any key/value; still, there are some specific values that trigger pieces of code in nova (or maybe even in other components). In particular I am working on one of these key/values: in my case it should enable the <bios rebootTimeout='value'/> element in libvirt. Anyway, even if my code is merged, no one will know about it (except myself and the reviewers) if it is not documented somewhere. A DocImpact flag was added to the commit message, but I still don't know how to properly document it. I may create/update existing wiki pages, but I would have preferred official documentation: I was not able to find the wiki page proposed by Daniel Berrange, even though I had been searching for exactly something similar :( It is very good that there is work to objectify image meta; anyway, is there any recommendation on how to document it in the meanwhile? For people submitting patches to nova the expectation is simply that they add DocImpact and have a commit message that describes the usage of the new property. The docs team work from this data. There's no current formal docs that I'd expect you to be editing/updating. I would also like to know if there is any naming convention for image meta and flavor extra-spec keys: in my case I used hw_reboot_timeout and hw:reboot_timeout respectively, but it is more a bios than a hardware feature and they are handled in nova/virt/libvirt/driver.py rather than nova/virt/hardware.py, so maybe the name choice was not so good.
In terms of image metadata, broadly speaking, we're aiming to standardize on 3 name prefixes in Nova:
'hw' - stuff that affects guest hardware configuration (this includes the BIOS settings, since that's hardware firmware)
'os' - stuff that affects the guest operating system setup
'img' - stuff Nova uses related to managing images
So, your choice of 'hw_reboot_timeout' for the image and 'hw:reboot_timeout' for the flavour is correct. Regards, Daniel

-- Pasquale Porreca DEK Technologies Via dei Castelli Romani, 22 00040 Pomezia (Roma) Mobile +39 3394823805 Skype paskporr
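The convention Daniel describes pairs an underscore-prefixed image metadata key with a colon-prefixed flavor extra-spec key. A small illustrative sketch of that pairing (the helper name is made up, not Nova code):

```python
# Daniel's naming convention, sketched: image metadata keys look like
# 'hw_reboot_timeout' while the matching flavor extra-spec key looks like
# 'hw:reboot_timeout'. KNOWN_PREFIXES reflects the three prefixes he lists.
KNOWN_PREFIXES = ("hw", "os", "img")

def image_key_to_flavor_key(image_key):
    """Translate e.g. 'hw_reboot_timeout' -> 'hw:reboot_timeout'."""
    prefix, _, rest = image_key.partition("_")
    if prefix not in KNOWN_PREFIXES:
        raise ValueError("unknown metadata prefix: %s" % prefix)
    return "%s:%s" % (prefix, rest)

print(image_key_to_flavor_key("hw_reboot_timeout"))  # hw:reboot_timeout
```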
Re: [openstack-dev] [QA] Question about EC2 Tempest tests
Yaroslav, The bug: https://bugs.launchpad.net/nova/+bug/1410622 And the review: https://review.openstack.org/#/c/152112/ It's recently fixed. Best regards, Alex Levine On 2/11/15 2:23 PM, Yaroslav Lobankov wrote: Hello everyone, I have some questions about EC2 Tempest tests. When I run these tests, I regularly get the same error for all tests:
EC2ResponseError: EC2ResponseError: 400 Bad Request
<?xml version="1.0"?>
<Response><Errors><Error><Code>AuthFailure</Code><Message>Signature not provided</Message></Error></Errors>
My environment is OpenStack (the Juno release) deployed by Fuel 6.0. Tempest is from the master branch. I found that the issue was related to boto (Tempest installs it into a virtual environment as a dependency). The last available release of boto is 2.36.0. When this version of boto is installed, EC2 tests don't work. But if I install boto 2.34.0 instead of 2.36.0, all EC2 tests succeed. Any thoughts? Regards, Yaroslav Lobankov.
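Until the fix lands in the environment being tested, a workaround is to guard on the installed boto version. A minimal sketch; the 2.34.0 threshold comes from Yaroslav's report in this thread, not from any official compatibility matrix, and the function names are made up:

```python
# Hypothetical guard: flag boto releases newer than the last known-good
# version (2.34.0, per this thread) so EC2 test runs can be skipped or
# the environment pinned before they fail with AuthFailure.
KNOWN_GOOD = (2, 34, 0)

def parse_version(version_string):
    """'2.36.0' -> (2, 36, 0), so versions compare as tuples."""
    return tuple(int(part) for part in version_string.split("."))

def ec2_tests_expected_to_pass(boto_version):
    return parse_version(boto_version) <= KNOWN_GOOD

print(ec2_tests_expected_to_pass("2.34.0"))  # True
print(ec2_tests_expected_to_pass("2.36.0"))  # False
```

In practice the same effect is usually achieved by pinning boto in the virtual environment's requirements rather than checking at runtime.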
Re: [openstack-dev] [all][tc] Lets keep our community open, lets fight for it
On Wed, 11 Feb 2015, Flavio Percoco wrote: During the last two cycles, I've had the feeling that some of the things I love the most about this community are degrading and moving to a state that I personally disagree with. With the hope of seeing these things improve, I'm taking the time today to share one of my concerns. Thanks for writing this. I agree with pretty much everything you say, especially the focus on the mailing list being the only truly available and persistent medium we have for engaging everyone. Yes, it is noisy and takes work, but it is an important part of the work. I'm not certain, but I have an intuition that many of the suboptimal and moving-in-the-direction-of-closed behaviors that you're describing are the result of people trying to cope with having too much to do and insufficient tools. Technology projects often sacrifice the management of information in favor of what's believed to be the core task (making stuff?) when there are insufficient resources. This is unfortunate because the effective sharing and management of information is the fuel that drives, optimizes and corrects the entire process and thus leads to more effective making of stuff. This thread and many of the threads going around lately speak a lot about people not being able to participate in a way that lets them generate the most quality -- either because there's insufficient time and energy to move the mountain or because each move they make opens up another rabbit hole. As many have said, this is not sustainable. Many of the proposed strategies or short term tactics involve trying to hack the system so that work that is perceived to be extraneous is removed or made secondary. This won't fix it. I think it is time we recognize and act on the fact that the corporate landlords that pay many of us to farm on this land need to provide more resources. This will help to ensure the health of the semi-artificial open source ecology that is OpenStack.
At the moment many things are packed tight with very little room to breathe. We need some air.

-- Chris Dent tw:@anticdent freenode:cdent https://tank.peermore.com/tanks/cdent
Re: [openstack-dev] [all][tc] Lets keep our community open, lets fight for it
+ Inf for writing this Flavio! Only some observations below. On 02/11/2015 10:55 AM, Flavio Percoco wrote: Greetings all, During the last two cycles, I've had the feeling that some of the things I love the most about this community are degrading and moving to a state that I personally disagree with. With the hope of seeing these things improve, I'm taking the time today to share one of my concerns. Since I believe we all work with good faith and we *all* should assume such when it comes to things happening in our community, I won't make names and I won't point fingers - yes, I don't have enough fingers to point based on the info I have. People that fall into the groups I'll mention below know that I'm talking to them. This email is dedicated to the openness of our community/project. ## Keep discussions open I don't believe there's anything wrong about kicking off some discussions in private channels about specs/bugs. I don't believe there's anything wrong in having calls to speed up some discussions. HOWEVER, I believe it's *completely* wrong to consider those private discussions sufficient. If you have had that kind of private discussions, if you've discussed a spec privately and right after you went upstream and said: This has been discussed in a call and it's good to go, I beg you to stop for 2 seconds and reconsider that. I don't believe you were able to fit all the community in that call and that you had enough consensus. Furthermore, you should consider that having private conversations, at the very end, doesn't help with speeding up discussions. We've a community of people who *care* about the project they're working on. This means that whenever they see something that doesn't make much sense, they'll chime in and ask for clarification. If there was a private discussion on that topic, you'll have to provide the details of such discussion and bring that person up to date, which means the discussion will basically start again... from scratch. 
## Mailing List vs IRC Channel I get it, our mailing list is freaking busy, keeping up with it is hard and time consuming and that leads to lots of IRC discussions. I don't think there's anything wrong with that but I believe it's wrong to expect *EVERYONE* to be in the IRC channel when those discussions happen. If you are discussing something on IRC that requires the attention of most of your project's community, I highly recommend you to use the mailing list as opposed to pinging everyone independently and fighting with time zones. Using IRC bouncers as a replacement for something that should go to the mailing list is absurd. Please, use the mailing list and don't be afraid of having a bigger community chiming in in your discussion. *THAT'S A GOOD THING* Changes, specs, APIs, etc. Everything is good for the mailing list. We've fought hard to make this community grow, why shouldn't we take advantage of it? I think the above 2 are somewhat intertwined with another trend in the community that I've personally noticed towards the end of the Juno cycle, and that I also strongly believe needs to DIAFF: the idea that it is possible to manage an open source community using methods similar to those commonly used for managing subordinates in a corporate hierarchy. There are other (somewhat less) horrible examples around, and they all came about as a (IMHO knee-jerk) response to explosive growth, and they all need to stop. I urge people who are seen as leaders in their respective projects to stop and think the next time they want to propose a policy change or a process - ask yourself "Is there an OSS project that does something similar successfully, or have I seen this from our old PM?" and then only propose it if the answer clearly suggests it will help the distributed workflow of an OSS community. On 02/11/2015 11:29 AM, Thierry Carrez wrote: This is the point where my good faith assumption skill falls short. Seriously, don't get me wrong but: WHAT IN THE ACTUAL F**K?
THERE IS ABSOLUTELY NOTHING PRIVATE FOR CORE REVIEWERS* TO DISCUSS. If anything core reviewers should be the ones *FORCING* - it seems that *encouraging* doesn't have the same effect anymore - *OPENNESS* in order to include other non-core members in those discussions. Remember that the core flag is granted because of the reviews that person has provided and because that individual *WANTS* to be part of it. It's not a prize for people. In fact, I consider core reviewers to be volunteers and their job is infinitely thanked. +1000 Core reviewing has always been designed to be a duty, not a badge. There has been a trend toward making it a badge, with some companies giving bonuses to core reviewers, and HP making +2 pins and throwing +2 parties. I think that's a significant mistake and complained about it, but then my influence only goes that far. The problem with special rights (like +2) is that if you don't
Re: [openstack-dev] [all][tc] Lets keep our community open, lets fight for it
On 11/02/15 11:31 +, Kuvaja, Erno wrote: ## Mailing List vs IRC Channel I get it, our mailing list is freaking busy, keeping up with it is hard and time consuming and that leads to lots of IRC discussions. I don't think there's anything wrong with that but I believe it's wrong to expect *EVERYONE* to be in the IRC channel when those discussions happen. If you are discussing something on IRC that requires the attention of most of your project's community, I highly recommend you to use the mailing list as opposed to pinging everyone independently and fighting with time zones. Using IRC bouncers as a replacement for something that should go to the mailing list is absurd. Please, use the mailing list and don't be afraid of having a bigger community chiming in in your discussion. *THAT'S A GOOD THING* Changes, specs, APIs, etc. Everything is good for the mailing list. We've fought hard to make this community grow, why shouldn't we take advantage of it? This is a tough call ... real-time communication is just so much more efficient. You can get things done in minutes that would take hours or days to deal with over e-mail. As I mentioned, I don't think there's anything wrong with a quick chat to sort out small issues that don't have a huge impact on the project. However, those communications shouldn't be considered the ultimate decision for things that will happen in the project. A good example is the #openstack-glance channel, which you decided to leave since we enabled logging. If we need to discuss something outside the meeting - or start a discussion that simply won't fit in a meeting - I'd need to choose between IRC discussions or the mailing list. I'll obviously choose the mailing list, because I would hate to reach a consensus without listening to your thoughts. If the m-l is not used, you'll likely share your opinion later and that *WILL* slow down the process anyway - in a discussion that should've followed a different path.
It also does not help that the -dev mailing list is really crowded, and the tags are not consistent (sorry for finger pointing, but oslo seems to be especially inconsistent, with some tagging [oslo], some tagging [oslo.something], etc. Please keep that [oslo] there ;D ). In the case of oslo.messaging, we do this because we actually have different groups depending on the project. We have an oslo-core team and an oslo.messaging-core team. This encourages contributions on topics that folks care about. I don't think there's anything bad about that, just use filters. I would not discourage people from using IRC or other communication means, just be prepared to answer those questions again. I'm discouraging the usage of IRC as the *main* communication channel. I really hope no one, across the gazillion of projects I'm part of, is expecting me to be present at all times on every channel (although I am, thanks to ZNC). That's physically impossible, hence emails. ## Cores are *NOT* special At some point, for some reason that is unknown to me, this message changed and the feeling of cores being some kind of superheroes became a thing. It's gotten far enough to the point that I've come to know that some projects even have private (flagged with +s), password protected, IRC channels for core reviewers. This is the point where my good faith assumption skill falls short. Seriously, don't get me wrong but: WHAT IN THE ACTUAL F**K? THERE IS ABSOLUTELY NOTHING PRIVATE FOR CORE REVIEWERS* TO DISCUSS. Here I do disagree. There is stuff called private bugs for security etc. that _should_ be kept private. Again, it speeds up progress hugely when the discussion does not need to happen in Launchpad, and it keeps the bug itself cleaner as well. I do agree that there should not be a secret society making common decisions behind closed doors, but there are reasons to keep some details initially within a closed group only. And most commonly that closed group seems to be cores. 
Note that my complaint is about private core channels used for general discussion. However, since you've brought the CVE thing up, lemme disagree with you. CVE discussions should be kept in the LP bug as well. Do you want to have a quick chat with someone about a bug? Sure, go ahead. Afterwards, you MUST get back to the LP bug and provide the feedback there. Otherwise, you just broke the process and other folks that weren't part of that conversation will be left out. Also, most core-sec teams have some core members in them but not *all* of them, which means the super secure core channel is just bullshit. Random channels with obscured names created on a per-bug basis would be even more secure than the super secure channel with +s and password protection (yes, I just made this up). If anything, core reviewers should be the ones *FORCING* - it seems that *encouraging* doesn't have the same effect anymore - *OPENNESS* in order to include other non-core members in those discussions. Remember that the core flag is granted because of
Re: [openstack-dev] [all][tc] Lets keep our community open, lets fight for it
On Wed, 11 Feb 2015, Chris Dent wrote: I think it is time we recognize and act on the fact that the corporate landlords that pay many of us to farm on this land need to provide more resources. In case it wasn't clear, by this I don't mean project managers and other styles of enterprisey hoopaa joop. I mean more testing rigs and more supported community members. -- Chris Dent tw:@anticdent freenode:cdent https://tank.peermore.com/tanks/cdent __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all][tc] Lets keep our community open, lets fight for it
On Wed, Feb 11, 2015 at 12:53:24PM +0100, Nikola Đipanov wrote: + Inf for writing this, Flavio! Absolutely! I never even knew such things existed in this community. Only some observations below. [. . .] ## Mailing List vs IRC Channel I get it, our mailing list is freaking busy, keeping up with it is hard and time consuming and that leads to lots of IRC discussions. I don't think there's anything wrong with that but I believe it's wrong to expect *EVERYONE* to be in the IRC channel when those discussions happen. If you are discussing something on IRC that requires the attention of most of your project's community, I highly recommend you to use the mailing list as opposed to pinging everyone independently and fighting with time zones. Using IRC bouncers as a replacement for something that should go to the mailing list is absurd. Please, use the mailing list and don't be afraid of having a bigger community chiming in on your discussion. *THAT'S A GOOD THING* Changes, specs, APIs, etc. Everything is good for the mailing list. We've fought hard to make this community grow, why shouldn't we take advantage of it? I think the above 2 are somewhat intertwined with another trend in the community that I've personally noticed towards the end of the Juno cycle, and that I also strongly believe needs to DIAF: the idea that it is possible to manage an open source community using methods similar to those commonly used for managing subordinates in a corporate hierarchy. The notion of "let's manage the community like a team, just as we do at good old $company" should be absolutely demolished! People who advocate such behavior are using ridiculously outdated brain models and should drop what they're doing immediately and do some introspection. -- /kashyap
[openstack-dev] [nova] Feature Freeze Exception for Add config drive support for PCS containers
Hello, I'd like to request a feature freeze exception for the change [1]. This change implements configuration drive support in Parallels containers. It does not change existing Nova behaviour. It's the last patch in the Parallels series that implements blueprint pcs-support [2]. Previous patches of that blueprint were merged, so it's the last one needed to implement initial Parallels Cloud Server support in the Nova compute driver. This change was reviewed by Daniel Berrange and Gary Kotton. I look forward to your decision on considering this change for a feature freeze exception. Thanks a lot! [1] https://review.openstack.org/#/c/149253/ [2] https://blueprints.launchpad.net/nova/+spec/pcs-support -- Regards, Alexander Burluka
Re: [openstack-dev] [all][tc] Lets keep our community open, lets fight for it
On 02/11/2015 04:55 AM, Flavio Percoco wrote: Greetings all, During the last two cycles, I've had the feeling that some of the things I love the most about this community are degrading and moving to a state that I personally disagree with. With the hope of seeing these things improve, I'm taking the time today to share one of my concerns. Since I believe we all work with good faith and we *all* should assume such when it comes to things happening in our community, I won't name names and I won't point fingers - yes, I don't have enough fingers to point based on the info I have. People that fall into the groups I'll mention below know that I'm talking to them. This email is dedicated to the openness of our community/project. ## Keep discussions open I don't believe there's anything wrong with kicking off some discussions in private channels about specs/bugs. I don't believe there's anything wrong with having calls to speed up some discussions. HOWEVER, I believe it's *completely* wrong to consider those private discussions sufficient. If you have had that kind of private discussion, if you've discussed a spec privately and right after you went upstream and said: "This has been discussed in a call and it's good to go", I beg you to stop for 2 seconds and reconsider that. I don't believe you were able to fit the whole community in that call or that you had enough consensus. Furthermore, you should consider that having private conversations, at the very end, doesn't help with speeding up discussions. We've a community of people who *care* about the project they're working on. This means that whenever they see something that doesn't make much sense, they'll chime in and ask for clarification. If there was a private discussion on that topic, you'll have to provide the details of such discussion and bring that person up to date, which means the discussion will basically start again... from scratch. 
## Mailing List vs IRC Channel I get it, our mailing list is freaking busy, keeping up with it is hard and time consuming and that leads to lots of IRC discussions. I don't think there's anything wrong with that but I believe it's wrong to expect *EVERYONE* to be in the IRC channel when those discussions happen. If you are discussing something on IRC that requires the attention of most of your project's community, I highly recommend you to use the mailing list as opposed to pinging everyone independently and fighting with time zones. Using IRC bouncers as a replacement for something that should go to the mailing list is absurd. Please, use the mailing list and don't be afraid of having a bigger community chiming in on your discussion. *THAT'S A GOOD THING* Changes, specs, APIs, etc. Everything is good for the mailing list. We've fought hard to make this community grow, why shouldn't we take advantage of it? ## Cores are *NOT* special At some point, for some reason that is unknown to me, this message changed and the feeling of cores being some kind of superheroes became a thing. It's gotten far enough to the point that I've come to know that some projects even have private (flagged with +s), password protected, IRC channels for core reviewers. This is the point where my good faith assumption skill falls short. Seriously, don't get me wrong but: WHAT IN THE ACTUAL F**K? THERE IS ABSOLUTELY NOTHING PRIVATE FOR CORE REVIEWERS* TO DISCUSS. I'm kind of floored to find out that password protected IRC channels exist. That actually violates our base tenets of being an OpenStack project, so is grounds for removing the project from OpenStack. If anything, core reviewers should be the ones *FORCING* - it seems that *encouraging* doesn't have the same effect anymore - *OPENNESS* in order to include other non-core members in those discussions. 
Remember that the core flag is granted because of the reviews that person has provided and because that individual *WANTS* to be part of it. It's not a prize for people. In fact, I consider core reviewers to be volunteers and their work is infinitely appreciated. Since "All generalizations are false, including this one." (Mark Twain), I'm pretty sure there are folks that disagree with the above. If you do, I care about your thoughts. This is worth discussing and fighting for. All the above being said, I'd like to thank everyone who fights for the openness of our community and encourage everyone to make that a must-have thing in each sub-community. You don't need to be a core reviewer or PTL to do so. Speak up and help keep the community as open as possible. Cheers, Flavio -- Sean Dague
Re: [openstack-dev] [QA] Question about EC2 Tempest tests
Thanks, Alexandre! Regards, Yaroslav Lobankov. On Wed, Feb 11, 2015 at 2:58 PM, Alexandre Levine alev...@cloudscaling.com wrote: Yaroslav, The bug: https://bugs.launchpad.net/nova/+bug/1410622 And the review: https://review.openstack.org/#/c/152112/ It's recently fixed. Best regards, Alex Levine On 2/11/15 2:23 PM, Yaroslav Lobankov wrote: Hello everyone, I have a question about EC2 Tempest tests. When I run these tests, I regularly get the same error for all tests: EC2ResponseError: EC2ResponseError: 400 Bad Request <?xml version="1.0"?><Response><Errors><Error><Code>AuthFailure</Code><Message>Signature not provided</Message></Error></Errors></Response> My environment is OpenStack (the Juno release) deployed by Fuel 6.0. Tempest is from the master branch. I found that the issue is related to boto (Tempest installs it into a virtual environment as a dependency). The latest available release of boto is 2.36.0. When this version of boto is installed, EC2 tests don't work. But if I install boto 2.34.0 instead of 2.36.0, all EC2 tests pass. Any thoughts? Regards, Yaroslav Lobankov.
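The workaround described above amounts to treating boto releases after 2.34.0 as suspect until the linked review lands. A minimal sketch of how a test harness might encode that known-good range (the version numbers come from this thread; the helper names are made up for illustration, and exactly which release introduced the signature regression is a guess):

```python
# Hypothetical version gate: the thread reports boto 2.34.0 passing the
# EC2 signature tests and 2.36.0 failing them, so treat anything newer
# than 2.34.x as suspect until the fix is merged.

def _as_tuple(version):
    """Turn a version string like '2.34.0' into (2, 34, 0) for comparison."""
    return tuple(int(part) for part in version.split("."))

def boto_is_known_good(version):
    """Return True if this boto version predates the reported regression."""
    return _as_tuple(version) <= (2, 34, 0)
```

In practice the simpler fix is just pinning boto to 2.34.0 in the test virtualenv, as the original poster did.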
Re: [openstack-dev] [all][oslo.db][nova] TL; DR Things everybody should know about Galera
On 10/02/15 18:29, Jay Pipes wrote: On 02/10/2015 09:47 AM, Matthew Booth wrote: On 09/02/15 18:15, Jay Pipes wrote: On 02/09/2015 01:02 PM, Attila Fazekas wrote: I do not see why not to use `FOR UPDATE` even with multi-writer. Does the retry/swap way really solve anything here? snip Have I missed something? Yes. Galera does not replicate the (internal to InnoDB) row-level locks that are needed to support SELECT FOR UPDATE statements across multiple cluster nodes. https://groups.google.com/forum/#!msg/codership-team/Au1jVFKQv8o/QYV_Z_t5YAEJ Is that the right link, Jay? I'm taking your word on the write-intent locks not being replicated, but that link seems to say the opposite. This link is better: http://www.percona.com/blog/2014/09/11/openstack-users-shed-light-on-percona-xtradb-cluster-deadlock-issues/ Specifically the line: The local record lock held by the started transaction on pxc1 didn’t play any part in replication or certification (replication happens at commit time, there was no commit there yet). Thanks, Jay, that's a great article. Based on that, I think I may have misunderstood what you were saying before. I currently understand that the behaviour of select ... for update is correct on Galera, it's just not very efficient. Correct in this case meaning it aborts the transaction due to a correctly detected lock conflict. FWIW, that was pretty much my original understanding, but without the detail. To expand: Galera doesn't replicate write intent locks, but it turns out it doesn't have to for correctness. The reason is that the conflict between a local write intent lock and a remote write, which is replicated, will always be detected during or before local certification. 
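For readers unfamiliar with the "retry/swap" alternative mentioned at the top of this exchange: instead of holding a row lock across a transaction with SELECT ... FOR UPDATE (a lock Galera will not replicate between nodes), the writer re-checks a version column in the UPDATE's WHERE clause and retries on a miss. A minimal sketch, using stdlib sqlite3 just for a runnable illustration; the table and column names are hypothetical, and Nova's real implementation lives in its SQLAlchemy DB API layer:

```python
# Compare-and-swap update: succeed only if the row's version column is
# unchanged since we read it, otherwise retry with a fresh read.
import sqlite3

def cas_update(conn, row_id, new_state, retries=3):
    """Update a row without holding a lock across the transaction.

    The version check in the WHERE clause detects concurrent writers:
    if someone else committed first, rowcount is 0 and we retry.
    """
    for _ in range(retries):
        (version,) = conn.execute(
            "SELECT version FROM instances WHERE id = ?", (row_id,)).fetchone()
        cur = conn.execute(
            "UPDATE instances SET state = ?, version = version + 1 "
            "WHERE id = ? AND version = ?",
            (new_state, row_id, version))
        conn.commit()
        if cur.rowcount == 1:  # our compare-and-swap won
            return True
    return False
```

On Galera this trades lock replication for a cheap application-level retry: a conflicting remote write simply makes the WHERE clause match nothing, rather than deadlocking a transaction at certification time.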
Matt -- Matthew Booth Red Hat Engineering, Virtualisation Team Phone: +442070094448 (UK) GPG ID: D33C3490 GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
Re: [openstack-dev] [all][tc] Lets keep our community open, lets fight for it
On 02/11/2015 05:52 AM, Daniel P. Berrange wrote: On Wed, Feb 11, 2015 at 10:55:18AM +0100, Flavio Percoco wrote: Greetings all, During the last two cycles, I've had the feeling that some of the things I love the most about this community are degrading and moving to a state that I personally disagree with. With the hope of seeing these things improve, I'm taking the time today to share one of my concerns. Since I believe we all work with good faith and we *all* should assume such when it comes to things happening in our community, I won't name names and I won't point fingers - yes, I don't have enough fingers to point based on the info I have. People that fall into the groups I'll mention below know that I'm talking to them. This email is dedicated to the openness of our community/project. ## Keep discussions open I don't believe there's anything wrong with kicking off some discussions in private channels about specs/bugs. I don't believe there's anything wrong with having calls to speed up some discussions. HOWEVER, I believe it's *completely* wrong to consider those private discussions sufficient. If you have had that kind of private discussion, if you've discussed a spec privately and right after you went upstream and said: "This has been discussed in a call and it's good to go", I beg you to stop for 2 seconds and reconsider that. I don't believe you were able to fit the whole community in that call or that you had enough consensus. With the timezones of our worldwide contributors it is pretty much guaranteed that any realtime phone call will have excluded a part of our community. Furthermore, you should consider that having private conversations, at the very end, doesn't help with speeding up discussions. We've a community of people who *care* about the project they're working on. This means that whenever they see something that doesn't make much sense, they'll chime in and ask for clarification. 
If there was a private discussion on that topic, you'll have to provide the details of such discussion and bring that person up to date, which means the discussion will basically start again... from scratch. I can see that if people have reached an impasse in discussions via email or IRC, it is sometimes helpful to have a call to break a roadblock. I absolutely agree though that the results of any such calls should not be presented as a final decision. At the very least it is necessary to describe the rationale for the POV obtained as a result of the call, and give the broader community the opportunity to put forward counterpoints if required. We should certainly not just say 'it's good to go' and +A something based on a private call. ## Mailing List vs IRC Channel I get it, our mailing list is freaking busy, keeping up with it is hard and time consuming and that leads to lots of IRC discussions. I don't think there's anything wrong with that but I believe it's wrong to expect *EVERYONE* to be in the IRC channel when those discussions happen. Again, timezones. It is a physical impossibility for most people to be on IRC for more than 8 hours a day, so that's only 1/3 of the day that any single person will likely be on IRC. And no, expecting people to have a permanently connected IRC proxy and then read the other 16 hours of logs each morning is not a solution. Personally I've stopped joining IRC most of the time regardless, because I feel I am far more productive when I'm not being interrupted with IRC pings every 20 minutes. There should be few things so urgent that they can't be dealt with over email. Again, because of our timezone differences we should be wary of making important decisions in a rush - anything remotely non-trivial should have at least a 24 hour window to allow people in all timezones a chance to see the point and join in the discussion. IRC is mostly not about decisions; it's about discussion, context, team building, and trust. 
And it's a cross-organization open forum for that. If core team members start dropping off external IRC where they are communicating across corporate boundaries, then the local tribal effects start taking over. You get people starting to talk about the upstream as 'them'. The moment we get into us vs. them, we've got a problem. Especially when the upstream project is 'them'. So while I agree that I'd personally get a ton more done if I didn't make myself available to answer questions or help sort out misunderstandings people were having with things I'm an expert in, doing so would definitely detrimentally impact the project as a whole. So I find it an unfortunate decision for a core team member. -Sean -- Sean Dague http://dague.net
Re: [openstack-dev] [all][tc] Lets keep our community open, lets fight for it
On 02/11/2015 02:13 PM, Sean Dague wrote: If core team members start dropping off external IRC where they are communicating across corporate boundaries, then the local tribal effects start taking over. You get people starting to talk about the upstream as 'them'. The moment we get into us vs. them, we've got a problem. Especially when the upstream project is 'them'. A lot of assumptions are being presented as fact here. I believe the technical term for the above is 'slippery slope fallacy'. We can and _must_ do much better than this on this mailing list! Let's drag the discussion level back up! N.
Re: [openstack-dev] [Manila]Question about gateway-mediated-with-ganesha
On Tue, Feb 10, 2015 at 1:51 AM, Li, Chen chen...@intel.com wrote: Hi list, I'm trying to understand how manila uses NFS-Ganesha, and hope to figure out what I need to do to use it once all patches have been merged (only one patch is under review, right?). I have read: https://wiki.openstack.org/wiki/Manila/Networking/Gateway_mediated https://blueprints.launchpad.net/manila/+spec/gateway-mediated-with-ganesha From the documents, it is said that, with Ganesha, multi-tenancy would be supported: *And later the Ganesha core would be extended to use the infrastructure used by generic driver to provide network separated multi-tenancy. The core would manage Ganesha service running in the service VMs, and the VMs themselves that reside in share networks.* => It says: *extended to use the infrastructure used by generic driver to provide network separated multi-tenancy*. So, when a user creates a share, a VM (share-server) would be created to run the Ganesha server. => I assume this VM should connect to 2 networks: the user's share-network and the network where the GlusterFS cluster is running. But the generic driver creates a manila service network at the beginning. When a user creates a share, a "subnet" would be created in the manila service network corresponding to each user's "share-network". This means the VMs (share-servers) the generic driver has created live in different subnets; they're not able to connect to each other. When you say VM, it's confusing whether you are referring to the service VM or a tenant VM. Since you are also saying share-server, I presume you mean service VM! IIUC each share-server VM (also called service VM) is serving all VMs created by a tenant. In other words, the generic driver creates 1 service VM per tenant, and hence serves all the VMs (tenant VMs) created by that tenant. Manila experts on the list can correct me if I am wrong here. 
Generic driver creates a service VM (if not already present for that tenant) as part of creating a new share and connects the tenant network to the service VM network via a neutron router (creating ports on the router which help connect the 2 different subnets), thus the tenant VMs can ping/access the service VM. There is no question and/or need to have 2 service VMs talk to each other, because they are serving different tenants, thus they need to be isolated! If my understanding here is correct, the VMs running Ganesha live in different subnets too. => Here is my question: how can the VMs (share-servers) running Ganesha connect to the single GlusterFS cluster? Typically GlusterFS will be deployed on storage nodes (by the storage admin) that are NOT part of openstack. So having the share-server talk/connect with GlusterFS is equivalent to saying "Allow an openstack VM to talk with non-openstack nodes", in other words "Connect the neutron network to a non-neutron network" (also called the provider/host network). This is achieved by ensuring your openstack Network node is configured to forward tenant traffic to the provider network, which involves neutron skills and some neutron black magic :) To know what this involves, please see the section "Setup devstack networking to allow Nova VMs access external/provider network" in my blog @ http://dcshetty.blogspot.in/2015/01/using-glusterfs-native-driver-in.html This should be taken care of by your openstack network admin, who should configure the openstack network node to allow this to happen; this isn't a Manila / GlusterFS driver responsibility, rather it's an openstack deployment option that's taken care of by the network admins during openstack deployment. *Disclaimer: I am not a neutron expert, so feel free to correct/update me* HTH, thanx, deepak
Re: [openstack-dev] [Manila]Question about gateway-mediated-with-ganesha
On Thu, Feb 12, 2015 at 6:41 AM, Li, Chen chen...@intel.com wrote: Hi Deepak, > When you say VM, it's confusing whether you are referring to the service VM or > a tenant VM. Since you are also saying share-server, I presume you mean > service VM! > IIUC each share-server VM (also called service VM) is serving all VMs > created by a tenant. In other words, the generic driver creates 1 service VM > per tenant, and hence serves all the VMs (tenant VMs) created by that tenant. > Manila experts on the list can correct me if I am wrong here. > Generic > driver creates a service VM (if not already present for that tenant) as part > of creating a new share and connects the tenant network to the service VM > network via a neutron router (creating ports on the router which help connect > the 2 different subnets), thus the tenant VMs can ping/access the service > VM. There is no question and/or need to have 2 service VMs talk to each > other, because they are serving different tenants, thus they need to be > isolated! Sorry for the bad expression, yes, I mean service VM. I don't agree with "each share-server VM (also called service VM) is serving all VMs created by a tenant", because from my practice, 1 service VM is created for 1 share-network. A share-network -> a service VM -> shares which are created with the same "share-network". You are probably right, I don't remember the insides of share-network now, but I always created 1 share-network, so I always had the notion of 1 service VM per tenant. A tenant (the tenant concept in keystone) can have more than one share-network, and even the same neutron network subnet can be "registered" to several share-networks if users really want that. Actually, I didn't see strong connections between manila shares and tenants (the concept in keystone), but that's another topic. But, I think I get your point about service VMs needing to be isolated. I agree with that. 
> Typically GlusterFS will be deployed on storage nodes (by the storage admin) > that are NOT part of openstack. So having the share-server talk/connect > with GlusterFS is equivalent to saying "Allow an openstack VM to talk with > non-openstack nodes", in other words "Connect the neutron network to a > non-neutron network" (also called the provider/host network). This is the part I disagree with. What exactly do you disagree with here? > This is achieved by ensuring your openstack Network node is configured to > forward tenant traffic to the provider network, which involves neutron skills > and some neutron black magic :) > To know what this involves, please see the section "Setup devstack networking to > allow Nova VMs access external/provider network" in my blog @ > http://dcshetty.blogspot.in/2015/01/using-glusterfs-native-driver-in.html > This should be taken care of by your openstack network admin, who should > configure the openstack network node to allow this to happen; this isn't a > Manila / GlusterFS driver responsibility, rather it's an openstack > deployment option that's taken care of by the network admins during openstack > deployment. What I want to do is enable GlusterFS with Manila with Ganesha in my environment. I'm working as a cloud admin. So, what I'm expecting is: 1. I need to prepare a GlusterFS cluster. 2. I need to prepare images and other stuff for the service VM. Right now, I think all we support is running Ganesha inside the GlusterFS server node only. I don't think we have qualified the scenario where Ganesha is running in a service VM. The blueprint talks about doing this in the near future. CCing Csaba and Ramana who are the right folks to comment more on this. 3. I need to configure my GlusterFS cluster's information (IPs, volumes) into manila.conf => All things can work if I start Manila now. Yeah! The only thing I know is manila would create VMs to connect to my GlusterFS cluster. Currently, the neutron network subnet where service VMs work is created by Manila. 
Manila calls them service_network and service_subnet. So, I don't think it is possible for me to configure the network before I create shares. service_network and service_subnet are pre-created, I thought? Even if they aren't, you can bridge the service_network with the provider network after the service_network is created (ideally it should have been pre-created). Another problem is that there is no router I can use to let the service_network connect to the GlusterFS cluster, because the service_subnet is already connected to the user's router (if connect_share_server_to_tenant_network = False). If you read my blog, it talks about connecting the tenant network to the GlusterFS cluster which is on the host/provider network. For your case, it maps to connecting the service VM (service_network and service_subnet) to the GlusterFS cluster. You can either use the existing router or create a new router and have it connect the
Re: [openstack-dev] [Manila]Question about gateway-mediated-with-ganesha
Yes, I'm asking about plans for gateway-mediated-with-ganesha. I want to know what you would do to achieve "And later the Ganesha core would be extended to use the infrastructure used by generic driver to provide network separated multi-tenancy. The core would manage Ganesha service running in the service VMs, and the VMs themselves that reside in share networks." Because after I studied the current infrastructure of the generic driver, I guess directly using it for Ganesha wouldn't work. This is what I have learned from the code: Manila creates the service_network and service_subnet based on configurations in manila.conf: service_network_name = manila_service_network service_network_cidr = 10.254.0.0/16 service_network_division_mask = 28 The service_network is created when the manila-share service starts. A service_subnet is created when the manila-share service gets a share create command and no share-server exists for the current share-network. => The service_subnet is created at the same time as the share-server. Yes, it can be pre-created; manila would check if a subnet with no name already exists. Thanks. -chen
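For anyone puzzling over what those config options imply: the division mask carves the /16 service network into one /28 per share-network's service VM. A quick sketch with the stdlib ipaddress module shows what that yields (the CIDR and mask values are taken from the manila.conf options quoted above):

```python
import ipaddress

# The service network and division mask from the quoted manila.conf:
# Manila carves one /28 out of this /16 for each share-network.
service_net = ipaddress.ip_network("10.254.0.0/16")
subnets = list(service_net.subnets(new_prefix=28))

print(len(subnets))              # 4096 possible /28 service subnets
print(subnets[0])                # 10.254.0.0/28
print(subnets[0].num_addresses)  # 16 addresses per service subnet
```

So with the defaults shown, the deployment can hold up to 4096 share-networks, each getting a 16-address service subnet for its share-server.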
Generic driver creates a service VM (if not already present for that tenant) as part of creating a new share and connects the tenant network to the service VM network via a neutron router (it creates ports on the router which help connect the 2 different subnets), thus the tenant VMs can ping/access the service VM. There is no question and/or need to have 2 service VMs talk to each other, because they are serving different tenants, thus they need to be isolated! Sorry for the bad expression, yes, I mean the service VM. I don't agree with "each share-server VM (also called service VM) is serving all VMs created by a tenant", because in my experience, 1 service VM is created for 1 share-network. A share-network -> a service VM -> the shares which are created with that same share-network. You are probably right; I don't remember the internals of share-network now, but I always created 1 share-network, so I always had the notion of 1 service VM per tenant. A tenant (the tenant concept in keystone) can have more than one share-network, and even the same neutron network subnet can be "registered" to several share-networks if the user wants to do that. Actually, I didn't see a strong connection between manila shares and tenants (the concept in keystone), but that is another topic. But I think I get your point about service VMs needing to be isolated. I agree with that. Typically GlusterFS will be deployed on storage nodes (by the storage admin) that are NOT part of openstack. So having the share-server talk/connect with GlusterFS is equivalent to saying "allow an openstack VM to talk with non-openstack nodes", in other words, connect the neutron network to a non-neutron network (also called the provider/host network). This is the part I disagree with. What exactly do you disagree with here?
This is achieved by ensuring your openstack network node is configured to forward tenant traffic to the provider network, which involves neutron skills and some neutron black magic :) To know what this involves, please see the section "Setup devstack networking to allow Nova VMs access external/provider network" in my blog @ http://dcshetty.blogspot.in/2015/01/using-glusterfs-native-driver-in.html This should be taken care of by your openstack network admin, who should configure the openstack network node to allow this to happen; this isn't a Manila / GlusterFS driver responsibility, rather it's an openstack deployment option that's taken care of by the network admins during openstack deployment. What I want to do is enable GlusterFS with Manila with Ganesha in my environment. I'm working as a cloud admin. So, what I'm expecting is: 1. I need to prepare a GlusterFS cluster 2. I need to prepare images and other stuff for the service VM Right now, I think all we support is running Ganesha inside the GlusterFS server node only. I don't think we have qualified the scenario where Ganesha is running in a service VM. The blueprint talks about doing
Re: [openstack-dev] [Manila]Question about gateway-mediated-with-ganesha
Hi Deepak,
> When you say VM, it's confusing whether you are referring to the service VM or a
> tenant VM. Since you are also saying share-server, I presume you mean the
> service VM!
> IIUC each share-server VM (also called service VM) serves all VMs
> created by a tenant. In other words, the generic driver creates 1 service VM
> per tenant, and hence serves all the VMs (tenant VMs) created by that tenant.
> Manila experts on the list can correct me if I am wrong here. Generic
> driver creates a service VM (if not already present for that tenant) as part
> of creating a new share and connects the tenant network to the service VM
> network via a neutron router (it creates ports on the router which help connect
> the 2 different subnets), thus the tenant VMs can ping/access the service
> VM. There is no question and/or need to have 2 service VMs talk to each
> other, because they are serving different tenants, thus they need to be
> isolated!
Sorry for the bad expression, yes, I mean the service VM. I don't agree with "each share-server VM (also called service VM) is serving all VMs created by a tenant", because in my experience, 1 service VM is created for 1 share-network. A share-network -> a service VM -> the shares which are created with that same share-network. A tenant (the tenant concept in keystone) can have more than one share-network, and even the same neutron network subnet can be "registered" to several share-networks if the user wants to do that. Actually, I didn't see a strong connection between manila shares and tenants (the concept in keystone), but that is another topic. But I think I get your point about service VMs needing to be isolated. I agree with that.
> Typically GlusterFS will be deployed on storage nodes (by the storage admin)
> that are NOT part of openstack.
> So having the share-server talk/connect
> with GlusterFS is equivalent to saying "allow an openstack VM to talk with
> non-openstack nodes", in other words, connect the neutron network to a
> non-neutron network (also called the provider/host network).
This is the part I disagree with.
> This is achieved by ensuring your openstack network node is configured to
> forward tenant traffic to the provider network, which involves neutron skills
> and some neutron black magic :)
> To know what this involves, please see the section "Setup devstack networking to
> allow Nova VMs access external/provider network" in my blog @
> http://dcshetty.blogspot.in/2015/01/using-glusterfs-native-driver-in.html
> This should be taken care of by your openstack network admin, who should
> configure the openstack network node to allow this to happen; this isn't a
> Manila / GlusterFS driver responsibility, rather it's an openstack
> deployment option that's taken care of by the network admins during openstack
> deployment.
What I want to do is enable GlusterFS with Manila with Ganesha in my environment. I'm working as a cloud admin. So, what I'm expecting is: 1. I need to prepare a GlusterFS cluster 2. I need to prepare images and other stuff for the service VM 3. I need to configure my GlusterFS cluster's information (IPs, volumes) into manila.conf => All things can work if I start Manila now. Yeah! The only thing I know is that Manila would create VMs to connect to my GlusterFS cluster. Currently, the neutron network subnet where the service VMs work is created by Manila; Manila calls them service_network and service_subnet. So, I don't think it is possible for me to configure the network before I create shares. Another problem is there is no router I can use to let service_network connect to the GlusterFS cluster, because service_subnet is already connected to the user's router (if connect_share_server_to_tenant_network = False). Thanks.
-chen From: Deepak Shetty [mailto:dpkshe...@gmail.com] Sent: Thursday, February 12, 2015 1:24 PM To: Li, Chen Subject: Fwd: [openstack-dev] [Manila]Question about gateway-mediated-with-ganesha Li Chen, Forwarding it to you, since you didn't receive the mail below in your Outlook. Hope you get this one! While responding, please Cc the openstack-dev list, so that the discussion can continue on the public list. thanx, deepak -- Forwarded message -- From: Deepak Shetty dpkshe...@gmail.com Date: Wed, Feb 11, 2015 at 2:31 PM Subject: Re: [openstack-dev] [Manila]Question about gateway-mediated-with-ganesha To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org On Tue, Feb 10, 2015 at 1:51 AM, Li, Chen chen...@intel.com wrote: Hi list, I'm trying to understand how Manila uses NFS-Ganesha, and hope to figure out what I need to do to use it once all patches have been merged (only one patch is still under review, right?). I have read: https://wiki.openstack.org/wiki/Manila/Networking/Gateway_mediated
Re: [openstack-dev] [nova][api] How to handle API changes in contrib/*.py
Hi Claudiu, 2015-02-03 7:51 GMT+09:00 Claudiu Belu cb...@cloudbasesolutions.com: There has been some discussion on what nova-api should return after a change in the API itself. The change that generated this discussion is the API change for 2.2: https://review.openstack.org/#/c/140313/23 - **2.2** Added keypair type. A user can request the creation of a certain 'type' of keypair (ssh or x509). If no keypair type is specified, then the default 'ssh' type of keypair is created. Currently, this change was done in plugins/v3/keypairs.py, so now the 2.2 version will also return the keypair type on keypair-list, keypair-show, and keypair-create. Microversioning was used, so this behaviour is valid only if the user requests the 2.2 version. Version 2.1 will not accept the keypair type as an argument, nor will it return the keypair type. The above behavior seems reasonable based on the experience of the microversions discussion. Now, the main problem is contrib/keypairs.py; microversioning cannot be applied there. The current commit filters out the keypair type, so it won't be returned. But there have been reviews stating that returning the keypair type is a backwards-compatible change. Before this, there was a review stating that the keypair type should not be returned. So, finally, my question is: how should the API change be handled in contrib/keypairs.py? I think the v2 API (contrib/keypairs.py) should basically not return the keypair type. Before microversions, when adding new parameters which are backwards *compatible*, we needed to add a dummy extension to signal the change to clients. For example, you can see the server_group_quotas[1] extension, which is almost empty, and the base server_groups extension switches its behavior based on whether the server_group_quotas extension is loaded. [2] So if the keypair type needs to be returned, we would need to add that kind of dummy extension to v2 (under contrib/), but this development style produces unnecessary code.
That is one of the reasons we are implementing microversions: to avoid dummy extensions. Thanks Ken Ohmichi --- [1]: https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/contrib/server_group_quotas.py [2]: https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/contrib/server_groups.py#L196 __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
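For readers who haven't followed the microversions work, the 2.2 behavior described in this thread can be sketched as follows. This is a hypothetical illustration, not actual Nova code — the `format_keypair` helper and dict shapes are made up — but it shows the pattern: a response field is gated on the requested microversion, so 2.1 clients never see it and 2.2 clients get it with the documented 'ssh' default.

```python
# Hypothetical sketch (NOT actual Nova code): gate the keypair 'type'
# field on the requested microversion, per the 2.2 change above.
def format_keypair(keypair, req_version):
    body = {"name": keypair["name"], "public_key": keypair["public_key"]}
    if req_version >= (2, 2):
        # 2.2+ returns the type, defaulting to 'ssh' when unspecified.
        body["type"] = keypair.get("type", "ssh")
    return body

kp = {"name": "mykey", "public_key": "ssh-rsa AAAA", "type": "x509"}
print(format_keypair(kp, (2, 1)))  # no 'type' key in the response
print(format_keypair(kp, (2, 2)))  # 'type': 'x509' included
```

The point of contention in the thread is that contrib/keypairs.py (the v2 API) has no `req_version` equivalent to branch on, which is why the old-style dummy-extension mechanism would otherwise be needed.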
Re: [openstack-dev] [congress] following up on releasing kilo milestone 2
so git checkout master git pull https://git.openstack.org/stackforge/congress.git dbef982ea72822e0b7acc16da9b6ac89d3cf3530 git tag -s 2015.1.0b2 git push gerrit 2015.1.0b2 On Wed, Feb 11, 2015 at 5:26 PM, James E. Blair cor...@inaugust.com wrote: sean roberts seanrobert...@gmail.com writes: At our last meeting I volunteered to figure out what we need to do for milestone 2. Here is what I propose: Repo cleanup: We tag today's release with 2015.1.0b2. This copies the tag format the other openstack projects are using. I believe below is the correct syntax. Let me know if there is a cleaner way and/or if setting the head pointer to a specific commit is more accurate. git checkout master git pull --ff-only git tag -s 2015.1.0b2 git push gerrit 2015.1.0b2 Yes, this matches: http://docs.openstack.org/infra/manual/drivers.html#tagging-a-release But yes, if you want to tag a specific commit instead of the tip of master, you can do that. -Jim -- ~ sean
Re: [openstack-dev] [congress][Policy][Copper]Collaboration between OpenStack Congress and OPNFV Copper
Hi Tim, It would be great to meet with members of the Congress project if possible at our meetup in Santa Rosa. I plan by then to have a basic understanding of Congress and some test driver apps / use cases to demo at the meetup. The goal is to assess the current state of Congress support for the use cases on the OPNFV wiki: https://wiki.opnfv.org/copper/use_cases I would be doing the same with ODL but I'm not as far along in getting ready with it. So the opportunity to discuss the use cases under Copper and the other policy-related projects (fault management: https://wiki.opnfv.org/doctor, resource management: https://wiki.opnfv.org/promise, resource scheduler: https://wiki.opnfv.org/requirements_projects/resource_scheduler) with Congress experts would be great. Once we understand the gaps in what we are trying to build in OPNFV, the goal for our first OPNFV release is to create blueprints for new work in Congress. We might also just find some bugs and get directly involved in Congress to address them, or start a collaborative development project in OPNFV for that. TBD Thanks, Bryan Sullivan | Service Standards | ATT From: Tim Hinrichs [mailto:thinri...@vmware.com] Sent: Wednesday, February 11, 2015 10:22 AM To: OpenStack Development Mailing List (not for usage questions) Cc: SULLIVAN, BRYAN L; HU, BIN; Rodriguez, Iben; Howard Huang Subject: Re: [openstack-dev] [congress][Policy][Copper]Collaboration between OpenStack Congress and OPNFV Copper Hi Zhipeng, We'd be happy to meet. Sounds like fun! I don't know of anyone on the Congress team who is planning to attend the LF collaboration summit. But we might be able to send a couple of people if it's the only real chance to have a face-to-face. Otherwise, there are a bunch of us in and around Palo Alto. And of course, phone/google hangout/irc are fine options as well.
Tim On Feb 11, 2015, at 8:59 AM, Zhipeng Huang zhipengh...@gmail.com wrote: Hi Congress Team, As you might already know, we have a project in OPNFV covering deployment policy called Copper (https://wiki.opnfv.org/copper), in which we identify Congress as one of the upstream projects that we need to put our requirements to. Our team has been working on setting up a simple openstack environment with Congress integrated that could demo simple use cases for policy deployment. Would it be possible for you guys and our team to find a time to do a Copper/Congress interlock meeting, during which the Congress team could introduce how to best integrate Congress with vanilla openstack? Will some of you attend the LF Collaboration Summit? Thanks a lot :) -- Zhipeng (Howard) Huang Standard Engineer IT Standard Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhip...@huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipe...@uci.edu Office: Calit2 Building Room 2402 OpenStack, OpenDaylight, OpenCompute aficionado
Re: [openstack-dev] openstack/requirements failure
Thanks Ben Nemec. I have proposed the change: https://review.openstack.org/154989. -Original Message- From: Ben Nemec [mailto:openst...@nemebean.com] Sent: Wednesday, February 11, 2015 11:25 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] openstack/requirements failure On 02/11/2015 11:33 AM, Manickam, Kanagaraj wrote: Hi In horizon I am trying to increase python-heatclient>=0.3.0 as part of the review https://review.openstack.org/#/c/154952/ and it failed with the following error: Requirement python-heatclient>=0.3.0 does not match openstack/requirements value python-heatclient>=0.2.9 More details at https://jenkins03.openstack.org/job/gate-horizon-requirements/110/console Could anyone help me resolve this issue? Thanks. Requirements changes have to be proposed to global requirements first. There's a full explanation here: http://git.openstack.org/cgit/openstack/requirements/tree/README.rst
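The gate failure above boils down to the project's requirement line disagreeing with the value pinned in openstack/requirements. A simplified, hypothetical sketch of that comparison (the `requirement_matches` helper is made up for illustration and is not the real gate code, which parses full PEP 440 specifiers):

```python
# Simplified, hypothetical sketch (NOT the real gate check): the
# requirements job fails when a project's version spec for a package
# does not match the value recorded in openstack/requirements.
def requirement_matches(project_line, global_line):
    pkg_p, _, spec_p = project_line.partition(">=")
    pkg_g, _, spec_g = global_line.partition(">=")
    return pkg_p == pkg_g and spec_p == spec_g

# The Horizon bump fails until openstack/requirements is updated first:
print(requirement_matches("python-heatclient>=0.3.0",
                          "python-heatclient>=0.2.9"))  # False
```

Hence the workflow Ben describes: land the bump in global requirements first, then the per-project change passes this check.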
Re: [openstack-dev] [Manila]Question about gateway-mediated-with-ganesha
On Thu, Feb 12, 2015 at 7:32 AM, Li, Chen chen...@intel.com wrote: Yes, I'm asking about plans for gateway-mediated-with-ganesha. I want to know what you would do to achieve "*And later the Ganesha core would be extended to use the infrastructure used by the generic driver to provide network-separated multi-tenancy. The core would manage the Ganesha service running in the service VMs, and the VMs themselves that reside in share networks.*" Because after I studied the current infrastructure of the generic driver, I guess directly using it for Ganesha would not work. You may be right, but we cannot be sure until we test, qualify, and validate against a real setup. Also, there is no infrastructure to run Ganesha in a service VM, so the major work would be to bundle Ganesha, make it available as a service VM image, and use that image instead of the existing service VM image. Csaba and Ramana (in CC) can comment more on this. This is what I have learned from the code: Manila creates service_network and service_subnet based on configuration in manila.conf: *service_network_name = manila_service_network* service_network_cidr = 10.254.0.0/16 So even if the service_network or service_subnet is not created yet, this information from the conf file can be used by the network admin to bridge/connect the service network (whenever it comes up) with the host/provider network. service_network_division_mask = 28 service_network is created when the manila-share service starts. service_subnet is created when the manila-share service gets a share-create command and no share-server exists for the current share-network. => service_subnet is created at the same time as the share-server. Thanks for clarifying. thanx, deepak
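For anyone working out what addresses the service-network settings quoted in this thread actually produce, here is a quick standalone sketch — plain Python using the stdlib ipaddress module, not Manila code — showing how service_network_cidr = 10.254.0.0/16 is carved into per-share-server subnets by service_network_division_mask = 28:

```python
import ipaddress

# Sketch (not Manila code): the quoted manila.conf values mean the
# 10.254.0.0/16 service network is divided into /28 service subnets,
# one per share-server.
service_network = ipaddress.ip_network("10.254.0.0/16")
subnets = service_network.subnets(new_prefix=28)

first = next(subnets)
print(first)                # 10.254.0.0/28
print(first.num_addresses)  # 16
```

A /16 split into /28s yields 2**(28-16) = 4096 possible service subnets of 16 addresses each, which is the pool the network admin would need to bridge to the provider network for the Ganesha-in-service-VM scenario discussed above.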