Re: [openstack-dev] [Cinder] Need +A (workflow +1) for https://review.openstack.org/156940
On 03/03/2015 02:17 AM, Deepak Shetty wrote:

Hi all, Can someone give +A to https://review.openstack.org/156940 - we have the rest. Need to get this merged for glusterfs CI to pass the snapshot_when_volume_in_use testcases. thanx, deepak

Do not request reviews on the mailing list. Please spend time in the project channel to which you wish to contribute and discuss patch status in there.

Thank you, Anita.

__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [nova] Intended behavior for instance.host on reschedule?
Hi Folks, I was wondering if anyone can comment on the intended behavior of how instance.host is supposed to be set during reschedule operations. For example, take this scenario:

1. Assume an environment with a single host... call it host-1
2. Deploy a VM, but force an exception in the spawn path somewhere to simulate some hypervisor error
3. The scheduler correctly attempts to reschedule the VM, and ultimately ends up (correctly) with a NoValidHost error because there was only 1 host
4. However, the instance.host (e.g., [nova show vm]) is still showing 'host-1' -- is this the expected behavior?

It seems like perhaps the claim should be reverted (read: instance.host nulled out) when we take the exception path during spawn in step #2 above, but maybe I'm overlooking something? This behavior was observed on a Kilo base from a couple weeks ago, FWIW. Thoughts/comments?

Thanks, Joe
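The scenario above can be sketched as a toy in plain Python. This is not nova code -- the names `Instance` and `spawn_on` are invented for illustration -- but it captures the claim/revert ordering being discussed: the host is recorded before spawn, and the question is whether the failed claim should be nulled out on the exception path.

```python
# Toy illustration only -- not nova code. "Instance" and "spawn_on"
# are invented names for the claim/revert behavior under discussion.

class Instance:
    def __init__(self):
        self.host = None  # analogous to instance.host


def spawn_on(instance, host, revert_on_failure=False):
    instance.host = host  # the claim records the host before spawn
    try:
        raise RuntimeError("simulated hypervisor error")  # step 2
    except RuntimeError:
        if revert_on_failure:
            instance.host = None  # proposed: revert the claim
        raise


# Current behavior: host stays set even though spawn failed (step 4).
vm = Instance()
try:
    spawn_on(vm, "host-1")
except RuntimeError:
    pass
assert vm.host == "host-1"

# Proposed behavior: the failed claim is reverted.
vm2 = Instance()
try:
    spawn_on(vm2, "host-1", revert_on_failure=True)
except RuntimeError:
    pass
assert vm2.host is None
```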
Re: [openstack-dev] [nova] Intended behavior for instance.host on reschedule?
On Mar 3, 2015, at 8:34 AM, Vishvananda Ishaya vishvana...@gmail.com wrote:

I'm pretty sure it has always done this: leave the host set on the final scheduling attempt. I agree that this could be cleared, which would free up room for future scheduling attempts.

Vish

Thanks Vish for the comment. Do we know if this is an intended feature, or would we consider this a bug? It seems like we could free this up, as you said, to allow room for additional VMs, especially since we know it didn't successfully deploy anyway?

On Mar 3, 2015, at 12:15 AM, Joe Cropper cropper@gmail.com wrote: [original question quoted in full above; snipped]

Thanks, Joe
Re: [openstack-dev] auto-abandon changesets considered harmful (was Re: [stable][all] Revisiting the 6 month release cycle [metrics])
From: John Griffith [mailto:john.griffi...@gmail.com]
Sent: 03 March 2015 14:46
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] auto-abandon changesets considered harmful (was Re: [stable][all] Revisiting the 6 month release cycle [metrics])

On Tue, Mar 3, 2015 at 7:18 AM, Kuvaja, Erno kuv...@hp.com wrote:

-----Original Message-----
From: Thierry Carrez [mailto:thie...@openstack.org]
Sent: 03 March 2015 10:00
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] auto-abandon changesets considered harmful (was Re: [stable][all] Revisiting the 6 month release cycle [metrics])

Doug Wiegley wrote: [...] But I think some of the push back in this thread is challenging this notion that abandoning is negative, which you seem to be treating as a given.

I don't. At all. And I don't think I'm alone. I was initially on your side: the abandoned patches are not really deleted, and you can easily restore them. So "abandoned" could just mean "inactive" or "stale" in our workflow, and people who actually care can easily unabandon them to make them active again. And since abandoning is currently the only way to permanently get rid of stale / -2ed / undesirable changes anyway, we should just use that.

But words matter, especially for new contributors. For those contributors, someone else abandoning a proposed patch of theirs is a pretty strong move. To them, abandoning should be their decision, not yours (reviewers can -2 patches).

Launchpad used to have a similar struggle between real meaning and workflow meaning. It used to have a single status for rejected bugs (Invalid). In the regular bug workflow, that status would be used for valid bugs that you just don't want to fix. But then that created confusion for people outside that workflow, since the wrong word was used. So WontFix was introduced as a similar closed state (and then they added Opinion because WontFix seemed too harsh, but that's another story).

We have (as always) tension around the precise words we use. You say "Abandon" is generally used in our community to mean "set inactive". Jim says "Abandon" should mean abandon, and therefore should probably be left to the proposer, and other ways should be used to set inactive.

There are multiple solutions to this naming issue. You can rename "abandon" so that it actually means "set inactive" or "mark as stale". Or you can restrict "abandon" to the owner of a change, stop defaulting to is:open to list changes, and introduce features in Gerrit so that an is:active query would give you the right thing. But that query would need to be the Gerrit default, not some obscure query you can run or add to your dashboard -- otherwise we are back at step 1.

--
Thierry Carrez (ttx)

I'd like to ask a few questions regarding this, as I'm very much pro cleaning the review queues of abandoned stuff.

How often do people (committer/owner/_reviewer_) abandon changes actively? By reviewer here I do not mean only cores marking other people's abandoned patch sets as abandoned; I mean how many times have you seen a person stating that (s)he will not review a change anymore? I haven't seen that, but I've seen lots of changes where a person reviewed it at some early stage and, 10 revisions later, still has not given input again. What I'm trying to say here is that it does not make the change any less abandoned if it's not marked abandoned by the owner. It's rarely an active process.

Regarding the contributor experience, I'd say it's way more harmful not to mark abandoned changes abandoned than to do so. If the person really doesn't know and can't figure out how to a) join the mailing list, b) get on IRC, c) write a comment to the change, or d) reach out to anyone in the project by any other means to express that (s)he does not know how to fix the issue flagged weeks ago, I'm not sure we will miss that person as a contributor so much either. And yes, the message should be strong, telling that the change has passed the point where it will most probably have no traction anymore and that active action needs to be taken to continue the workflow.

At the same time, let's turn this around. How many new contributors do we drive away because of the reaction "Whoa, this many changes have been sitting here for weeks, I have no chance to get my change in quickly"?

Specifically to the Nova, Swift and Cinder folks: how much benefit do you see for bug lifecycle management from the abandoning? I would assume bugs that carry a message that their proposed fix was abandoned get way more traction than the ones where the fix has been stale in the queue for weeks. And how many of those abandoned ones get reactivated?

Last, I'd like to point out that life is full of disappointments. We should not try to keep our community in a bubble where no-one ever gets disappointed and feelings never get hurt. I do not appreciate that approach in the current trend of raising children, and I definitely do not appreciate that approach towards adults.
Re: [openstack-dev] [nova] what's the merge plan for current proposed microversions?
On 03/03/2015 10:24 AM, Claudiu Belu wrote:

Hello. I've talked with Christopher Yeoh yesterday and I've asked him about the microversions and when they will be able to merge. He said that for now, this commit had to get in before any other microversions: https://review.openstack.org/#/c/159767/ He also said that he'll double-check everything, and if everything is fine, the first microversions should be getting in soon after.

Best regards, Claudiu Belu

I just merged that one this morning, so hopefully we can dislodge.

From: Alexandre Levine [alev...@cloudscaling.com]
Sent: Tuesday, March 03, 2015 4:22 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] what's the merge plan for current proposed microversions?

Bump. I'd really appreciate some answers to the question Sean asked. I still have the 2.4 in my review (the very one Sean mentioned) but it seems that it might not be the case.

Best regards, Alex Levine

On 3/2/15 2:30 PM, Sean Dague wrote: This change for the additional attributes for ec2 looks like it's basically ready to go, except it has the wrong microversion on it (as they anticipated the other changes landing ahead of them) - https://review.openstack.org/#/c/155853 What's the plan for merging the outstanding microversions? I believe we're all conceptually approved on all of them, and it's an important part of actually moving forward on the new API. It seems like we're in a bit of a holding pattern on all of them right now, and I'd like to make sure we start merging them this week so that they have breathing space before the freeze.

-Sean

--
Sean Dague
http://dague.net
[openstack-dev] [devstack]Specific Juno version
Hi, Is there a way to specify the Juno version to be installed using devstack? For now we can only specify stable/juno:

git clone -b stable/juno https://github.com/openstack-dev/devstack.git /opt/stack/devstack

but this installs 2014.2.3, which appears to be still under development (it gives some errors). How can we specify 2014.2.2 for components (e.g. cinder)?

Thanks, Eduard

--
Eduard Biceri Matei, Senior Software Developer
www.cloudfounders.com | eduard.ma...@cloudfounders.com
CloudFounders, The Private Cloud Software Company
Re: [openstack-dev] [devstack]Specific Juno version
You can specify specific tags for each component in your local.conf file, e.g.:

NOVA_BRANCH=2014.2
CINDER_BRANCH=2014.2
GLANCE_BRANCH=2014.2
HORIZON_BRANCH=2014.2
KEYSTONE_BRANCH=2014.2
KEYSTONECLIENT_BRANCH=2014.2
NOVACLIENT_BRANCH=2014.2
SWIFT_BRANCH=2014.2
HEAT_BRANCH=2014.2

HTH
Alec

From: Eduard Matei eduard.ma...@cloudfounders.com
Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: Tuesday, March 3, 2015 at 7:42 AM
To: OpenStack Development Mailing List (not for usage questions) OpenStack-dev@lists.openstack.org
Subject: [openstack-dev] [devstack]Specific Juno version

[original question quoted above; snipped]
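For Eduard's specific case (pinning the 2014.2.2 point release rather than a branch head), the same variables should accept a tag name, since devstack checks out whatever git ref they point at. A sketch, assuming the 2014.2.2 tag exists in each project's repository:

```shell
# local.conf sketch -- assumes each project repo carries a 2014.2.2 tag
[[local|localrc]]
NOVA_BRANCH=2014.2.2
CINDER_BRANCH=2014.2.2
GLANCE_BRANCH=2014.2.2
KEYSTONE_BRANCH=2014.2.2
```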
[openstack-dev] [congress] missing today's meeting
I'm in an all day board meeting. My update: https://launchpad.net/congress/kilo is cleaned up. The tagged https://github.com/stackforge/congress/tree/2015.1.0b2 is out. I am still working out getting the tarred file out to http://tarballs.openstack.org/ as part of the process. ~ sean
Re: [openstack-dev] auto-abandon changesets considered harmful (was Re: [stable][all] Revisiting the 6 month release cycle [metrics])
On Tue, Mar 3, 2015 at 8:46 AM, John Griffith john.griffi...@gmail.com wrote:
On Tue, Mar 3, 2015 at 7:18 AM, Kuvaja, Erno kuv...@hp.com wrote:

[Erno Kuvaja's message, including the quoted Thierry Carrez text, is identical to the copy earlier in this thread; snipped except for the tail:]

I do not appreciate that approach on the current trend of raising children and I definitely do not appreciate that approach towards adults. Perhaps the people with bad experience will learn something and get over it or move on. Neither is bad for the
Re: [openstack-dev] [nova] what's the merge plan for current proposed microversions?
Hello. I've talked with Christopher Yeoh yesterday and I've asked him about the microversions and when they will be able to merge. He said that for now, this commit had to get in before any other microversions: https://review.openstack.org/#/c/159767/ He also said that he'll double-check everything, and if everything is fine, the first microversions should be getting in soon after.

Best regards, Claudiu Belu

From: Alexandre Levine [alev...@cloudscaling.com]
Sent: Tuesday, March 03, 2015 4:22 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] what's the merge plan for current proposed microversions?

Bump. I'd really appreciate some answers to the question Sean asked. I still have the 2.4 in my review (the very one Sean mentioned) but it seems that it might not be the case.

Best regards, Alex Levine

On 3/2/15 2:30 PM, Sean Dague wrote: This change for the additional attributes for ec2 looks like it's basically ready to go, except it has the wrong microversion on it (as they anticipated the other changes landing ahead of them) - https://review.openstack.org/#/c/155853 What's the plan for merging the outstanding microversions? I believe we're all conceptually approved on all of them, and it's an important part of actually moving forward on the new API. It seems like we're in a bit of a holding pattern on all of them right now, and I'd like to make sure we start merging them this week so that they have breathing space before the freeze. -Sean
Re: [openstack-dev] Gerrit tooling improvements (was Re: auto-abandon changesets considered harmful)
Sean Dague s...@dague.net writes: Right, I think this is the 'procedural -2' case, which feels like we need another state for things that are being held for procedural reasons, which is unrelated to normal code-review.

We have been looking into that and believe we may be able to do something like that in Gerrit 2.10 using the work-in-progress plugin (to which we might be able to add another state).

-Jim
Re: [openstack-dev] auto-abandon changesets considered harmful (was Re: [stable][all] Revisiting the 6 month release cycle [metrics])
John Griffith john.griffi...@gmail.com writes: Should we just rename this thread to Sensitivity training for contributors?

I do not think that only new contributors might feel it is negative. I think that both some new and long-time contributors do.

My oldest patch is from July -- it's still relevant, just not important. I just picked up and finished a series of four patches that Jay Pipes left sitting for a month without updates -- but they are really good and should be merged. Yesterday, Monty went through and either cleaned up or abandoned his patches that had been sitting for several months. None of these things offend me or anyone else involved, and they are all enabled by the fact that those changes were not abandoned. And none of these people are new contributors (the opposite, in fact).

If I had to deprioritize something I was working on and it was auto-abandoned, I would not find out. It would simply drop from my personal list of patches I am working on, and I would never think about it again. Until, one day perhaps, I see something and think "hey, I thought I fixed that." And after a long time digging, I find a patch that someone else abandoned for me. I would be furious that they had made the decision on my behalf that I would do no more work on that.

Not everyone's workflow is like this. Not everyone feels the same way about the word "abandon". But we have a diversity of contributors in the community and we have a diversity of workflows. We need to help core reviewers focus on patches that are useful to them -- no doubt about it. We need to let new contributors know that we expect them to engage. We have better tools to do both than abandoning, which has negative impacts beyond the immediate purpose of its use.

This thread has uncovered some information about how people use Gerrit and where we need to put effort to make sure patches are prioritized correctly. That is really useful information, and I hope other folks will chime in so we can make sure we cover all the bases.

-Jim
[openstack-dev] Gerrit tooling improvements (was Re: auto-abandon changesets considered harmful)
I feel the need to abandon changes that seem abandoned. I believe this has been covered to death now, so I'm going to shelve that conversation for a while, and talk about missing tooling in Gerrit.

One of the examples of something that was auto-abandoned wrongly was a patch on hold until some future development cycle (the L cycle in the case of nova patches; cinder batches up certain types of code clean-up commits). So, one thing that is definitely missing from the tooling is some way of flagging such patches so that they *don't* get marked as abandoned, at least until some sensible amount of time after they were supposed to get picked back up. The semantics of abandonment *certainly don't fit* patches that are just on hold, but we don't have any way of tagging such patches. Is this something we can fix?

On 2 March 2015 at 20:44, James E. Blair cor...@inaugust.com wrote:

Duncan Thomas duncan.tho...@gmail.com writes: Why do you say auto-abandon is the wrong tool? I've no problem with the 1 week warning if somebody wants to implement it - I can see the value. A change-set that has been ignored for X weeks is pretty much the dictionary definition of abandoned, and restoring it is one mouse click. Maybe put something more verbose in the auto-abandon message than we have been, encouraging those who feel it shouldn't have been marked abandoned to restore it (and respond quicker in future), but other than that we seem to be using the right tool to my eyes.

Why do you feel the need to abandon changes submitted by other people? Is it because you have a list of changes to review, and they persist on that list? If so, let's work on making a better list for you. We have the tools. What query/page/list/etc are you looking at where you see changes that you don't want to see?

-Jim

--
Duncan Thomas
Re: [openstack-dev] [stable] [Glance] Nomination for glance-stable-maint
Nikhil Komawar wrote: I would like to propose Zhi Yan Liu for the role of stable maintainer for Glance program.

I'll reach out to Zhi Yan to make sure he is aware of the stable branch policy [1], then proceed to add him.

[1] https://wiki.openstack.org/wiki/StableBranch#Stable_branch_policy

Cheers,
--
Thierry Carrez (ttx)
Re: [openstack-dev] Gerrit tooling improvements (was Re: auto-abandon changesets considered harmful)
On 3 March 2015 at 17:23, Doug Hellmann d...@doughellmann.com wrote: Does the tool ignore patches with Workflow -1 set (work in progress)?

If it doesn't, then we can easily change it to do so. However, I think a WIP patch that hasn't been updated for months counts as abandoned for any sensible purpose, and so should be marked as such. Sean's comment about a 'procedural do not merge' is pretty bang on really; a -2 equivalent (preferably one that could be removed by any core, rather than having to go chase down a particular person to get it removed) would cover a lot of cases that currently get incorrectly hit by auto-abandon.

--
Duncan Thomas
Re: [openstack-dev] [Devstack] Can't start service nova-novncproxy
You have to make sure it's in ENABLED_SERVICES in stackrc. It was removed by default, but then apparently restored via b9f2e25fa8afb2ea17a89ed76c4fac03689b5f07, so if you have that commit, you should be good. Otherwise, you can either add it to the ENABLED_SERVICES variable (either in stackrc, if in localrc if you've already overridden it there), or just call `enable_service n-novnc` in localrc. Best Regards, Solly Ross - Original Message - From: Chen Li chen...@intel.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Sent: Monday, March 2, 2015 9:59:27 PM Subject: Re: [openstack-dev] [Devstack] Can't start service nova-novncproxy Sorry, what you mean Double-check no make sure that it's enabled ? I do set the following in my local.conf: enable_service n-nonvc NOVA_VNC_ENABLED=True NOVNCPROXY_URL=http://192.168.6.91:6080/vnc_auto.html; VNCSERVER_LISTEN=0.0.0.0 VNCSERVER_PROXYCLIENT_ADDRESS=192.168.6.91 Also, I tried to install package novnc python-novnc by apt-get install. Then I re-run ./stack.sh, the devstack installation failed, and complaining about the version for module six is wrong. In order to make my devstack work again, I removed the 2 packages, but devstack installation still failed due to the same issue. Thanks. -chen -Original Message- From: Solly Ross [mailto:sr...@redhat.com] Sent: Tuesday, March 03, 2015 12:52 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Devstack] Can't start service nova-novncproxy Double-check no make sure that it's enabled. A couple months ago, noVNC got removed from the standard install because devstack was installing it from GitHub. 
- Original Message - From: Chen Li chen...@intel.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Sent: Sunday, March 1, 2015 7:14:51 PM Subject: Re: [openstack-dev] [Devstack] Can't start service nova-novncproxy That's' the most confusing part. I don't even have a log for service nova-novncproxy. Thanks. -chen -Original Message- From: Kashyap Chamarthy [mailto:kcham...@redhat.com] Sent: Monday, March 02, 2015 12:16 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Devstack] Can't start service nova-novncproxy On Sat, Feb 28, 2015 at 06:20:54AM +, Li, Chen wrote: Hi all, I'm trying to install a fresh all-in-one openstack environment by devstack. After the installation, all services looks well, but I can't open instance console in Horizon. I did a little check, and found service nova-novncproxy was not started ! What do you see in your 'screen-n-vnc.log' (I guess) log? I don't normally run Horizon or nova-vncproxy (only n-cpu, n-sch, n-cond), these are the ENABLED_SERVICES in my minimal DevStack config (Nova, Neutron, Keystone and Glance): ENABLED_SERVICES=g-api,g-reg,key,n-api,n-cpu,n-sch,n-cond,mysql,rabbit ,dstat,quantum,q-svc,q-agt,q-dhcp,q-l3,q-meta [1] https://kashyapc.fedorapeople.org/virt/openstack/2-minimal_devstack_lo calrc.conf Anyone has idea why this happened ? 
Here is my local.conf : http://paste.openstack.org/show/183344/ My OS is: Ubuntu 14.04 trusty, 3.13.0-24-generic -- /kashyap __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [all] oslo.config 1.8.0 release
The Oslo team is pleased to announce the release of: oslo.config 1.8.0: Oslo Configuration API

For more details, please see the git log history below and: http://launchpad.net/oslo/+milestone/1.8.0

Please report issues through launchpad: http://bugs.launchpad.net/oslo

Changes in oslo.config 1.7.0..1.8.0
-----------------------------------
b392cf1 Add exception handling for entry points
be0864a Add expose_opt to CfgFilter

Diffstat (except docs and test files)
-------------------------------------
oslo_config/cfgfilter.py | 51 -
oslo_config/generator.py | 12 ++---
5 files changed, 145 insertions(+), 10 deletions(-)

__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [stable] [Glance] Nomination for glance-stable-maint
Thank you, Thierry! -Nikhil From: Thierry Carrez thie...@openstack.org Sent: Tuesday, March 3, 2015 10:26 AM To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [stable] [Glance] Nomination for glance-stable-maint Nikhil Komawar wrote: I would like to propose Zhi Yan Liu for the role of stable maintainer for Glance program. I'll reach out to Zhi Yan to make sure he is aware of the stable branch policy[1], then proceed to add him. [1] https://wiki.openstack.org/wiki/StableBranch#Stable_branch_policy Cheers, -- Thierry Carrez (ttx) __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [all] cliff 1.10.0 release
The Oslo team is pleased to announce the release of: cliff 1.10.0: Command Line Interface Formulation Framework

For more details, please see the git log history below and: https://launchpad.net/python-cliff/+milestone/1.10.0

Please report issues through launchpad: https://bugs.launchpad.net/python-cliff

Changes in cliff 1.9.0..1.10.0
------------------------------
9d37798 Allow to call initialize_app when running --help
08ed6b4 Hide prompt in batch/pipe mode
1f21e0f Fix pep8 tests for lambda
01c8d8f Updated from global requirements
f72da89 Fix git repo urls in tox.ini
da46b64 Add deprecated attribute to commands
b6294db Workflow documentation is now in infra-manual

Diffstat (except docs and test files)
-------------------------------------
CONTRIBUTING.rst | 2 +-
cliff/app.py | 39 +++
cliff/command.py | 3 +++
cliff/help.py | 2 ++
cliff/interactive.py | 7 ++-
requirements.txt | 2 +-
tox.ini | 8
11 files changed, 101 insertions(+), 26 deletions(-)

Requirements updates
--------------------
diff --git a/requirements.txt b/requirements.txt
index bf06e82..754a591 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -9 +9 @@ pyparsing>=2.0.1
-six>=1.7.0
+six>=1.9.0

__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] Hyper-V Meeting
Hi everyone, We are replacing our CI network infrastructure today. Therefore it is necessary for us to postpone the Hyper-V meeting until next week. Peter J. Pouliot CISSP Microsoft Enterprise Cloud Solutions C:\OpenStack New England Research Development Center 1 Memorial Drive Cambridge, MA 02142 P: 1.(857).4536436 E: ppoul...@microsoft.com __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle
James Bottomley wrote: Actually, this is possible: look at Linux, it freezes for 10 weeks of a 12 month release cycle (or 6 weeks of an 8 week one). More on this below. I'd be careful with comparisons with the Linux kernel. First it's a single bit of software, not a collection of interconnected projects. Second it's at a very different evolution/maturity point (20 years old vs. 0-4 years old for OpenStack projects). Finally it sits at a different layer, so there is less need for documentation/translations to be shipped with the software release. The only comparable project in terms of evolution/maturity in the OpenStack world would be Swift, and it happily produces releases every ~2months with a 1-week stabilisation period. -- Thierry Carrez (ttx) __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [QA] testing implementation-specific features not covered by OpenStack APIs
As we know, Tempest provides many great tests for verification of conformance with OpenStack interfaces - the tempest/api directory is full of such useful stuff. However, regarding the #1422728 ticket [1] (dependency on a private HTTP header of Swift), I think we all need to answer one single but fundamental question: which interfaces do we truly want to test? I see two options: 1) implementation-specific private interfaces (like the Swift interface), 2) well-specified and public OpenStack APIs (e.g. the Object Storage API v1 [2]). I think that Tempest should not rely on any behaviour not specified in the public API (Object Storage API v1 in this case). Tests for Swift-specific features/extensions are better shipped along with Swift, which actually already has pretty good internal test coverage. As I already wrote in a similar thread regarding Horizon, from my perspective, OpenStack is much more than yet another IaaS/PaaS implementation or a bunch of currently developed components. I think its main goal is to specify a universal set of APIs covering all functional areas relevant for cloud computing, and to place that set of APIs in front of as many implementations as possible. Having an open source reference implementation of a particular API is required to prove its viability, but is secondary to having an open and documented API. I am sure the same idea of interoperability should stand behind Tempest - OpenStack's test suite. Regards, Radoslaw Zarzynski [1] https://bugs.launchpad.net/tempest/+bug/1422728 [2] http://developer.openstack.org/api-ref-objectstorage-v1.html __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)
On Tue, Mar 03, 2015 at 09:53:23AM +0100, Miguel Ángel Ajo wrote: https://review.openstack.org/#/c/159840/1/doc/source/testing/openflow-firewall.rst I may need some help from the OVS experts to answer the questions from henry.hly. Ben, Thomas, could you please? (let me know if you are not registered to the openstack review system, I could answer in your name). I added a comment. __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Glance] Core nominations.
If it was not clear in my previous message, I would like to again emphasize that I truly appreciate the vigor and intent behind Flavio's proposal. We need to be proactive and keep making the community better in such regards. However, at the same time we need to act fairly, with patience and have a friendly strategy for doing the same (thus maintaining a good balance in our progress). I should probably respond to another thread on ML mentioning my opinion that the community's success depends on trust and empathy and everyone's intent as well as effort in maintaining these principles. Without them, it will not take very long to make the situation chaotic. The questions I poised are still unanswered: There are a few members who have been relatively inactive this cycle in terms of reviews and have been missed in Flavio's list (That list is not comprehensive). On what basis have some of them been missed out and if we do not have strong reason, are we being fair? Again, I would like to emphasize that, cleaning of the list in such proportions at this point of time does NOT look OK strategy to me. To answer your concerns: (Why was this not proposed earlier in the cycle?) There are multiple reasons which are hard to be put in words because they are subtle and guided the momentum at that point of time. I agree that this was indeed proposed in the beginning of K cycle however, with less enthusiasm from all the members. Only a select few were insistent and democratically, it got deprioritized. There were other major concerns to be handled like position of the community members on some of the newer features. That in turn would have guided, the commitment from some of the members to the Glance program (probably based on their other priorities). Hence, I think coming with a good plan during the feature freeze period including when and how are we going to implement it, when would be a final draft of cores to be rotated be published, etc. 
questions would be answered with _patience_ and input from other cores. We would have a plan in K so, that WOULD be a step forward as discussed in the beginning and be implemented in L, ensuring out empathetic stand. The essence of the matter is: We need to change the dynamics slowly and with patience for maintaining a good balance. Best, -Nikhil From: Kuvaja, Erno kuv...@hp.com Sent: Tuesday, March 3, 2015 9:48 AM To: OpenStack Development Mailing List (not for usage questions); Daniel P. Berrange Cc: krag...@gmail.com Subject: Re: [openstack-dev] [Glance] Core nominations. Nikhil, If I recall correctly this matter was discussed last time at the start of the L-cycle and at that time we agreed to see if there is change of pattern to later of the cycle. There has not been one and I do not see reason to postpone this again, just for the courtesy of it in the hopes some of our older cores happens to make review or two. I think Flavio’s proposal combined with the new members would be the right way to reinforce to momentum we’ve gained in Glance over past few months. I think it’s also the right message to send out for the new cores (including you and myself ;) ) that activity is the key to maintain such status. - Erno From: Nikhil Komawar [mailto:nikhil.koma...@rackspace.com] Sent: 03 March 2015 04:47 To: Daniel P. Berrange; OpenStack Development Mailing List (not for usage questions) Cc: krag...@gmail.com Subject: Re: [openstack-dev] [Glance] Core nominations. Hi all, After having thoroughly thought about the proposed rotation and evaluating the pros and cons of the same at this point of time, I would like to make an alternate proposal. New Proposal: 1. We should go ahead with adding more core members now. 2. Come up with a plan and give additional notice for the rotation. Get it implemented one month into Liberty. Reasoning: Traditionally, Glance program did not implement rotation. 
This was probably with good reason as the program was small and the developers were working closely together and were aware of each others' daily activities. If we go ahead with this rotation it would be implemented for the first time and would appear to have happened out-of-the-blue. It would be good for us to make a modest attempt at maintaining the friendly nature of the Glance development team, give them additional notice and preferably send them a common email informing them of the same. We should propose at least a tentative plan for rotation so that all the other core members are aware of their responsibilities. This brings me to my questions: is the proposed list for rotation comprehensive? What is the basis for missing out some of them? What would be a fair policy or some level of determinism in expectations? I believe that we should have input from the general Glance community (and the OpenStack community too) for the same. In order for all this to be sorted out, I kindly request
Re: [openstack-dev] [Glance] Core nominations.
So I’m +1 on giving these cores notice and perhaps voting on their removal separately (as was recently done in another project or two). Perhaps the way to compromise here would be to submit a change to the relevant documentation in Glance outlining when a core can be removed (or when the proposal to remove them can be made) including the fact that existing cores are not exempt. Detailing the process will be good for everyone, existing active cores, existing inactive cores, and new cores (like myself). It will also give us the ability to say to a new core “please keep these guidelines in mind and understand that if your priorities change, we will remove you (without malice) to keep the core list concise and to keep momentum at the desired level.” I have no issue discussing this proposal and having it land for Liberty. I also do not see a reason why we shouldn’t reach out to those existing inactive cores to explain to them the situation and ask them if they would rather remove themselves before we vote to have them removed. I don’t think any of them will disagree that they haven’t done the work necessary to maintain core-status during Kilo. To answer Nikhil’s questions: I think Flavio picked out a handful of people as an example. I doubt he deliberately skipped over some for any reason other than wishing to reply quickly. Is there a list of people that have done little to no reviews and submitted few if not any changes during Kilo that are currently cores? If so, we should be answering the following: - Who are they? - How many of them are there? - If they’re already inactive, how would removing them hurt forward progress in Glance? - Has anyone reached out to them individually? I agree we should move forward in a way that will not alienate people, and in a way that not only appears fair to everyone else, but is fair. 
I would hope none of these people would take remove personally since Glance’s continued momentum and progress is not a personal reflection on any of them (it should only reflect those who are actively participating in the project). On 3/3/15, 10:10, Nikhil Komawar nikhil.koma...@rackspace.com wrote: If it was not clear in my previous message, I would like to again emphasize that I truly appreciate the vigor and intent behind Flavio's proposal. We need to be proactive and keep making the community better in such regards. However, at the same time we need to act fairly, with patience and have a friendly strategy for doing the same (thus maintaining a good balance in our progress). I should probably respond to another thread on ML mentioning my opinion that the community's success depends on trust and empathy and everyone's intent as well as effort in maintaining these principles. Without them, it will not take very long to make the situation chaotic. The questions I poised are still unanswered: There are a few members who have been relatively inactive this cycle in terms of reviews and have been missed in Flavio's list (That list is not comprehensive). On what basis have some of them been missed out and if we do not have strong reason, are we being fair? Again, I would like to emphasize that, cleaning of the list in such proportions at this point of time does NOT look OK strategy to me. To answer your concerns: (Why was this not proposed earlier in the cycle?) There are multiple reasons which are hard to be put in words because they are subtle and guided the momentum at that point of time. I agree that this was indeed proposed in the beginning of K cycle however, with less enthusiasm from all the members. Only a select few were insistent and democratically, it got deprioritized. There were other major concerns to be handled like position of the community members on some of the newer features. 
That in turn would have guided, the commitment from some of the members to the Glance program (probably based on their other priorities). Hence, I think coming with a good plan during the feature freeze period including when and how are we going to implement it, when would be a final draft of cores to be rotated be published, etc. questions would be answered with _patience_ and input from other cores. We would have a plan in K so, that WOULD be a step forward as discussed in the beginning and be implemented in L, ensuring out empathetic stand. The essence of the matter is: We need to change the dynamics slowly and with patience for maintaining a good balance. Best, -Nikhil From: Kuvaja, Erno kuv...@hp.com Sent: Tuesday, March 3, 2015 9:48 AM To: OpenStack Development Mailing List (not for usage questions); Daniel P. Berrange Cc: krag...@gmail.com Subject: Re: [openstack-dev] [Glance] Core nominations. Nikhil, If I recall correctly this matter was discussed last time at the start of the L-cycle and at that time we agreed to see if there is change of pattern to later of the cycle. There has not been one and I do not see reason to
Re: [openstack-dev] Fw: [nova] Queries regarding how to run test cases of python-client in juno release
On 2/27/2015 1:44 AM, Rattenpal Amandeep wrote: FYI -Forwarded by Rattenpal Amandeep/DEL/TCS on 02/27/2015 01:14PM - To: openstack-dev@lists.openstack.org From: Rattenpal Amandeep/DEL/TCS Date: 02/25/2015 11:43AM Subject: [openstack-dev] [python-novaclient] [python-client] Queries regarding how to run test cases of python-client in juno release Hi, I am unable to find the script to run test cases of the python clients in the juno release. novaclient is shipped into dist-packages but there is no script to run the test cases. Please help me to resolve this problem. Thanks, Regards Amandeep Rattenpal Asst. System Engineer, Mail to: rattenpal.amand...@tcs.com Web site: www.tcs.com =-=-= Notice: The information contained in this e-mail message and/or attachments to it may contain confidential or privileged information. If you are not the intended recipient, any dissemination, use, review, distribution, printing or copying of the information contained in this e-mail message and/or attachments to it are strictly prohibited. If you have received this communication in error, please notify us by reply e-mail or telephone and immediately and permanently delete the message and any attachments. Thank you __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Simple: don't do development from distro packages, since they might remove the tests (those aren't needed in production). If you want to run tests, use a development workflow: 1. git clone git://git.openstack.org/openstack/python-novaclient 2. cd python-novaclient 3. tox -r -e py27,pep8 -- Thanks, Matt Riedemann __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [sahara] Reminder: FPF at Mar 5
Hi Sahara folks, FPF [1] will be at Mar 5 and it means that all CRs should be proposed for features and all new features related CRs will need FPF exception to be merged. [1] https://wiki.openstack.org/wiki/FeatureProposalFreeze -- Sincerely yours, Sergey Lukjanov Sahara Technical Lead (OpenStack Data Processing) Principal Software Engineer Mirantis Inc. __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [all] oslosphinx 2.5.0 release
The Oslo team is pleased to announce the release of: oslosphinx 2.5.0: OpenStack Sphinx Extensions and Theme

For more details, please see the git log history below and: http://launchpad.net/oslosphinx/+milestone/2.5.0

Please report issues through launchpad: http://bugs.launchpad.net/oslosphinx

Changes in oslosphinx 2.4.0..2.5.0
----------------------------------
e1b0dca Speed up blueprint checking with naming convention
ccf8e1a Update run_cross_tests.sh to latest

Diffstat (except docs and test files)
-------------------------------------
oslosphinx/check_blueprints.py | 26 +++---
2 files changed, 36 insertions(+), 7 deletions(-)

__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [swift] On Object placement
Hi Christian, Sorry for the slow response. I was looking into the feasibility of your suggestion for Sahara in particular and it took a bit. On 2/19/15, 2:46 AM, Christian Schwede christian.schw...@enovance.com wrote: Hello Jonathan, On 18.02.15 18:13, Halterman, Jonathan wrote: 1. Swift should allow authorized services to place a given number of object replicas onto a particular rack, and onto separate racks. This is already possible if you use zones and regions in your ring files. For example, if you have 2 racks, you could assign one zone to each of them and Swift places at least one replica on each rack. Because Swift takes care of the device weight you could also ensure that a specific rack gets two copies, and another rack only one. Presumably a deployment would/should match the DC layout, where racks could correspond to AZs. yes, that makes a lot of sense (to assign zones to racks), because in this case you can ensure that there aren't multiple replicas stored within the same rack. You can still access your data if a rack goes down (power, network, maintenance). However, this is only true as long as all primary nodes are accessible. If Swift stores data on a handoff node this data might be written to a different node first, and moved to the primary node later on. Note that placing objects on other than the primary nodes (for example using an authorized service you described) will only store the data on these nodes until the replicator moves the data to the primary nodes described by the ring. As far as I can see there is no way to ensure that an authorized service can decide where to place data, and that this data stays on the selected nodes. That would require a fundamental change within Swift. So - how can we influence where data is stored? In terms of placement based on a hash ring, I'm thinking of perhaps restricting the placement of an object to a subset of the ring based on a zone. 
We can still hash an object somewhere on the ring, for the purposes of controlling locality, we just want it to be within (or without) a particular zone. Any ideas? You can't (at least not from the client side). The ring determines the placement and if you have more zones (or regions) than replicas you can't ensure an object replica is stored within a determined rack. Even if you store it on a handoff node it will be moved to the primary node sooner or later. Determining that an object is stored in a specific zone is not possible with the current architecture; you can only discover in which zone it will be placed finally (based on the ring). What you could do (especially if you have more racks than replicas) is to use storage policies and only assign three racks to each policy, splitting them into three zones (if you store three replicas). For example, let's assume you have 5 racks; then you create 5 storage policies (SP) with the following assignment:

      Rack:  1  2  3  4  5
      SP 0:  x  x  x
      SP 1:     x  x  x
      SP 2:        x  x  x
      SP 3:  x        x  x
      SP 4:  x  x        x

Doing this you can ensure the following: - Data is distributed somehow evenly across the cluster (if you use the storage policies also evenly distributed) - From a given SP you can ensure that a replica is stored in a specific rack; and because a SP is assigned to a container you can determine the SP based on the container metadata (name SP0 rack_1_2_3 and so on to make it even simpler for the application to determine the racks). That could help in your case? While this wouldn't give us all the control we need (2 replicas on 1 rack, 1 replica on another rack), ensuring at least 1 copy winds up on a particular rack is part way there. With the way that Swift's placement works, are the other replicas likely to end up on different racks? Where this might not work is for services that need to control rack locality and allow users to select the containers that data is placed in. This is currently the case with Sahara. 2. 
Swift should allow authorized services and administrators to learn which racks an object resides on, along with endpoints. You already mentioned the endpoint middleware, though it is currently not protected and unauthenticated access is allowed if enabled. This is good to know. We still need to learn which rack an object resides on though. This information is important in determining whether a swift object resides on the same rack as a VM. Well, that information is available using the /endpoint middleware? You know the server IPs in a rack, and compare that to the output from the endpoint middleware. We don’t actually know the server IPs in a rack though, and collecting and maintaining this host-rack information is something we’d like to avoid having various individual services do. Currently Sahara does collect this information, but
Re: [openstack-dev] Python 3 is dead, long live Python 3
-BEGIN PGP SIGNED MESSAGE- Hash: SHA1 On 02/02/2015 05:15 PM, Jeremy Stanley wrote: After a long wait and much testing, we've merged a change[1] which moves the remainder of Python 3.3 based jobs to Python 3.4. This is primarily in service of getting rid of the custom workers we implemented to perform 3.3 testing more than a year ago, since we can now run 3.4 tests on normal Ubuntu Trusty workers (with the exception of a couple bugs[2][3] which have caused us to temporarily suspend[4] Py3K jobs for oslo.messaging and oslo.rootwrap). I've personally tested `tox -e py34` on every project hosted in our infrastructure which was gating on Python 3.3 jobs and they all still work, so you shouldn't see any issues arise from this change. If you do, however, please let the Infrastructure team know about it as soon as possible. Thanks! [1] https://review.openstack.org/151713 [2] https://launchpad.net/bugs/1367907 [3] https://launchpad.net/bugs/1382607 [4] http://lists.openstack.org/pipermail/openstack-dev/2015-January/055270.html The switch broke Icehouse stable branch for oslo-incubator [1] since those jobs run on Precise and not Trusty. Anyone has ideas how to fix it? [1]: https://review.openstack.org/#/c/136718/ /Ihar -BEGIN PGP SIGNATURE- Version: GnuPG v1 iQEcBAEBAgAGBQJU9a+VAAoJEC5aWaUY1u57mv4H/0Wqi986LUPYzQCQCzcvHlAv Uomd8cvWNYBUzLJjV2r3xrgaKDVsKtJI+vcMllBNH7oigRHXDo6RrkUoV+4jSf4o yzYtU9CXLO/vKuTnJVzsp3xCuu9XI9mE19FHWLYOAhpSFXNg4J6u94yKRIxxcs6H IAaJEuhcJigm7qK10iKESYvw9AxJjZsHaq0No5KsAT+T5FTmfGZ2cbPfkKSo9NgM Zl0gbPTQPSoB8EvefoP8uaUYF1sD+Itgab1GvI6B9sRnkb+f1uaWAA852SaxiA1D Z5IQOwYCPteBJ1ztSrFAQGw8nfgp8H0I3aHwQ/7fgdxPPb8Eqa/wWHlfUnt0nG4= =ZGSJ -END PGP SIGNATURE- __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Heat][Mistral] How to use Mistral resources in Heat
Oooh, ok! Yeah, additional tests for that would help us clearly see that solved. Thanks! Renat Akhmerov @ Mirantis Inc. On 03 Mar 2015, at 18:03, Peter Razumovsky prazumov...@mirantis.com wrote: Hi Renat, We solved the problem with referencing. I'll add more tests for understanding how a Mistral Workflow resource looks in a Heat template. 2015-03-03 14:35 GMT+03:00 Renat Akhmerov rakhme...@mirantis.com: Hi Peter, Thanks for sharing this. Overall it looks good to me, I just left a couple of comments/questions in https://review.openstack.org/#/c/147645/17/contrib/heat_mistral/heat_mistral/tests/test_workflow.py Could you please take a look? We keep in mind that one thing is still not implemented: workflow references get broken if we upload a workflow which calls another workflow. We need to discuss the best way to deal with that. Either we need to do it in Heat itself, restoring those references accounting for stack name etc., or we need to provide some facility in Mistral itself. Other than that it looks to be ready to start gathering Heat folks' feedback. Renat Akhmerov @ Mirantis Inc. On 26 Feb 2015, at 17:49, Peter Razumovsky prazumov...@mirantis.com wrote: In anticipation of Mistral support in Heat, let's introduce using Mistral in Heat. 1. For using the Mistral resources workflow and cron-trigger in Heat, Mistral must be installed in DevStack. 
Installation guide for DevStack on https://github.com/stackforge/mistral/tree/master/contrib/devstack Detailed information about Workflow and CronTrigger you can find on https://wiki.openstack.org/wiki/Mistral Note that currently Mistral uses DSLv2 (https://wiki.openstack.org/wiki/Mistral/DSLv2) and Rest API v2 (https://wiki.openstack.org/wiki/Mistral/RestAPIv2). 2. When Mistral is installed, check its accessibility - in screen or using the command 'mistral --help' (list of commands). You can test Mistral resources by creating workflow resources with DSLv2-formatted definitions, cron-triggers and executions. For example, the command 'mistral workflow-list' gives the table:

Starting new HTTP connection (1): 192.168.122.104
Starting new HTTP connection (1): localhost
+---------------------+------+-------------------------------+---------------------+------------+
| Name                | Tags | Input                         | Created at          | Updated at |
+---------------------+------+-------------------------------+---------------------+------------+
| std.create_instance | none | name, image_id, flavor_id...  | 2015-01-27 14:16:21 | None       |
| std.delete_instance | none | instance_id                   | 2015-01-27 14:16:21 | None       |
+---------------------+------+-------------------------------+---------------------+------------+

3. Mistral resources for Heat you can find there: https://review.openstack.org/#/q/status:open+project:openstack/heat+branch:master+topic:bp/mistral-resources-for-heat,n,z 4. 
Simple templates using Mistral resources in Heat templates you can find there: https://wiki.openstack.org/wiki/Heat_Mistral_resources_usage_examples __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [glance] [trove] [heat] [nova] [all] Handling forwarded requests
Hey all, It appears that currently a number of OpenStack services are not generating version catalogs correctly when the service sits behind a proxy. (Reference: https://bugs.launchpad.net/glance/+bug/1384379) Glance already has a fix that was accepted for kilo-1, but it is suboptimal and assumes there is only one proxy address that will forward the request (which obviously is not true in every case). There is RFC 7239 (http://tools.ietf.org/html/rfc7239), which is recent but defines a "Forwarded" header with explicit parameters that we should be using. There are also the de facto standards of X-Forwarded-By and X-Forwarded-Host that we should be inspecting as well. Currently, Glance's solution is being copied over to other projects (Trove, Heat, Nova), and it is clearly suboptimal. I'm going to work on a more general solution for this, but if anyone can hammer one out faster, don't be afraid to submit it. This is absolutely a bug that should be a high priority for us. Cheers, Ian
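For illustration, a general fix might prefer RFC 7239's Forwarded header and fall back to the de facto X-Forwarded-* headers when reconstructing the externally visible base URL for the version catalog. A minimal sketch - the function name and defaults are made up, not Glance's actual code:

```python
def external_base_url(headers, default_scheme="http", default_host="localhost"):
    """Best-effort reconstruction of the client-facing base URL from proxy
    headers, preferring the RFC 7239 'Forwarded' header over the de facto
    X-Forwarded-* ones.
    """
    scheme, host = default_scheme, default_host
    fwd = headers.get("Forwarded")
    if fwd:
        # Use the first (closest-to-client) element of the Forwarded list.
        first = fwd.split(",")[0]
        params = {}
        for part in first.split(";"):
            key, _, value = part.partition("=")
            if value:
                params[key.strip().lower()] = value.strip().strip('"')
        scheme = params.get("proto", scheme)
        host = params.get("host", host)
    else:
        scheme = headers.get("X-Forwarded-Proto", scheme)
        host = headers.get("X-Forwarded-Host", host)
    return "%s://%s" % (scheme, host)
```

Note that RFC 7239 allows a comma-separated list of forwarded elements (one per proxy hop), which is exactly the multi-proxy case the current Glance fix does not handle.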
Re: [openstack-dev] [QA] testing implementation-specific features not covered by OpenStack APIs
On 03/03/2015 11:28 AM, Radoslaw Zarzynski wrote: As we know, Tempest provides many great tests for verifying conformance with OpenStack interfaces - the tempest/api directory is full of such useful stuff. However, regarding the #1422728 ticket [1] (a dependency on a private HTTP header of Swift), I think we all need to answer one single but fundamental question: which interfaces do we truly want to test? I see two options: 1) implementation-specific private interfaces (like the Swift interface), 2) well-specified and public OpenStack APIs (e.g. the Object Storage API v1 [2]). As Jordan said, these two are one and the same. One could imagine a situation where there was an abstract object storage API and Swift was an implementation, but that view has been rejected by the OpenStack community many times (though not without some controversy). I think that Tempest should not rely on any behaviour not specified in the public API (the Object Storage API v1 in this case). Tests for Swift-specific features/extensions are better shipped along with Swift, which actually already has pretty good internal test coverage. I agree, depending on what "specified" means. Lack of adequate documentation should not be equated with being unspecified for the purpose of determining test coverage criteria. This is partly addressed in the API stability document https://wiki.openstack.org/wiki/APIChangeGuidelines under "The existing API is not well documented". As I already wrote in a similar thread regarding Horizon, from my perspective OpenStack is much more than yet another IaaS/PaaS implementation or a bunch of currently developed components. I think its main goal is to specify a universal set of APIs covering all functional areas relevant to cloud computing, and to place that set of APIs in front of as many implementations as possible. Having an open source reference implementation of a particular API is required to prove its viability, but is secondary to having an open and documented API. 
I am sure the same idea of interoperability should stand behind Tempest - the OpenStack Test Suite. The community has (thus far) rejected the notion that our code is a reference implementation for an abstract API. But yes, Tempest is supposed to be able to run against any OpenStack (TM?) cloud. -David Regards, Radoslaw Zarzynski [1] https://bugs.launchpad.net/tempest/+bug/1422728 [2] http://developer.openstack.org/api-ref-objectstorage-v1.html
Re: [openstack-dev] Gerrit tooling improvements
On Tue, 2015-03-03 at 17:10 +0200, Duncan Thomas wrote: I feel the need to abandon changes that seem abandoned I think there is agreement that there should be a way to have a clean view of changesets that are being actively worked on: changes where the owner is responding to comments, working on them, rebasing as needed, or waiting for votes/comments. For contributors, including the slow ones, I think nobody would object that it's also good for them to log in on https://review.openstack.org/ and see the list of their changesets, with votes and comments where they left them (instead of in the Recently closed section). So I think we come back to tooling: hopefully a new version of Gerrit will have a better way to distinguish 'abandoned' from 'inactive'. Until then, what other options do we have? /stef
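One stopgap until then: Gerrit's own search operators can at least surface long-inactive changes for human review rather than automatic abandonment. For example (a hypothetical query; operator support varies by Gerrit version):

```
status:open age:8weeks label:Code-Review<=-1
```

This lists open changes that have not been updated in eight weeks and already carry negative review feedback - a reasonable shortlist of candidates for a considerate, human-written abandon message.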
[openstack-dev] [cinder] Decision related to cinder metadata
Hi Stackers, I am referring to one of the action items related to metadata discussed here: http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-11-19-16.00.html Can someone help with the final takeaway? (Sorry, I could not find any thread related to the decision after this meeting.) From the conversation I got the feeling that supporting operations via metadata needs to be avoided / corrected, and that the operations / variations need to be provided via volume_types instead. Is this understanding correct? Regards, Sasi
Re: [openstack-dev] Gerrit tooling improvements
Is common sense an option here? More specifically, I mean leveraging the common sense of both contributors and core reviewers (or whoever is authorized to abandon patches). The former should abandon patches they're not working on anymore, and expect somebody else to do that if the patches they own end up in a stale condition (where the definition of stale might be relative); the latter should be very considerate in using this function, using it sparingly and always providing a justification, perhaps not auto-generated. Salvatore On 3 March 2015 at 19:49, Stefano Maffulli stef...@openstack.org wrote: On Tue, 2015-03-03 at 17:10 +0200, Duncan Thomas wrote: I feel the need to abandon changes that seem abandoned I think there is an agreement that there should be a way to have a clean view of changesets that are being actively worked on, changes where the owner is responding to comments, working on it, rebasing as needed, or waiting for votes/comments. For contributors, including the slow ones, I think nobody would object that it's also good for them to login on https://review.openstack.org/ and see the list of their changesets, with votes and comments up where they left them (instead of in the Recently closed). So I think we go back to tooling: hopefully new version of gerrit will have a better way to distinguish 'abandoned' from 'inactive'. Until then, what other options do we have? /stef
Re: [openstack-dev] [Manila] Ceph native driver for manila
On Tue, Mar 3, 2015 at 12:51 AM, Luis Pabon lpa...@redhat.com wrote: What is the status of virtfs? I am not sure if it is being maintained. Does anyone know? The last I knew, it's not maintained. Also, for what it's worth, p9 won't work for Windows guests (unless there is a p9 driver for Windows?), if that is part of your use case/scenario. Last but not least, p9/virtfs would expose a p9 mount, not a Ceph mount, to the VMs, which means that CephFS-specific mount options may not work. - Luis - Original Message - From: Danny Al-Gaaf danny.al-g...@bisect.de To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org, ceph-de...@vger.kernel.org Sent: Sunday, March 1, 2015 9:07:36 AM Subject: Re: [openstack-dev] [Manila] Ceph native driver for manila On 27.02.2015 at 01:04, Sage Weil wrote: [sorry for the ceph-devel double-post, forgot to include openstack-dev] Hi everyone, The online Ceph Developer Summit is next week [1] and among other things we'll be talking about how to support CephFS in Manila. At a high level, there are basically two paths: We discussed the CephFS Manila topic also at the last Manila Midcycle Meetup (Kilo) [1][2] 2) Native CephFS driver As I currently understand it: - The driver will set up CephFS auth credentials so that the guest VM can mount CephFS directly. - The guest VM will need access to the Ceph network. That makes this mainly interesting for private clouds and trusted environments. - The guest is responsible for running 'mount -t ceph ...'. - I'm not sure how we provide the auth credential to the user/guest... The auth credentials currently need to be handled by an application orchestration solution, I guess. I see no solution at the Manila layer level atm. There were some discussions in the past in the Manila community on guest auto-mount, but I guess nothing was conclusive there. 
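For concreteness, the 'mount -t ceph ...' step mentioned above might look like this inside the guest. Everything here is hypothetical - the monitor address, share subdirectory and secret-file path are invented, and delivering that cephx secret to the guest is exactly the open question being discussed:

```sh
# Hypothetical example of a guest mounting its CephFS share with cephx.
# The monitor address, subdirectory and secret file would have to be
# delivered to the guest somehow (image pre-load, cloud-init, etc.).
sudo mount -t ceph mon1.example.com:6789:/tenants/foo /mnt/share \
    -o name=tenant-foo,secretfile=/etc/ceph/tenant-foo.secret
```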
Application orchestration can be achieved by having tenant-specific VM images with creds pre-loaded, or by having the creds injected via cloud-init - that should work too? If Ceph provided OpenStack Keystone authentication for rados/cephfs instead of CephX, it could be handled via app orchestration easily. This would perform better than an NFS gateway, but there are several gaps on the security side that make this unusable currently in an untrusted environment: - The CephFS MDS auth credentials currently are _very_ basic. As in, binary: this host can either mount or it cannot. We have the auth cap string parsing in place to restrict to a subdirectory (e.g., this tenant can only mount /tenants/foo), but the MDS does not enforce this yet. [medium project to add that] - The same credential could be used directly via librados to access the data pool directly, regardless of what the MDS has to say about the namespace. There are two ways around this: 1- Give each tenant a separate rados pool. This works today. You'd set a directory policy that puts all files created in that subdirectory in that tenant's pool, then only let the client access those rados pools. 1a- We currently lack an MDS auth capability that restricts which clients get to change that policy. [small project] 2- Extend the MDS file layouts to use rados namespaces so that users can be separated within the same rados pool. [medium project] 3- Something fancy with MDS-generated capabilities specifying which rados objects clients get to read. This probably falls into the category of research, although there are some papers we've seen that look promising. [big project] Anyway, this leads to a few questions: - Who is interested in using Manila to attach CephFS to guest VMs? I didn't get this question... The goal of Manila is to provision shared filesystems to VMs, so everyone interested in using CephFS would be interested in attaching (I guess you meant mounting?) CephFS to VMs, no? - What use cases are you interested in? 
- How important is security in your environment? The NFS-Ganesha-based service VM approach (for network isolation) in Manila is still under development, AFAIK. As you know, we (Deutsche Telekom) are very interested in providing shared filesystems via CephFS to VMs instead of e.g. via NFS. We can provide/discuss use cases at CDS. For us security is very critical, as is performance. The first solution, via Ganesha, is not what we prefer (using CephFS via p9 and NFS would not perform that well, I guess). The second solution, exposing CephFS directly to the VM, would be bad from the security point of view, since we can't expose the Ceph public network directly to the VMs without all the security issues we discussed already. Is there any place where the security issues are captured for the case where VMs access CephFS directly? I was curious to understand. IIUC Neutron provides private and public networks, and for VMs to access the external CephFS network, the
[openstack-dev] [Ironic] Weekly subteam status report
Hi, Following is the subteam report for Ironic. As usual, this is pulled directly from the Ironic whiteboard[0] and formatted.

Bugs (dtantsur)
===============
(As of Mon, 02 Mar 17:00 UTC)
Open: 129 (-7). 3 new (-4), 32 in progress (-4), 0 critical, 18 high and 7 incomplete.
A security bug was found and fixed: https://bugs.launchpad.net/ironic/+bug/1425206

Drivers
=======

IPA (jroll/JayF/JoshNang)
-------------------------
The iSCSI deploy driver can now use the agent image \o/
CI is a WIP.

iLO (wanyen)
------------
Need core reviewers to review UEFI secure boot https://blueprints.launchpad.net/ironic/+spec/uefi-secure-boot . Secure boot is the iLO driver's top-priority item; your review will be much appreciated.
Discovering node properties with node-set-provision-state https://blueprints.launchpad.net/ironic/+spec/ironic-node-properties-discovery has landed.

iRMC (naohirot)
---------------
The iRMC management code has been merged, and the iRMC PXE-based deploy driver is now available.
The iRMC virtual media deploy spec is solicited for the core team's review and approval; the code is ready for review.

Until next week,
--ruby

[0] https://etherpad.openstack.org/p/IronicWhiteBoard
Re: [openstack-dev] [Fuel] Deprecation warnings in python-fuelclient-6.1.*
Hello, I would vote for the 2nd option, but I also think that we can generate the same information - on merge, for example - that will be printed during the first run, and place it directly in the repository (maybe even in the README?). I guess this is what your 3rd approach is about? So, can we go with both? On Tue, Mar 3, 2015 at 4:52 PM, Roman Prykhodchenko m...@romcheg.me wrote: Hi folks! According to the refactoring plan [1], we are going to release the 6.1 version of python-fuelclient, which is going to contain recent changes but will keep backwards compatibility with what was there before. However, the next major release will bring users a fresh CLI that won't be compatible with the old one, and a new, actually usable-in-real-life API library that will also be different. The issue this message is about is that there is a strong need to let both CLI and API users know about those changes. At the moment I can see 3 ways of resolving it: 1. Show a deprecation warning for commands and parameters which are going to be different, and log deprecation warnings for deprecated library methods. The problem with this approach is that the structure of both the CLI and the library will be changed, so a deprecation warning would be raised for almost every command for the whole release cycle. That does not look very user friendly, because users would have to run all commands with --quiet for the whole release cycle to mute deprecation warnings. 2. Show the list of the deprecated stuff and planned changes on the first run, then mute it. The disadvantage of this approach is that there is a need to store the info about the first run in a file; however, it may be cleaned up after an upgrade. 3. The same as #2, but publish the warning online. I personally prefer #2, but I'd like to get more opinions on this topic. References: 1. 
https://blueprints.launchpad.net/fuel/+spec/re-thinking-fuel-client - romcheg
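The "store the first run" mechanics of option #2 are simple enough to sketch. This is a hypothetical helper, not actual fuelclient code; the real client would pick its own marker location (e.g. under the user's config directory):

```python
import os


def maybe_show_deprecation_notice(notice, marker_path):
    """Print the deprecation/planned-changes summary only on the first run:
    a marker file records that the notice has already been shown, so later
    runs stay quiet. Returns True if the notice was displayed.
    """
    if os.path.exists(marker_path):
        return False
    print(notice)
    # Record that the notice was displayed.
    with open(marker_path, "w") as f:
        f.write("shown\n")
    return True
```

As noted in the thread, an upgrade that wipes the marker file would simply cause the (updated) notice to be shown once more, which is arguably the desired behaviour anyway.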
[openstack-dev] [neutron] The future of Xen + OVS support
There have been a couple of patches [1] [2] proposed to neutron recently that could impact our support for Xen+OVS, but I don’t see an easy way for us to validate such changes. I’m not aware of a 3rd-party CI job that runs against Xen, and I don’t know of any active contributors able to manually validate changes. Unless this changes, I think we should consider deprecating support for Xen in Kilo. We can’t afford to block changes for fear of breaking something that nobody seems to care about. I’m hoping this post serves as a wake-up call to anyone who wants to see neutron maintain its support for Xen, and that they will be willing to devote resources to ensure that support for it continues. I’ve also added this issue to next week’s IRC meeting [3]. Maru 1: https://review.openstack.org/#/c/148969/ 2: https://review.openstack.org/#/c/158805 3: https://wiki.openstack.org/wiki/Network/Meetings#On_Demand_Agenda
Re: [openstack-dev] [neutron] The future of Xen + OVS support
We certainly care about this, but converting the XenServer CI to Neutron+OVS has been hitting a number of issues (notably a few concurrency ones, although there are also a few missing features) that we’ve been trying to sort through. I am certainly hoping that we’ll have everything stable enough to set up a CI specifically for this combination before the Kilo release, and would be very keen not to deprecate XenAPI support in Kilo. Even if we don’t make the Kilo release for the CI, we will still be pushing for it in L. Bob From: Kevin Benton [mailto:blak...@gmail.com] Sent: 03 March 2015 21:06 To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [neutron] The future of Xen + OVS support It might make sense to send this to the users list as well. There may be a large deployment out there with resources willing to at least test changes even if they don't have any upstream development resources. On Tue, Mar 3, 2015 at 12:50 PM, Maru Newby ma...@redhat.com wrote: There have been a couple of patches [1] [2] proposed to neutron recently that could impact our support for Xen+OVS, but I don’t see an easy way for us to validate such changes. I’m not aware of a 3rd party CI job that runs against Xen, and I don’t know of any active contributors able to manually validate changes. Unless this changes, I think we should consider deprecating support for Xen in kilo. We can’t afford to block changes for fear of breaking something that nobody seems to care about. I’m hoping this post serves as a wake-up call to anyone that wants to see neutron maintain its support for Xen, and that they will be willing to devote resources to ensure that support for it continues. I’ve also added this issue to next week’s irc meeting [3]. 
Maru 1: https://review.openstack.org/#/c/148969/ 2: https://review.openstack.org/#/c/158805 3: https://wiki.openstack.org/wiki/Network/Meetings#On_Demand_Agenda -- Kevin Benton
Re: [openstack-dev] [api][all] - Openstack.error common library
On Feb 25, 2015, at 10:47 AM, Doug Hellmann d...@doughellmann.com wrote: On Wed, Feb 25, 2015, at 09:33 AM, Eugeniya Kudryashova wrote: Hi, stackers! As was suggested in topic [1], using an HTTP header was a good solution for communicating common/standardized OpenStack API error codes. So I’d like to begin working on a common library, which will collect all OpenStack HTTP API errors and assign them string error codes. My suggested name for the library is openstack.error, but please feel free to propose something different. The other question is where we should host such a project: in openstack or stackforge, or maybe oslo-incubator? I think such a project will be too massive (due to dealing with lots and lots of exceptions) to make it part of oslo, so I propose developing the project on Stackforge and then eventually having it moved into the openstack/ code namespace when the other projects begin using the library. Let me know your feedback, please! I'm not sure a single library as a home to all of the various error messages is the right approach. I thought, based on re-reading the thread you link to, that the idea was to come up with a standard schema for error payloads and then let the projects fill in the details. We might need a library for utility functions, but that wouldn't actually include the error messages. Did I misunderstand? Doug After rereading this thread I came to the same conclusion as Doug did. There was more support for putting the errors in a standard JSON error payload, as Sean Dague originally suggested. [1] Regards, Everett [1] http://lists.openstack.org/pipermail/openstack-dev/2015-January/055570.html
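For what it's worth, a standardized JSON error payload along those lines might look something like the following. The field names here are purely illustrative - no schema has been agreed, and the error code and link are invented:

```json
{
  "error": {
    "code": "OS-COMPUTE-0042",
    "title": "Instance not found",
    "detail": "Instance 7a3f0e5c could not be found.",
    "links": [
      {"rel": "help", "href": "http://docs.openstack.org/errors/OS-COMPUTE-0042"}
    ]
  }
}
```

Under this model, a shared library would only need to validate/emit the envelope; each project would own its codes and messages, which sidesteps the "one massive library of all exceptions" problem Doug raises.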
Re: [openstack-dev] [neutron] The future of Xen + OVS support
It might make sense to send this to the users list as well. There may be a large deployment out there with resources willing to at least test changes even if they don't have any upstream development resources. On Tue, Mar 3, 2015 at 12:50 PM, Maru Newby ma...@redhat.com wrote: There have been a couple of patches [1] [2] proposed to neutron recently that could impact our support for Xen+OVS, but I don’t see an easy way for us to validate such changes. I’m not aware of a 3rd party CI job that runs against Xen, and I don’t know of any active contributors able to manually validate changes. Unless this changes, I think we should consider deprecating support for Xen in kilo. We can’t afford to block changes for fear of breaking something that nobody seems to care about. I’m hoping this post serves as a wake-up call to anyone that wants to see neutron maintain its support for Xen, and that they will be willing to devote resources to ensure that support for it continues. I’ve also added this issue to next week’s irc meeting [3]. Maru 1: https://review.openstack.org/#/c/148969/ 2: https://review.openstack.org/#/c/158805 3: https://wiki.openstack.org/wiki/Network/Meetings#On_Demand_Agenda -- Kevin Benton
Re: [openstack-dev] [nova] Question about boot-from-volume instance and flavor
On 03/03/2015 01:10 AM, Rui Chen wrote: Hi all, When we boot an instance from a volume, we find some ambiguous description of the flavor's root_gb in the operations guide, http://docs.openstack.org/openstack-ops/content/flavors.html "Virtual root disk size in gigabytes. This is an ephemeral disk the base image is copied into. You don't use it when you boot from a persistent volume. The 0 size is a special case that uses the native base image size as the size of the ephemeral root volume." 'You don't use it (root_gb) when you boot from a persistent volume.' Does that mean we need to set root_gb to 0 or not? I don't know. Hi Rui, I agree the documentation -- and frankly, the code in Nova -- is confusing around this area. But I find that root_gb is added into local_gb_used of the compute_node, so it impacts subsequent scheduling. Consider a use case: the local_gb of the compute_node is 10, and we boot instances from volumes with a root_gb=5 flavor. In this case I can only boot 2 boot-from-volume instances on the compute node, although these instances don't use the local disk of the compute node at all. I found a patch that tries to fix this issue, https://review.openstack.org/#/c/136284/ I want to know which solution is better for you: Solution #1: boot the instance from a volume with a root_gb=0 flavor. Solution #2: add some special logic in order to correct the disk usage, like patch #136284. Solution #2 is the better idea, IMO. There should not be any magic setting for root_gb that needs to be interpreted both by the user and by the Nova code base. The issue with the 136284 patch is that it tries to address the problem in the wrong place, IMHO. Best, -jay
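The core idea behind solution #2 can be sketched in a few lines. This is a hypothetical helper to illustrate the accounting, not the actual patch: a volume-backed instance's root disk lives on Cinder, so it should contribute nothing to the compute node's local disk usage.

```python
def local_disk_gb(flavor_root_gb, is_volume_backed):
    # Sketch of solution #2: a boot-from-volume instance keeps its root
    # disk on Cinder, so flavor root_gb should not count against the
    # compute node's local_gb_used.
    return 0 if is_volume_backed else flavor_root_gb


# With 10 GB of local_gb and a root_gb=5 flavor, volume-backed instances
# now leave the local disk untouched in this accounting:
free = 10 - sum(local_disk_gb(5, True) for _ in range(4))  # free == 10
```

With accounting like this, Rui's example node could host more than 2 boot-from-volume instances, and no magic root_gb=0 flavor is needed.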
Re: [openstack-dev] [nova] Intended behavior for instance.host on reschedule?
On 03/03/2015 06:55 AM, Joe Cropper wrote: On Mar 3, 2015, at 8:34 AM, Vishvananda Ishaya vishvana...@gmail.com wrote: I’m pretty sure it has always done this: leave the host set on the final scheduling attempt. I agree that this could be cleared which would free up room for future scheduling attempts. Thanks Vish for the comment. Do we know if this is an intended feature or would we consider this a bug? It seems like we could free this up, as you said, to allow room for additional VMs, especially since we know it didn’t successfully deploy anyway? Seems like a bug to me. Feel free to create one in Launchpad and we'll get on it. Best, -jay
Re: [openstack-dev] [neutron] The future of Xen + OVS support
On Mar 3, 2015, at 1:06 PM, Kevin Benton blak...@gmail.com wrote: It might make sense to send this to the users list as well. There may be a large deployment out there with resources willing to at least test changes even if they don't have any upstream development resources. Good call, I’ve cross-posted to the user list as you suggest. On Tue, Mar 3, 2015 at 12:50 PM, Maru Newby ma...@redhat.com wrote: There have been a couple of patches [1] [2] proposed to neutron recently that could impact our support for Xen+OVS, but I don’t see an easy way for us to validate such changes. I’m not aware of a 3rd party CI job that runs against Xen, and I don’t know of any active contributors able to manually validate changes. Unless this changes, I think we should consider deprecating support for Xen in kilo. We can’t afford to block changes for fear of breaking something that nobody seems to care about. I’m hoping this post serves as a wake-up call to anyone that wants to see neutron maintain its support for Xen, and that they will be willing to devote resources to ensure that support for it continues. I’ve also added this issue to next week’s irc meeting [3]. Maru 1: https://review.openstack.org/#/c/148969/ 2: https://review.openstack.org/#/c/158805 3: https://wiki.openstack.org/wiki/Network/Meetings#On_Demand_Agenda
Re: [openstack-dev] Need help in configuring keystone
Hi Marek, I tried with the auto-generated shibboleth2.xml, just adding the ApplicationOverride element; now I'm stuck with a looping issue. When I access v3/OS-FEDERATION/identity_providers/idp_2/protocols/saml2/auth for the first time, it prompts for a username and password; once provided, it goes into a loop. I can see a session generated at https://115.112.68.53:5000/Shibboleth.sso/Session

Miscellaneous
Client Address: 121.243.33.212
Identity Provider: https://idp.testshib.org/idp/shibboleth
SSO Protocol: urn:oasis:names:tc:SAML:2.0:protocol
Authentication Time: 2015-03-04T06:44:41.625Z
Authentication Context Class: urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport
Authentication Context Decl: (none)
Session Expiration (barring inactivity): 479 minute(s)

Attributes
affiliation: mem...@testshib.org;st...@testshib.org
entitlement: urn:mace:dir:entitlement:common-lib-terms
eppn: mys...@testshib.org
persistent-id: https://idp.testshib.org/idp/shibboleth!https://115.112.68.53/shibboleth!4Q6X4dS2MRhgTZOPTuL9ubMAcIM=
unscoped-affiliation: Member;Staff

Here are my config files.

shibboleth2.xml:

<SPConfig xmlns="urn:mace:shibboleth:2.0:native:sp:config"
          xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata" clockSkew="1800">
  <ApplicationDefaults entityID="https://115.112.68.53/shibboleth" REMOTE_USER="eppn">
    <Sessions lifetime="28800" timeout="3600" checkAddress="false" relayState="ss:mem"
              handlerSSL="true" cookieProps="; path=/; secure">
      <SSO entityID="https://idp.testshib.org/idp/shibboleth">SAML2 SAML1</SSO>
      <Logout>SAML2 Local</Logout>
      <Handler type="MetadataGenerator" Location="/Metadata" signing="false"/>
      <Handler type="Status" Location="/Status"/>
      <Handler type="Session" Location="/Session" showAttributeValues="true"/>
      <Handler type="DiscoveryFeed" Location="/DiscoFeed"/>
    </Sessions>
    <Errors supportContact="root@localhost" logoLocation="/shibboleth-sp/logo.jpg"
            styleSheet="/shibboleth-sp/main.css"/>
    <MetadataProvider type="XML" uri="https://www.testshib.org/metadata/testshib-providers.xml"
                      backingFilePath="/tmp/testshib-two-idp-metadata.xml" reloadInterval="18"/>
    <AttributeExtractor type="XML" validate="true" path="attribute-map.xml"/>
    <AttributeResolver type="Query" subjectMatch="true"/>
    <AttributeFilter type="XML" validate="true" path="attribute-policy.xml"/>
    <CredentialResolver type="File" key="sp-key.pem" certificate="sp-cert.pem"/>
    <ApplicationOverride id="idp_2" entityID="https://115.112.68.53/shibboleth">
      <!-- Sessions lifetime="28800" timeout="3600" checkAddress="false" relayState="ss:mem" handlerSSL="false" -->
      <Sessions lifetime="28800" timeout="3600" checkAddress="false" relayState="ss:mem"
                handlerSSL="true" cookieProps="; path=/; secure">
        <!-- Triggers a login request directly to the TestShib IdP. -->
        <SSO entityID="https://idp.testshib.org/idp/shibboleth" ECP="true">SAML2 SAML1</SSO>
        <Logout>SAML2 Local</Logout>
      </Sessions>
      <MetadataProvider type="XML" uri="https://www.testshib.org/metadata/testshib-providers.xml"
                        backingFilePath="/tmp/testshib-two-idp-metadata.xml" reloadInterval="18"/>
    </ApplicationOverride>
  </ApplicationDefaults>
  <SecurityPolicyProvider type="XML" validate="true" path="security-policy.xml"/>
  <ProtocolProvider type="XML" validate="true" reloadChanges="false" path="protocols.xml"/>
</SPConfig>

keystone httpd config:

WSGIDaemonProcess keystone user=keystone group=nogroup processes=3 threads=10
#WSGIScriptAliasMatch ^(/v3/OS-FEDERATION/identity_providers/.*?/protocols/.*?/auth)$ /var/www/keystone/main/$1
WSGIScriptAliasMatch ^(/v3/OS-FEDERATION/identity_providers/.*?/protocols/.*?/auth)$ /var/www/cgi-bin/keystone/main/$1

<VirtualHost *:5000>
    LogLevel info
    ErrorLog /var/log/keystone/keystone-apache-error.log
    CustomLog /var/log/keystone/ssl_access.log combined
    Options +FollowSymLinks
    SSLEngine on
    #SSLCertificateFile /etc/ssl/certs/mycert.pem
    #SSLCertificateKeyFile /etc/ssl/private/mycert.key
    SSLCertificateFile /etc/apache2/ssl/server.crt
    SSLCertificateKeyFile /etc/apache2/ssl/server.key
    SSLVerifyClient optional
    SSLVerifyDepth 10
    SSLProtocol all -SSLv2
    SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW
    SSLOptions +StdEnvVars +ExportCertData
    WSGIScriptAlias / /var/www/cgi-bin/keystone/main
    WSGIProcessGroup keystone
</VirtualHost>

<VirtualHost *:35357>
    LogLevel info
    ErrorLog /var/log/keystone/keystone-apache-error.log
    CustomLog /var/log/keystone/ssl_access.log combined
    Options +FollowSymLinks
    SSLEngine on
    #SSLCertificateFile /etc/ssl/certs/mycert.pem
    #SSLCertificateKeyFile /etc/ssl/private/mycert.key
    SSLCertificateFile /etc/apache2/ssl/server.crt
    SSLCertificateKeyFile /etc/apache2/ssl/server.key
[openstack-dev] [oslo_messaging]Notification listener for nova
Hi, I'm trying to listen to events of type compute.instance.* (e.g. compute.instance.update). I tried the following code:

    from oslo_config import cfg
    import oslo_messaging

    class NotificationEndpoint(object):
        filter_rule = oslo_messaging.NotificationFilter(
            publisher_id='compute.*',
            event_type='compute.instance.update',
            context={'ctxt_key': 'regexp'})

        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            print(payload)

        def warn(self, ctxt, publisher_id, event_type, payload, metadata):
            print(payload)

    transport = oslo_messaging.get_transport(cfg.CONF)
    targets = [oslo_messaging.Target(topic='notifications'),
               oslo_messaging.Target(topic='notifications_bis')]
    endpoints = [NotificationEndpoint()]
    pool = 'listener-workers'
    server = oslo_messaging.get_notification_listener(
        transport=transport, targets=targets, endpoints=endpoints, pool=pool)
    server.start()
    server.wait()

Then, from Horizon, I changed an instance name (which should call _send_instance_update_notification), but I didn't get any notification. Any ideas? Thanks, -- *Eduard Biceri Matei, Senior Software Developer* www.cloudfounders.com | eduard.ma...@cloudfounders.com *CloudFounders, The Private Cloud Software Company* Disclaimer: This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you are not the named addressee or an employee or agent responsible for delivering this message to the named addressee, you are hereby notified that you are not authorized to read, print, retain, copy or disseminate this message or any part of it. If you have received this email in error we request you to notify us by reply e-mail and to delete all electronic files of the message. If you are not the intended recipient you are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this information is strictly prohibited.
E-mail transmission cannot be guaranteed to be secure or error free as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete, or contain viruses. The sender therefore does not accept liability for any errors or omissions in the content of this message, and shall have no liability for any loss or damage suffered by the user, which arise as a result of e-mail transmission. __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
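One common cause of the silence described above is that the emitting service never publishes notifications in the first place. Assuming Kilo-era oslo.messaging option names, a nova.conf fragment along these lines (values illustrative) turns them on:

```ini
[DEFAULT]
# Publish notifications onto the message bus via oslo.messaging.
notification_driver = messagingv2
# Topic must match the listener's Target(topic='notifications').
notification_topics = notifications
# Emit compute.instance.update on VM/task state transitions.
notify_on_state_change = vm_and_task_state
```

With no notification driver configured, nova drops notifications silently, so the listener code can be perfectly correct and still print nothing.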
Re: [openstack-dev] [Devstack] Can't start service nova-novncproxy
After updating my devstack to pick up b9f2e25fa8afb2ea17a89ed76c4fac03689b5f07, it works now. Thanks very much. -chen -Original Message- From: Solly Ross [mailto:sr...@redhat.com] Sent: Tuesday, March 03, 2015 11:48 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Devstack] Can't start service nova-novncproxy You have to make sure it's in ENABLED_SERVICES in stackrc. It was removed by default, but then apparently restored via b9f2e25fa8afb2ea17a89ed76c4fac03689b5f07, so if you have that commit, you should be good. Otherwise, you can either add it to the ENABLED_SERVICES variable (either in stackrc, or in localrc if you've already overridden it there), or just call `enable_service n-novnc` in localrc. Best Regards, Solly Ross - Original Message - From: Chen Li chen...@intel.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Sent: Monday, March 2, 2015 9:59:27 PM Subject: Re: [openstack-dev] [Devstack] Can't start service nova-novncproxy Sorry, what do you mean by "Double-check to make sure that it's enabled"? I do set the following in my local.conf: enable_service n-nonvc NOVA_VNC_ENABLED=True NOVNCPROXY_URL=http://192.168.6.91:6080/vnc_auto.html VNCSERVER_LISTEN=0.0.0.0 VNCSERVER_PROXYCLIENT_ADDRESS=192.168.6.91 Also, I tried to install the packages novnc and python-novnc via apt-get. When I re-ran ./stack.sh, the devstack installation failed, complaining that the version of the six module is wrong. To make my devstack work again I removed the 2 packages, but the installation still failed with the same issue. Thanks. -chen -Original Message- From: Solly Ross [mailto:sr...@redhat.com] Sent: Tuesday, March 03, 2015 12:52 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Devstack] Can't start service nova-novncproxy Double-check to make sure that it's enabled.
A couple months ago, noVNC got removed from the standard install because devstack was installing it from GitHub. - Original Message - From: Chen Li chen...@intel.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Sent: Sunday, March 1, 2015 7:14:51 PM Subject: Re: [openstack-dev] [Devstack] Can't start service nova-novncproxy That's the most confusing part. I don't even have a log for service nova-novncproxy. Thanks. -chen -Original Message- From: Kashyap Chamarthy [mailto:kcham...@redhat.com] Sent: Monday, March 02, 2015 12:16 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Devstack] Can't start service nova-novncproxy On Sat, Feb 28, 2015 at 06:20:54AM +, Li, Chen wrote: Hi all, I'm trying to install a fresh all-in-one openstack environment by devstack. After the installation, all services look well, but I can't open the instance console in Horizon. I did a little check, and found that service nova-novncproxy was not started! What do you see in your 'screen-n-vnc.log' (I guess) log? I don't normally run Horizon or nova-vncproxy (only n-cpu, n-sch, n-cond); these are the ENABLED_SERVICES in my minimal DevStack config [1] (Nova, Neutron, Keystone and Glance): ENABLED_SERVICES=g-api,g-reg,key,n-api,n-cpu,n-sch,n-cond,mysql,rabbit,dstat,quantum,q-svc,q-agt,q-dhcp,q-l3,q-meta [1] https://kashyapc.fedorapeople.org/virt/openstack/2-minimal_devstack_localrc.conf Does anyone have an idea why this happened?
Here is my local.conf : http://paste.openstack.org/show/183344/ My os is: Ubuntu 14.04 trusty 3.13.0-24-generic -- /kashyap __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
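For anyone hitting the same problem: note that the devstack service name is n-novnc, while the local.conf quoted earlier enables "n-nonvc", a spelling devstack will not recognize. A minimal localrc fragment, keeping the addresses from the thread as illustrative values, would be:

```
enable_service n-novnc   # note the spelling: n-novnc, not n-nonvc
NOVA_VNC_ENABLED=True
NOVNCPROXY_URL=http://192.168.6.91:6080/vnc_auto.html
VNCSERVER_LISTEN=0.0.0.0
VNCSERVER_PROXYCLIENT_ADDRESS=192.168.6.91
```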
Re: [openstack-dev] [Manila] Ceph native driver for manila
On 03.03.2015 at 19:31, Deepak Shetty wrote: [...] For us security is very critical, as is performance. The first solution via ganesha is not what we prefer (to use CephFS via p9 and NFS would not perform that well, I guess). The second solution, to use CephFS directly from the VM, would be a bad solution from the security point of view, since we can't expose the Ceph public network directly to the VMs given all the security issues we discussed already. Is there any place the security issues are captured for the case where VMs access CephFS directly? No, there isn't any place, and this is the issue for us. I was curious to understand. IIUC Neutron provides private and public networks, and for VMs to access an external CephFS network, the tenant private network needs to be bridged/routed to the external provider network, and there are ways neutron achieves it. Are you saying that this approach of neutron is insecure? I don't say neutron itself is insecure. The problem is: we don't want any VM to get access to the ceph public network at all, since this would mean access to all MON, OSD and MDS daemons. If a tenant VM has access to the ceph public net, which is needed to use/mount native cephfs in this VM, one critical issue would be: the client can attack any ceph component via this network. Maybe I missed something, but routing doesn't change this fact. Danny __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [neutron] The future of Xen + OVS support
On Mar 3, 2015, at 1:44 PM, Bob Ball bob.b...@citrix.com wrote: We certainly care about this but converting the XenServer CI to Neutron+OVS has been hitting a number of issues (notably a few concurrency ones – although there are a few missing features) that we’ve been trying to sort through. I am certainly hoping that we’ll have everything stable enough to set up a CI specifically for this combination before the Kilo release and would be very keen not to deprecate XenAPI support in Kilo. Certainly even if we don’t make the Kilo release for the CI we will still be pushing for it in L. Bob I guess that answers the question of whether anyone cares enough to work on enabling 3rd party CI, and that it may be premature to consider deprecation. Until CI is live, though, and in the absence of resources committed to fixing breakage when it is detected, I’m not sure it’s reasonable that core reviewers have to consider Xen support in determining whether a patch is ready to merge. Given that, it’s entirely possible that we’ll end up shipping broken Xen support for kilo, especially given the changes coming to rootwrap and ovs interaction. Should we as a community consider that acceptable with the hope that fixes will be proposed and backported in a timely-enough fashion? Or maybe we should consider moving Xen support (which is centered on the ovs agent) out of the tree so that it can be maintained independent of the OpenStack release cycle? Maru From: Kevin Benton [mailto:blak...@gmail.com] Sent: 03 March 2015 21:06 To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [neutron] The future of Xen + OVS support It might make sense to send this to the users list as well. There may be a large deployment out there with resources willing to at least test changes even if they don't have any upstream development resources. 
On Tue, Mar 3, 2015 at 12:50 PM, Maru Newby ma...@redhat.com wrote: There have been a couple of patches [1] [2] proposed to neutron recently that could impact our support for Xen+OVS, but I don’t see an easy way for us to validate such changes. I’m not aware of a 3rd party CI job that runs against Xen, and I don’t know of any active contributors able to manually validate changes. Unless this changes, I think we should consider deprecating support for Xen in kilo. We can’t afford to block changes for fear of breaking something that nobody seems to care about. I’m hoping this post serves as a wake-up call to anyone that wants to see neutron maintain its support for Xen, and that they will be willing to devote resources to ensure that support for it continues. I’ve also added this issue to next week’s irc meeting [3]. Maru 1: https://review.openstack.org/#/c/148969/ 2: https://review.openstack.org/#/c/158805 3: https://wiki.openstack.org/wiki/Network/Meetings#On_Demand_Agenda __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Kevin Benton __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [openstackclient] doodle for meeting time selection
On Thu, Feb 26, 2015 at 3:32 PM, Doug Hellmann d...@doughellmann.com wrote: As we discussed in the meeting today, I’ve created a Doodle to coordinate a good day and time for future meetings. I picked a bunch of options based on when it looked like there were IRC rooms obviously available. If none of these options suit us, I can dig harder to find other open times. http://doodle.com/4uy5w2ehn8y2eayh Thanks Doug. At this point two times are at the top of the list. Since one of them is one hour later than the time we originally proposed and the other is the same time on Monday, I propose we declare our first choice to move the meeting one hour later to 19:00 UTC on Thursdays in #openstack-meeting. dt -- Dean Troyer dtro...@gmail.com __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Glance] [all] glance_store release 0.1.12
The glance_store release management team is pleased to announce: glance_store version 0.1.12 was released on Tuesday, March 3rd, around 23:51 UTC. For more information, please find the details at: https://launchpad.net/glance-store/+milestone/v0.1.12 Please report issues through launchpad: https://bugs.launchpad.net/glance-store Thanks, -Nikhil __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [TripleO] Getting to a 1.0
Hi, Don't let the subject throw you off :) I wasn't sure how to phrase what I wanted to capture in this mail, and that seemed reasonable enough. I wanted to kick off a discussion about what gaps people think are missing from TripleO before we can meet the goal of realistically being able to use TripleO in production. The things on my mind are:

Upgrades - I believe the community is trending away from the image-based upgrade/rebuild process. The ongoing Puppet integration work is integrated with Heat's SoftwareConfig/SoftwareDeployment features and is package driven. There is still work to be done, especially around supporting rollbacks, but I think this could be part of the answer to how the upgrade problem gets solved.

HA - We have an implementation of HA in tripleo-image-elements today. However, the Puppet codepath leaves that mostly unused. The Puppet modules, however, do support HA. Is that the answer here as well?

CLI - We have devtest. I'm not sure if anyone would argue that it should be used in production. It could be... but I don't think that was its original goal, and it shows. The downstreams of TripleO that I'm aware of each ended up more or less having their own CLI tooling. Obviously I'm only very familiar with one of the downstreams, but in some instances I believe parts of devtest were reused, and other times not. That raises the question: do we need a well represented unified CLI in TripleO? We have a pretty good story about using Nova/Ironic/Heat[0] to deploy OpenStack, and devtest is one such implementation of that story. Perhaps we need something more production oriented.

Baremetal management - To what extent should TripleO venture into this space? I'm thinking of things like discovery/introspection, ready state, and role assignment. Ironic is growing features to expose things like RAID management via vendor passthrough APIs. Should TripleO take a role in exercising those APIs?
It's something that could be built into the flow of the unified CLI if we were to end up going that route.

Bootstrapping - The undercloud needs to be bootstrapped/deployed/installed itself. We have the seed vm to do that. I've also worked on an implementation to install an undercloud via an installation script, assuming the base OS is already installed. Are these the only 2 options we should consider, or are there other ideas that would integrate better into existing infrastructure?

Release Cadence with wider OpenStack - I'd love to be able to say, on the day that a new release of OpenStack goes live, that you can use TripleO to deploy that release in production... and here's how you'd do it.

What other items should we include here? I almost added a point for Stability, but let's just assume we want to make everything as stable as we possibly can :). I know I've mostly raised questions. I have some of my own answers in mind. But, I was actually hoping to get others talking about what the right answers might be.

[0] Plus the other supporting cast of characters: Keystone/Glance/Neutron/Swift.

Thanks. -- -- James Slagle -- __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Horizon] [swift] dependency on non-standardized, private APIs
Added [swift] to topic. On 03/03/2015 07:41 AM, Matthew Farina wrote: Radoslaw, Unfortunately the documentation for OpenStack has some holes. What you are calling a private API may be something missed in the documentation. Is there a documentation bug on the issue? If not, one should be created. There is no indication that the X-Timestamp or X-Object-Meta-Mtime HTTP headers are part of the public Swift API: http://developer.openstack.org/api-ref-objectstorage-v1.html I don't believe this is a bug in the Swift API documentation, either. John Dickinson (cc'd) mentioned that the X-Timestamp HTTP header is required for the Swift implementation of container replication (John, please do correct me if I'm wrong on that). But that is the private implementation and not part of the public API. In practice OpenStack isn't a specification and implementation. The documentation has enough missing information that you can't treat it this way. If you want to contribute to improving the documentation, I'm sure the documentation team would appreciate it. The last time I looked there were a number of undocumented public swift API details. The bug here is not in the documentation. The bug is that Horizon is coded to rely on HTTP headers that are not in the Swift API. Horizon should be fixed to use DICT.get('X-Timestamp') instead of doing DICT['X-Timestamp'] in its view pages for container details. There are already patches up that the Horizon developers have, IMO erroneously, rejected (stating this is a problem in Ceph RadosGW for not properly following the Swift API). Best, -jay Best of luck, Matt Farina On Tue, Mar 3, 2015 at 9:59 AM, Radoslaw Zarzynski rzarzyn...@mirantis.com wrote: Guys, I would like to discuss a problem which can be seen in Horizon: breaking the boundaries of the public, well-specified Object Storage API in favour of utilizing Swift-specific extensions. Ticket #1297173 [1] may serve as a good example of such a violation.
It is about relying on a non-standard (in the terms of OpenStack Object Storage API v1) and undocumented HTTP header provided by Swift. In order to make Ceph RADOS Gateway work correctly with Horizon, developers had to inspect the sources of Swift and implement the same behaviour. From my perspective, that practice breaks the mission of OpenStack, which is much more than delivering yet another IaaS/PaaS implementation. I think its main goal is to provide a universal set of APIs covering all functional areas relevant to cloud computing, and to place that set of APIs in front of as many implementations as possible. Having an open source reference implementation of a particular API is required to prove its viability, but is secondary to having an open and documented API. I have full understanding that situations exist where the public OpenStack interfaces are insufficient to get the work done. However, introducing a dependency on an implementation-specific feature (especially without giving the users a choice via e.g. some configuration option) is not the proper way to deal with the problem. From my point of view, such cases should be handled through the adoption of a new, carefully designed and documented version of the given API. In any case I think that Horizon, at least its basic functionality, should work with any storage which provides the Object Storage API. That being said, I'm willing to contribute such patches, if we decide to go that way.
Best regards, Radoslaw Zarzynski [1] https://bugs.launchpad.net/horizon/+bug/1297173 __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
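The defensive pattern Jay suggests, tolerating a missing X-Timestamp instead of assuming it, can be sketched in a few lines. This is an illustrative helper, not Horizon's actual code; the function name and the fallback to the standard Last-Modified header are assumptions of the sketch:

```python
from datetime import datetime, timezone

def object_timestamp(headers):
    """Best-effort timestamp for a Swift-style object.

    Tolerates backends (e.g. Ceph RADOS Gateway) that omit the
    non-standard X-Timestamp header by falling back to the standard
    Last-Modified header, and finally to None.
    """
    ts = headers.get('X-Timestamp')  # .get() instead of headers['X-Timestamp']
    if ts is not None:
        return datetime.fromtimestamp(float(ts), tz=timezone.utc)
    last_modified = headers.get('Last-Modified')
    if last_modified is not None:
        parsed = datetime.strptime(last_modified, '%a, %d %b %Y %H:%M:%S %Z')
        return parsed.replace(tzinfo=timezone.utc)
    return None  # caller must handle "unknown" explicitly
```

A view that renders this value then only has to deal with an explicit None, rather than crashing with a KeyError on backends that follow the documented API.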
Re: [openstack-dev] [nova] Intended behavior for instance.host on reschedule?
Logged a bug [1] and submitted a fix [2]. Review away! [1] https://bugs.launchpad.net/nova/+bug/1427944 [2] https://review.openstack.org/#/c/161069/ - Joe On Mar 3, 2015, at 4:42 PM, Jay Pipes jaypi...@gmail.com wrote: On 03/03/2015 06:55 AM, Joe Cropper wrote: On Mar 3, 2015, at 8:34 AM, Vishvananda Ishaya vishvana...@gmail.com wrote: I’m pretty sure it has always done this: leave the host set on the final scheduling attempt. I agree that this could be cleared which would free up room for future scheduling attempts. Thanks Vish for the comment. Do we know if this is an intended feature or would we consider this a bug? It seems like we could free this up, as you said, to allow room for additional VMs, especially since we know it didn’t successfully deploy anyway? Seems like a bug to me. Feel free to create one in Launchpad and we'll get on it. Best, -jay __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Manila] Ceph native driver for manila
On Wed, Mar 4, 2015 at 5:10 AM, Danny Al-Gaaf danny.al-g...@bisect.de wrote: On 03.03.2015 at 19:31, Deepak Shetty wrote: [...] For us security is very critical, as is performance. The first solution via ganesha is not what we prefer (to use CephFS via p9 and NFS would not perform that well, I guess). The second solution, to use CephFS directly from the VM, would be a bad solution from the security point of view, since we can't expose the Ceph public network directly to the VMs given all the security issues we discussed already. Is there any place the security issues are captured for the case where VMs access CephFS directly? No, there isn't any place, and this is the issue for us. I was curious to understand. IIUC Neutron provides private and public networks, and for VMs to access an external CephFS network, the tenant private network needs to be bridged/routed to the external provider network, and there are ways neutron achieves it. Are you saying that this approach of neutron is insecure? I don't say neutron itself is insecure. The problem is: we don't want any VM to get access to the ceph public network at all, since this would mean access to all MON, OSD and MDS daemons. If a tenant VM has access to the ceph public net, which is needed to use/mount native cephfs in this VM, one critical issue would be: the client can attack any ceph component via this network. Maybe I missed something, but routing doesn't change this fact. Agree, but there are ways you can restrict the tenant VMs to specific network ports only, using neutron security groups, and limit what a tenant VM can do. On the CephFS side one can use SELinux labels to provide an additional level of security for the Ceph daemons, wherein only certain processes can access/modify them. I am just thinking aloud here; I'm not sure how well cephfs works combined with SELinux. Thinking more, it seems like you then need a solution that goes via the service-VM approach but provides native CephFS mounts instead of NFS?
thanx, deepak Danny __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
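The port-level restriction Deepak is thinking aloud about could be expressed with Neutron security groups. The commands below are an illustrative sketch using the Kilo-era neutron CLI; the group name and the 192.0.2.0/24 Ceph public network are made up, while 6789 and 6800-7300 are the default ports for Ceph monitors and OSD/MDS daemons:

```
# Security group for tenant VMs that are allowed to mount CephFS
neutron security-group-create ceph-clients

# Permit egress to the Ceph monitors (default port 6789)...
neutron security-group-rule-create --direction egress --protocol tcp \
  --port-range-min 6789 --port-range-max 6789 \
  --remote-ip-prefix 192.0.2.0/24 ceph-clients

# ...and to the OSD/MDS daemons (default range 6800-7300)
neutron security-group-rule-create --direction egress --protocol tcp \
  --port-range-min 6800 --port-range-max 7300 \
  --remote-ip-prefix 192.0.2.0/24 ceph-clients
```

Note that this only narrows which daemons are reachable; as Danny points out, a client that can reach a daemon at all can still attack it over that port, so security groups complement rather than replace CephX authentication and daemon-side hardening.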
Re: [openstack-dev] Call for mentors - Outreachy (former OPW)
@Christian I'm so glad to hear you are interested! I'm adding more details below :) @John Thanks for sharing your experience! It's great to know that both you and mahatic enjoyed the internship. So, more details about the internship. For mentees: from March 3rd to March 24th, mentees have time to get in touch with mentors in OpenStack and: - Make their first contribution. Just submit a fix for a low-hanging-fruit bug. - Discuss details of the task they will do during the internship. - Submit their applications. The internship starts on May 25th. For mentors: mentoring shouldn't be time consuming. We make sure during the selection process that the mentee is independent and proactive enough to carry on everyday tasks. That doesn't mean that the applicant has to know how to do everything, but she has to be able to solve most problems on her own, with a little help from her mentor if needed. Every project within OpenStack is encouraged to join Outreachy. Mentors should propose a task that can be solved during the internship time. This is not always how it works out, though; there are tasks that for one reason or another are delayed. If that happens, it's not a problem. Check out the Outreachy ideas page to see some tasks from this round and previous rounds [0]. For the application period, mentors should be able to discuss task ideas with applicants and draft a timeline for them. It's also appreciated if they can find a low-hanging-fruit bug for applicants to take and fix. For the internship period, you can arrange meetings, deadlines and every other milestone you consider necessary with your mentee. There are no constraints about it. Feel free to reach me on irc.freenode.org in #openstack-opw if you have any doubt or concern.
Thanks, [0] https://wiki.openstack.org/wiki/OutreachProgramForWomen/Ideas 2015-03-03 15:24 GMT-03:00 John Dickinson m...@not.mn: On Mar 3, 2015, at 3:38 AM, Victoria Martínez de la Cruz victo...@vmartinezdelacruz.com wrote: Hi all, The application deadline for the next round of Outreachy is close, and we need mentors willing to help newcomers get involved with OpenStack and help them start contributing [0][1]. This would be the 5th time OpenStack joins Outreachy, and so far we can say we are having a great outcome: we got many new strong contributors, we had several interesting new features, and we are helping many women start their careers in FOSS. The mentor task is not very time consuming and it's a really good experience. Please, if you are interested, feel free to contact me and I'll give you further details. I'd like to echo this statement, as one who was a mentor during this last round. It's a good experience that doesn't take a lot of extra time. I was initially worried about the time commitment, but it ended up being office hours a few times a week in IRC (where I am all the time anyway) and email correspondence with the intern. If you are considering being a mentor, I'd encourage it. It's a great learning experience and rewarding for both the mentor and intern.
--John Thanks a lot, [0] https://wiki.openstack.org/wiki/OutreachProgramForWomen [1] https://www.gnome.org/outreachy __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [QA] Meeting Thursday March 5th at 22:00 UTC
Hi everyone, Just a quick reminder that the weekly OpenStack QA team IRC meeting will be Thursday, March 5th at 22:00 UTC in the #openstack-meeting channel. The agenda for the meeting can be found here: https://wiki.openstack.org/wiki/Meetings/QATeamMeeting Anyone is welcome to add an item to the agenda. This meeting is also the first of the dedicated monthly tempest test removal meetings. The process for this is outlined here: https://wiki.openstack.org/wiki/QA/Tempest-test-removal The meeting will primarily be dedicated to discussing the removals that have been proposed on the etherpad. So, if everyone who has an interest in this could review this month's proposed tempest removals: https://etherpad.openstack.org/p/tempest-test-removals and outline any objections on the etherpad, that would be appreciated. To help people figure out what time 22:00 UTC is in other timezones, tomorrow's meeting will be at: 17:00 EST 07:00 JST 08:30 ACDT 23:00 CET 16:00 CST 14:00 PST -Matt Treinish __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [nova] blueprint about multiple workers supported in nova-scheduler
Hi all, I want to make it easy to launch a bunch of scheduler processes on a host; multiple scheduler workers will make use of the host's multiple processors and enhance the performance of nova-scheduler. I have registered a blueprint and committed a patch to implement it: https://blueprints.launchpad.net/nova/+spec/scheduler-multiple-workers-support This patch has been applied in our performance environment and passes some test cases, like concurrently booting multiple instances; so far we have not found any inconsistency issues. IMO, nova-scheduler should be easy to scale horizontally, and multiple workers should be supported as an out-of-the-box feature. Please feel free to discuss this feature, thanks. Best Regards __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
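The idea of running several scheduler workers on one host to use all of its processors can be illustrated with a small stdlib sketch. This is not the proposed nova patch, just the general worker-pool pattern it relies on; the names (scheduler_worker, run_workers) are made up for the example:

```python
import multiprocessing

# Use the fork start method explicitly (assumes a Unix host), so the
# sketch behaves the same when run from a plain script.
mp = multiprocessing.get_context("fork")

def scheduler_worker(worker_id, requests, results):
    # Each worker pulls scheduling requests off a shared queue,
    # simulating independent nova-scheduler processes consuming
    # from the same message bus.
    while True:
        req = requests.get()
        if req is None:  # sentinel: shut this worker down
            break
        results.put((worker_id, req))

def run_workers(num_workers, request_ids):
    requests = mp.Queue()
    results = mp.Queue()
    workers = [
        mp.Process(target=scheduler_worker, args=(i, requests, results))
        for i in range(num_workers)
    ]
    for w in workers:
        w.start()
    for req in request_ids:
        requests.put(req)
    for _ in workers:          # one sentinel per worker
        requests.put(None)
    # Drain results before joining to avoid queue-flush deadlocks.
    handled = [results.get(timeout=10) for _ in request_ids]
    for w in workers:
        w.join()
    return handled
```

Every request is handled exactly once, by whichever worker got to it first, which is exactly the property the blueprint's concurrent-boot tests are checking for (no request lost, none scheduled twice).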
[openstack-dev] [all][log][api] Reminder: Log Working Group meeting 3/4/2015 -- Wednesday 20:00UTC
Log Working Group is a cross project (horizontal) and community group that is working to rationalize log messages and logging practices across the OpenStack ecosystem. This has been identified as a big concern within the user communities. Anyone with an interest in improving logs, logging and documentation around this is encouraged to join in the discussion. First meeting: 3/4/2015 20:00UTC in #openstack-meeting-4 https://wiki.openstack.org/wiki/Meetings/log-wg We'll discuss: * Intro/Getting organized/ Agenda * Error Codes (from the mailing list, and from Kilo Summit) * Action Items for Ops Midcycle meetup * Priorities * General Discussion https://wiki.openstack.org/wiki/LogWorkingGroup Thanks, --Rocky Grober __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] Question about boot-from-volume instance and flavor
Thank you for the reply, Jay. +1 for "There should not be any magic setting for root_gb that needs to be interpreted both by the user and the Nova code base." I will try to rework patch 136284 in a different way, e.g., via the instance object. Best Regards 2015-03-04 4:45 GMT+08:00 Jay Pipes jaypi...@gmail.com: On 03/03/2015 01:10 AM, Rui Chen wrote: Hi all, When we boot an instance from a volume, we find some ambiguous description of the flavor root_gb in the operations guide, http://docs.openstack.org/openstack-ops/content/flavors.html "Virtual root disk size in gigabytes. This is an ephemeral disk the base image is copied into. You don't use it when you boot from a persistent volume. The 0 size is a special case that uses the native base image size as the size of the ephemeral root volume." 'You don't use it (root_gb) when you boot from a persistent volume.' Does that mean we need to set root_gb to 0 or not? I don't know. Hi Rui, I agree the documentation -- and frankly, the code in Nova -- is confusing around this area. But I found out that root_gb will be added into local_gb_used of the compute_node, so it will impact the next scheduling. Think about a use case: the local_gb of a compute_node is 10 and we boot instances from volume with a root_gb=5 flavor. In this case, I can only boot 2 boot-from-volume instances on the compute node, although these instances don't use the local disk of the compute node. I found a patch that tries to fix this issue, https://review.openstack.org/#/c/136284/ I want to know which solution is better for you? Solution #1: boot the instance from volume with a root_gb=0 flavor. Solution #2: add some special logic to correct the disk usage, like patch #136284. Solution #2 is a better idea, IMO. There should not be any magic setting for root_gb that needs to be interpreted both by the user and the Nova code base. The issue with the 136284 patch is that it is trying to address the problem in the wrong place, IMHO.
Best, -jay __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
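To make the accounting issue in this thread concrete, here is a toy model of the disk bookkeeping. This is not the actual Nova resource tracker; the `is_volume_backed` flag and function names are hypothetical, but the arithmetic mirrors Rui's 10 GB example:

```python
# Illustrative sketch of the disk-accounting issue: if the tracker
# charges flavor root_gb for every instance, volume-backed instances
# consume local_gb they never actually use. A fix in the spirit of
# patch 136284 skips root_gb when the instance boots from a volume.
# All names here are hypothetical, not Nova's.

def local_disk_used(instances, count_volume_backed_root=True):
    used = 0
    for inst in instances:
        if inst["is_volume_backed"] and not count_volume_backed_root:
            continue  # root disk lives on the Cinder volume, not locally
        used += inst["root_gb"]
    return used

instances = [{"root_gb": 5, "is_volume_backed": True},
             {"root_gb": 5, "is_volume_backed": True}]

# Current behavior: 10 GB "used" on a 10 GB node -- no room for a third VM.
print(local_disk_used(instances))                                   # 10
# Corrected behavior: no local disk consumed by boot-from-volume VMs.
print(local_disk_used(instances, count_volume_backed_root=False))   # 0
```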
Re: [openstack-dev] [heat][neutron] allowed_address_pairs does not work
While it is entirely possible that the feature is broken, it seems that in this case you're expecting allowed_address_pairs to populate fixed_ips. Neutron does many crazy and unreasonable things, but asking you to pass an attribute in the request to populate another attribute is not one of them! Basically, allowed address pairs are MAC/IP pairs for which you allow traffic on a port, but that are not managed by Neutron. This means that, in your case, if you defined an additional IP address and set it to 192.168.0.58 in your instance, Neutron would allow traffic from or to that address. If you did not explicitly add that address in allowed_address_pairs, Neutron would block traffic to and from it. From the CLI, you should be able to see the allowed address pairs configured on a port with neutron port-show. If you wanted to configure 192.168.0.58 as your port's IP address and let Neutron manage it, you should be able to use the fixed_ips attribute, although I don't know how to leverage that through Heat templates. Salvatore On 3 March 2015 at 15:41, Jay Lau jay.lau@gmail.com wrote: Hi, I see that the Neutron port resource has a property named allowed_address_pairs, and I tried to use this property to create a port, but it does not seem to work. I want to create a port with MAC fa:16:3e:05:d5:9f and IP 192.168.0.58, but after creating it with a Heat template, the final Neutron port MAC is fa:16:3e:01:45:bb and the IP is 192.168.0.62. Can someone show me what is wrong in my configuration? Also, allowed_address_pairs is a list; does that mean I can create a port with multiple MAC and IP addresses? If so, when I create a VM with this port, does it mean the VM can have multiple MAC/IP pairs? [root@prsdemo2 ~]# cat port-3.yaml heat_template_version: 2013-05-23 description: HOT template to create a new neutron network plus a router to the public network, and for deploying two servers into the new network.
The template also assigns floating IP addresses to each server so they are routable from the public network.

resources:
  server1_port:
    type: OS::Neutron::Port
    properties:
      allowed_address_pairs:
        - mac_address: fa:16:3e:05:d5:9f
          ip_address: 192.168.0.58
      network: demonet

[root@prsdemo2 ~]# heat stack-create -f ./port-3.yaml p3
+--------------------------------------+------------+--------------------+----------------------+
| id                                   | stack_name | stack_status       | creation_time        |
+--------------------------------------+------------+--------------------+----------------------+
| 234d512c-4c90-4d4e-8d1c-ccf272254477 | p3         | CREATE_IN_PROGRESS | 2015-03-03T14:35:49Z |
+--------------------------------------+------------+--------------------+----------------------+

[root@prsdemo2 ~]# heat stack-list
+--------------------------------------+------------+-----------------+----------------------+
| id                                   | stack_name | stack_status    | creation_time        |
+--------------------------------------+------------+-----------------+----------------------+
| 234d512c-4c90-4d4e-8d1c-ccf272254477 | p3         | CREATE_COMPLETE | 2015-03-03T14:35:49Z |
+--------------------------------------+------------+-----------------+----------------------+

[root@prsdemo2 ~]# neutron port-list
+--------------------------------------+------------------------------+-------------------+-------------------------------------------------------------------------------------+
| id                                   | name                         | mac_address       | fixed_ips                                                                           |
+--------------------------------------+------------------------------+-------------------+-------------------------------------------------------------------------------------+
| 8d20b3a4-024a-4613-9d26-3d49534a839c | p3-server1_port-op3w5yzyks5i | fa:16:3e:01:45:bb | {"subnet_id": "4e7b6983-7364-4a71-8d9c-580d88fd4797", "ip_address": "192.168.0.62"} |
+--------------------------------------+------------------------------+-------------------+-------------------------------------------------------------------------------------+

-- Thanks, Jay Lau (Guangya Liu) __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
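To make Salvatore's distinction concrete, here is a small Python sketch contrasting the two port attributes. The attribute names (`fixed_ips`, `allowed_address_pairs`) match the Neutron port API, but the helper itself is purely illustrative and does not talk to a real Neutron:

```python
# allowed_address_pairs only *permits* extra traffic on a port;
# fixed_ips is what actually assigns an address Neutron manages.
# The dicts below mirror port-create request bodies, using the
# values from Jay's example; port_body() itself is a hypothetical
# helper for illustration, not a Neutron client call.

def port_body(network, fixed_ips=None, allowed_address_pairs=None):
    body = {"network_id": network}
    if fixed_ips:
        body["fixed_ips"] = fixed_ips  # addresses Neutron assigns/manages
    if allowed_address_pairs:
        # traffic merely permitted; Neutron does not manage these
        body["allowed_address_pairs"] = allowed_address_pairs
    return {"port": body}

# What Jay wrote: the pair is allowed, but the port still gets a
# Neutron-assigned address (192.168.0.62 in his output).
permitted_only = port_body(
    "demonet",
    allowed_address_pairs=[{"mac_address": "fa:16:3e:05:d5:9f",
                            "ip_address": "192.168.0.58"}])

# What he likely wanted: make 192.168.0.58 the port's actual address.
assigned = port_body("demonet",
                     fixed_ips=[{"ip_address": "192.168.0.58"}])

print("fixed_ips" in permitted_only["port"])  # False
print("fixed_ips" in assigned["port"])        # True
```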
Re: [openstack-dev] [oslo] Graduating oslo.reports: Request to review clean copy
On Tue, Feb 24, 2015, at 08:15 PM, Solly Ross wrote: Those scripts have mostly moved into oslotest, so they don't need to be synced any more to be used. If you have all of your code, and it follows the cookiecutter template, we can look at it and propose post-import tweaks. What's the URL for the repository? Heh, whoops. I should probably have included that. It's at https://github.com/directxman12/oslo.reports The contents there look OK to start the process of importing them. We'll have to fix up the tests (I'm seeing failures because of sort order in dictionaries and some pep8 failures), but those fixes can wait. Doug Thanks! Best Regards, Solly __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [glance] [trove] [heat] [nova] [all] Handling forwarded requests
Ian, thanks for raising the issue here. The X-Forwarded headers are the standard way to deal with URLs for services behind proxies. I already commented on the Heat proposal to that effect; I think that is the proper way to support services behind proxies. Now in our case, there is also another source of external URLs: the Keystone catalog. One can assume the catalog has the externally visible URL under PUBLIC_URL, so I would like to suggest that if the X-Forwarded headers aren't present, the catalog would be a good second source. I think having a hardcoded entry in the config (as the merged Glance patch does) is not a bad idea as an override for special situations, but I really see no need for that to be the only solution to this problem. Web applications have been working behind proxies using the X-Forwarded headers for a long time; it's a nice and proven solution, in my opinion. Miguel On Tue, Mar 3, 2015 at 9:09 AM, Ian Cordasco ian.corda...@rackspace.com wrote: Hey all, It appears that currently a number of OpenStack services are not generating version catalogs correctly when the service sits behind a proxy. (Reference: https://bugs.launchpad.net/glance/+bug/1384379) Glance already has a fix that was accepted for kilo-1, but it is suboptimal and assumes there's only one proxy address that will forward the request (which obviously is not true in every case). There exists RFC 7239 (http://tools.ietf.org/html/rfc7239), which is recent but defines a "Forwarded" header with explicit parameters that we should be using. There are also the de facto standards of X-Forwarded-By and X-Forwarded-Host that we should be inspecting as well. Currently, Glance's solution is being copied over to other projects (Trove, Heat, Nova) and it is clearly suboptimal. I'm going to work on a more general solution for this, but if anyone can hammer one out faster, don't be afraid to submit it. This is absolutely a bug that should be a high priority for us.
Cheers, Ian __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
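As a sketch of what inspecting RFC 7239 could look like, the parser below splits a `Forwarded` header into per-hop parameter dicts (leftmost hop is closest to the client). It is illustrative only -- quoted-string handling is simplified, and this is not the eventual OpenStack implementation:

```python
# Minimal RFC 7239 "Forwarded" header parser sketch. Each comma-
# separated element describes one proxy hop; each hop carries
# semicolon-separated key=value parameters (for, by, host, proto).
# Quoted values containing commas/semicolons are NOT handled here.

def parse_forwarded(value):
    """Return a list of dicts, one per proxy hop."""
    hops = []
    for element in value.split(","):
        hop = {}
        for pair in element.split(";"):
            if "=" not in pair:
                continue
            key, _, val = pair.partition("=")
            hop[key.strip().lower()] = val.strip().strip('"')
        hops.append(hop)
    return hops

hdr = 'for=192.0.2.60;proto=https;host=cloud.example.com, for=198.51.100.17'
hops = parse_forwarded(hdr)
print(hops[0]["host"])   # cloud.example.com
print(hops[0]["proto"])  # https
print(len(hops))         # 2
```

A service could use `hops[0]["host"]` and `hops[0]["proto"]` to build its externally visible version-catalog URLs, falling back to the catalog or a config override when the header is absent, as Miguel suggests.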
[openstack-dev] [mistral] Break_on in Retry policy
Hello, Recently we found that the break_on property of RetryPolicy is not working. I tried to solve this problem but faced another problem: how is 'break_on' supposed to work? Will 'break_on' change the task state to ERROR or SUCCESS? If ERROR, it means 'we interrupt all retry iterations and keep the state which was there before'. But if SUCCESS, it means 'we interrupt all retry iterations and assume a SUCCESS state for the task because we caught this condition'. This is ambiguous. There is a suggestion to use not just 'break_on' but, say, other, more explicit properties which would remove this ambiguity. For example, 'success_on' and 'error_on'. Thoughts? - Best Regards, Nikolay @Mirantis Inc. __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
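The ambiguity is easier to see in code. The sketch below is a toy model of a retry loop -- not Mistral's actual RetryPolicy -- showing how separate 'success_on' and 'error_on' conditions make the intended final state explicit, where a single 'break_on' leaves it undefined:

```python
# Toy retry loop modeling the proposal in this thread. All names
# are illustrative; Mistral's real policy engine works differently.

def retry(task, max_attempts, success_on=None, error_on=None):
    """Run `task` until it succeeds, a break condition fires, or
    attempts run out. Returns the final task state."""
    for attempt in range(1, max_attempts + 1):
        state = task(attempt)
        if state == "SUCCESS":
            return "SUCCESS"
        if success_on and success_on(state, attempt):
            return "SUCCESS"   # interrupt retries, treat as success
        if error_on and error_on(state, attempt):
            return "ERROR"     # interrupt retries, keep the failure
    return "ERROR"

def always_fails(attempt):
    return "FAILED"

# 'error_on' after 2 attempts: stop early and stay in ERROR.
print(retry(always_fails, 5, error_on=lambda s, a: a >= 2))          # ERROR
# 'success_on' on a recognized condition: stop early, report SUCCESS.
print(retry(always_fails, 5, success_on=lambda s, a: s == "FAILED")) # SUCCESS
```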
[openstack-dev] [Nova] Should Nova delete the implicit volume and volume snapshot it creates?
Hi, I have a question regarding the implicit volume resources that are created by Nova during a persistent VM's snapshot creation. Here is a sample scenario that leaves a set of volume resources as residue: 1. Create a bootable volume, bv_1. 2. Create an instance, vm_1, from the bootable volume bv_1. 3. Create an image snapshot of vm_1, vm_1_img. This implicitly creates a volume snapshot of bv_1, bv_1_vsnap. 4. Create an instance vm_2 from vm_1_img. This implicitly creates a bootable volume, boot_vol2, from the volume snapshot bv_1_vsnap. 5. Delete vm_1, vm_2, and vm_1_img. Now try to delete bv_1. This fails because there is a dependent volume snapshot. Shouldn't Nova clean up these resources that it created implicitly? Regards Nithya __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
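The residue can be modeled with a toy dependency check (purely illustrative -- in reality Cinder enforces this server-side; the class and method names here are hypothetical):

```python
# Toy model of the scenario above: the implicit volume snapshot
# (bv_1_vsnap) keeps its parent volume undeletable even after the
# VMs and the image have been deleted.

class Volume:
    def __init__(self, name):
        self.name = name
        self.snapshots = []

    def snapshot(self, snap_name):
        self.snapshots.append(snap_name)
        return snap_name

    def delete(self):
        # Mirrors Cinder refusing to delete a volume with snapshots.
        if self.snapshots:
            raise RuntimeError(
                f"{self.name} has dependent snapshots: {self.snapshots}")

bv_1 = Volume("bv_1")
bv_1.snapshot("bv_1_vsnap")   # created implicitly by the image snapshot

try:
    bv_1.delete()             # step 5: fails, snapshot still exists
except RuntimeError as exc:
    print(exc)
```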
Re: [openstack-dev] auto-abandon changesets considered harmful (was Re: [stable][all] Revisiting the 6 month release cycle [metrics])
-Original Message- From: Thierry Carrez [mailto:thie...@openstack.org] Sent: 03 March 2015 10:00 To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] auto-abandon changesets considered harmful (was Re: [stable][all] Revisiting the 6 month release cycle [metrics]) Doug Wiegley wrote: [...] But I think some of the push back in this thread is challenging this notion that abandoning is negative, which you seem to be treating as a given. I don't. At all. And I don't think I'm alone. I was initially on your side: the abandoned patches are not really deleted, you can easily restore them. So abandoned could just mean inactive or stale in our workflow, and people who actually care can easily unabandon them to make them active again. And since abandoning is currently the only way to permanently get rid of stale / -2ed / undesirable changes anyway, so we should just use that. But words matter, especially for new contributors. For those contributors, someone else abandoning a proposed patch of theirs is a pretty strong move. To them, abandoning should be their decision, not yours (reviewers can -2 patches). Launchpad used to have a similar struggle between real meaning and workflow meaning. It used to have a single status for rejected bugs (Invalid). In the regular bug workflow, that status would be used for valid bugs that you just don't want to fix. But then that created confusion to people outside that workflow since the wrong word was used. So WontFix was introduced as a similar closed state (and then they added Opinion because WontFix seemed too harsh, but that's another story). We have (like always) tension around the precise words we use. You say Abandon is generally used in our community to set inactive. Jim says Abandon should mean abandon and therefore should probably be left to the proposer, and other ways should be used to set inactive. There are multiple solutions to this naming issue. 
You can rename abandon so that it actually means "set inactive" or "mark as stale". Or you can restrict abandon to the owner of a change, stop defaulting to is:open to list changes, and introduce features in Gerrit so that an is:active query would give you the right thing. But that query would need to be the Gerrit default, not some obscure query you can run or add to your dashboard -- otherwise we are back at step 1. -- Thierry Carrez (ttx) I'd like to ask a few questions regarding this, as I'm very much pro cleaning the review queues of abandoned stuff. How often do people (committer/owner/_reviewer_) abandon changes actively? By reviewer here I do not mean only cores marking other people's abandoned patch sets as abandoned; I mean, how many times have you seen a person state that (s)he will not review a change anymore? I haven't seen that, but I've seen lots of changes where a person reviewed at some early stage and, 10 revisions later, still has not given their input again. What I'm trying to say here is that it does not make the change any less abandoned if it's not marked abandoned by the owner. It's rarely an active process. Regarding the contributor experience, I'd say it's far more harmful not to mark abandoned changes abandoned than to do so. If a person really doesn't know, and can't figure out in weeks, how to a) join the mailing list, b) get on IRC, c) write a comment on the change, or d) reach out to anyone in the project by any other means to express that (s)he does not know how to fix the issue flagged, I'm not sure we will miss that person as a contributor so much either. And yes, the message should be strong, stating that the change has passed the point where it will most probably get any traction and that active action needs to be taken to continue the workflow. At the same time, let's turn this around: how many new contributors do we drive away because of the reaction "Whoa, this many changes have been sitting here for weeks; I have no chance to get my change in quickly"?
Specifically to the Nova, Swift and Cinder folks: how much benefit do you see for bug lifecycle management from abandoning? I would assume bugs that carry a message that their proposed fix was abandoned get far more traction than the ones where the fix has been sitting stale in the queue for weeks. And how many of those abandoned ones get reactivated? Lastly, I'd like to point out that life is full of disappointments. We should not try to keep our community in a bubble where no one ever gets disappointed or has their feelings hurt. I do not appreciate that approach in the current trend of raising children, and I definitely do not appreciate that approach towards adults. Perhaps the people with a bad experience will learn something and get over it, or move on. Neither is bad for the community. - Erno __ OpenStack Development Mailing List (not for usage questions) Unsubscribe:
Re: [openstack-dev] [nova] what's the merge plan for current proposed microversions?
Bump. I'd really appreciate some answers to the question Sean asked. I still have the 2.4 in my review (the very one Sean mentioned) but it seems that it might not be the case. Best regards, Alex Levine On 3/2/15 2:30 PM, Sean Dague wrote: This change for the additional attributes for ec2 looks like it's basically ready to go, except it has the wrong microversion on it (as they anticipated the other changes landing ahead of them) - https://review.openstack.org/#/c/155853 What's the plan for merging the outstanding microversions? I believe we're all conceptually approved on all them, and it's an important part of actually moving forward on the new API. It seems like we're in a bit of a holding pattern on all of them right now, and I'd like to make sure we start merging them this week so that they have breathing space before the freeze. -Sean __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)
https://review.openstack.org/#/c/159840/1/doc/source/testing/openflow-firewall.rst I may need some help from the OVS experts to answer the questions from henry.hly. Ben, Thomas, could you please? (let me know if you are not registered to the openstack review system, I could answer in your name). Best, Miguel Ángel Ajo On Friday, 27 de February de 2015 at 14:50, Miguel Ángel Ajo wrote: Ok, I moved the document here [1], and I will eventually submit another patch with the testing scripts when those are ready. Let’s move the discussion to the review!, Best, Miguel Ángel Ajo [1] https://review.openstack.org/#/c/159840/ On Friday, 27 de February de 2015 at 7:03, Kevin Benton wrote: Sounds promising. We'll have to evaluate it for feature parity when the time comes. On Thu, Feb 26, 2015 at 8:21 PM, Ben Pfaff b...@nicira.com (mailto:b...@nicira.com) wrote: This sounds quite similar to the planned support in OVN to gateway a logical network to a particular VLAN on a physical port, so perhaps it will be sufficient. On Thu, Feb 26, 2015 at 05:58:40PM -0800, Kevin Benton wrote: If a port is bound with a VLAN segmentation type, it will get a VLAN id and a name of a physical network that it corresponds to. In the current plugin, each agent is configured with a mapping between physical networks and OVS bridges. The agent takes the bound port information and sets up rules to forward traffic from the VM port to the OVS bridge corresponding to the physical network. The bridge usually then has a physical interface added to it for the tagged traffic to use to reach the rest of the network. On Thu, Feb 26, 2015 at 4:19 PM, Ben Pfaff b...@nicira.com (mailto:b...@nicira.com) wrote: What kind of VLAN support would you need? On Thu, Feb 26, 2015 at 02:05:41PM -0800, Kevin Benton wrote: If OVN chooses not to support VLANs, we will still need the current OVS reference anyway so it definitely won't be wasted work. 
On Thu, Feb 26, 2015 at 2:56 AM, Miguel Angel Ajo Pelayo majop...@redhat.com (mailto:majop...@redhat.com) wrote: Sharing thoughts that I was having: Maybe during the next summit it's worth discussing the future of the reference agent(s). I feel we'll be replicating a lot of work across OVN/OVS/RYU(ofagent) and maybe other plugins. I guess until OVN and its integration are ready we can't stop, so it makes sense to keep development on our side; also, having an independent plugin can help us iterate faster for new features. Yet I expect that OVN will be more fluent at working with OVS and OpenFlow, as their designers have a very deep knowledge of OVS under the hood, and it's C. ;) Best regards, On 26/2/2015, at 7:57, Miguel Ángel Ajo majop...@redhat.com (mailto:majop...@redhat.com) wrote: On Thursday, 26 de February de 2015 at 7:48, Miguel Ángel Ajo wrote: Inline comments follow after this, but I wanted to respond to Brian's question, which has been cut out: We're talking here of doing a preliminary analysis of the networking performance, before writing any real code at the neutron level. If that looks right, then we should go into a preliminary (and orthogonal to iptables/LB) implementation. At that moment we will be able to examine the scalability of the solution in regards to switching openflow rules, which is going to be severely affected by the way we handle OF rules in the bridge: * via OpenFlow, making the agent a 'real' OF controller, with the current effort to use the ryu framework plugin to do that. * via cmdline (would be alleviated with the current rootwrap work, but the former one would be preferred). Also, ipset groups can be moved into conjunctive groups in OF (thanks Ben Pfaff for the explanation, if you're reading this ;-)) Best, Miguel Ángel On Wednesday, 25 de February de 2015 at 20:34, Tapio Tallgren wrote: Hi, The RFC2544 with near zero packet loss is a pretty standard performance benchmark.
It is also used in the OPNFV project ( https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases ). Does this mean that OpenStack will have stateful firewalls (or security groups)? Any other ideas planned, like ebtables-type filtering? What I am proposing is in terms of maintaining the statefulness we have now in security groups
[openstack-dev] [nova] Question about boot-from-volume instance and flavor
Hi all, When we boot an instance from a volume, we find some ambiguous description of the flavor root_gb in the operations guide, http://docs.openstack.org/openstack-ops/content/flavors.html *Virtual root disk size in gigabytes. This is an ephemeral disk the base image is copied into. You don't use it when you boot from a persistent volume.* *The 0 size is a special case that uses the native base image size as the size of the ephemeral root volume.* 'You don't use it (root_gb) when you boot from a persistent volume.' Does that mean we need to set root_gb to 0 or not? I don't know. But I found out that root_gb will be added into local_gb_used of the compute_node, so it will impact the next scheduling. Think about a use case: the local_gb of a compute_node is 10 and we boot instances from volume with a root_gb=5 flavor. In this case, I can only boot 2 boot-from-volume instances on the compute node, although these instances don't use the local disk of the compute node. I found a patch that tries to fix this issue, https://review.openstack.org/#/c/136284/ I want to know which solution is better for you? Solution #1: boot the instance from volume with a root_gb=0 flavor. Solution #2: add some special logic to correct the disk usage, like patch #136284. __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Heat][Mistral] How to use Mistral resources in Heat
Hi Peter, Thanks for sharing this. Overall it looks good to me; I just left a couple of comments/questions in https://review.openstack.org/#/c/147645/17/contrib/heat_mistral/heat_mistral/tests/test_workflow.py Could you please take a look? Keep in mind that one thing is still not implemented: workflow references get broken if we upload a workflow which calls another workflow. We need to discuss the best way to deal with that. Either we need to restore those references in Heat itself, accounting for the stack name etc., or we need to provide some facility in Mistral itself. Other than that, it looks ready to start gathering Heat folks' feedback. Renat Akhmerov @ Mirantis Inc. On 26 Feb 2015, at 17:49, Peter Razumovsky prazumov...@mirantis.com wrote: In anticipation of Mistral support in Heat, let's introduce using Mistral in Heat. 1. To use the Mistral workflow and cron-trigger resources in Heat, Mistral must be installed in DevStack. The installation guide for DevStack is at https://github.com/stackforge/mistral/tree/master/contrib/devstack Detailed information about Workflow and CronTrigger can be found at https://wiki.openstack.org/wiki/Mistral Note that Mistral currently uses DSLv2 (https://wiki.openstack.org/wiki/Mistral/DSLv2) and REST API v2 (https://wiki.openstack.org/wiki/Mistral/RestAPIv2). 2. When Mistral is installed, check its accessibility - in screen or using the command 'mistral --help' (list of commands). You can test the Mistral resources by creating workflow resources with DSLv2-formatted definitions, cron-triggers and executions.
For example, the command 'mistral workflow-list' gives the table:

Starting new HTTP connection (1): 192.168.122.104
Starting new HTTP connection (1): localhost
+---------------------+------+-------------------------------+---------------------+------------+
| Name                | Tags | Input                         | Created at          | Updated at |
+---------------------+------+-------------------------------+---------------------+------------+
| std.create_instance | none | name, image_id, flavor_id... | 2015-01-27 14:16:21 | None       |
| std.delete_instance | none | instance_id                   | 2015-01-27 14:16:21 | None       |
+---------------------+------+-------------------------------+---------------------+------------+

3. The Mistral resources for Heat can be found here: https://review.openstack.org/#/q/status:open+project:openstack/heat+branch:master+topic:bp/mistral-resources-for-heat,n,z 4. Simple templates using Mistral resources in Heat templates can be found here: https://wiki.openstack.org/wiki/Heat_Mistral_resources_usage_examples __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] Call for mentors - Outreachy (former OPW)
Hi all, The application deadline for the next round of Outreachy is close, and we need mentors willing to help newcomers get involved with OpenStack and help them start contributing [0][1]. This would be the 5th time OpenStack joins Outreachy, and so far we can say we are having a great outcome: we got many new strong contributors, we had several interesting new features, and we are helping many women start their careers in FOSS. The mentor task is not very time consuming and it's a really good experience. Please, if you are interested, feel free to contact me and I'll give you further details. Thanks a lot, [0] https://wiki.openstack.org/wiki/OutreachProgramForWomen [1] https://www.gnome.org/outreachy __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [neutron][vpnaas] VPNaaS Subteam meetings
Hi all! The email that I sent on 2/24 didn't make it to the mailing list (no wonder I didn't get responses!). I think I had an issue with the email address used - sorry for the confusion! So, I'll hold the meeting today (1500 UTC, meeting-4, if it is still available), and we can discuss this... We've been having very low turnout for meetings for the past several weeks, so I'd like to ask those in the community interested in VPNaaS what the preference would be regarding meetings... A) hold at the same day/time, but only on-demand. B) hold at a different day/time. C) hold at a different day/time, but only on-demand. D) hold as an on-demand topic in the main Neutron meeting. Please vote, and provide a desired day/time if you pick B or C. The fallback will be (D) if there's not much interest in meeting anymore, or if we can't seem to come to a consensus (or super-majority :) Regards, PCM (Paul Michali) IRC: pc_m (irc.freenode.com) Twitter: @pmichali TEXT: 6032894458 __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Cinder] Need +A (workflow +1) for https://review.openstack.org/156940
Anteaya, In general I agree, but because of TZ differences you cannot always do that. Also, I sent mail only for the case where we had all the +1s and +2s and just needed the workflow +1, which I think is justifiable! thanx, deepak On Tue, Mar 3, 2015 at 1:30 PM, Anita Kuno ante...@anteaya.info wrote: On 03/03/2015 02:17 AM, Deepak Shetty wrote: Hi all, Can someone give +A to https://review.openstack.org/156940 - we have the rest. Need to get this merged for glusterfs CI to pass the snapshot_when_volume_in_use testcases. thanx, deepak Do not request reviews on the mailing list. Please spend time in the project channel to which you wish to contribute and discuss patch status in there. Thank you, Anita. __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] Intended behavior for instance.host on reschedule?
I’m pretty sure it has always done this: leave the host set on the final scheduling attempt. I agree that this could be cleared, which would free up room for future scheduling attempts. Vish On Mar 3, 2015, at 12:15 AM, Joe Cropper cropper@gmail.com wrote: Hi Folks, I was wondering if anyone can comment on the intended behavior of how instance.host is supposed to be set during reschedule operations. For example, take this scenario: 1. Assume an environment with a single host… call it host-1. 2. Deploy a VM, but force an exception in the spawn path somewhere to simulate some hypervisor error. 3. The scheduler correctly attempts to reschedule the VM, and ultimately ends up (correctly) with a NoValidHost error because there was only 1 host. 4. However, the instance.host (e.g., [nova show vm]) is still showing ‘host-1’ — is this the expected behavior? It seems like perhaps the claim should be reverted (read: instance.host nulled out) when we take the exception path during spawn in step #2 above, but maybe I’m overlooking something? This behavior was observed on a Kilo base from a couple weeks ago, FWIW. Thoughts/comments? Thanks, Joe __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
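The revert Joe suggests could be sketched like this. This is a toy model, not Nova's compute manager; the attribute names merely mimic the instance record, and the real fix would live in Nova's spawn/reschedule exception path:

```python
# Sketch of clearing instance.host when spawn fails, so a dead
# record doesn't keep claiming the host. Illustrative only.

class Instance:
    def __init__(self):
        self.host = None
        self.vm_state = "building"

def spawn_on_host(instance, host, spawn):
    instance.host = host          # claim the host before spawning
    try:
        spawn(instance)
    except Exception:
        instance.host = None      # revert the claim on failure
        instance.vm_state = "error"
        raise

def failing_spawn(instance):
    # Stand-in for the forced hypervisor error in Joe's step 2.
    raise RuntimeError("simulated hypervisor error")

inst = Instance()
try:
    spawn_on_host(inst, "host-1", failing_spawn)
except RuntimeError:
    pass

print(inst.host)      # None -- host freed for future scheduling
print(inst.vm_state)  # error
```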
Re: [openstack-dev] Python 3 is dead, long live Python 3
On Tue, Mar 3, 2015, at 07:56 AM, Ihar Hrachyshka wrote: On 02/02/2015 05:15 PM, Jeremy Stanley wrote: After a long wait and much testing, we've merged a change[1] which moves the remainder of Python 3.3 based jobs to Python 3.4. This is primarily in service of getting rid of the custom workers we implemented to perform 3.3 testing more than a year ago, since we can now run 3.4 tests on normal Ubuntu Trusty workers (with the exception of a couple bugs[2][3] which have caused us to temporarily suspend[4] Py3K jobs for oslo.messaging and oslo.rootwrap). I've personally tested `tox -e py34` on every project hosted in our infrastructure which was gating on Python 3.3 jobs and they all still work, so you shouldn't see any issues arise from this change. If you do, however, please let the Infrastructure team know about it as soon as possible. Thanks! [1] https://review.openstack.org/151713 [2] https://launchpad.net/bugs/1367907 [3] https://launchpad.net/bugs/1382607 [4] http://lists.openstack.org/pipermail/openstack-dev/2015-January/055270.html The switch broke the Icehouse stable branch for oslo-incubator [1], since those jobs run on Precise and not Trusty. Does anyone have ideas how to fix it? The incubator python 3 job was added to help us port incubated code to python 3 before graduating it. We won't be graduating modules from the stable branch, so as long as none of the consuming projects have python 3 jobs on their stable branches we can just drop the icehouse python 3 job for oslo-incubator.
Doug

[1]: https://review.openstack.org/#/c/136718/

/Ihar
[openstack-dev] [heat][neutron] allowed_address_pairs does not work
Hi, I see that the neutron port resource has a property named allowed_address_pairs, and I tried to use this property to create a port, but it does not seem to work. I want to create a port with MAC fa:16:3e:05:d5:9f and IP 192.168.0.58, but after creating it with a heat template, the final neutron port has MAC fa:16:3e:01:45:bb and IP 192.168.0.62. Can someone show me where my configuration is wrong? Also, allowed_address_pairs is a list; does that mean I can create a port with multiple MAC and IP addresses, and if so, when I create a VM with this port, does the VM get multiple MAC/IP pairs?

[root@prsdemo2 ~]# cat port-3.yaml
heat_template_version: 2013-05-23
description: >
  HOT template to create a new neutron network plus a router to the public
  network, and for deploying two servers into the new network. The template
  also assigns floating IP addresses to each server so they are routable
  from the public network.
resources:
  server1_port:
    type: OS::Neutron::Port
    properties:
      allowed_address_pairs:
        - mac_address: fa:16:3e:05:d5:9f
          ip_address: 192.168.0.58
      network: demonet

[root@prsdemo2 ~]# heat stack-create -f ./port-3.yaml p3
| id                                   | stack_name | stack_status       | creation_time        |
| 234d512c-4c90-4d4e-8d1c-ccf272254477 | p3         | CREATE_IN_PROGRESS | 2015-03-03T14:35:49Z |

[root@prsdemo2 ~]# heat stack-list
| id                                   | stack_name | stack_status    | creation_time        |
| 234d512c-4c90-4d4e-8d1c-ccf272254477 | p3         | CREATE_COMPLETE | 2015-03-03T14:35:49Z |

[root@prsdemo2 ~]# neutron port-list
| id                                   | name                         | mac_address       | fixed_ips                                                                            |
| 8d20b3a4-024a-4613-9d26-3d49534a839c | p3-server1_port-op3w5yzyks5i | fa:16:3e:01:45:bb | {"subnet_id": "4e7b6983-7364-4a71-8d9c-580d88fd4797", "ip_address": "192.168.0.62"} |

-- Thanks, Jay Lau (Guangya Liu)
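For what it's worth, allowed_address_pairs does not set the port's own address at all; it only whitelists additional MAC/IP pairs past Neutron's anti-spoofing rules, which is why the port above still got an allocated MAC and IP. The port's own addresses come from the mac_address and fixed_ips properties. A sketch of a template that likely matches the intent (the network name and addresses are taken from the mail; double-check the property layout against your Heat version):

```yaml
heat_template_version: 2013-05-23
description: Pin the port's own MAC and IP instead of only whitelisting
  them via allowed_address_pairs.
resources:
  server1_port:
    type: OS::Neutron::Port
    properties:
      network: demonet
      mac_address: fa:16:3e:05:d5:9f   # the port's actual MAC
      fixed_ips:
        - ip_address: 192.168.0.58     # the port's actual IP
```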
Re: [openstack-dev] [Glance] Core nominations.
Nikhil, If I recall correctly, this matter was discussed at the start of the L-cycle, and at that time we agreed to see whether the review pattern would change later in the cycle. It has not, and I do not see a reason to postpone this again just for the courtesy of it, in the hope that some of our older cores happen to make a review or two. I think Flavio's proposal, combined with the new members, would be the right way to reinforce the momentum we've gained in Glance over the past few months. I think it's also the right message to send to the new cores (including you and myself ;) ) that activity is the key to maintaining that status. - Erno

From: Nikhil Komawar [mailto:nikhil.koma...@rackspace.com] Sent: 03 March 2015 04:47 To: Daniel P. Berrange; OpenStack Development Mailing List (not for usage questions) Cc: krag...@gmail.com Subject: Re: [openstack-dev] [Glance] Core nominations.

Hi all, After having thoroughly thought about the proposed rotation and evaluated its pros and cons at this point in time, I would like to make an alternate proposal.

New proposal:
1. We should go ahead with adding more core members now.
2. Come up with a plan and give additional notice for the rotation. Get it implemented one month into Liberty.

Reasoning: Traditionally, the Glance program did not implement rotation. This was probably with good reason, as the program was small and the developers were working closely together and were aware of each other's daily activities. If we go ahead with this rotation, it would be implemented for the first time and would appear to have happened out of the blue. It would be good for us to make a modest attempt at maintaining the friendly nature of the Glance development team, give the affected members additional notice, and preferably send them a common email informing them of the change. We should propose at least a tentative plan for rotation so that all the other core members are aware of their responsibilities.
This brings me to my questions: is the proposed list for rotation comprehensive? What is the basis for leaving some members out? What would be a fair policy, with some level of determinism in expectations? I believe we should have input from the general Glance community (and the OpenStack community too) on this. In order for all of this to be sorted out, I kindly request that everyone wait until after the k3 freeze, preferably until a time when people have a bit more room to look at their mailboxes for unexpected proposals of rotation. Once a decent proposal is set, we can announce the change in dynamics of the Glance program and get everyone interested familiar with it during the summit. Meanwhile, we should not block the currently active to-be-core members from doing great work; hence, we should go ahead with adding them to the list. I hope that made sense. If you have specific concerns, I'm free to chat on IRC as well. (otherwise) Thoughts? Cheers, -Nikhil

From: Alexander Tivelkov ativel...@mirantis.com Sent: Tuesday, February 24, 2015 7:26 AM To: Daniel P. Berrange; OpenStack Development Mailing List (not for usage questions) Cc: krag...@gmail.com Subject: Re: [openstack-dev] [Glance] Core nominations.

+1 on both proposals: rotation is definitely a step in the right direction. -- Regards, Alexander Tivelkov

On Tue, Feb 24, 2015 at 1:19 PM, Daniel P. Berrange berra...@redhat.com wrote: On Tue, Feb 24, 2015 at 10:47:18AM +0100, Flavio Percoco wrote: On 24/02/15 08:57 +0100, Flavio Percoco wrote: On 24/02/15 04:38 +, Nikhil Komawar wrote: Hi all, I would like to propose the following members to become part of the Glance core team: Ian Cordasco, Louis Taylor, Mike Fedosin, Hemanth Makkapati. Please, yes! Actually - I hope this doesn't come out harsh - I'd really like to stop adding new cores until we clean up our current glance-core list.
This has *nothing* to do with the 4 proposals mentioned above; they ALL have been doing AMAZING work. However, I really think we need to start cleaning up our cores list, and this sounds like a good chance to make these changes. I'd like to propose the removal of the following people from Glance core:

- Brian Lamar
- Brian Waldon
- Mark Washenberger
- Arnaud Legendre
- Iccha Sethi
- Eoghan Glynn
- Dan Prince
- John Bresnahan

None of the folks in the above list have provided reviews or participated in Glance discussions, meetings or summit sessions. These are signs that their focus has changed. While I appreciate their huge efforts in the past, I think it's time for us to move forward. It goes without saying that all of the folks above are more than welcome to join the glance-core team again if their focus returns to Glance. Yep, rotating out inactive members is an
[openstack-dev] [Fuel] Deprecation warnings in python-fuelclient-6.1.*
Hi folks! According to the refactoring plan [1], we are going to release the 6.1 version of python-fuelclient, which will contain the recent changes but will keep backwards compatibility with what came before. However, the next major release will bring users a fresh CLI that won't be compatible with the old one, and a new, actually usable-in-real-life API library that will also be different. The issue this message is about is that there is a strong need to tell both CLI and API users about those changes. At the moment I can see 3 ways of resolving it:

1. Show deprecation warnings for commands and parameters which are going to change, and log deprecation warnings for deprecated library methods. The problem with this approach is that the structure of both the CLI and the library will change, so a deprecation warning would be raised for nearly every command for the whole release cycle. That is not very user friendly, because users would have to run all commands with --quiet for the whole release cycle to mute the warnings.

2. Show the list of deprecated items and planned changes on the first run, then mute it. The disadvantage of this approach is that the info about the first run has to be stored in a file, which may be cleaned up after an upgrade.

3. The same as #2, but publish the warning online.

I personally prefer #2, but I'd like to get more opinions on this topic.

References:
1. https://blueprints.launchpad.net/fuel/+spec/re-thinking-fuel-client

- romcheg
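For the library side of option #1, the usual pattern is a decorator that emits a DeprecationWarning when a deprecated method is called. A hedged sketch follows (the function and replacement names are hypothetical, not actual python-fuelclient code):

```python
import warnings


def deprecated(replacement):
    """Mark a library method as deprecated, pointing callers at its successor."""
    def decorator(func):
        def wrapper(*args, **kwargs):
            warnings.warn(
                "%s is deprecated; use %s instead" % (func.__name__, replacement),
                DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)
        return wrapper
    return decorator


# Hypothetical deprecated API method and its hypothetical replacement name.
@deprecated("client.environments.list()")
def list_environments():
    return []


# Capture the warning to show what a caller would see.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    list_environments()

print(caught[0].category.__name__)  # DeprecationWarning
```

Note that Python silences DeprecationWarning by default in many contexts, which cuts both ways for the concern raised above: library consumers opt in via the warnings filter, while CLI users would still need something like --quiet if the CLI prints the warnings unconditionally.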