Re: [openstack-dev] [nova] minimum review period for functional changes that break backwards compatibility
-----Original Message-----
From: Robert Collins [mailto:robe...@robertcollins.net]
Sent: 29 December 2013 06:50
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] minimum review period for functional changes that break backwards compatibility

> On 29 December 2013 04:21, Day, Phil philip@hp.com wrote:
>> Hi Folks, I know it may seem odd to be arguing for slowing down a part of the review process, but I'd like to float the idea that there should be a minimum review period for patches that change existing functionality in a way that isn't backwards compatible.
>
> What is the minimum review period intended to accomplish? I mean: everyone that reviewed this *knew* it changed a default, and that guest OS's that did support ext3 but don't support ext4 would be broken.

My point is that for some types of non-urgent change (i.e. those that change existing behaviour) there needs to be a longer period to make sure that more views and opinions can surface and be taken into account. Maybe all the reviewers in this case did realise the full impact of this change, but that's still not the same thing as getting a wide range of input. This is a change which has some significant impact, and there was no prior discussion, as far as I know, in the form of a BP or a thread on the mailing list. There was also no real urgency in getting the change merged.

> Would you like to have seen a different judgement call - e.g. 'Because this is a backward breaking change, it has to go through one release of deprecation warning, and *then* can be made' ?

Yep, I think that would be appropriate in this case. There is an impact beyond just the GuestOS support that has occurred to me since, but I don't want to get this thread bogged down in this specific change, so I'll start a new thread for that.
My point is that where changes are proposed that affect the behaviour of the system, and especially where they are not urgent (i.e. not high-priority bug fixes), we need to slow down the reviews and not assume that all possible views/impacts will surface in a few days. As I said, there seems to me to be something wrong with the priorities around changes when non-urgent behaviour changes go through in a few days but we have bug fixes sitting with many +1's for over a month.

> One possible reason to want a different judgment call is that the logic about impacted OS's was wrong - I claimed (correctly) that every OS has support for ext4, but neglected to consider the 13 year lifespan of RHEL... https://access.redhat.com/site/support/policy/updates/errata/ shows that RHEL 3 and 4 are both still supported, and neither support ext4. So folk that are running apps in those legacy environments indeed cannot move.

Yep - that's part of my concern for this specific change. It's an example of the kind of detail that I think would have emerged from a longer review cycle (at least I know I would have flagged it if I'd had the chance to ;-)

> Another possible reason is that we should have a strict no-exceptions-by-default approach to backwards incompatible changes, even when there are config settings to override them. Whatever the nub is - lets surface that and target it.

Yep - I think we should have a very clear policy around how and when we make changes to default behaviour. That's really the point I'm trying to surface.

> Basically, I'm not sure what problem you're trying to solve - lets tease that out, and then talk about how to solve it. Backwards incompatible change landed might be the problem - but since every reviewer knew it, having a longer review period is clearly not connected to solving the problem :).

That assumes that a longer review period wouldn't have allowed more reviewers to provide input - and I'm arguing the opposite.
I also think that some clear guidelines might have led to the core reviewers holding this up for longer. As I said in my original post, the intent to get more input was clear in the reviews, but the period wasn't IMO long enough to make sure all the folks who may have something to contribute could. I'd rather see some established guidelines than have to be constantly on the lookout for changes every day or so, hoping to catch them in time.

>> The specific change that got me thinking about this is https://review.openstack.org/#/c/63209/ which changes the default fs type from ext3 to ext4. I agree with the comments in the commit message that ext4 is a much better filesystem, and it probably does make sense to move to that as the new default at some point, however there are some old OS's that may still be in use that don't support ext4. By making this change to the
>
> Per above, these seem to be solely RHEL3 and RHEL4.

And SLES. It also causes inconsistent behaviour in the system, as any existing default backing files will have ext3 in them, so a VM will now get
[openstack-dev] [nova] - Revert change of default ephemeral fs to ext4
Hi Folks,

As highlighted in the thread "minimum review period for functional changes", I'd like to propose that change https://review.openstack.org/#/c/63209/ is reverted because:

- It causes inconsistent behaviour in the system, as any existing default backing files will have ext3 in them, so a VM will now get either ext3 or ext4 depending on whether the host it gets created on already has a backing file of the required size or not. I don't think the existing design ever considered the default FS changing - maybe we shouldn't have files that include "default" as the file system type if it can change over time, and the name should always reflect the FS type.

- It introduces a new requirement for GuestOS's to have to support ext4 in order to work with the default configuration. I think that's a significant enough change that it needs to be flagged, discussed, and planned.

I'm about to go off line for a few days and won't have anything other than patchy e-mail access, otherwise I'd submit the change myself ;-)

Phil

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
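Phil's first point can be made concrete: if the backing-file name encodes only the size (with "default" as the fs label), a change of the built-in default silently reuses stale files. A minimal Python sketch of the naming fix he suggests - the function and naming scheme here are illustrative, not nova's actual code:

```python
# Sketch: name ephemeral backing files by size AND filesystem type, so a
# change in the default fs can never silently reuse an old backing file.
# ephemeral_backing_name is a hypothetical helper, not nova's real API.

def ephemeral_backing_name(size_gb: int, fs_type: str) -> str:
    """Return a cache filename encoding everything that affects the content."""
    return f"ephemeral_{size_gb}_{fs_type}"

# Old scheme: 'ephemeral_10_default' is ambiguous once the default changes.
# New scheme: ext3 and ext4 backing files can coexist on the same host.
print(ephemeral_backing_name(10, "ext3"))  # ephemeral_10_ext3
print(ephemeral_backing_name(10, "ext4"))  # ephemeral_10_ext4
```

With this scheme a host that already has an ext3 backing file of the right size would still create a fresh ext4 one after the default changed, instead of handing out whichever filesystem happened to be cached.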
[openstack-dev] [Glance] Glance Mini Summit Details!
Hi folks,

Late January, we will be having a mini summit focused on Glance and the Images Program. All OpenStack ATCs and associated technical product folks are welcome. Here are the details:

Where: Hilton Washington Dulles Airport, 13869 Park Center Road, Herndon, Virginia 20171 (I do not yet know if there is an associated hotel block/discount code)

When: January 27-28 2014, 8:30 AM - 5:00 PM

What will we talk about: The agenda is still being formed and needs your input. See the outline of it at https://etherpad.openstack.org/p/glance-mini-summit-agenda and please suggest new items, volunteer to lead existing items, or indicate your interests. I have prefilled the agenda with some things that I know we might want to talk about, but the situation is still very flexible.

I hope to see you there!
Re: [openstack-dev] [nova] VM diagnostics - V3 proposal
Hi,

Following all of the discussion I have done the following:

1. Updated the wiki with all of the details - https://wiki.openstack.org/wiki/Nova_VM_Diagnostics
2. Renamed the BP. It is now called v3-diagnostics - https://blueprints.launchpad.net/openstack/?searchtext=v3-diagnostics
3. Posted a patch with libvirt support - https://review.openstack.org/#/c/61753/

The other drivers that support diagnostics will be updated in the coming days. I am not sure how tempest behaves with the V3 client, but I am in the process of looking into that so that we can leverage this API with tempest. Do we also want the same support in V2? I think that it could be very helpful with the spurious test failures that we have.

Thanks and a happy new year to all
Gary

On 12/19/13 6:21 PM, Vladik Romanovsky vladik.romanov...@enovance.com wrote:

> Ah, I think I've responded too fast, sorry. meter-list provides a list of various measurements that are being done per resource. sample-list provides a list of samples per every meter: ceilometer sample-list --meter cpu_util -q resource_id=vm_uuid These samples can be aggregated over a period of time per every meter and resource: ceilometer statistics -m cpu_util -q 'timestamp>=START;timestamp<=END;resource_id=vm_uuid' --period 3600
>
> Vladik

- Original Message - From: Daniel P.
Berrange berra...@redhat.com To: Vladik Romanovsky vladik.romanov...@enovance.com Cc: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org, John Garbutt j...@johngarbutt.com Sent: Thursday, 19 December, 2013 10:37:27 AM Subject: Re: [openstack-dev] [nova] VM diagnostics - V3 proposal

On Thu, Dec 19, 2013 at 03:47:30PM +0100, Vladik Romanovsky wrote:
> I think it was: ceilometer sample-list -m cpu_util -q 'resource_id=vm_uuid'

Hmm, a standard devstack deployment of ceilometer doesn't seem to record any performance stats at all - just shows me the static configuration parameters :-(

ceilometer meter-list -q 'resource_id=296b22c6-2a4d-4a8d-a7cd-2d73339f9c70'

+---------------------+-------+----------+--------------------------------------+----------------------------------+----------------------------------+
| Name                | Type  | Unit     | Resource ID                          | User ID                          | Project ID                       |
+---------------------+-------+----------+--------------------------------------+----------------------------------+----------------------------------+
| disk.ephemeral.size | gauge | GB       | 296b22c6-2a4d-4a8d-a7cd-2d73339f9c70 | 96f9a624a325473daf4cd7875be46009 | ec26984024c1438e8e2f93dc6a8c5ad0 |
| disk.root.size      | gauge | GB       | 296b22c6-2a4d-4a8d-a7cd-2d73339f9c70 | 96f9a624a325473daf4cd7875be46009 | ec26984024c1438e8e2f93dc6a8c5ad0 |
| instance            | gauge | instance | 296b22c6-2a4d-4a8d-a7cd-2d73339f9c70 | 96f9a624a325473daf4cd7875be46009 | ec26984024c1438e8e2f93dc6a8c5ad0 |
| instance:m1.small   | gauge | instance | 296b22c6-2a4d-4a8d-a7cd-2d73339f9c70 | 96f9a624a325473daf4cd7875be46009 | ec26984024c1438e8e2f93dc6a8c5ad0 |
| memory              | gauge | MB       | 296b22c6-2a4d-4a8d-a7cd-2d73339f9c70 | 96f9a624a325473daf4cd7875be46009 | ec26984024c1438e8e2f93dc6a8c5ad0 |
| vcpus               | gauge | vcpu     | 296b22c6-2a4d-4a8d-a7cd-2d73339f9c70 | 96f9a624a325473daf4cd7875be46009 | ec26984024c1438e8e2f93dc6a8c5ad0 |
+---------------------+-------+----------+--------------------------------------+----------------------------------+----------------------------------+

If the admin user can't rely on ceilometer guaranteeing availability of the performance stats at all, then I think having an API in nova to report them is in fact justifiable.
In fact it is probably justifiable no matter what, as a fallback way to check how VMs are doing in the face of a failure of ceilometer / part of the cloud infrastructure.

Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange :|
|: http://libvirt.org -o- http://virt-manager.org :|
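For readers unfamiliar with the `ceilometer statistics --period` call Vladik mentions, the aggregation it performs can be sketched as bucketing samples into fixed time windows and reducing each bucket. This is a conceptual illustration only, not ceilometer's implementation:

```python
# Sketch of what 'ceilometer statistics -m cpu_util --period N' computes:
# group samples into N-second windows, then reduce each window to
# min/max/avg. Illustrative only; ceilometer does this server-side.

from collections import defaultdict

def statistics(samples, period):
    """samples: list of (timestamp_seconds, value); period: seconds per bucket."""
    buckets = defaultdict(list)
    for ts, value in samples:
        buckets[ts // period].append(value)
    return {
        bucket * period: {
            "min": min(vals),
            "max": max(vals),
            "avg": sum(vals) / len(vals),
        }
        for bucket, vals in sorted(buckets.items())
    }

# Three cpu_util samples; the first two fall in the first one-hour window.
cpu_util = [(0, 10.0), (600, 30.0), (3700, 50.0)]
stats = statistics(cpu_util, 3600)
print(stats[0]["avg"])     # 20.0
print(stats[3600]["max"])  # 50.0
```

The point of the thread stands either way: if the samples are never recorded, no amount of aggregation helps, which is the argument for a nova-side fallback API.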
Re: [openstack-dev] [nova] minimum review period for functional changes that break backwards compatibility
On 12/29/2013 03:53 PM, Clint Byrum wrote:
> Excerpts from Andreas Jaeger's message of 2013-12-28 23:05:45 -0800:
>> On 12/29/2013 07:50 AM, Robert Collins wrote:
>>> One possible reason to want a different judgment call is that the logic about impacted OS's was wrong - I claimed (correctly) that every OS has support for ext4, but neglected to consider the 13 year lifespan of RHEL... https://access.redhat.com/site/support/policy/updates/errata/ shows that RHEL 3 and 4 are both still supported, and neither support ext4. So folk that are running apps in those legacy environments indeed cannot move.
>>
>> SUSE Linux Enterprise Server 11 comes with ext3 as default as well - and does not include ext4 support, so this is really a bad change for SLES.
>
> Perhaps people should run SP3? https://www.suse.com/releasenotes/x86_64/SUSE-SLES/11-SP3/ claims that ext4 write support is included, it just has to be toggled on.

Writing to it is not officially supported by SUSE with SP3 (see section 14.4.2). So, yes, it works but you can't ask for support...

> It is also worth noting that this is just a convenience filesystem. You can mkfs whatever you want on top of the block device.

I wasn't aware of this - still, it's a difference here where you cannot just use the device...

Andreas
--
Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer, HRB 16746 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126
Re: [openstack-dev] [nova] minimum review period for functional changes that break backwards compatibility
On 12/28/2013 11:14 AM, Tim Bell wrote:
> I think there is a need for an incompatible change review process which includes more of the community than just those performing the code reviews. This kind of change can cause a lot of disruption for those of us running clouds so it is great to see that you are looking for more input. In the past, it has been proposed to also highlight incompatible changes to the openstack-operators list which is likely to reach those of us who will be most affected by the change. A similar process for API changes could also be applied to reach out for those who use OpenStack clouds. The change can then be reviewed as to how to minimise the impact (if significant) along with getting a larger group of people involved in understanding the merits of the change compared to the risks/effort for those running clouds in production.

+1 Posting proposed incompatible changes to the operators list would be good, along with a message once a change is committed. Perhaps this could even be done automatically via the DocImpact tag. It would also be good to create the icehouse release notes and update them in real time.

-David

Are there any other proposals for how to handle incompatible changes ?

Tim

From: Day, Phil [mailto:philip@hp.com]
Sent: 28 December 2013 16:21
To: OpenStack Development Mailing List (openstack-dev@lists.openstack.org)
Subject: [openstack-dev] [nova] minimum review period for functional changes that break backwards compatibility

Hi Folks,

I know it may seem odd to be arguing for slowing down a part of the review process, but I'd like to float the idea that there should be a minimum review period for patches that change existing functionality in a way that isn't backwards compatible.
The specific change that got me thinking about this is https://review.openstack.org/#/c/63209/ which changes the default fs type from ext3 to ext4. I agree with the comments in the commit message that ext4 is a much better filesystem, and it probably does make sense to move to that as the new default at some point, however there are some old OS's that may still be in use that don't support ext4. By making this change to the default without any significant notification period, this change has the potential to break existing images and snapshots. It was already possible to use ext4 via existing configuration values, so there was no urgency to this change (and no urgency implied in the commit message, which references neither a bug nor a blueprint).

I'm not trying to pick out the folks involved in this change in particular, it just happened to serve as a good and convenient example of something that I think we need to be more aware of and think about having some specific policy around. On the plus side the reviewers did say they would wait 24 hours to see if anyone objected, and the actual review went over 4 days - but I'd suggest that is still far too quick, even in a non-holiday period, for something which is low priority (the functionality could already be achieved via existing configuration options) and which is a change in default behaviour. (In the period around a major holiday there probably needs to be an even longer wait.)

I know there are those that don't want to see blueprints for every minor functional change to the system, but maybe this is a case where a blueprint being proposed and reviewed may have caught the impact of the change. With a number of people now using a continual deployment approach, any change in default behaviour needs to be considered not just for the benefits it brings but for what it might break.
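Since the ext4 behaviour was already reachable through existing configuration, the converse is also true: deployers who want to be insulated from a change of built-in default can pin the format explicitly rather than rely on it. A nova.conf sketch - the option name `default_ephemeral_format` is the one associated with this code path in nova of that era, but verify it against your release before relying on it:

```ini
[DEFAULT]
# Pin the ephemeral filesystem explicitly so a change in nova's built-in
# default cannot alter what new instances receive.
default_ephemeral_format = ext3
```

This does not fix the policy question the thread raises, but it is the operational workaround available while the default is in flux.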
The advantage we have as a community is that there are a lot of different perspectives that can be brought to bear on the impact of functional changes, but we equally have to make sure there is sufficient time for those perspectives to emerge. Somehow it feels that we're getting the priorities on reviews wrong when a low-priority change like this can go through in a matter of days, while bug fixes such as https://review.openstack.org/#/c/57708/ have been sitting for over a month with a number of +1's and don't seem to be making any progress.

Cheers,
Phil
Re: [openstack-dev] [nova] minimum review period for functional changes that break backwards compatibility
On 29 December 2013 21:06, Day, Phil philip@hp.com wrote:
>> What is the minimum review period intended to accomplish? I mean: everyone that reviewed this *knew* it changed a default, and that guest OS's that did support ext3 but don't support ext4 would be broken.
>
> My point is that for some type of non-urgent change (i.e. those that change existing behaviour) there needs to be a longer period to make sure that more views and opinions can surface and be taken into account. Maybe all the reviewers in this case did realise the full impact of this change, but that's still not the same thing as getting a wide range of input. This is a change which has some significant impact, and there was no prior discussion as far as I know in the form of a BP or thread in the mailing list. There was also no real urgency in getting the change merged.

I disagree that 'longer period' implies 'more views and opinions'. From the nova open reviews stats:

3rd quartile wait time: 17 days, 7 hours, 14 minutes - 25% of *all* open nova reviews have had no review at all in 17 days.

3rd quartile wait time: 23 days, 10 hours, 44 minutes - 25% of all open nova reviews have had no -1 or -2 in 23 days.

I'm not debating the merits of more views and opinions - Sean has pointed out already that automation is better than us having to guess at when things will or won't work. But even if you accept that more views and opinions will help, there are over 100 reviews up with *no* such opinions added already. Let's say that something like the patch that triggered this went up for review, and that we established a one-month minimum review period for such patches. There's a 25% chance it would hit 3 weeks with no input at all. The *effective* time that a one-month minimum period would set for it would then be a week. Once the volume of reviews needed exceeds a single reviewer's capacity, by definition some reviewers will not see some patches *at all*.
At that point it doesn't matter how long a patch waits; it will never hit the front of the line for some reviewers unless we have super strict - and careful - ordering on who reviews what. Which we don't have, and can't get trivially. But even if we do: time is not a good proxy for attention, care, detail or pretty much any other metric when operating a scaled-out human process. What would make a good proxy metric for 'more views and opinions'? I think asking for more cores to +2 such changes would do it. E.g. ask for 4 +2's for backward incompatible changes unless they've gone through a release cycle of being deprecated/warned.

>> Would you like to have seen a different judgement call - e.g. 'Because this is a backward breaking change, it has to go through one release of deprecation warning, and *then* can be made' ?
>
> Yep, I think that would be appropriate in this case. There is an impact beyond just the GuestOS support that occurred to me since, but I don't want to get this thread bogged down in this specific change so I'll start a new thread for that.
>
> My point is that where changes are proposed that affect the behaviour of the system, and especially where they are not urgent (i.e. not high priority bug fixes) then we need to slow down the reviews and not assume that all possible views / impacts will surface in a few days.

Again, I really disagree with 'need to slow down'. We need to achieve something *different*.

> As I said, there seems to me to be something wrong with the priority around changes when non-urgent behaviour changes go through in a few days but we have bug fixes sitting with many +1's for over a month.
>
>> Another possible reason is that we should have a strict no-exceptions-by-default approach to backwards incompatible changes, even when there are config settings to override them. Whatever the nub is - lets surface that and target it.
>
> Yep - I think we should have a very clear policy around how and when we make changes to default behaviour.
> That's really the point I'm trying to surface.

Cool. So - lets focus on that, and get it added to the reviewer checklist wiki page once there is consensus.

>> Basically, I'm not sure what problem you're trying to solve - lets tease that out, and then talk about how to solve it. Backwards incompatible change landed might be the problem - but since every reviewer knew it, having a longer review period is clearly not connected to solving the problem :).
>
> That assumes that a longer review period wouldn't have allowed more reviewers to provide input - and I'm arguing the opposite.
>
> I also think that some clear guidelines might have led to the core reviewers holding this up for longer. As I said in my original post, the intent to get more input was clear in the reviews, but the period wasn't IMO long enough to make sure all the folks who may have something to contribute could. I'd rather see some established guidelines than have to be constantly on the lookout for
Re: [openstack-dev] [Ironic]Communication between Nova and Ironic
Leslie,

This discussion is very interesting indeed :) The current approach to auto-scaling is that it is decided upon by the Heat service. Heat templates have special mechanisms to trigger auto-scaling of resources when certain conditions are met. In combination with Ironic, it has powerful potential for the OpenStack-on-OpenStack use case you're describing. Basically, Heat has all the orchestration functions in OpenStack. I see it as a natural place for other interesting things like auto-migration of workloads and so on.

--
Best regards,
Oleg Gelbukh

On Sun, Dec 29, 2013 at 8:03 AM, LeslieWang wqyu...@hotmail.com wrote:

Hi Clint,

The current ironic call is for adding/deleting a baremetal server, not for auto-scaling. As we discussed in another thread, what I'm thinking of is auto-scaling baremetal servers. In my mind, the logic can be:

1. Nova scheduler determines it should scale up one baremetal server.
2. Nova scheduler notifies ironic (or another API?) to power up the server.
3. If ironic (or another service?) returns success, nova scheduler can call ironic to add the baremetal server into the cluster.

Of course, this is not the sole way to auto-scale. As you specified in another thread, auto-scaling can be triggered from the under-cloud or another monitoring service. Just trying to bring up an interesting discussion. :-)

Best Regards
Leslie

From: cl...@fewbar.com
To: openstack-dev@lists.openstack.org
Date: Sat, 28 Dec 2013 13:40:08 -0800
Subject: Re: [openstack-dev] [Ironic]Communication between Nova and Ironic

Excerpts from LeslieWang's message of 2013-12-24 03:01:51 -0800:

Hi Oleg,

Thanks for your prompt reply and detailed explanation. Merry Christmas, and I wish you a happy new year! At the same time, I think we can discuss more on Ironic as a backend driver for nova. I'm new to ironic.
Per my understanding, the purpose of a bare metal backend driver is to solve the problem that some appliance systems cannot be virtualized, but the operator still wants the same cloud management system to manage them. With the help of ironic, the operator can achieve that goal and use one OpenStack to manage these systems like VMs: create, delete, deploy images, etc. That is one typical use case.

In addition, I'm actually thinking of another interesting use case. Currently OpenStack requires ops to pre-install all servers. TripleO tries to solve this problem and bootstrap OpenStack using OpenStack. However, what is missing here is dynamically powering on VMs/switches/storage only as needed. Imagine that initially the lab only had one all-in-one OpenStack controller. The whole workflow could be:

1. Users request one VM or baremetal server through the portal.
2. Horizon sends the request to nova-scheduler.
3. Nova-scheduler finds no server, then invokes the ironic API to power one on through IPMI, and installs either a hypervisor or the appliance directly.
4. If it needs to create a VM, nova-scheduler will find one compute node and send a message for further processing.

Based on this use case, I'm wondering whether it makes sense to embed this ironic invocation logic in nova-scheduler, or, alternatively, whether TripleO, as the overall orchestration manager, should have a TripleO-scheduler that first intercepts the message, invokes the ironic API, and then the heat API, which calls the nova, neutron and storage APIs. In that case, TripleO only powers on the baremetal servers running VMs, and nova is responsible for powering on the baremetal servers running appliance systems. The latter sounds like a good solution, though the former also works. Can you please comment on it? Thanks!

I think this basically already works the way you desire. The scheduler _does_ decide to call ironic, it just does so through nova-compute RPC calls.
That is important, as this allows the scheduler to hand off the entire workflow of provisioning a machine to nova-compute in the exact same way as is done for regular cloud workloads.
[openstack-dev] [qa][FWaaS][Neutron]Firewall service disabled on gate
Hi,

I'm trying to push a firewall API test [1] and I see it cannot run on the current gate. I was told FWaaS is disabled since it broke the gate. Does anyone know if this is still an issue? If so - how do we overcome this? I would like to do some work on this service (scenarios) and don't want to waste time if this is something that cannot be done right now.

Thank you
Yair

[1] https://review.openstack.org/64362
Re: [openstack-dev] [qa][FWaaS][Neutron]Firewall service disabled on gate
I reckon the decision to keep neutron's firewall API out of gate tests was reasonable for the Havana release. It might be argued that the other 'experimental' service, VPN, is already enabled on the gate, but that did not happen before proving the feature was reliable enough to not cause gate breakage. If we can confidently say the same for the firewall extension now, I would agree on enabling it in gate tests as well.

Salvatore

On 29 December 2013 22:22, Yair Fried yfr...@redhat.com wrote:
> Hi, I'm trying to push a firewall API test [1] and I see it cannot run on the current gate. I was told FWaaS is disabled since it broke the gate. Does anyone know if this is still an issue? If so - how do we overcome this? I would like to do some work on this service (scenarios) and don't want to waste time if this is something that cannot be done right now.
>
> Thank you
> Yair
>
> [1] https://review.openstack.org/64362
Re: [openstack-dev] [Tempest][qa] Adding tags to commit messages
On 2013-12-29 15:09:24 -0500 (-0500), David Kranz wrote:
> [...] Looking at the docs I see the warning that you can't put this in the search field, so I tried putting it directly in the URL like the other parameters, but it was ignored. Is there indeed a way to search for only patches that contain changes to files that match a regexp?

As the documentation says, "Currently this operator is only available on a watched project..." https://review.openstack.org/Documentation/user-search.html#_search_operators The implication being it's only implemented for filtering project watches -- the "Only if" field you see on the Watched Projects settings page. https://review.openstack.org/#/settings/projects

--
Jeremy Stanley
Re: [openstack-dev] [All] tagged commit messages
On Dec 29, 2013, at 2:05 PM, Michael Still mi...@stillhq.com wrote:
> On Mon, Dec 30, 2013 at 8:12 AM, John Dickinson m...@not.mn wrote:
>> I've seen several disconnected messages about tags in commit messages. I've seen what is possible with the DocImpact tag, and I'd like to have some more flexible tagging things too. I'd like to use tags for things like keeping track of config defaults changing, specific ongoing feature work, and tracking changes come release time.
>
> I suspect I'm the last person to have touched this code, and I think expanding tags is a good idea. However, I'm not sure if it's the best mechanism possible -- if a reviewer requires a tag to be added or changed, it currently requires a git review round trip for the developer or their proxy. Is that too onerous if tags become much more common? I definitely think some more formal way of tracking that a given patch needs to be covered by the release notes is a good idea. There are currently two hooks that I can see in our gerrit config: patchset-created and change-merged. I suspect some tags should be executed at patchset-merged? For example a change to flag defaults might cause a notification to be sent to interested operators? Perhaps step one is to work out what tags we think are useful and at what time they should execute?

I think this is exactly what I don't want. I don't want a set of predefined tags. We've got that today with DocImpact and SecurityImpact. What I want, for very practical examples in Swift, are tags for config changes so deployers can notice, tags for things with upgrade procedures, tags for dependency changes, tags for "this is a new feature", all in addition to the existing DocImpact and SecurityImpact tags. In other words, just like impacted teams get alerted for changes that impact docs, I want patches that impact Swift proxy-server configs to be tracked (and bin scripts, and dependencies, and ring semantics, and so on).
I think you're absolutely right that some things should happen at patchset-created time and others at change-merged time. Like you, I'm also concerned that adding a new tag may be too heavyweight if it requires a code push/review/gate cycle. Here's an alternative:

1) Define a very lightweight rule for tagging commits (eg: one line, starts with "tags:", is comma-separated).

2) Write an external script to parse the git logs and look for tags. It normalizes tags (eg lowercase + remove spaces) and allows simple searches (eg show all commits that are tagged 'configchange').

That wouldn't require repo changes to add a tag, gives contributors massive flexibility in tagging, doesn't add new dependencies to code repos, and is lightweight enough to be flexible over time.

Hmmm... actually I like this idea. I may throw together a simple script to do this and propose using it for Swift. Thanks Michael!

--John

> Michael
> --
> Rackspace Australia
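John's two-step proposal is small enough to sketch directly. The Python below follows his stated rule - a commit-message line starting with "tags:", comma-separated, normalized by lowercasing and removing spaces - with function names of my own invention, not part of any existing tool:

```python
# Sketch of the lightweight tag convention proposed above: parse a
# 'tags:' line out of a commit message, normalize, and filter commits.
# parse_tags/search are hypothetical names, not an existing tool's API.

def parse_tags(commit_message: str) -> set:
    """Extract normalized tags from the first 'tags:' line, if any."""
    for line in commit_message.splitlines():
        if line.lower().startswith("tags:"):
            raw = line.split(":", 1)[1]
            return {t.lower().replace(" ", "") for t in raw.split(",") if t.strip()}
    return set()

def search(commits, tag):
    """Return commit messages carrying the given (normalized) tag."""
    return [msg for msg in commits if tag in parse_tags(msg)]

msg = "Change proxy timeout default\n\ntags: Config Change, upgradeimpact\n"
print(parse_tags(msg))                          # {'configchange', 'upgradeimpact'}
print(len(search([msg, "Unrelated fix\n"], "configchange")))  # 1
```

In practice the commit messages would come from `git log --format=%B%x00` or similar; because the tags live only in commit messages, adding a new tag needs no repo or gerrit change, which is the property John is after.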
Re: [openstack-dev] [nova] minimum review period for functional changes that break backwards compatibility
Hi Sean,

I'm not convinced the comparison to my clean shutdown change is valid here. For sure that proved that beyond a certain point (in that case months) there is no additional value in extending the review period, and no amount of review will catch all problems, but that's not the same as saying that there is no value in a minimum period. In this particular case, if the review had been open for, say, three weeks then IMO the issue would have been caught, as I spotted it as soon as I saw the merge. As it wasn't an urgent bug fix, I don't see a major gain from not waiting, even if there hadn't been a problem.

I'm all for continually improving the gate tests, but in this case they would need to run against a system that had been running before the change, to test specifically that new VMs got the new filesystem, so a matching test would have needed to be added to grenade as part of the same commit.

Not quite sure where the number of open changes comes in either; just because there are a lot of proposed changes doesn't mean we need to rush the non-urgent ones, it means we maybe need better prioritisation. There are plenty of long-lived bug fixes sitting with many +1s.

Phil

Sent from Samsung Mobile

-------- Original message --------
From: Sean Dague s...@dague.net
Date:
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] minimum review period for functional changes that break backwards compatibility

On 12/29/2013 03:06 AM, Day, Phil wrote:

[snip]

Basically, I'm not sure what problem you're trying to solve - lets tease that out, and then talk about how to solve it. Backwards incompatible change landed might be the problem - but since every reviewer knew it, having a longer review period is clearly not connected to solving the problem :).

That assumes that a longer review period wouldn't have allowed more reviewers to provide input - and I'm arguing the opposite.
I also think that some clear guidelines might have led to the core reviewers holding this up for longer. As I said in my original post, the intent to get more input was clear in the reviews, but the period wasn't IMO long enough to make sure all the folks who might have something to contribute could. I'd rather see some established guidelines than have to be constantly on the lookout for changes every day or so, hoping to catch them in time.

Honestly, there are currently 397 open reviews in Nova. I am not convinced that waiting on this one would have produced a better decision. I'll give an alternative point of view with the graceful shutdown patch, where we sat on that for months, had many iterations, landed it, it added 25 minutes to all the test runs (which had been hinted at sometime in month 2 of the review, but got lost in the mists of time), and we had to revert it. I'm not convinced more time brings more wisdom.

We did take it to the list, and there were no objections. I did tell Robert to wait because I wanted to get those points of view. But they didn't show up. Because it was holidays could we have waited longer? Sure. I'll take a bad on that in feeling that Dec 19th wasn't really holidays yet, because I was still working. :) But honestly, given no negative feedback on the thread in question and no -1 on the review, plus the fact that folks like Google skipped ext3 entirely, this review was probably landing regardless.

Every time we need to do a revert, we need to figure out how to catch it the next time. "Humans, be better" is really not a solution. So this sounds like we need a guest compatibility test where we boot a ton of different guests on each commit and make sure they all work. I'd wholeheartedly support getting that in if someone wants to champion it. That's really going to be the only way we have a systematic way of knowing that we break SLES in the future.

So the net: we're all human, and sometimes make mistakes.
I don't think we're going to fix this with review policy changes, but we could with actual CI enhancements.

-Sean

--
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
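The guest-compatibility test Sean suggests might be sketched roughly as follows. This is an illustration only, not anything from the thread: the image list and the `boot_fn` callable are placeholders, and in a real CI job `boot_fn` would wrap a `nova boot` call plus a guest liveness check (e.g. SSH).

```python
# Rough sketch of a guest-compatibility matrix runner (assumed design,
# not an existing OpenStack CI job). boot_fn(image) -> bool is a
# placeholder for "boot this guest image and verify it comes up".
GUEST_IMAGES = [
    "cirros-0.3.1", "ubuntu-12.04", "fedora-20",
    "rhel-3", "rhel-4", "sles-11",  # legacy guests without ext4 support
]


def run_guest_matrix(boot_fn, images=GUEST_IMAGES):
    """Boot every image and return the list of images that failed."""
    failures = []
    for image in images:
        try:
            if not boot_fn(image):
                failures.append(image)
        except Exception:
            # Treat a boot error the same as a failed liveness check.
            failures.append(image)
    return failures
```

The point of the matrix shape is that a default change like ext3-to-ext4 would show up as a concrete list of broken legacy guests rather than relying on reviewers to remember which distributions lack a given feature.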
Re: [openstack-dev] [All] tagged commit messages
On Sun, Dec 29, 2013 at 7:51 PM, John Dickinson m...@not.mn wrote:

On Dec 29, 2013, at 2:05 PM, Michael Still mi...@stillhq.com wrote:

On Mon, Dec 30, 2013 at 8:12 AM, John Dickinson m...@not.mn wrote:

I've seen several disconnected messages about tags in commit messages. I've seen what is possible with the DocImpact tag, and I'd like to have some more flexible tagging too. I'd like to use tags for things like keeping track of config defaults changing, specific ongoing feature work, and tracking changes come release time.

I suspect I'm the last person to have touched this code, and I think expanding tags is a good idea. However, I'm not sure if it's the best mechanism possible -- if a reviewer requires a tag to be added or changed, it currently requires a git review round trip for the developer or their proxy. Is that too onerous if tags become much more common? I definitely think some more formal way of tracking that a given patch needs to be covered by the release notes is a good idea.

There are currently two hooks that I can see in our gerrit config:
- patchset-created
- change-merged

I suspect some tags should be executed at change-merged? For example, a change to flag defaults might cause a notification to be sent to interested operators. Perhaps step one is to work out what tags we think are useful and at what time they should execute?

I think this is exactly what I don't want. I don't want a set of predefined tags. We've got that today with DocImpact and SecurityImpact. What I want, for very practical examples in Swift, are tags for config changes so deployers can notice, tags for things with upgrade procedures, tags for dependency changes, tags for "this is a new feature", all in addition to the existing DocImpact and SecurityImpact tags. In other words, just like impacted teams get alerted for changes that impact docs, I want patches that impact Swift proxy-server configs to be tracked (and bin scripts, and dependencies, and ring semantics, and so on).
I think you're absolutely right that some things should happen at patchset-created time and others at change-merged time. Like you, I'm also concerned that adding a new tag may be too heavyweight if it requires a code push/review/gate cycle. Here's an alternative:

1) Define a very lightweight rule for tagging commits (eg: one line, starts with "tags:", comma-separated).
2) Write an external script to parse the git logs and look for tags. It normalizes tags (eg lowercase + remove spaces) and allows simple searches (eg show all commits that are tagged 'configchange').

That wouldn't require repo changes to add a tag, gives contributors massive flexibility in tagging, doesn't add new dependencies to code repos, and is lightweight enough to be flexible over time.

+1

Hmmm... actually I like this idea. I may throw together a simple script to do this and propose using it for Swift. Thanks Michael!

--John

Michael

--
Rackspace Australia
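The external script proposed above might look something like this minimal sketch. Assumptions (mine, not from the thread): tags sit on a single "tags:" line in the commit message, and normalization is lowercasing plus space removal, as suggested.

```python
#!/usr/bin/env python
"""Sketch of a lightweight commit-tag search tool (illustrative only)."""
import re
import subprocess
import sys


def normalize(tag):
    # "Config Change" and "configchange" compare equal after normalizing.
    return tag.lower().replace(" ", "")


def parse_tags(message):
    """Return the set of normalized tags found in one commit message."""
    tags = set()
    for line in message.splitlines():
        match = re.match(r"^tags:\s*(.+)$", line.strip(), re.IGNORECASE)
        if match:
            tags.update(normalize(t) for t in match.group(1).split(","))
    return tags


def search(repo_path, wanted):
    """Yield (sha, subject) for commits carrying the wanted tag."""
    # %x00 / %x01 are NUL / SOH separators so commit bodies can be
    # split apart safely even when they contain blank lines.
    log = subprocess.check_output(
        ["git", "-C", repo_path, "log", "--format=%H%x00%B%x01"]
    ).decode("utf-8", "replace")
    for entry in log.split("\x01"):
        if "\x00" not in entry:
            continue
        sha, body = entry.split("\x00", 1)
        if normalize(wanted) in parse_tags(body):
            yield sha.strip(), body.splitlines()[0]


if __name__ == "__main__" and len(sys.argv) > 1:
    for sha, subject in search(".", sys.argv[1]):
        print(sha[:8], subject)
```

Run as, for instance, `tag-search.py configchange` inside a repo. Since it only reads `git log` output, it needs no changes to any project repo or to gerrit.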
Re: [openstack-dev] [All] tagged commit messages
On Dec 29, 2013, at 5:24 PM, Michael Still mi...@stillhq.com wrote:

On Mon, Dec 30, 2013 at 11:51 AM, John Dickinson m...@not.mn wrote:

On Dec 29, 2013, at 2:05 PM, Michael Still mi...@stillhq.com wrote:

[snip]

Perhaps step one is to work out what tags we think are useful and at what time they should execute?

I think this is exactly what I don't want. I don't want a set of predefined tags.

[snip]

Super-aggressive trimming, because I want to dig into this one bit some more...

I feel like anything that requires proactive action from the target audience will fail. For example, in nova we've gone through long cycles with experimental features where we've asked deployers to turn on new features in labs and report problems before we turn them on by default. They of course don't. So... I feel there is value in a curated list of tags, even if we allow additional tags (a bit like Launchpad). In fact, the idea of a DeployImpact tag, for example, really works for me. I'm very tempted to implement that one in notify_impact now.

Yup, I understand and agree with where you are coming from. Let's discuss DeployImpact as an example.

First, I like the idea of some set of curated tags (and you'll see why at the end of this email). Let's have a way that we can tag a commit as having a DeployImpact. Ok, what does that mean? In some manner of speaking, _every_ commit has a deployment impact. So maybe just things that affect upgrades? Is that changes? New features? Breaking changes only (sidebar: why would those sorts of changes ever get merged anyway? moving on...)? My point is that a curated list of tags ends up being fairly generic, to the point of not being too useful.

Ok, say we've figured out the above questions (ie when to use DeployImpact and when not to). Now I'm a deployer and packager (actually not hypothetical, since my employer is both for Swift), so what do I do? Do I have to sign up for some sort of thing? Does this mean a gerrit code review cycle to some -infra project?
That would be a pretty high barrier for getting access to that info. Or maybe the change-merged action for a DeployImpact tag simply sends an email to a new DeployImpact mailing list, or puts a new row in a DB somewhere that is shown on some page every time I load it? In that case, I've still got to sign up for a new mailing list (and remember not to filter it, and get everyone in my company who does deployments to check it), or remember to check a particular webpage before I do a deploy.

Maybe I'm thinking about this the wrong way. Maybe the intended audience is the rest of the OpenStack dev community. In that case, sure, now I have a way to find DeployImpact commits. That's nice, but what does that get me? I already see all the patches in my email and on my gerrit dashboard. Being able to filter the commits is nice, but constraining that to an approved list of tags seems heavy-handed.

So while I like the idea of a curated list of tags in general, I don't think they lessen the burden for the intended audience (the intended audience being people not in the dev/contributor community, but rather those deploying and using the code). That's why a tool that can parse git commit messages seems simple and flexible enough to meet the needs of deployers (eg run `git log commit | tagged-search deployimpact` before packaging) without requiring the overhead of managing a curated tag list via code repo changes (as DocImpact is today).

All that being said, I'll poke some holes in my own idea. The problem with my idea is letting deployers know what tags they should actually search for. In this case, there probably should be some curated list of high-level tags that are used across all OpenStack projects. In other words, if I use deploy-change on my patch and you use DeploymentImpact, then what does a packager/deployer search for? There should be some set of tags with guidelines for their usage on the wiki.
I'd propose starting with ConfigDefaultChanged, DependencyChanged, and NewFeature.

--John

Michael

--
Rackspace Australia
[openstack-dev] [openstack][keystone] Is the user password too simple?
hi all:

When creating a user, you can set the user's password, and you can set it to a word as simple as 'a'. The password is too simple, but nothing limits it, so if someone wants to steal your password it is easy (for example by exhaustive search). I think there should be limits when setting a password, along these lines:

1. include upper and lower case letters
2. include numbers
3. include particular symbols, such as '_'
4. length of at least 8

The administrator could configure the password rules. I want to propose a BP (blueprint) about this issue. Can you give me some advice or ideas? Thanks!

lizheming
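The four rules proposed above could be expressed as a small validation function. This is only a sketch of the idea, not Keystone code; the function name and the minimum length are assumptions for illustration.

```python
import re

# Illustrative minimum length only; the proposal is that an
# administrator could configure thresholds like this.
MIN_LENGTH = 8


def password_is_strong(password):
    """Check the four proposed rules: upper, lower, digit, symbol, length."""
    return (
        len(password) >= MIN_LENGTH
        and re.search(r"[a-z]", password) is not None   # lower case letter
        and re.search(r"[A-Z]", password) is not None   # upper case letter
        and re.search(r"\d", password) is not None      # digit
        and re.search(r"[^A-Za-z0-9]", password) is not None  # symbol, eg '_'
    )
```

In practice such checks are usually enforced at user-create/update time and paired with rate limiting, since complexity rules alone don't stop online guessing.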
[openstack-dev] django-bootstrap-form django 1.4 / 1.5
Hi,

Currently, in the global-requirements.txt, we have:

Django>=1.4,<1.6
django-bootstrap-form

However, django-bootstrap-form fails in both Django 1.4 and Django 1.6. What's the way forward? Would it be possible for someone to make a patch for django-bootstrap-form so that it works with Django 1.4 and 1.5 (which would be the best way forward for me...)?

Cheers,
Thomas Goirand (zigo)